Skip to content

[feature] Unary extend#6675

Open
crafcat7 wants to merge 11 commits intoTencent:masterfrom
crafcat7:unary-extend
Open

[feature] Unary extend#6675
crafcat7 wants to merge 11 commits intoTencent:masterfrom
crafcat7:unary-extend

Conversation

@crafcat7
Copy link
Copy Markdown
Contributor

This PR extends UnaryOp with additional unary operators and wires them through the conversion pipeline so the new ops behave consistently across backend execution and frontend import/export.

The added operators are:

  • SIGN
  • EXPM1
  • SINH
  • ASINH
  • COSH
  • ACOSH
  • ATANH
  • LOG1P

The main goal is to keep UnaryOp coverage aligned across the generic CPU path, SIMD-specialized backends, Vulkan shader execution, and model conversion tools.

Implementation details are as follows:

  • Backend: add the new UnaryOp::OperationType values and implement them in the generic CPU path
  • SIMD backends: add matching support for x86, arm, riscv, mips, and loongarch UnaryOp
  • Vulkan: extend src/layer/vulkan/shader/unaryop.comp for the new unary cases
  • onnx2ncnn: map ONNX Sign, Sinh, Asinh, Cosh, Acosh, Atanh, Expm1, and Log1p to UnaryOp
  • pnnx: extend ONNX import, ncnn expression expansion, and generated python emission for the new unary expressions

This PR also adds regression coverage for both backend execution and conversion behavior:

  • tests/test_unaryop.cpp
  • tools/pnnx/tests/test_torch_expm1.py
  • tools/pnnx/tests/test_torch_log1p.py
  • tools/pnnx/tests/onnx/test_onnx_unary_ops_ext.py

Some unary ops require tighter valid input domains during testing, so the UnaryOp regression coverage also updates the randomized input range for:

  • ACOSH with input >= 1
  • ATANH with input in (-1, 1)
  • LOG1P with input > -1

Testing was performed locally with the following commands:

cmake -DNCNN_BUILD_TESTS=ON ..
cmake --build . -j2 --target test_unaryop
./tests/test_unaryop
python tools/pnnx/tests/test_torch_expm1.py
python tools/pnnx/tests/test_torch_log1p.py
python tools/pnnx/tests/onnx/test_onnx_unary_ops_ext.py

This keeps the new unary operators covered both as direct ncnn layer behavior and as end-to-end converted model expressions.

Summary:

  Add sign, expm1, log1p, and hyperbolic unary support to the generic, SIMD, and Vulkan UnaryOp backends. This keeps the new op_type values behaving consistently across CPU paths and shader execution.

  Also updates test_unaryop input domains for acosh, atanh, and log1p so the added operations are covered with valid regression inputs.
Summary:

  Map sign, sinh, asinh, cosh, acosh, and atanh to UnaryOp in onnx2ncnn and pnnx expression expansion. This lets the new unary family convert through the ONNX pipeline without falling back to unsupported expression handling.

  Also adds an ONNX regression that exercises the extended UnaryOp conversion path end to end.
Summary:

  Teach onnx2ncnn and pnnx to preserve expm1 and log1p as UnaryOp expressions during conversion. This keeps both operators aligned with the new backend support instead of dropping them during graph translation.

  Also fixes generated pnnx python to emit torch.expm1 and torch.log1p calls, and adds torch regressions for both operators.
@codecov-commenter
Copy link
Copy Markdown

codecov-commenter commented Apr 13, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 94.01%. Comparing base (086dda4) to head (eb70515).
⚠️ Report is 1 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #6675      +/-   ##
==========================================
+ Coverage   93.65%   94.01%   +0.35%     
==========================================
  Files         930      930              
  Lines      296508   299383    +2875     
==========================================
+ Hits       277688   281453    +3765     
+ Misses      18820    17930     -890     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@nihui nihui closed this Apr 14, 2026
@nihui nihui reopened this Apr 14, 2026
Summary:

  Add missing #include <math.h> to provide expm1f, log1pf and other
  float math function declarations. These functions are used in
  unary_op_expm1 and unary_op_log1p but were not explicitly included,
  causing build failures on some toolchains (e.g. musl-based systems).
@crafcat7 crafcat7 closed this Apr 14, 2026
@crafcat7 crafcat7 reopened this Apr 14, 2026
Summary:

  Add missing expm1f declaration and implementation to simplemath.h/cpp.
  This fixes build failures when NCNN_SIMPLEMATH is enabled on platforms
  like aarch64-linux-gnu, where unary_op_expm1 directly calls expm1f
  without including system math.h (to avoid conflicting with simplemath
  declarations).

  Related to: 8bdc2f0 (add expm1 and log1p unary conversion coverage)
Summary:

  Tighten the atanh unary test input range so bf16 casting cannot round into
  the +/-1 singularity, relax the torch_expm1 pnnx comparison tolerance for
  fp16 execution, and export the ONNX unary-ops-ext test with dynamo disabled
  to keep it on a pnnx-compatible ONNX path.
@github-actions github-actions Bot added the core label Apr 14, 2026
Summary:

  Add expm1/log1p support to pnnx expression fusion so generated Python code no
  longer falls back to raw aten calls for these unary ops. Align the expm1 test
  input transform with the traced model and skip unary tests on torch versions
  where required ops are unavailable or the legacy ONNX exporter cannot emit
  aten::sinh to opset 19.
Summary:

  Drop the onnx_unary_ops_ext test because it primarily exercises PyTorch ONNX
  exporter compatibility instead of pnnx behavior, and it fails across multiple
  supported torch version ranges for reasons unrelated to the converter.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants