2 changes: 2 additions & 0 deletions .gersemirc
@@ -0,0 +1,2 @@
indent: 2
line_length: 100
27 changes: 27 additions & 0 deletions .github/workflows/build.yaml
@@ -45,3 +45,30 @@ jobs:
node_type: "cpu4"
script: "ci/build_docs.sh"
sha: ${{ inputs.sha }}
wheel-build-gersemi-rapids-cmake:
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/wheels-build.yaml@main
with:
build_type: ${{ inputs.build_type || 'branch' }}
branch: ${{ inputs.branch }}
sha: ${{ inputs.sha }}
date: ${{ inputs.date }}
script: ci/build_wheel_gersemi_rapids_cmake.sh
package-name: gersemi-rapids-cmake
package-type: python
pure-wheel: true
append-cuda-suffix: false
# This selects "ARCH=amd64 + the earliest supported Python + CUDA".
matrix_filter: map(select(.ARCH == "amd64")) | min_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]) | [.]
wheel-publish-gersemi-rapids-cmake:
needs: wheel-build-gersemi-rapids-cmake
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/wheels-publish.yaml@main
with:
build_type: ${{ inputs.build_type || 'branch' }}
branch: ${{ inputs.branch }}
sha: ${{ inputs.sha }}
date: ${{ inputs.date }}
package-name: gersemi-rapids-cmake
package-type: python
publish_to_pypi: true
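The `matrix_filter` expression used above is easiest to understand by running it through plain `jq` against a hypothetical matrix (these entries are illustrative, not the real shared-workflows matrix):

```shell
# Hypothetical matrix entries; the real list comes from shared-workflows.
echo '[
  {"ARCH": "amd64", "PY_VER": "3.13", "CUDA_VER": "13.0.2"},
  {"ARCH": "arm64", "PY_VER": "3.10", "CUDA_VER": "12.9.1"},
  {"ARCH": "amd64", "PY_VER": "3.10", "CUDA_VER": "12.9.1"}
]' | jq -c 'map(select(.ARCH == "amd64"))
  | min_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))])
  | [.]'
# -> [{"ARCH":"amd64","PY_VER":"3.10","CUDA_VER":"12.9.1"}]
```

Splitting on `.` and converting to numbers matters: a plain string comparison would sort `"3.9"` after `"3.10"`.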
24 changes: 24 additions & 0 deletions .github/workflows/pr.yaml
@@ -15,6 +15,8 @@ jobs:
- checks
- conda-cpp-tests
- docs-build
- wheel-build-gersemi-rapids-cmake
- wheel-tests-gersemi-rapids-cmake
- telemetry-setup
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/pr-builder.yaml@main
@@ -50,6 +52,28 @@ jobs:
arch: "amd64"
container_image: "rapidsai/ci-conda:26.04-latest"
script: "ci/build_docs.sh"
wheel-build-gersemi-rapids-cmake:
needs: telemetry-setup
Member:
Suggested change:
-needs: telemetry-setup
+needs: checks

This suggestion isn't only about consistency... it could also prevent CI being blocked. Note that telemetry-setup here has continue-on-error: true, so that it could fail and not block CI.

I think having it in a needs: entry means that if telemetry-setup is skipped, wheel-build-gersemi-rapids-cmake would be skipped too, which would fail pr-builder, which would block CI.

telemetry-setup won't be required here if you accept the suggestion to remove rapids-telemetry-record from the wheel-building script. Let's rely on checks as most other RAPIDS wheel-build-* workflows do.

secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/wheels-build.yaml@main
with:
build_type: pull-request
script: ci/build_wheel_gersemi_rapids_cmake.sh
package-name: gersemi-rapids-cmake
package-type: python
pure-wheel: true
append-cuda-suffix: false
# This selects "ARCH=amd64 + the earliest supported Python + CUDA".
matrix_filter: map(select(.ARCH == "amd64")) | min_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]) | [.]
wheel-tests-gersemi-rapids-cmake:
needs: wheel-build-gersemi-rapids-cmake
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/wheels-test.yaml@main
Member:
Suggested change:
-uses: rapidsai/shared-workflows/.github/workflows/wheels-test.yaml@main
+uses: rapidsai/shared-workflows/.github/workflows/custom-job.yaml@main

This doesn't appear to require GPUs, isn't sensitive to CPU architecture, and doesn't need to be kept fully in sync with the RAPIDS support matrix. As long as it'll run in the range of Python versions supported by RAPIDS, that should be sufficient.

I think this should be done with custom-job and target CPU runners, which are cheaper and much less scarce.

I recommend adopting something like https://github.com/rapidsai/cuvs/blob/efe7c978d0aac628cbbf2222de26c525d5393311/.github/workflows/pr.yaml#L180-L204 to cover the oldest and latest Python version that RAPIDS supports.

with:
build_type: pull-request
script: ci/test_wheel_gersemi_rapids_cmake.sh
# This selects "ARCH=amd64 + the earliest supported Python + CUDA".
matrix_filter: map(select(.ARCH == "amd64")) | min_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]) | [.]
telemetry-summarize:
# This job must use a self-hosted runner to record telemetry traces.
runs-on: linux-amd64-cpu4
26 changes: 26 additions & 0 deletions .gitignore
@@ -23,3 +23,29 @@ docs/_build

# vim
*.sw[a-z]

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
96 changes: 70 additions & 26 deletions .pre-commit-config.yaml
@@ -1,4 +1,4 @@
# SPDX-FileCopyrightText: Copyright (c) 2023-2025, NVIDIA CORPORATION.
# SPDX-FileCopyrightText: Copyright (c) 2023-2026, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0

repos:
@@ -18,6 +18,27 @@ repos:
- id: check-json
- id: pretty-format-json
args: ["--autofix", "--no-sort-keys"]
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.8
hooks:
- id: ruff-check
args: [--fix]
Member:
Suggested change:
-args: [--fix]
+args: [--fix, --config, "pyproject.toml"]

Let's make it explicit that this should use configuration from pyproject.toml. I don't know if it's still the case, but in the past I've found that this doesn't happen by default with ruff.

And even if it did, being explicit is helpful for understanding how the check is working.

exclude: |
(?x)
^docs/conf[.]py$
Comment on lines +26 to +28
Member:
Suggested change:
-exclude: |
-  (?x)
-  ^docs/conf[.]py$

Instead of excluding the entire file, I recommend using [tool.ruff.lint.per-file-ignores] in pyproject.toml to ignore specific ruff errors that you don't want applied to conf.py.

That way, ruff still has a chance to catch other types of correctness issues in that file.

Example: https://github.com/microsoft/LightGBM/blob/6af94aadf37b183fe6ad671141d4eeafdcc8c1e3/python-package/pyproject.toml#L186
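The per-file-ignores approach looks roughly like this (the rule code below is a placeholder; pick the ones ruff actually reports for conf.py):

```toml
[tool.ruff.lint.per-file-ignores]
# Placeholder rule code; substitute whatever ruff flags in docs/conf.py.
"docs/conf.py" = [
    "E402",  # module-level import not at top of file, common in Sphinx conf.py
]
```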

- id: ruff-format
Member:
Suggested change:
 - id: ruff-format
+  args: [--config, "pyproject.toml"]

exclude: |
(?x)
^docs/conf[.]py$
Comment on lines +30 to +32
Member:
Suggested change:
-exclude: |
-  (?x)
-  ^docs/conf[.]py$

I think we should just let ruff auto-format docs/conf.py, to keep all the Python code in the repo looking the same.

- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.19.0
hooks:
- id: mypy
additional_dependencies:
- gersemi
Member:
Suggested change:
-- gersemi
+- &gersemi gersemi==0.25.1

Let's use the same pinning through all of these checks that's used in the package itself, so a new release of gersemi won't break CI unnecessarily.

(this implies switching the existing &gersemi line to *gersemi)
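For readers unfamiliar with the YAML anchor/alias syntax being suggested, a minimal illustration (hypothetical hook names):

```yaml
# The first occurrence defines the anchor *and* the value...
hook-one:
  additional_dependencies:
    - &gersemi gersemi==0.25.1
# ...later occurrences reuse it, so there is a single place to bump the pin.
hook-two:
  additional_dependencies:
    - *gersemi
```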

exclude: |
(?x)
^docs/conf[.]py$
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v20.1.4
hooks:
@@ -39,19 +60,28 @@ repos:
hooks:
- id: rapids-dependency-file-generator
args: ["--clean", "--warn-all", "--strict"]
- repo: https://github.com/rapidsai/pre-commit-hooks
rev: v1.2.1
hooks:
- id: verify-copyright
args: [--fix, --spdx]
files: |
(?x)
[.](cmake|cpp|cu|cuh|h|hpp|sh|pxd|py|pyx)$|
CMakeLists[.]txt$|
meta[.]yaml$|
^testing/export/write_language-multiple-nested-enables/A/B/static[.]not_cu$|
dependencies[.]yaml$|
pyproject[.]toml$|
^[.]pre-commit-config[.]yaml$|
[.]stubs$
exclude: |
(?x)
^python/gersemi-rapids-cmake/gersemi_rapids_cmake_detail/stubs/rapids/rapids-cmake/generated/
- id: verify-codeowners
args: [--fix, --project-prefix=rapids, --no-cpp, --no-python]
- repo: local
hooks:
- id: cmake-format
name: cmake-format
entry: ./ci/checks/run-cmake-format.sh cmake-format
language: python
types: [cmake]
# Note that pre-commit autoupdate does not update the versions
# of dependencies, so we'll have to update this manually.
additional_dependencies:
- cmakelang==0.6.13
verbose: true
require_serial: true
- id: cmake-lint
name: cmake-lint
entry: ./ci/checks/run-cmake-format.sh cmake-lint
@@ -67,22 +97,36 @@ repos:
(?x)^(
^testing/.*$
)
- repo: https://github.com/rapidsai/pre-commit-hooks
rev: v1.2.1
hooks:
- id: verify-copyright
args: [--fix, --spdx]
- id: regenerate-stubs
name: regenerate-stubs
entry: python3 ci/regenerate_stubs.py
language: python
types: [cmake]
additional_dependencies:
- &gersemi gersemi==0.25.1
pass_filenames: false
- id: gersemi-rapids-cmake
name: gersemi-rapids-cmake
entry: ci/checks/gersemi.sh -i --warnings-as-errors --extensions rapids_cmake_detail --
language: python
types: [cmake]
files: |
(?x)
[.](cmake|cpp|cu|cuh|h|hpp|sh|pxd|py|pyx)$|
CMakeLists[.]txt$|
meta[.]yaml$|
^testing/export/write_language-multiple-nested-enables/A/B/static[.]not_cu$|
dependencies[.]yaml$|
pyproject[.]toml$|
^[.]pre-commit-config[.]yaml$
- id: verify-codeowners
args: [--fix, --project-prefix=rapids, --no-cpp, --no-python]
^rapids-cmake/
additional_dependencies:
- *gersemi
require_serial: true
- id: gersemi-testing
name: gersemi-testing
entry: ci/checks/gersemi.sh -i --warnings-as-errors --extensions rapids_cmake_detail --definitions testing/ --
language: python
types: [cmake]
files: |
(?x)
^testing/
additional_dependencies:
- *gersemi
require_serial: true
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
46 changes: 46 additions & 0 deletions ci/build_wheel_gersemi_rapids_cmake.sh
@@ -0,0 +1,46 @@
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

source rapids-init-pip

rapids-logger "Generating build requirements"

rapids-dependency-file-generator \
--output requirements \
--file-key "py_build_gersemi_rapids_cmake" \
--matrix "" \
| tee /tmp/requirements-build.txt

rapids-logger "Installing build requirements"
rapids-pip-retry install \
-v \
--prefer-binary \
-r /tmp/requirements-build.txt

Comment on lines +9 to +22
Member:
Suggested change:
-rapids-logger "Generating build requirements"
-
-rapids-dependency-file-generator \
-  --output requirements \
-  --file-key "py_build_gersemi_rapids_cmake" \
-  --matrix "" \
-| tee /tmp/requirements-build.txt
-
-rapids-logger "Installing build requirements"
-rapids-pip-retry install \
-  -v \
-  --prefer-binary \
-  -r /tmp/requirements-build.txt
Since the wheel is getting built with build isolation (where a dedicated virtualenv is set up using the build dependencies declared in pyproject.toml), I think all of this is unnecessary and should be removed.
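In other words, with build isolation the only place build dependencies need to be declared is pyproject.toml, roughly like this (the requires list is illustrative, not this package's actual one):

```toml
[build-system]
# Illustrative only; this package's real build requirements may differ.
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"
```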

rapids-generate-version > ./VERSION
Member:
I re-read @vyasr's comments at rapidsai/pre-commit-hooks#62 (comment) today and it reminded me... we should have a plan to deal with the following scenario:

  1. you've run pre-commit run on your clone of cuDF early in 26.04 development
  2. some API-breaking changes are merged in rapids-cmake 26.04
  3. you run pre-commit run on your clone of cuDF again
  4. gersemi checks break, because pre-commit matches the cached gersemi-rapids-cmake==26.04.00, which is inconsistent with the tip of rapids-cmake

I'd consider this blocking only because the answer might be "don't use pre-commit and a Python package".

Member Author:

Crap. We might have to take the same approach we took with rapids-metadata... and I've heard of people and/or software packages (don't remember which) getting grumpy about verify-alpha-spec making internet requests.

Member Author:

One possibility is, in cases where a breaking change was made to rapids-cmake, we update the gersemi-rapids-cmake dependency to >=26.04.00a123 across the board... but I don't like that solution either.

Member (@jameslamb, Jan 22, 2026):
> We might have to take the same approach we took with rapids-metadata... and I've heard of people and/or software packages (don't remember which) getting grumpy about verify-alpha-spec making internet requests.

Yeah I'd love to avoid introducing more network requests if we could. Especially unauthenticated requests to anything owned by GitHub, which face lower rate limits than authenticated requests.

On the other hand, the solution we have for formatting CMake code today already relies on fetching something from GitHub at runtime

FORMAT_FILE_URL="https://raw.githubusercontent.com/rapidsai/rapids-cmake/${RAPIDS_BRANCH}/cmake-format-rapids-cmake.json"
export RAPIDS_CMAKE_FORMAT_FILE=/tmp/rapids_cmake_ci/cmake-format-rapids-cmake.json
mkdir -p "$(dirname "${RAPIDS_CMAKE_FORMAT_FILE}")"
wget -O ${RAPIDS_CMAKE_FORMAT_FILE} "${FORMAT_FILE_URL}"

(rapidsai/cudf - ci/check_style.sh)

Though that's a single small-ish file, probably smaller than a tarball of all the stubs here would be. And that being in check_style.sh means it's not executed by pre-commit run locally, which leads to that annoying problem of "pre-commit checks pass locally but fail in CI because there are cmake-format errors".

That download might be causing problems too if it was run every time anyone did pre-commit run locally.

> in cases where a breaking change was made to rapids-cmake, we update the gersemi-rapids-cmake dependency to >=26.04.00a123 across the board... but I don't like that solution either.

I agree, this would solve the problem but also introduces some new process and maintenance burden that may not be worth it.


I can think of some other options, none are that great but maybe they'll spark some ideas.

Option 1: system hook to update pre-commit's cache

I have a weird idea... could we hack in another system hook running before this one that updates pre-commit's cache? It could query the available versions like this (using cudf-cu13 just to have a working example, but this would be for gersemi-rapids-cmake):

pip index versions \
    --index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple \
    --pre \
    --json \
    cudf-cu13 \
| jq -r '."latest"'
# 26.4.0a37

And compare that to whatever's installed in pre-commit's cache (I don't know how to do that, but maybe it's possible).

And then try to update the cache (just for gersemi-rapids-cmake) if there's a newer published gersemi-rapids-cmake available?

I don't know how complex it would be to get that script right and to have it work correctly with different RAPIDS versions (e.g. on release/26.02 during burndown, it should get the latest 26.02 version not the absolute latest which would be 26.04.*), the default_language_version in .pre-commit-config.yaml, and everything else that goes into populating that virtualenv.

And this is definitely way out in "unsupported" territory so I don't know how much risk that introduces of this being broken by future changes in pre-commit.

But maybe that'd give us a path to what we want without needing new PRs to all repos?

Option 2: manage the venv outside of pre-commit

Just make this a system hook instead of a python hook and always re-update gersemi-rapids-cmake when it's run.

With a script like this:

#!/bin/bash

VERSION=$(cat ./VERSION)
venv_dir=$(mktemp -d)
python -m virtualenv "${venv_dir}"
source "${venv_dir}/bin/activate"
python -m pip install \
   --extra-index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
   "gersemi-rapids-cmake>=${VERSION}a0"

# flags like -i / --extensions come from the hook's args below
gersemi "${@}"
- id: gersemi-format-cmake
  name: gersemi-format-cmake
  entry: ./ci/checks/run-gersemi.sh
  args: [-i, --extensions, rapids_cmake, --]
  files: '(\.cmake$|CMakeLists\.txt$)'
  types_or: [file]
  language: system
  pass_filenames: true

That would still do a bunch of network requests to install Python packages, but those should get cached in our package proxy so shouldn't be a problem in CI. And locally, pip install-ing a few things from pypi.org / the RAPIDS nightly index on every pre-commit run shouldn't be a huge problem.

That script is a naive implementation just to show the idea. It could be updated with things to make it less expensive locally (like checking if the venv already exists and checking if an update of gersemi-rapids-cmake is actually needed).

Would still require a script to be checked into every repo but that's already the case with cmake-format so I think that's acceptable if it gets us the behavior we want.

Member:

I need to go spend time on other things for a bit, hopefully these bad ideas inspire better ones haha

Member Author:

I have another idea. I propose:

Option 3: Add a cache: false option to pre-commit

This is the second time that pre-commit's caching has gotten in our way. I propose adding a cache option to .pre-commit-config.yaml to disable caching the environment for a hook.

I've opened pre-commit/pre-commit#3611 to propose this design change.

Contributor (@bdice, Jan 26, 2026):

My view is that we should start with the simplest possible solution (GitHub network requests) and step back into more complexity from there. Every build of our software already hammers GitHub for rapids-cmake's fetching. I know this can and does bite us from time to time, but it'd be worth trying as a starting point.

If that fails in production, then I would recommend option 2 (custom script / custom management of the gersemi rapids-cmake extension package). I do not like that options 1 and 3 require some hacking around the behavior of pre-commit itself.

Member Author:

pre-commit/pre-commit#3611 was a washout.

My other thinking is to do something similar to what the existing scripts are already doing: search locally to see if rapids-cmake is already in a build tree somewhere and use that. I'd be open to specifics on how to carry this out.

Contributor:

I assume resolving the caching issue is why this work stalled?

Member Author:

Yes

rapids-generate-version > ./python/gersemi-rapids-cmake/gersemi_rapids_cmake_detail/VERSION
Member:
Suggested change:
-rapids-generate-version > ./python/gersemi-rapids-cmake/gersemi_rapids_cmake_detail/VERSION

It looks to me like this is a symlink pointing to the top-level VERSION. It shouldn't need a manual update like this.
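The symlink behavior being relied on can be sketched in a scratch directory (paths here are a throwaway reproduction, not the repo's actual layout):

```shell
# Reproduce the layout: a package-level VERSION symlinked to the top level.
tmp=$(mktemp -d)
mkdir -p "${tmp}/python/pkg"
echo "26.04.00a0" > "${tmp}/VERSION"
ln -s ../../VERSION "${tmp}/python/pkg/VERSION"

# Writing the top-level file once is enough; reads through the symlink
# see the update, so no second rapids-generate-version call is needed.
echo "26.04.00a1" > "${tmp}/VERSION"
cat "${tmp}/python/pkg/VERSION"
# -> 26.04.00a1
```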


cd ./python/gersemi-rapids-cmake

Member:
Builds here currently have the pip deprecation warning that @mmccarty just fixed everywhere else in RAPIDS:

DEPRECATION: Setting PIP_CONSTRAINT will not affect build constraints in the future, pip 26.2 will enforce this behaviour change. A possible replacement is to specify build constraints using --build-constraint or PIP_BUILD_CONSTRAINT. To disable this warning without any build constraints set --use-feature=build-constraint or PIP_USE_FEATURE="build-constraint".
Installing build dependencies: started

(build link)

Let's apply the fix that was applied everywhere else in rapidsai/build-planning#242

Suggested change:
+RAPIDS_PIP_WHEEL_ARGS=(
+    -w "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}"
+    -v
+    --no-deps
+    --disable-pip-version-check
+)
+# unset PIP_CONSTRAINT (set by rapids-init-pip)... it doesn't affect builds as of pip 25.3, and
+# results in an error from 'pip wheel' when set and --build-constraint is also passed
+unset PIP_CONSTRAINT

rapids-logger "Building 'gersemi-rapids-cmake' wheel"
rapids-telemetry-record build-gersemi-rapids-cmake.log rapids-pip-retry wheel \
Member:
Suggested change:
-rapids-telemetry-record build-gersemi-rapids-cmake.log rapids-pip-retry wheel \
+rapids-pip-retry wheel \

I only see rapids-telemetry-record used in rmm and cudf (https://github.com/search?q=org%3Arapidsai+%22rapids-telemetry-record%22+AND+NOT+is%3Aarchived&type=code).

I think it might be left over from some earlier, unfinished experiment with telemetry tracking.

If you didn't intentionally include this and it's just here because you copied things from one of those repos, remove it... one less thing that can break, and I doubt that the builds of this library will be so expensive that the telemetry data would be worth that risk and maintenance burden right now.

-w dist \
-v \
--no-deps \
--disable-pip-version-check \
.

cp dist/* "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}"
Comment on lines +30 to +36
Member:
Suggested change:
-    -w dist \
-    -v \
-    --no-deps \
-    --disable-pip-version-check \
-    .
-
-cp dist/* "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}"
+    "${RAPIDS_PIP_WHEEL_ARGS[@]}" \
+    .

This pairs with my suggestion above about making this match all the other RAPIDS wheel-building scripts, using the patterns from rapidsai/build-planning#242

Notice that it also cuts out an unnecessary step via -w "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}".

This is a pure-Python project that isn't being post-processed in any way (e.g. auditwheel repair)... instead of writing it to dist/ then copying it to the artifact-upload location, we can just have pip wheel directly write to the artifact-upload location.


rapids-logger "validate packages with 'pydistcheck'"
pydistcheck \
--inspect \
"$(echo "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}"/*.whl)"
Member:
Every RAPIDS project that uses pydistcheck opts into a limited set of checks by using the select configuration. Like this:

[tool.pydistcheck]
select = [
    "distro-too-large-compressed",
]

# PyPI limit is 100 MiB, fail CI before we get too close to that
max_allowed_size_compressed = '75M'

(rapidsai/nx-cugraph - pyproject.toml)

This PR doesn't have a [tool.pydistcheck] table in pyproject.toml or the --select CLI argument, so it's running ALL of the pydistcheck checks: https://pydistcheck.readthedocs.io/en/latest/check-reference.html

That's fine if you want to do that (I wrote them and think they're mostly useful), but commenting in case this was unintentional.


rapids-logger "validate packages with 'twine'"
twine check \
--strict \
"$(echo "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}"/*.whl)"
Comment on lines +38 to +46
Member:

The convention across the rest of RAPIDS is to put this stuff in a file ci/validate_wheel.sh.

For example: https://github.com/rapidsai/nx-cugraph/blob/2530c42e24d9b92190349e61a2941ebab225ac85/ci/build_wheel_nx-cugraph.sh#L10

Could we do that here, for consistency? I know the indirection seems like it might not be worth it, but I think it is to make all-of-RAPIDS searches and automated updates easier.

11 changes: 11 additions & 0 deletions ci/checks/gersemi.sh
@@ -0,0 +1,11 @@
#!/bin/bash

# SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0

# It's not possible for pre-commit to install a local Python package, so
# manually add it to the PYTHONPATH instead.

set -euo pipefail

PYTHONPATH="$(dirname "$0")/../../python/gersemi-rapids-cmake:${PYTHONPATH:-}" gersemi "${@}"
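A minimal sketch of why the `PYTHONPATH` prepend works (scratch package, not the real `gersemi_rapids_cmake_detail`):

```shell
# Create a throwaway package outside site-packages...
tmp=$(mktemp -d)
mkdir -p "${tmp}/mypkg"
printf 'VALUE = 42\n' > "${tmp}/mypkg/__init__.py"

# ...and make it importable for a single command by prepending its parent
# directory to PYTHONPATH, as ci/checks/gersemi.sh does for the local package.
PYTHONPATH="${tmp}:${PYTHONPATH:-}" python3 -c 'import mypkg; print(mypkg.VALUE)'
# -> 42
```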