chore(evalhub): sync provider ConfigMaps from upstream eval-hub#720
gnaulak-redhat wants to merge 5 commits into trustyai-explainability:main
Conversation
📝 Walkthrough: Six EvalHub provider ConfigMaps update runtime commands from a no-op placeholder (`true`) to a local test entrypoint, alongside formatting changes.

Changes: EvalHub provider ConfigMap formatting and runtime command updates.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 checks passed
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/configmaps/evalhub/provider-garak-kfp.yaml`:
- Around lines 29-30: Replace the modified local command value with the upstream placeholder: locate the YAML block with the `local` key and the `command: python tests/features/test_data/runtime/main.py` entry in the provider-garak-kfp configuration and restore it to the upstream value (`true`) so the local block matches the original upstream content exactly. Do not change any other keys or formatting in that provider-*.yaml file, to preserve byte-for-byte alignment.
In `@config/configmaps/evalhub/provider-garak.yaml`:
- Around lines 28-29: Revert the local-only override in provider-garak.yaml by restoring `runtime.local.command` to the upstream value (`true`) instead of the local test path. Locate the `runtime.local.command` entry and replace `python tests/features/test_data/runtime/main.py` with the upstream placeholder (`true`) so the file matches upstream exactly and check_configmap_sync.py stops reporting drift.
In `@config/configmaps/evalhub/provider-guidellm.yaml`:
- Around lines 24-25: This file alters the upstream config for provider-guidellm by setting the `local.command` override. Revert the change so the ConfigMap matches upstream exactly: restore the `local.command` entry in provider-guidellm.yaml to the upstream placeholder/template value so the file is no longer intentionally unsynced and scripts/check_configmap_sync.py passes.
In `@config/configmaps/evalhub/provider-ibm-clear.yaml`:
- Around lines 24-25: The local change replacing the upstream placeholder for `runtime.local.command` must be reverted: restore the upstream placeholder (`runtime.local.command: true`) in config/configmaps/evalhub/provider-ibm-clear.yaml so the file exactly matches upstream. Do not replace the placeholder with the repo-specific test entrypoint (`python tests/features/test_data/runtime/main.py`); either revert this local edit or land the equivalent change upstream and resync, keeping the file identical to eval-hub so `python scripts/check_configmap_sync.py` no longer reports drift.
In `@config/configmaps/evalhub/provider-lighteval.yaml`:
- Around lines 24-25: This PR hardcodes `runtime.local.command` to `python tests/features/test_data/runtime/main.py` in provider-lighteval.yaml, which breaks the upstream sync. Revert that change so `runtime.local.command` is restored to the upstream value (`true`, or the exact upstream token), removing the local test script, and ensure provider-*.yaml files remain identical to the upstream templates rather than being modified for local tests.
In `@config/configmaps/evalhub/provider-lm-evaluation-harness.yaml`:
- Around lines 29-30: Revert the hardcoded test runner in provider-lm-evaluation-harness.yaml by restoring the upstream placeholder for `runtime.local.command` (set it back to `true`) so the file matches upstream exactly: locate the `runtime.local.command` entry and replace the custom `python tests/features/test_data/runtime/main.py` value with `true` to satisfy scripts/check_configmap_sync.py.
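All six comments call for the same one-line revert. As a minimal, hypothetical sketch of the kind of drift detection the referenced scripts/check_configmap_sync.py performs (the real script may work differently), a line-by-line comparison against the upstream content is enough to surface the local override:

```python
# Hypothetical sketch of upstream-vs-local drift detection; the real
# scripts/check_configmap_sync.py may differ.

# Upstream placeholder vs. the local override flagged in the review.
UPSTREAM = """\
runtime:
  local:
    command: true
"""

LOCAL = """\
runtime:
  local:
    command: python tests/features/test_data/runtime/main.py
"""

def find_drift(local_text, upstream_text):
    """Return (line_no, local_line, upstream_line) for every differing line."""
    local_lines = local_text.splitlines()
    upstream_lines = upstream_text.splitlines()
    drift = []
    for n in range(max(len(local_lines), len(upstream_lines))):
        loc = local_lines[n] if n < len(local_lines) else "<missing>"
        up = upstream_lines[n] if n < len(upstream_lines) else "<missing>"
        if loc != up:
            drift.append((n + 1, loc, up))
    return drift

for n, loc, up in find_drift(LOCAL, UPSTREAM):
    print(f"line {n}: local {loc!r} != upstream {up!r}")
```

Reverting `command` back to `true` makes `find_drift` return an empty list, which is the "matches upstream exactly" condition each comment asks for.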
ℹ️ Review info
⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro | Run ID: 1a2d0b8f-f572-42ff-8555-9cf325c22537
📒 Files selected for processing (6):
- config/configmaps/evalhub/provider-garak-kfp.yaml
- config/configmaps/evalhub/provider-garak.yaml
- config/configmaps/evalhub/provider-guidellm.yaml
- config/configmaps/evalhub/provider-ibm-clear.yaml
- config/configmaps/evalhub/provider-lighteval.yaml
- config/configmaps/evalhub/provider-lm-evaluation-harness.yaml
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: ppadashe-psp, but it still needs approval from an approver for each of the affected files.
/retest-required
New changes are detected. LGTM label has been removed.
Force-pushed 9519019 to f6d1064
…onfigMaps

Replace yaml.dump round-trip with raw content embedding in sync-evalhub-providers.py to preserve upstream YAML formatting exactly. Re-sync all provider and collection ConfigMaps to reflect the preserved original formatting.

Co-Authored-By: Claude <noreply@anthropic.com>
Force-pushed f6d1064 to a96100d
Limit line length to 100 characters in yaml.dump to reduce excessive wrapping while keeping output consistent. Re-sync all provider and collection ConfigMaps. Co-Authored-By: Claude <noreply@anthropic.com>
Adjust line width to 117 characters in yaml.dump to better match yamllint line-length limits. Re-sync all affected provider and collection ConfigMaps. Co-Authored-By: Claude <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
config/configmaps/evalhub/provider-lm-evaluation-harness.yaml (1)
Lines 1-3100: ⚠️ Potential issue | 🟠 Major — Fix yamllint violations before merge. The file currently has 102 yamllint warnings that prevent CI/CD compliance:
- 101 line-length violations (lines exceeding 80 characters)
- 1 missing document start ("---") warning
Address these issues by either rewrapping long lines or adjusting yamllint configuration rules in CI/CD to match the file's needs.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@config/configmaps/evalhub/provider-lm-evaluation-harness.yaml` around lines 1-3100: the ConfigMap value lm_evaluation_harness.yaml fails yamllint due to a missing document start and many >80-character lines. Add a YAML document start ("---") at the top of the lm_evaluation_harness.yaml value, and either rewrap long scalar lines (especially the long description and name fields inside the multi-line string) to <=80 characters or update the CI yamllint config to allow longer lines for this provider config. Focus on the metadata/data key where lm_evaluation_harness.yaml is defined and on the long "description" and "name" fields in benchmark entries (e.g., id: lm_evaluation_harness, id: arc_easy, and the many benchmark description blocks).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Outside diff comments:
In `@config/configmaps/evalhub/provider-lm-evaluation-harness.yaml`:
- Around line 1-3100: The ConfigMap value lm_evaluation_harness.yaml is failing
yamllint due to a missing document start and many >80-character lines; add a
YAML document start ("---") at the top of the lm_evaluation_harness.yaml value
and either rewrap long scalar lines (especially long description and name fields
inside the lm_evaluation_harness.yaml multi-line string) to <=80 chars or update
the CI yamllint config to allow longer line-lengths for this provider config;
focus changes around the metadata/data key where lm_evaluation_harness.yaml is
defined and the long "description" and "name" fields in benchmark entries (e.g.,
id: lm_evaluation_harness, id: arc_easy, and the many benchmark description
blocks).
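Both findings above (missing `---` document start, over-long lines) are mechanical. A small illustrative sketch of the same checks — not the actual yamllint implementation:

```python
def lint_basics(text, max_len=80):
    """Flag a missing '---' document start and lines longer than max_len."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        problems.append("missing document start '---'")
    for n, line in enumerate(lines, start=1):
        if len(line) > max_len:
            problems.append(f"line {n}: {len(line)} > {max_len} characters")
    return problems

# A 95-character line with no document start triggers both findings.
sample = "key: " + "x" * 90 + "\nshort: ok\n"
for p in lint_basics(sample):
    print(p)
```

In a real setup the alternative fix is configuring yamllint's `line-length` rule rather than rewrapping, as the comment notes.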
ℹ️ Review info
⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro | Run ID: fa6652b6-b146-4787-ae1e-adf769f2e73b
📒 Files selected for processing (8):
- config/configmaps/evalhub/collection-toxicity-and-ethical-principles.yaml
- config/configmaps/evalhub/provider-garak-kfp.yaml
- config/configmaps/evalhub/provider-garak.yaml
- config/configmaps/evalhub/provider-guidellm.yaml
- config/configmaps/evalhub/provider-ibm-clear.yaml
- config/configmaps/evalhub/provider-lighteval.yaml
- config/configmaps/evalhub/provider-lm-evaluation-harness.yaml
- hack/sync-evalhub-providers.py
✅ Files skipped from review due to trivial changes (3)
- config/configmaps/evalhub/collection-toxicity-and-ethical-principles.yaml
- hack/sync-evalhub-providers.py
- config/configmaps/evalhub/provider-ibm-clear.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- config/configmaps/evalhub/provider-guidellm.yaml
- config/configmaps/evalhub/provider-lighteval.yaml
- config/configmaps/evalhub/provider-garak-kfp.yaml
Adjust line width to 130 characters in yaml.dump. Re-sync affected provider and collection ConfigMaps. Co-Authored-By: Claude <noreply@anthropic.com>
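The commits above successively tune PyYAML's `width` argument (100, 117, 130) to control where long scalars wrap. A small sketch, assuming PyYAML, of the effect the commits are adjusting:

```python
import yaml  # PyYAML; its 'width' argument sets the preferred wrap column

# A long plain scalar, similar to the long benchmark descriptions in the
# provider ConfigMaps.
data = {"description": " ".join(["token"] * 40)}

narrow = yaml.dump(data, width=60)
wide = yaml.dump(data, width=130)

# The same content wraps onto more, shorter lines at the smaller width,
# while the parsed value is identical either way.
print(len(narrow.splitlines()), ">", len(wide.splitlines()))
```

A larger `width` reduces wrapping noise in diffs but risks tripping yamllint's `line-length` rule, which is the trade-off the commits iterate on.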
@gnaulak-redhat: The following test failed.
Sync eval-hub/eval-hub main config provider YAML files.

These files are auto-synced with the hack/sync-evalhub-providers.py script. This is in part to fix the CI pipeline on the eval-hub/eval-hub repository side.
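A hypothetical sketch of the raw-content embedding approach described in the commits (the real hack/sync-evalhub-providers.py may differ; the names below are illustrative): the upstream file is indented verbatim under a block scalar instead of being round-tripped through yaml.dump, so its formatting survives byte-for-byte.

```python
def build_configmap(name, key, raw_content):
    """Embed raw_content verbatim as a block scalar in a ConfigMap manifest."""
    # Indent every non-empty line by four spaces under 'data.<key>: |';
    # no yaml.dump round-trip, so upstream formatting is preserved exactly.
    indented = "\n".join(
        ("    " + line) if line else "" for line in raw_content.splitlines()
    )
    return (
        "apiVersion: v1\n"
        "kind: ConfigMap\n"
        "metadata:\n"
        f"  name: {name}\n"
        "data:\n"
        f"  {key}: |\n"
        f"{indented}\n"
    )

# Illustrative names and content, not taken from the actual repo files.
manifest = build_configmap(
    "provider-garak", "garak.yaml", "runtime:\n  local:\n    command: true\n"
)
print(manifest)
```

Because the embedded text is copied character for character, a later comparison against upstream (as check_configmap_sync.py does) can be a plain string equality check.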