feat: kube-rbac-proxy sidecar container for eval-hub #716
nbs-rh wants to merge 8 commits into trustyai-explainability:main from
Conversation
Skipping CI for Draft Pull Request.
📝 Walkthrough

Adds a kube-rbac-proxy sidecar to EvalHub Deployments, moves EvalHub to bind to 127.0.0.1 on an internal app port (8444), shifts probe and service wiring to use the proxy, extends operator ConfigMap keys (proxy image, disable_auth, resource overrides), and migrates tests to Ginkgo/Gomega validating the two-container topology and ConfigMap fallbacks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant KubeRBACProxy as kube-rbac-proxy
    participant EvalHub
    participant ConfigMap
    Note over Client,KubeRBACProxy: External TLS + auth handled by sidecar
    Client->>KubeRBACProxy: HTTPS request (auth headers)
    KubeRBACProxy->>KubeRBACProxy: Validate auth, terminate TLS
    KubeRBACProxy->>EvalHub: HTTP forward to http://127.0.0.1:8444/
    EvalHub->>EvalHub: Serve request on loopback
    EvalHub-->>KubeRBACProxy: HTTP response
    KubeRBACProxy-->>Client: HTTPS response
    Note over KubeRBACProxy,EvalHub: Health probe flow
    KubeRBACProxy->>EvalHub: GET /api/v1/health (upstream)
    EvalHub-->>KubeRBACProxy: 200 OK
    Note over KubeRBACProxy,ConfigMap: Image resolution
    KubeRBACProxy->>ConfigMap: Lookup proxy image (operator ConfigMap)
    ConfigMap-->>KubeRBACProxy: Image or fallback
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (5)
controllers/evalhub/deployment_test.go (1)
47-51: Optional: use the package-level constants in this fixture too.

Same nit as `suite_test.go` line 127 — `build_test.go` and `unit_test.go` already use `configMapEvalHubImageKey` and `configMapKubeRBACProxyImageKey`. Using them here would make the fixture rename-safe.

♻️ Suggested change

```diff
 Data: map[string]string{
-	"evalHubImage":    "quay.io/ruimvieira/eval-hub:test",
-	"kube-rbac-proxy": "quay.io/openshift/origin-kube-rbac-proxy:4.19",
+	configMapEvalHubImageKey:       "quay.io/ruimvieira/eval-hub:test",
+	configMapKubeRBACProxyImageKey: "quay.io/openshift/origin-kube-rbac-proxy:4.19",
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment_test.go` around lines 47-51: the fixture hardcodes the ConfigMap keys "evalHubImage" and "kube-rbac-proxy"; replace those literal keys in the Data map with the package-level constants configMapEvalHubImageKey and configMapKubeRBACProxyImageKey so the test becomes rename-safe (locate the `Data: map[string]string{...}` block in the test fixture and swap the string literals for the constants).

controllers/evalhub/constants.go (1)
36-37: Optional: align the ConfigMap key naming style.
`configMapEvalHubImageKey = "evalHubImage"` (camelCase) and `configMapKubeRBACProxyImageKey = "kube-rbac-proxy"` (kebab-case, identical to the container name) use different conventions in the same operator ConfigMap. Consider renaming the key value to e.g. `"kubeRBACProxyImage"` for consistency and to remove the accidental coincidence with `kubeRBACProxyContainerName`. This is a public surface (operator ConfigMap key), so changing it later would be a breaking change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/constants.go` around lines 36-37: the two ConfigMap key constants use inconsistent naming; update the value of configMapKubeRBACProxyImageKey from "kube-rbac-proxy" to a camelCase form (e.g. "kubeRBACProxyImage") to match configMapEvalHubImageKey, update all references/usages of configMapKubeRBACProxyImageKey throughout the codebase to use the new key, and do not change the constant name so callers still reference configMapKubeRBACProxyImageKey while the string literal becomes "kubeRBACProxyImage" (also verify there is no accidental collision with kubeRBACProxyContainerName usage).

controllers/evalhub/deployment.go (2)
78-82: Minor: redundant fallback handling for the `kube-rbac-proxy` image.

`utils.GetImageFromConfigMapWithFallback` already returns `defaultKubeRBACProxyImage` when the ConfigMap key is missing and reports the error to the caller. The `if err != nil` branch in `buildDeploymentSpec` then re-assigns the same fallback. It's harmless but duplicative. The same pattern is used for `getEvalHubImage`, so this can be deferred. Either short-circuit the assignment or change `getKubeRBACProxyImage` to swallow the not-found error and return `nil` after logging.

Also applies to: 419-426
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment.go` around lines 78 - 82, The code in buildDeploymentSpec redundantly reassigns kubeRBACProxyImage to defaultKubeRBACProxyImage when getKubeRBACProxyImage already returns that default via utils.GetImageFromConfigMapWithFallback; remove the duplicate fallback by simply assigning the returned value without overwriting on err (i.e., let kubeRBACProxyImage, err := r.getKubeRBACProxyImage(ctx) stand and drop the kubeRBACProxyImage = defaultKubeRBACProxyImage inside the if err != nil block), or alternatively change getKubeRBACProxyImage/getEvalHubImage to handle the not-found case internally (log and return the default without surfacing an error) so callers like buildDeploymentSpec don't need to reassign the default.
219-231: Optional: use single `fmt.Sprintf` calls for proxy args.

Strings like `"--secure-listen-address=0.0.0.0:" + fmt.Sprintf("%d", servicePort)` and `"--proxy-endpoints-port=" + fmt.Sprintf("%d", kubeRBACProxyHealthPort)` mix concatenation and formatting. A single `fmt.Sprintf` is clearer and avoids the intermediate allocation.

♻️ Suggested change

```diff
 Args: []string{
-	"--secure-listen-address=0.0.0.0:" + fmt.Sprintf("%d", servicePort),
+	fmt.Sprintf("--secure-listen-address=0.0.0.0:%d", servicePort),
 	"--upstream=" + upstreamURL,
 	"--upstream-ca-file=" + upstreamCAPath,
 	"--config-file=" + kubeRBACProxyConfigMountPath,
 	"--tls-cert-file=" + tlsSecretMountPath + "/" + tlsCertFile,
 	"--tls-private-key-file=" + tlsSecretMountPath + "/" + tlsKeyFile,
-	"--proxy-endpoints-port=" + fmt.Sprintf("%d", kubeRBACProxyHealthPort),
+	fmt.Sprintf("--proxy-endpoints-port=%d", kubeRBACProxyHealthPort),
 	"--ignore-paths=" + evalHubHealthPath,
 	"--auth-header-fields-enabled",
 	"--auth-header-user-field-name=X-User",
 	"--v=0",
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment.go` around lines 219-231: the Args entries in the container spec build flags by mixing concatenation and fmt.Sprintf (see the Args slice and variables servicePort, kubeRBACProxyHealthPort, upstreamURL, upstreamCAPath, kubeRBACProxyConfigMountPath, tlsSecretMountPath, tlsCertFile, tlsKeyFile, evalHubHealthPath); replace those concatenations with single fmt.Sprintf calls for each flag (e.g. use fmt.Sprintf("--secure-listen-address=0.0.0.0:%d", servicePort) and fmt.Sprintf("--proxy-endpoints-port=%d", kubeRBACProxyHealthPort)) and similarly format other flags that currently combine literals and variables so each Args element is constructed with one fmt.Sprintf for clarity and fewer allocations.

controllers/evalhub/suite_test.go (1)
126-128: Optional: use the package-level key constants in the test fixture.
`build_test.go` and `unit_test.go` reference these via `configMapEvalHubImageKey`/`configMapKubeRBACProxyImageKey`. Using the same constants here would make tests robust to a future key-rename and avoid duplicating the literals.

♻️ Suggested change

```diff
 Data: map[string]string{
-	"evalHubImage":    "quay.io/ruimvieira/eval-hub:test",
-	"kube-rbac-proxy": "quay.io/openshift/origin-kube-rbac-proxy:4.19",
+	configMapEvalHubImageKey:       "quay.io/ruimvieira/eval-hub:test",
+	configMapKubeRBACProxyImageKey: "quay.io/openshift/origin-kube-rbac-proxy:4.19",
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/suite_test.go` around lines 126 - 128, The test fixture uses hard-coded map keys "evalHubImage" and "kube-rbac-proxy" which duplicates literals; replace those literals with the package-level constants configMapEvalHubImageKey and configMapKubeRBACProxyImageKey (the same constants used in build_test.go and unit_test.go) so the test references the canonical keys and remains correct if the keys are renamed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controllers/evalhub/constants.go`:
- Around line 17-20: The comment on evalHubAppPort/evalHubHealthPath contradicts
the deployment wiring (TLS env vars and --upstream-ca-file) versus
deployment.go's use of "http://"; pick the correct protocol (TLS or plain HTTP),
make the code consistent, and update the comment to match: if you choose TLS,
ensure the eval-hub listener actually binds HTTPS and deployment.go uses
"https://" for the kube-rbac-proxy upstream and keeps the TLS env
vars/--upstream-ca-file; if you choose HTTP, remove TLS env
vars/--upstream-ca-file and change any HTTPS assumptions accordingly; update the
doc-comment for evalHubAppPort and evalHubHealthPath to reflect the chosen
protocol.
In `@controllers/evalhub/deployment.go`:
- Around line 212-227: The upstream URL in kubeRBACProxyContainer is incorrectly
built as "http://127.0.0.1:%d/" (upstreamURL) while the rest of the wiring
expects HTTPS; change the upstreamURL format to "https://127.0.0.1:%d/" so
kubeRBACProxyContainer.Args includes an HTTPS upstream that matches the existing
--upstream-ca-file, the TLS env vars/secret mounts on the eval-hub container,
the evalHubAppPort semantics, and the deployment_test.go assertion that looks
for "--upstream=https://127.0.0.1:…/". If instead you intend HTTP, update/remove
--upstream-ca-file, TLS env vars/mounts in the eval-hub container, the
constants.go comment about TLS, and adjust deployment_test.go accordingly — but
do not mix HTTP upstream with --upstream-ca-file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1d98f71b-14c6-43b1-8a14-91638f7eac99
📒 Files selected for processing (8)
- controllers/evalhub/build_test.go
- controllers/evalhub/configmap.go
- controllers/evalhub/constants.go
- controllers/evalhub/deployment.go
- controllers/evalhub/deployment_test.go
- controllers/evalhub/evalhub_controller_test.go
- controllers/evalhub/suite_test.go
- controllers/evalhub/unit_test.go
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
controllers/evalhub/unit_test.go (1)
26-186: ⚠️ Potential issue | 🟠 Major — Controller test style is out of policy in this updated unit test path.

The modified controller tests still use `testing`/`testify` patterns instead of the Ginkgo v2 + Gomega/envtest setup required by repo policy.

As per coding guidelines for `controllers/**/*_test.go`: "Use Ginkgo v2 with Gomega assertions and controller-runtime envtest for all controller tests".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/unit_test.go` around lines 26-186: the unit test TestEvalHubReconciler_reconcileDeployment uses plain testing/testify patterns and a fake client instead of the repo-mandated Ginkgo v2 + Gomega + controller-runtime envtest approach; convert this test to a Ginkgo Describe/It style using Gomega assertions and set up envtest to run real controller-runtime machinery, replacing TestEvalHubReconciler_reconcileDeployment with a Ginkgo spec that boots an envtest environment, registers schemes (corev1, appsv1, evalhubv1alpha1), creates an EvalHubReconciler instance against the envtest manager, and exercises reconciler.reconcileDeployment via the reconcile loop or by calling the reconciler with a client from the envtest manager, asserting resources and images with Gomega matchers instead of testify.

controllers/evalhub/build_test.go (1)
71-216: ⚠️ Potential issue | 🟠 Major — Controller test style is out of policy (still `testing` + `testify`).

This updated controller test path still uses `t.Run` with `assert`/`require` instead of Ginkgo v2 + Gomega/envtest, which is a repository-standard compliance gap.

As per coding guidelines for `controllers/**/*_test.go`: "Use Ginkgo v2 with Gomega assertions and controller-runtime envtest for all controller tests".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/build_test.go` around lines 71 - 216, The tests in controllers/evalhub/build_test.go use testing.T with testify (t.Run, assert, require) which violates the repository policy; convert these tests to Ginkgo v2 + Gomega and run under controller-runtime envtest. Replace each t.Run block with Ginkgo Describe/Context/It, replace assert/require calls with Gomega Expect/Ω matchers, and set up an envtest environment (or the repo-standard test harness) to create the test client and resources used by EvalHubReconciler; keep assertions about buildDeploymentSpec, EvalHubReconciler behavior, containerName, kubeRBACProxyContainerName, defaultEvalHubImage, defaultKubeRBACProxyImage, and findContainer logic but expressed with Gomega, and ensure test setup/teardown uses BeforeEach/AfterEach and the shared scheme and client from envtest.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fa3267cf-fb1b-4460-aa1f-2760be365066
📒 Files selected for processing (5)
- controllers/evalhub/build_test.go
- controllers/evalhub/deployment.go
- controllers/evalhub/deployment_test.go
- controllers/evalhub/evaluation_job_failure_reconciler.go
- controllers/evalhub/unit_test.go
✅ Files skipped from review due to trivial changes (1)
- controllers/evalhub/evaluation_job_failure_reconciler.go
Actionable comments posted: 2
🧹 Nitpick comments (1)
controllers/evalhub/build_test.go (1)
376-378: Prefer `servicePort` here instead of hardcoding `8443`.

The production spec is driven by the constant, so the test should be too.

Proposed fix

```diff
-	Expect(port.Port).To(Equal(int32(8443)))
+	Expect(port.Port).To(Equal(int32(servicePort)))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/build_test.go` around lines 376 - 378, The test currently asserts a hardcoded port value (Expect(port.Port).To(Equal(int32(8443))) ); replace the magic number with the production constant by asserting Expect(port.Port).To(Equal(int32(servicePort))) (or Expect(...).To(Equal(int32(<package>.servicePort))) if the constant lives in another package). Keep the other assertions (port.Name and port.TargetPort) as-is and ensure you import or reference the servicePort symbol so the test uses the canonical production value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controllers/evalhub/build_test.go`:
- Line 169: The test asserts the kube-rbac-proxy upstream uses "https://" but
the sidecar in deployment.go configures an HTTP localhost hop; update the
assertion in build_test.go so Expect(strings.Join(krp.Args, "
")).To(ContainSubstring(fmt.Sprintf("--upstream=http://127.0.0.1:%d/",
evalHubAppPort))) — i.e., change the scheme to http to match krp.Args and the
upstream built in the sidecar (krp, evalHubAppPort).
In `@controllers/evalhub/deployment_test.go`:
- Around line 147-148: The test assertions currently expect the kube-rbac-proxy
upstream to use HTTPS but deployment.go builds the flag with HTTP; update the
assertions that check krp.Args (the Expect(krp.Args).To(ContainElement(...)) and
the Expect(strings.Join(krp.Args, " ")).To(ContainSubstring(...)) checks) to
assert "http://127.0.0.1:<evalHubAppPort>/" instead of "https://..." so they
match the upstream flag generated by the code that builds --upstream in
deployment.go.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 68095eff-b4d1-4b35-bc80-c983260ea94e
📒 Files selected for processing (3)
- controllers/evalhub/build_test.go
- controllers/evalhub/deployment_test.go
- controllers/evalhub/unit_test.go
💤 Files with no reviewable changes (1)
- controllers/evalhub/unit_test.go
```go
	corev1.ResourceMemory: resource.MustParse("32Mi"),
},
Limits: corev1.ResourceList{
	corev1.ResourceCPU: resource.MustParse("100m"),
```
@nbs-rh is it to be expected that these are hard-coded here?
```go
Data: map[string]string{
	"evalHubImage":    "quay.io/ruimvieira/eval-hub:test",
	"kube-rbac-proxy": "quay.io/openshift/origin-kube-rbac-proxy:4.19",
```
@julpayne we are constructing a configmap for tests here to simulate how a real version would be populated in the configmap on a cluster. So this is apt for the tests?
```go
configMapName                  = "trustyai-service-operator-config"
configMapEvalHubImageKey       = "evalHubImage"
configMapKubeRBACProxyImageKey = "kube-rbac-proxy"
defaultKubeRBACProxyImage      = "quay.io/openshift/origin-kube-rbac-proxy:4.19"
```
This is the fallback if a version is not provided through the operator ConfigMap. Since we do not own the kube-rbac-proxy repo, it is better to stick to a default image that we know works well rather than use `:latest`. Btw, this version 4.19 does not have our changes with endpoint-specific SARs - it needs to be added to the ConfigMap and here in code as the default once we have an image.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
controllers/evalhub/deployment_test.go (1)
312-321: ⚠️ Potential issue | 🟠 Major — This fallback test never checks the fallback path.

After `reconcileDeployment`, there is no `Expect` on the error or on the generated Deployment, so this spec currently passes whether the fallback works or not. The title also says "should fail" while the inline comment expects success. Please assert the intended default-image behavior explicitly.

Proposed fix

```diff
-It("should fail when config map is missing", func() {
+It("falls back to default images when config map is missing", func() {
 	By("Deleting config map")
 	err := k8sClient.Delete(ctx, configMap)
 	Expect(err).NotTo(HaveOccurred())

 	By("Reconciling deployment without config map")
 	err = reconciler.reconcileDeployment(ctx, evalHub, nil, nil)
-	// Deployment still succeeds because EvalHub image falls back to default
-	// when the ConfigMap is missing
+	Expect(err).NotTo(HaveOccurred())
+
+	deployment := waitForDeployment(evalHubName, testNamespace)
+	app := findContainerByName(deployment.Spec.Template.Spec.Containers, containerName)
+	krp := findContainerByName(deployment.Spec.Template.Spec.Containers, kubeRBACProxyContainerName)
+	Expect(app).NotTo(BeNil())
+	Expect(krp).NotTo(BeNil())
+	Expect(app.Image).To(Equal(defaultEvalHubImage))
+	Expect(krp.Image).To(Equal(defaultKubeRBACProxyImage))
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment_test.go` around lines 312-321: the test currently deletes the ConfigMap and calls reconciler.reconcileDeployment but never asserts the result; update the spec (and its description) to explicitly check the intended fallback behavior by asserting the returned error from reconcileDeployment (expect no error if fallback should succeed or expect error if it should fail) and retrieve the generated Deployment (e.g., using k8sClient.Get or the helper used elsewhere) to assert the container image equals the default image constant; reference reconcileDeployment, evalHub, configMap, k8sClient.Delete and the Deployment resource to locate where to add Expect(err) and an Expect(deployment.Spec.Template.Spec.Containers[...].Image). Ensure the test title/comment matches the asserted behavior.

controllers/evalhub/deployment.go (1)
146-147: ⚠️ Potential issue | 🟠 Major — Don't let CR env overrides break the sidecar-owned bind settings.

`mergeEnvVars(defaultEnvVars, instance.Spec.Env)` lets a user-provided `API_HOST` or `PORT` override the fixed `127.0.0.1:<evalHubAppPort>` listener that the proxy assumes. That can re-expose eval-hub on the pod network or desynchronize it from `--upstream=http://127.0.0.1:<evalHubAppPort>/`, which turns the proxy into best-effort only. Please treat these keys as non-overridable.

Proposed fix

```diff
-	// Merge environment variables with CR values taking precedence
-	env := mergeEnvVars(defaultEnvVars, instance.Spec.Env)
+	// Merge environment variables with CR values taking precedence, except for
+	// proxy-owned bind settings that must remain fixed.
+	protectedEnv := map[string]struct{}{
+		"API_HOST": {},
+		"PORT":     {},
+	}
+	userEnv := make([]corev1.EnvVar, 0, len(instance.Spec.Env))
+	for _, envVar := range instance.Spec.Env {
+		if _, blocked := protectedEnv[envVar.Name]; blocked {
+			continue
+		}
+		userEnv = append(userEnv, envVar)
+	}
+	env := mergeEnvVars(defaultEnvVars, userEnv)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment.go` around lines 146 - 147, The mergeEnvVars call allows CR overrides to stomp sidecar bind settings; prevent user-provided API_HOST and PORT from overriding the fixed 127.0.0.1:<evalHubAppPort> listener by treating those keys as non-overridable: either filter out API_HOST and PORT from instance.Spec.Env before calling mergeEnvVars or modify mergeEnvVars to accept a nonOverridableKeys set (e.g., {"API_HOST","PORT"}) and ignore any incoming entries for those keys so defaultEnvVars retain precedence for the evalHubAppPort-bound listener and proxy upstream.
♻️ Duplicate comments (1)
controllers/evalhub/constants.go (1)
17-20: ⚠️ Potential issue | 🟡 Minor — Update the port comment to match the current loopback protocol.

`deployment.go` now builds the sidecar upstream as `http://127.0.0.1:<evalHubAppPort>/`, so describing `evalHubAppPort` as the place where eval-hub "binds TLS" is stale and misleading. Please reword this as the internal loopback app port, or explicitly call out the temporary HTTP hop.

Based on learnings: in PR #716 the upstream URL from kube-rbac-proxy to evalhub is intentionally set to `http://127.0.0.1:<evalHubAppPort>/` (HTTP, not HTTPS).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/constants.go` around lines 17 - 20, The comment on evalHubAppPort is outdated stating TLS binding; update the comment on the constant evalHubAppPort to indicate this is the internal loopback application port (or explicitly note the temporary HTTP hop from kube-rbac-proxy to the sidecar upstream), reflecting that deployment.go now uses http://127.0.0.1:<evalHubAppPort>/; also ensure the evalHubHealthPath comment remains accurate about kube-rbac-proxy forwarding and kubelet probe behavior.
🧹 Nitpick comments (1)
controllers/evalhub/deployment.go (1)
410-428: Reuse `operatorNamespace()` in the image helpers.

Both image lookups duplicate the same `"trustyai-service-operator-system"` fallback that `deployment_operator_settings.go` already centralized. Using the helper here keeps image lookup and resource settings on one namespace policy.

Proposed refactor

```diff
 func (r *EvalHubReconciler) getKubeRBACProxyImage(ctx context.Context) (string, error) {
-	namespace := r.Namespace
-	if namespace == "" {
-		namespace = "trustyai-service-operator-system"
-	}
-	return utils.GetImageFromConfigMapWithFallback(ctx, r.Client, configMapKubeRBACProxyImageKey, r.effectiveOperatorConfigMapName(), namespace, defaultKubeRBACProxyImage)
+	return utils.GetImageFromConfigMapWithFallback(
+		ctx,
+		r.Client,
+		configMapKubeRBACProxyImageKey,
+		r.effectiveOperatorConfigMapName(),
+		r.operatorNamespace(),
+		defaultKubeRBACProxyImage,
+	)
 }

 // getEvalHubImage retrieves the EvalHub image from ConfigMap with fallback to default
 func (r *EvalHubReconciler) getEvalHubImage(ctx context.Context) (string, error) {
-	// Get the namespace where the operator is deployed (where the ConfigMap should be)
-	namespace := r.Namespace
-	if namespace == "" {
-		// Fallback to default namespace if not set
-		namespace = "trustyai-service-operator-system"
-	}
-
-	return utils.GetImageFromConfigMapWithFallback(ctx, r.Client, configMapEvalHubImageKey, r.effectiveOperatorConfigMapName(), namespace, defaultEvalHubImage)
+	return utils.GetImageFromConfigMapWithFallback(
+		ctx,
+		r.Client,
+		configMapEvalHubImageKey,
+		r.effectiveOperatorConfigMapName(),
+		r.operatorNamespace(),
+		defaultEvalHubImage,
+	)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/evalhub/deployment.go` around lines 410 - 428, Both getKubeRBACProxyImage and getEvalHubImage duplicate the hardcoded "trustyai-service-operator-system" fallback; replace the manual namespace fallback logic with a call to the centralized helper (operatorNamespace or operatorNamespace()) on the EvalHubReconciler so both functions use r.operatorNamespace() to obtain the namespace, then pass that namespace into utils.GetImageFromConfigMapWithFallback (keep using configMapKubeRBACProxyImageKey, configMapEvalHubImageKey, r.effectiveOperatorConfigMapName(), and the same default image constants).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@controllers/evalhub/deployment_test.go`:
- Around line 312-321: The test currently deletes the ConfigMap and calls
reconciler.reconcileDeployment but never asserts the result; update the spec
(and its description) to explicitly check the intended fallback behavior by
asserting the returned error from reconcileDeployment (expect no error if
fallback should succeed or expect error if it should fail) and retrieve the
generated Deployment (e.g., using k8sClient.Get or the helper used elsewhere) to
assert the container image equals the default image constant; reference
reconcileDeployment, evalHub, configMap, k8sClient.Delete and the Deployment
resource to locate where to add Expect(err) and an
Expect(deployment.Spec.Template.Spec.Containers[...].Image). Ensure the test
title/comment matches the asserted behavior.
In `@controllers/evalhub/deployment.go`:
- Around line 146-147: The mergeEnvVars call allows CR overrides to stomp
sidecar bind settings; prevent user-provided API_HOST and PORT from overriding
the fixed 127.0.0.1:<evalHubAppPort> listener by treating those keys as
non-overridable: either filter out API_HOST and PORT from instance.Spec.Env
before calling mergeEnvVars or modify mergeEnvVars to accept a
nonOverridableKeys set (e.g., {"API_HOST","PORT"}) and ignore any incoming
entries for those keys so defaultEnvVars retain precedence for the
evalHubAppPort-bound listener and proxy upstream.
---
Duplicate comments:
In `@controllers/evalhub/constants.go`:
- Around line 17-20: The comment on evalHubAppPort is outdated stating TLS
binding; update the comment on the constant evalHubAppPort to indicate this is
the internal loopback application port (or explicitly note the temporary HTTP
hop from kube-rbac-proxy to the sidecar upstream), reflecting that deployment.go
now uses http://127.0.0.1:<evalHubAppPort>/; also ensure the evalHubHealthPath
comment remains accurate about kube-rbac-proxy forwarding and kubelet probe
behavior.
---
Nitpick comments:
In `@controllers/evalhub/deployment.go`:
- Around line 410-428: Both getKubeRBACProxyImage and getEvalHubImage duplicate
the hardcoded "trustyai-service-operator-system" fallback; replace the manual
namespace fallback logic with a call to the centralized helper
(operatorNamespace or operatorNamespace()) on the EvalHubReconciler so both
functions use r.operatorNamespace() to obtain the namespace, then pass that
namespace into utils.GetImageFromConfigMapWithFallback (keep using
configMapKubeRBACProxyImageKey, configMapEvalHubImageKey,
r.effectiveOperatorConfigMapName(), and the same default image constants).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cd288ce8-3276-4484-bead-09745548dc60
📒 Files selected for processing (5)
controllers/evalhub/build_test.go
controllers/evalhub/constants.go
controllers/evalhub/deployment.go
controllers/evalhub/deployment_operator_settings.go
controllers/evalhub/deployment_test.go
|
@nbs-rh: The following test failed, say
Full PR test history. Your PR dashboard. Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
julpayne
left a comment
LGTM (as far as I can see)
|
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: julpayne

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing |
Related to https://redhat.atlassian.net/browse/RHOAIENG-56638
Adds `kube-rbac-proxy`, which handles authentication/authorization and invokes the downstream EH app on localhost with the `X-User` and `X-Tenant` headers. This is to be consistent with other RHAI services in terms of authentication/authorization.

Tests
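A rough sketch of the resulting two-container topology. The internal app port (8444) and the `/api/v1/health` bypass are taken from this PR's tests; the proxy's listen port and exact flag set are assumptions for illustration, not the operator's literal output:

```yaml
spec:
  containers:
    - name: kube-rbac-proxy
      args:
        - --secure-listen-address=0.0.0.0:8443  # TLS terminates here (port assumed)
        - --upstream=http://127.0.0.1:8444/     # plain-HTTP hop to the app on loopback
        - --ignore-paths=/api/v1/health         # lets kubelet probes through unauthenticated
    - name: evalhub
      env:
        - name: API_HOST
          value: "127.0.0.1"  # bind to loopback only; external traffic must go through the proxy
        - name: PORT
          value: "8444"
```

Because the app binds only to 127.0.0.1, there is no path to it that skips the proxy's authentication and authorization.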
Test scenario details
EvalHub + kube-rbac-proxy Integration Test Results
Configuration Summary
Pod: evalhub-644df647c-zls4w
Namespace: prabhu
kube-rbac-proxy Configuration:
evalhub Configuration:
TEST 1: Authentication - Valid ServiceAccount Token
Objective: Verify that kube-rbac-proxy accepts valid Kubernetes ServiceAccount tokens
Request:
Response Status: 200
Response Body (first 500 chars):
{"first":{"href":"/api/v1/evaluations/providers"},"limit":50,"total_count":3,"items":[{"resource":{"id":"lm_evaluation_harness","created_at":"2026-04-27T10:13:38.012789Z","updated_at":"2026-04-27T10:13:38.012789Z","owner":"system"},"name":"LM Evaluation Harness","description":"Comprehensive evaluation framework for language models with 180 benchmarks ","title":"LM Evaluation Harness","benchmarks":[{"id":"arc_easy","name":"Basic science Q&A","description":"Grade-school science questions testing b ...

evalhub Logs (showing received headers):
{"level":"info","ts":"2026-04-27T11:21:08.011Z","caller":"handlers/providers.go:117","msg":"Request started","request_id":"f968aadc-48f4-4731-b5fe-cb6a02a14ca6","method":"GET","uri":"/api/v1/evaluations/providers","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:32944","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","filter":"{\"limit\":50,\"offset\":0,\"params\":map[name: owner: tags:]}"}
{"level":"info","ts":"2026-04-27T11:21:08.014Z","caller":"server/execution_context.go:166","msg":"Request successful","request_id":"f968aadc-48f4-4731-b5fe-cb6a02a14ca6","method":"GET","uri":"/api/v1/evaluations/providers","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:32944","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","code":200,"duration":0.003096457}

Result: ✅ PASS - Authentication successful, HTTP 200 OK
Evidence:
TEST 2: Authentication - Invalid/Missing Token
Objective: Verify that kube-rbac-proxy rejects requests without valid authentication
Request:
```shell
curl -k -H "X-Tenant: prabhu" \
  https://evalhub-prabhu.apps.rosa.prabhu-comhub.xqmp.p3.openshiftapps.com/api/v1/evaluations/providers
```

Response Status: 401
Response Body:
Result: ✅ PASS - Authentication failed as expected, HTTP 401 Unauthorized
Evidence:
TEST 3: Authorization - Configured Endpoint (Allowed)
Objective: Verify that authorized endpoints in auth.yaml configuration are accessible
auth.yaml configuration includes:
Request:
Response Status: 200
Response Body (truncated):
{"first":{"href":"/api/v1/evaluations/providers"},"limit":50,"total_count":3,"items":[{"resource":{"id":"lm_evaluation_harness","created_at":"2026-04-27T10:13:38.012789Z","updated_at":"2026-04-27T10:13:38.012789Z","owner":"system"},"name":"LM Evaluation Harness","description":"Comprehensive evaluati...

Result: ✅ PASS - Authorization successful for configured endpoint, HTTP 200 OK
Evidence:
TEST 4: Authorization - Unconfigured Endpoint (Blocked)
Objective: Verify that endpoints NOT in auth.yaml configuration are blocked
Request:
Response Status: 403
Response Body:
Result: ✅ PASS - Authorization blocked for unconfigured endpoint, HTTP 403 Forbidden
Evidence:
TEST 5: Health Endpoint Bypass (--ignore-paths)
Objective: Verify that /api/v1/health bypasses authentication and authorization checks for liveness/readiness probes
kube-rbac-proxy configuration:
Test 5a: Health endpoint WITH Authorization header
```shell
curl -k -H "Authorization: Bearer <token>" \
  https://evalhub-prabhu.apps.rosa.prabhu-comhub.xqmp.p3.openshiftapps.com/api/v1/health
```

Response Status: 200
Response Body:
{"status":"healthy","timestamp":"2026-04-27T11:24:11.362238185Z","build":"0.4.0","build_date":"2026-04-27T08:49:30.137Z"}

Test 5b: Health endpoint WITHOUT Authorization header (Kubelet probe scenario)
Response Status: 200
Response Body:
{"status":"healthy","timestamp":"2026-04-27T11:36:00.168231836Z","build":"0.4.0","build_date":"2026-04-27T08:49:30.137Z"}

Result: ✅ PASS - Health endpoint bypassed authentication and authorization, HTTP 200 OK in both cases
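With the health-path bypass in place, the kubelet probes can target the proxy's HTTPS port directly. A sketch of the probe wiring (the port number is an assumption for illustration):

```yaml
livenessProbe:
  httpGet:
    path: /api/v1/health
    port: 8443   # kube-rbac-proxy port (assumed); TLS, but no token needed thanks to --ignore-paths
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /api/v1/health
    port: 8443
    scheme: HTTPS
```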
Evidence:
TEST 6: Header Forwarding - X-User and X-Tenant
Objective: Verify that kube-rbac-proxy adds X-User header and forwards X-Tenant header to evalhub
kube-rbac-proxy configuration:
Request:
evalhub Application Logs:
{"level":"info","ts":"2026-04-27T11:24:37.363Z","caller":"handlers/providers.go:117","msg":"Request started","request_id":"58eba93d-98e8-41f3-b268-6975e7119fbc","method":"GET","uri":"/api/v1/evaluations/providers","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:56634","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","filter":"{\"limit\":50,\"offset\":0,\"params\":map[name: owner: tags:]}"}

Extracted Headers from evalhub logs:
Result: ✅ PASS - Both headers correctly received by evalhub
Evidence:
TEST 7: evalhub Authentication Disabled
Objective: Verify that evalhub has authentication disabled (DisableAuth: true)
ConfigMap Configuration:
evalhub Startup Log:
{"level":"info","ts":"2026-04-27T10:13:38.032Z","caller":"eval_hub/main.go:146","msg":"Server starting","server_port":8444,"version":"0.4.0","build":"0.4.0","build_date":"2026-04-27T08:49:30.137Z","validator":true,"local":false,"tls":false,"mlflow_tracking":true,"otel":false,"prometheus":true,"authentication":false}

Extracted Authentication Status:
Result: ✅ PASS - evalhub authentication is disabled
Evidence:
TEST 8: TLS Termination Architecture
Objective: Verify TLS architecture - kube-rbac-proxy terminates TLS, evalhub uses HTTP internally
evalhub Server Configuration:
{"level":"info","ts":"2026-04-27T10:13:38.033Z","caller":"server/server.go:509","msg":"Server starting","addr":"127.0.0.1:8444","tls":false}

Extracted Configuration:
kube-rbac-proxy Configuration:
Result: ✅ PASS - Correct TLS architecture in place
Evidence:
TEST 9: Authorization - Multiple Unconfigured Endpoints
Objective: Verify authorization blocks multiple unconfigured endpoints
Test 9a: /docs endpoint
Response: HTTP 403 - Forbidden (user=system:serviceaccount:prabhu:evalhub-client, verb=get, resource=, subresource=)
Test 9b: /metrics endpoint
Response: HTTP 403 - Forbidden (user=system:serviceaccount:prabhu:evalhub-client, verb=get, resource=, subresource=)
Result: ✅ PASS - All unconfigured endpoints blocked
Evidence:
TEST 10: Format 1 (Global Fallback) Authorization
Objective: Verify that kube-rbac-proxy supports Format 1 (global fallback) authorization and that it works when endpoint-specific rules (Format 2) are not present
Configuration Change
Original auth.yaml (Format 2 only):
Updated auth.yaml (Format 1 + Format 2):
Changes Made:
- Removed the Format 2 endpoint rule for `/api/v1/evaluations/providers`
- `authorization.rewrites`: Extract namespace from X-Tenant header
- `authorization.resourceAttributes`: Check `providers` resource with `get` verb
- Applied via `oc patch configmap evalhub-config -n prabhu`
- Restarted via `oc delete pod evalhub-644df647c-zls4w -n prabhu`
Request:
Response Status: 200
Response Body:
{"first":{"href":"/api/v1/evaluations/providers"},"limit":50,"total_count":3,"items":[{"resource":{"id":"lm_evaluation_harness","created_at":"2026-04-27T12:03:13.177791Z","updated_at":"2026-04-27T12:03:13.177791Z","owner":"system"},"name":"LM Evaluation Harness","description":"Comprehensive evaluation framework for language models with 180 benchmarks\n","title":"LM Evaluation Harness",...}]}

evalhub Logs:
{"level":"info","ts":"2026-04-27T12:03:58.864Z","caller":"handlers/providers.go:117","msg":"Request started","request_id":"0a4bdfeb-c7a4-49f8-936e-f10ce5bffd21","method":"GET","uri":"/api/v1/evaluations/providers","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:55852","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","filter":"{\"limit\":50,\"offset\":0,\"params\":map[name: owner: tags:]}"}
{"level":"info","ts":"2026-04-27T12:03:58.871Z","caller":"server/execution_context.go:166","msg":"Request successful","request_id":"0a4bdfeb-c7a4-49f8-936e-f10ce5bffd21","method":"GET","uri":"/api/v1/evaluations/providers","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:55852","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","code":200,"duration":0.006536411}

Result: ✅ PASS - Endpoint successfully authorized through Format 1 global fallback
Evidence:
ConfigMap Verification:
Authorization Flow:
Success Indicators:
`"tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client"`

Format Comparison:
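An illustrative side-by-side of the two configuration shapes, using only the keys named in the tests above. The exact schema depends on the kube-rbac-proxy build in use; treat this as a sketch, not the deployed auth.yaml:

```yaml
# Format 1: global fallback — applies to any path without an endpoint rule
authorization:
  rewrites: {}                 # extract the namespace from the X-Tenant header
  resourceAttributes:
    namespace: "{{ .Value }}"  # rewritten value from the header
    resource: providers
    verb: get                  # static verb

# Format 2: endpoint-specific rule (checked before the fallback)
# authorization:
#   endpoints:
#     - path: /api/v1/evaluations/jobs/*/events
#       methods: [post]
#       resourceAttributes:
#         resource: status-events
#         verb: create
```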
- Rule location: `authorization.endpoints[].path: /api/v1/evaluations/providers` (Format 2) vs top-level `authorization.resourceAttributes` (Format 1)
- Verb: `{{.FromMethod}}` (GET/POST/DELETE dynamic) vs `get` (static)
- Namespace rewrite: `{{.FromHeader}}` vs `{{ .Value }}` (both from X-Tenant)

Key Learnings:
TEST 11: Internal Events API - POST /api/v1/evaluations/jobs/{job_id}/events
Objective: Verify that the internal events API endpoint is properly secured through kube-rbac-proxy and that requests are authorized using endpoint-specific Format 2 authorization rules
Background
The `/api/v1/evaluations/jobs/{job_id}/events` endpoint is documented in the private API documentation. This endpoint allows evaluation job workers to post status updates during benchmark execution.

auth.yaml configuration for this endpoint:
Test Setup
Step 1: Create Evaluation Job
Response Status: 202 Accepted
Response Body:
```json
{
  "resource": {
    "id": "a5842b8a-9d8f-411f-ac6c-24f8854cc221",
    "tenant": "prabhu",
    "created_at": "2026-04-27T13:24:08.420809873Z",
    "owner": "system:serviceaccount:prabhu:evalhub-client"
  },
  "status": {
    "state": "pending",
    "message": {"message": "Evaluation job created", "message_code": "evaluation_job_created"}
  }
}
```

Job ID: `a5842b8a-9d8f-411f-ac6c-24f8854cc221`

Test Execution
Step 2: POST Event to Job
Request:
Response Status: 409 Conflict
Response Body:
{ "message_code": "job_can_not_be_updated", "message": "The job a5842b8a-9d8f-411f-ac6c-24f8854cc221 can not be running because it is 'failed'.", "trace": "1ebd7d88-566e-4c5c-9bbd-275be59deaea" }

evalhub Logs:
{"level":"info","ts":"2026-04-27T13:25:17.445Z","caller":"handlers/evaluations.go:419","msg":"Request started","request_id":"1ebd7d88-566e-4c5c-9bbd-275be59deaea","method":"POST","uri":"/api/v1/evaluations/jobs/a5842b8a-9d8f-411f-ac6c-24f8854cc221/events","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:43088","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client"}
{"level":"info","ts":"2026-04-27T13:25:17.446Z","caller":"sql/evaluations.go:374","msg":"Updating evaluation job","request_id":"1ebd7d88-566e-4c5c-9bbd-275be59deaea","method":"POST","uri":"/api/v1/evaluations/jobs/a5842b8a-9d8f-411f-ac6c-24f8854cc221/events","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:43088","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","id":"a5842b8a-9d8f-411f-ac6c-24f8854cc221","status":"running"}
{"level":"info","ts":"2026-04-27T13:25:17.447Z","caller":"server/execution_context.go:166","msg":"Request successful","request_id":"1ebd7d88-566e-4c5c-9bbd-275be59deaea","method":"POST","uri":"/api/v1/evaluations/jobs/a5842b8a-9d8f-411f-ac6c-24f8854cc221/events","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:43088","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","code":409,"duration":0.001591851}
{"level":"info","ts":"2026-04-27T13:25:17.447Z","caller":"handlers/evaluations.go:456","msg":"Request failed","request_id":"1ebd7d88-566e-4c5c-9bbd-275be59deaea","method":"POST","uri":"/api/v1/evaluations/jobs/a5842b8a-9d8f-411f-ac6c-24f8854cc221/events","user_agent":"curl/8.7.1","remote_addr":"127.0.0.1:43088","tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client","error":"The job a5842b8a-9d8f-411f-ac6c-24f8854cc221 can not be running because it is 'failed'.","code":409,"duration":0.001634783}

Result: ✅ PASS - Authentication and authorization successful, request reached evalhub
Evidence
Note: The HTTP 409 response is from evalhub's business logic (job in 'failed' state), NOT from authorization failure. The key evidence that authorization passed is that the request reached evalhub and was processed.
Authentication Success:
`system:serviceaccount:prabhu:evalhub-client`

Authorization Success:
Headers Forwarded Successfully:
- Forwarded headers in evalhub logs: `"tenant":"prabhu","user":"system:serviceaccount:prabhu:evalhub-client"`
- X-User: `system:serviceaccount:prabhu:evalhub-client`
- X-Tenant: `prabhu`

Request Processing:
`handlers/evaluations.go:419`

Authorization Configuration:
- Path: `/api/v1/evaluations/jobs/*/events`
- Resource: `status-events` with verb `create`
- Namespace taken from the `X-Tenant` header

Key Learnings
Internal API Security:
(`status-events` vs `evaluations`)

Authorization Flow:
Business Logic vs Authorization:
Format 2 Features Demonstrated:
- Wildcard path matching (`/api/v1/evaluations/jobs/*/events`)
- Method restriction (`methods: [post]`)
- Separate resources (`status-events` for events, `evaluations` for jobs)

TEST SUMMARY
All Tests: ✅ PASSED (11/11)
*HTTP 409 = Business logic rejection, not authorization failure. Request successfully passed authentication and authorization.
Key Findings:
Authentication Layer (kube-rbac-proxy):
Authorization Layer (kube-rbac-proxy + auth.yaml):
Header Forwarding:
Security Architecture:
Authorization Format Flexibility:
Deployment Topology:
Authorization Format Comparison
Format 1 (Global Fallback) - Used in TEST 10:
Uses top-level `authorization.rewrites` and `authorization.resourceAttributes`.

Format 2 (Endpoint-Specific) - Used in TEST 3 and TEST 11:
Uses `authorization.endpoints[].path` rules.

Both formats successfully authorize requests, demonstrating configuration flexibility.
API Coverage Tested
Test Documentation Complete: All 11 tests passed with comprehensive evidence, request/response data, and configuration explanations.
Summary by CodeRabbit
New Features
Improvements
Tests