Fix NFS mount failure on IBM Cloud by correcting retry logic and opening port 2049 (#15119)
Conversation
The retry() calls wrapping self.con.exec_cmd() were broken in two ways:

1. exec_cmd was called immediately and its tuple result was passed to retry() instead of a callable, so no retry ever occurred.
2. Connection.exec_cmd() returns (retcode, stdout, stderr) and never raises CommandFailed, so the retry exception type never triggered.

Add a _mount_nfs_with_retry() helper that wraps exec_cmd in a nested function which raises CommandFailed on a non-zero retcode, and calls it via retry() properly. Replace all 5 broken call sites.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
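The pattern described above can be sketched as follows. This is a minimal illustration, not the actual ocs-ci code: the retry() helper, CommandFailed, and the exec_cmd() signature are simplified stand-ins based on the commit message.

```python
import time


class CommandFailed(Exception):
    """Raised when a remote command exits non-zero."""


def retry(func, tries=3, delay=1, exceptions=(CommandFailed,)):
    """Minimal retry helper: call func() until it succeeds or tries run out.

    Note that it receives a *callable*; the broken code called exec_cmd()
    eagerly and handed retry() a tuple, so nothing was ever retried.
    """
    for attempt in range(1, tries + 1):
        try:
            return func()
        except exceptions:
            if attempt == tries:
                raise
            time.sleep(delay)


class NFSClient:
    """Stand-in for the object holding the SSH connection (self.con)."""

    def __init__(self, con):
        self.con = con

    def _mount_nfs_with_retry(self, cmd, tries=3, delay=5):
        # Wrap exec_cmd in a nested callable so retry() re-runs it on each
        # attempt, and translate a non-zero retcode into CommandFailed so
        # the retry exception type actually triggers.
        def _mount():
            retcode, stdout, stderr = self.con.exec_cmd(cmd)
            if retcode != 0:
                raise CommandFailed(f"'{cmd}' failed (rc={retcode}): {stderr}")
            return stdout

        return retry(_mount, tries=tries, delay=delay)
```

A transient mount failure (e.g. the LB not yet reachable) now surfaces as CommandFailed and is retried instead of being silently ignored.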
IBM Cloud VPC LoadBalancer security groups block inbound traffic by default. The NFS LB on port 2049 needs an explicit inbound rule, just as the ingress LB needs rules for ports 80/443 (added in red-hat-storage#15012).

Add configure_nfs_lb_security_group(), which finds the VPC LB backing the rook-ceph-nfs-my-nfs-load-balancer Service and adds an inbound TCP 2049 rule to its security groups. Call it automatically from create_nfs_load_balancer_service() on IBM Cloud.

Add remove_nfs_lb_security_group_rules() to clean up the rule during teardown, called from delete_nfs_load_balancer_service() before the Service is deleted (so the VPC LB is still present for lookup).

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
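The add/remove flow can be sketched as below. The vpc_client object and its methods (find_load_balancer, add_rule, list_rules, delete_rule) are hypothetical stand-ins for the IBM Cloud VPC API; the real helpers in ocs-ci are not reproduced here.

```python
NFS_PORT = 2049  # NFSv4 well-known port


def configure_nfs_lb_security_group(vpc_client, lb_name):
    """Find the VPC LB backing the NFS Service and open inbound TCP 2049
    on each of its security groups. Returns (sg_id, rule_id) pairs."""
    lb = vpc_client.find_load_balancer(lb_name)
    added = []
    for sg_id in lb["security_groups"]:
        rule = vpc_client.add_rule(
            sg_id,
            direction="inbound",
            protocol="tcp",
            port_min=NFS_PORT,
            port_max=NFS_PORT,
        )
        added.append((sg_id, rule["id"]))
    return added


def remove_nfs_lb_security_group_rules(vpc_client, lb_name):
    """Delete the inbound TCP 2049 rules added above. Must run before the
    Service is deleted, while the VPC LB still exists for lookup."""
    lb = vpc_client.find_load_balancer(lb_name)
    for sg_id in lb["security_groups"]:
        for rule in vpc_client.list_rules(sg_id):
            if (
                rule["direction"] == "inbound"
                and rule["protocol"] == "tcp"
                and rule.get("port_min") == NFS_PORT
            ):
                vpc_client.delete_rule(sg_id, rule["id"])
```

Teardown ordering matters: once the Kubernetes Service is deleted, the cloud provider garbage-collects the VPC LB and the security-group lookup has nothing to find.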
Assert that the specific subvolume created by the test is no longer stale after deletion, instead of asserting zero stale subvolumes cluster-wide. Pre-existing stale subvolumes from other tests caused false failures. Also log the delete output and the stale lists for debuggability.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: amr1ta, dahorak, ebenahar

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/cherry-pick release-4.21
@amr1ta: new pull request created: #15128
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/cherry-pick release-4.20
@amr1ta: #15119 failed to apply on top of branch "release-4.20".
/cherry-pick release-4.19
@amr1ta: #15119 failed to apply on top of branch "release-4.19".
The cherry-pick of red-hat-storage#15119 brought in test changes that import skip_for_provider_or_client_if_ocs_version from testlib (via wildcard import from marks.py), but this mark only existed on master, not release-4.20. Add it so the py310 collectonly check passes.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
The cherry-pick of red-hat-storage#15119 brought in test changes that import skip_for_provider_or_client_if_ocs_version from testlib (via wildcard import from marks.py), but this mark only existed on master, not release-4.19. Add it so the py310 collectonly check passes.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Amrita Mahapatra <49347640+amr1ta@users.noreply.github.com>
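A backported mark of this kind is typically a pytest skipif marker defined in marks.py. The sketch below is illustrative only: the condition and helper functions are assumptions, since the real provider/client and OCS-version checks from ocs-ci are not shown in this PR.

```python
import pytest


# Illustrative stand-ins: in ocs-ci these values would come from the
# framework's runtime config, not hard-coded helpers like these.
def _running_on_provider_or_client_cluster():
    return True  # assumption for illustration


def _ocs_version_tuple():
    return (4, 20)  # assumption for illustration


# A skipif-style mark resembling what had to be added to marks.py so that
# test collection (py310 collectonly) succeeds on the release branch.
skip_for_provider_or_client_if_ocs_version = pytest.mark.skipif(
    _running_on_provider_or_client_cluster() and _ocs_version_tuple() < (4, 21),
    reason="Skipped on provider/client clusters for this OCS version",
)
```

Because testlib re-exports marks.py via a wildcard import, any test module referencing the mark fails to even collect on a branch where the name is missing, which is why the collectonly check caught it.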
Summary
Root cause of IBM Cloud failure:
IBM Cloud VPC LoadBalancer security groups block inbound traffic by default. When the NFS LB service (rook-ceph-nfs-my-nfs-load-balancer) is created, port 2049 is not allowed through the security group, so mount requests from the external NFS client VM never reach the NFS server.

Fix:
Open inbound TCP port 2049 on the IBM Cloud VPC LoadBalancer security group for the NFS LB service. IBM Cloud VPC LB security groups block inbound traffic by default, the same issue that required configure_ingress_load_balancer_security_group() for ports 80/443 in red-hat-storage#15012. The rule is added automatically in create_nfs_load_balancer_service() and cleaned up in delete_nfs_load_balancer_service().
Updated the stale subvolume assertion logic: the old assertion assert len(stale_volumes) == 0 fails if there are pre-existing stale subvolumes from other tests or previous runs. The test should only verify that its own subvolume (new_pvc[1]) was successfully deleted, not that the entire cluster has zero stale subvolumes.
Changed to:
- Log stale volumes before and after delete for debugging
- Log the delete command output so failures are visible
- Assert that the specific subvolume created by this test is no longer in the stale list
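The reworked check can be sketched like this; verify_subvolume_cleanup and its arguments are hypothetical names standing in for the test's real helpers and the new_pvc fixture.

```python
import logging

log = logging.getLogger(__name__)


def verify_subvolume_cleanup(stale_before, stale_after, subvolume_name, delete_output):
    """Assert only that *this test's* subvolume is gone after deletion.

    Pre-existing stale subvolumes from other tests are tolerated, unlike
    the old cluster-wide check of len(stale_volumes) == 0.
    """
    # Log everything needed to debug a failure.
    log.info("Stale subvolumes before delete: %s", stale_before)
    log.info("Delete command output: %s", delete_output)
    log.info("Stale subvolumes after delete: %s", stale_after)
    assert subvolume_name not in stale_after, (
        f"Subvolume {subvolume_name} is still stale after deletion"
    )
```

The assertion is scoped to the one subvolume the test created, so leftovers from unrelated runs can no longer produce false failures.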