NO-JIRA: bootstrap: add CLUSTER_PROFILE_ANNOTATION variable to auth-api bootstrapping stage #9508
base: main
Conversation
…rapping stage Signed-off-by: Bryce Palmer <[email protected]>
Holding this until I'm more confident in whether or not there is a permafailing nightly we can test this change against to verify it fixes the issue @vrutkovs found. /hold
@everettraven: This pull request explicitly references no jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
After discussion with @patrickdillon, it doesn't seem like there is a permafailing situation. Ref: https://redhat-internal.slack.com/archives/C68TNFWA2/p1740408989067559?thread_ts=1740385336.341589&cid=C68TNFWA2 This PR should fix the issue. Cancelling the hold. /hold cancel
```diff
@@ -136,6 +136,8 @@ then

 rm --recursive --force auth-api-bootstrap

+CLUSTER_PROFILE_ANNOTATION="self-managed-high-availability"
```
I wonder if it makes more sense to move the original definition of this variable on line 103 out of the conditional and deduplicate?
It's only two instances in the same file so I'm not too concerned about this.
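For the record, the deduplication suggested above might look roughly like the sketch below. This is only an illustration: the condition variable name and the surrounding logic are placeholders, not taken from the actual bootstrap script.

```shell
#!/usr/bin/env sh
set -eu

# Sketch of the suggested deduplication: define the variable once, above
# the conditional, instead of repeating it in each branch. The
# BOOTSTRAP_AUTH_API condition below is a hypothetical stand-in for the
# real conditional in the script.
CLUSTER_PROFILE_ANNOTATION="self-managed-high-availability"

BOOTSTRAP_AUTH_API="${BOOTSTRAP_AUTH_API:-true}"
if [ "${BOOTSTRAP_AUTH_API}" = "true" ]; then
  echo "auth-api bootstrap: profile=${CLUSTER_PROFILE_ANNOTATION}"
else
  echo "skipping auth-api bootstrap: profile=${CLUSTER_PROFILE_ANNOTATION}"
fi
```

Either way, both code paths would then see the same value without the definition being repeated.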
/approve
The original comment says:
A more desirable end state would require wiring up the infrastructure in some way and reading it from rendered-manifest-dir
So this annotation is in manifests rendered during the api bootstrap phase?
This annotation is in the CRD manifests that would be rendered.
In this instance, we are planning to remove a CRD from the openshift/api generated payload manifest so that it can be managed by the cluster-authentication-operator. The CRD that will be managed by the cluster-authentication-operator does not currently have this annotation from what I recall, but we wanted to prepare for a future where the CAO may be responsible for managing feature-gated/cluster-profile-aware CRDs.
So this input to the flag would be used to filter the CRD manifests output in this stage down to only those that map to the specified cluster profile.
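A minimal sketch of how such profile-based filtering could behave, assuming the OpenShift convention of opting manifests into a profile via an include.release.openshift.io/<profile> annotation. The directory name, file layout, and grep-based matching are illustrative assumptions, not the installer's actual implementation.

```shell
#!/usr/bin/env sh
set -eu

# Hypothetical sketch: keep only manifests annotated for the given
# cluster profile; everything else is dropped from the output directory.
CLUSTER_PROFILE_ANNOTATION="self-managed-high-availability"
manifest_dir="rendered-manifests-demo"

mkdir -p "${manifest_dir}"

# Two illustrative manifests: one annotated for the profile, one not.
cat > "${manifest_dir}/included.yaml" <<EOF
metadata:
  annotations:
    include.release.openshift.io/${CLUSTER_PROFILE_ANNOTATION}: "true"
EOF
cat > "${manifest_dir}/excluded.yaml" <<EOF
metadata:
  annotations: {}
EOF

# Drop any manifest that does not carry the profile annotation.
for f in "${manifest_dir}"/*.yaml; do
  if ! grep -q "include.release.openshift.io/${CLUSTER_PROFILE_ANNOTATION}: \"true\"" "$f"; then
    rm "$f"
  fi
done

ls "${manifest_dir}"
```

After the loop only included.yaml remains, which matches the intent described above: the stage emits only the CRD manifests that map to the specified cluster profile.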
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: patrickdillon
The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
/retest-required
/override ci/prow/okd-scos-images not sure what's going on, the history looked like we were seeing green again. heisenbug
@patrickdillon: Overrode contexts on behalf of patrickdillon: ci/prow/okd-scos-images In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/retest-required
@everettraven: The following tests failed:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/retest-required
This should resolve #9424 (comment)
I haven't been able to identify any permafailing nightlies to signal that this variable definition being missing is causing nightly failures, but I do have a slack thread open with TRT here: https://redhat-internal.slack.com/archives/C01CQA76KMX/p1740402125911299