#1612 (Cloud SIG) OKDerator CI/CD space
Closed: Fixed with Explanation by arrfab. Opened by zedsm.

CentOS CI - On-boarding

Please note that this infra space is for Fedora and CentOS related projects to
consume. Decisions may take some time (often up to 2 weeks) as these are made
by the whole team.
Once approved, we will create a namespace for you in an OpenShift cluster where
you can configure your CI. We do provide a Jenkins template in case you want to be
able to consume VMs/bare-metal nodes to perform your CI.

Please answer the following questions so that we understand your requirements.

  • How does your project relate to Fedora/CentOS?
    OKDerators is part of the OKD Working Group. It is an opinionated collection of operators packaged for OKD. It provides integrations with ecosystem projects such as Rook-Ceph, Istio, and ArgoCD.

  • Describe your workflow and if you need any special permissions (other than
    admin access to namespace), please tell us and provide a reason for them.
    Within the namespace we would run our OKDerator build controller, which coordinates the orchestration of Tekton pipelines within the namespace. Other than admin access we shouldn't need anything else. Ideally we would run a GitOps repo to manage the namespace - do you already have OpenShift GitOps set up, in case we provide you a repo?

  • Do you need bare-metal/vms checkout capability? (we prefer your workflow
    containerized)
    The CI/CD will run within the cluster.
    For testing it would be useful to spin up temporary OKD clusters. I don't think we will get enough access/resources to run a "full" installation, but being able to spin up a node and install SNO on it could be useful for basic verification of operator functionality and upgrade paths.

  • Resources required

  • PVs: We don't think there is a requirement for persistent data other than build logs etc., which will get cleaned up by Tekton.
Project_name: OKDerators
Project_admins:
 - zedsm
 - owenh

Metadata Update from @arrfab:
- Issue tagged with: centos-ci-infra, investigation, namespace-request, need-more-info

hey @zedsm
Sorry for the very late reply, but I've been busy with various (important) things.
I read that you only need (summarizing the request) a namespace with admin rights for the Cloud SIG (okd sub-variant).
Ideally we'd stick to a vanilla OpenShift cluster, so as not to have to install many operators that could be a problem in the future. But if you can confirm that you just need a namespace, we can easily create the needed group in FAS/ACO (which will be synced over automatically).

@ausil : do you agree on the main idea ?

Metadata Update from @arrfab:
- Issue assigned to arrfab

Metadata Update from @arrfab:
- Issue priority set to: Waiting on Reporter (was: Needs Review)

@arrfab apologies for the delay here.

Currently yes, we are just looking for admin in a namespace in order to build operators and our operator catalogue. At this time we do not need a Konflux installation. We will require the OpenShift Pipelines operator.

What version of OpenShift are you currently running? Do you have a GitOps setup?

To confirm, happy to just be set up with a namespace for now.

In our roadmap we intend to move all OKD build infrastructure out of Red Hat into a Konflux installation, but it is not yet clear where this will live, as it is obviously quite a significant workload.

The following FAS group (https://accounts.centos.org/group/ocp-cico-cloud-okd/) has now been created, and the corresponding cloud-okd namespace created on ocp.
See https://docs.centos.org/centos-sig-guide/ci/#testing for details about console and oc login.
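For anyone following along, a first login typically looks like the command sketch below. The token comes from the "Copy login command" option in the OCP web console linked from the docs; the server URL and token shown here are placeholders, not the actual cluster values.

```shell
# Placeholder values - get the real login command from the OCP web console
oc login --token=sha256~REDACTED --server=https://api.example.ocp.cluster:6443
oc project cloud-okd   # switch to the SIG namespace
oc whoami              # confirm the FAS-synced identity
```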

@ausil is also a sponsor for that group, which is synced to ocp automatically, so adding/removing a FAS account from that group will add/remove (admin) permissions for that project/namespace.

WRT OpenShift, we're still updating minor versions but are still on the 4.15 branch; we have another ticket to jump across multiple releases (tracked in #1609).

Please be sure to subscribe to ci-users, where we'll be sending announcements about CI (OpenShift or not) infra changes.

We don't have any GitOps setup in place for that CI cluster, which isn't used by CentOS itself btw, but is more a place where projects/tenants around the CentOS ecosystem can deploy some CI workloads.
If you need a place under https://gitlab.com/CentOS/cloud, I'd advise you to reach out to @spotz or someone else from that SIG to create a project and grant you access (I can also do that and reuse the same FAS group if needed for the SAML link between GitLab and FAS).

Let us know if you have all you need; if so, we can close the ticket after we've received feedback (or if there's no ticket activity for two weeks).

Metadata Update from @arrfab:
- Issue untagged with: investigation, need-more-info
- Issue tagged with: high-gain, medium-trouble

Thank you, I have subscribed to the ci-users mailing list.

We will need @owenh added to the FAS group as well; they will be setting up the CI stuff.

With regard to the upgrade, we will be keen to see it, but it's not urgent - I think we would also be happy to help if that's an option.

Would you be open to installing first-party Openshift operators?

For the OpenShift upgrade, it's now scheduled for April 28th, jumping to the 4.18.x branch.
Can you elaborate on the "installing first-party openshift operators" part?
We try to keep ocp as "vanilla" as possible, to avoid upgrade issues due to unmaintained operators.
If you have something specific in mind, we can have a look, as long as it's not causing any issue for other CI tenants on that ocp cluster.
WRT sponsorship, @ausil can then add/remove other people if/when needed.

Our operator build workflows currently require Tekton/OpenShift Pipelines to function. Is this something that already exists on the cluster? I believe this operator only supports cluster-wide installation, so we wouldn't be able to scope it to our own namespace.

I think that's our only requirement. It would be nice to be able to sync our configuration into the cluster with ArgoCD if it's already deployed, but it's not necessary for us to get things running.

@owenh : as said above, we just provide vanilla ocp, nothing more added. The less we have to install, the better for us, as we don't even use OpenShift at all for our infra needs (overcomplicated things that are always getting in the way of sysadmins).
Now if you want us to install cluster-wide operators, maintained by Red Hat and so in the official catalog, feel free to ask and we'll see :)

@zedsm : can you come up with a list of operators/versions you'd like us to install in the cluster?
I'd like to close this request, so I'm checking for what is missing (that wasn't asked for in the initial request).

Metadata Update from @arrfab:
- Issue untagged with: medium-trouble
- Issue tagged with: high-trouble

Hi,

We would like OpenShift Pipelines installed. This should be minimal overhead/setup, but will require an admin pressing the update button from time to time.

For OpenShift GitOps, it is slightly more involved to get set up. Do you have interest in using GitOps for maintaining configuration for other parts of the cluster? I spoke to @owenh and he said he'd be happy to help with that if it was wanted.

@zedsm : Red Hat OpenShift Pipelines v1.18.0 is now installed on that ocp cluster, and I see that it reports being installed in openshift-operators while managing the cloud-okd namespace, so you should have what you need.
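For anyone verifying a setup like this later, a minimal smoke test of the Pipelines install could be a TaskRun applied in the namespace. This is a sketch only - the resource name and image are illustrative, not part of the actual OKDerators setup:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: pipelines-smoke-test   # illustrative name
  namespace: cloud-okd
spec:
  taskSpec:
    steps:
      - name: echo
        image: registry.access.redhat.com/ubi9/ubi-minimal
        script: |
          echo "Tekton is reconciling TaskRuns in cloud-okd"
```

If `oc apply -f` on a fragment like this produces a succeeded TaskRun, the cluster-wide operator is watching the namespace as expected.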

WRT GitOps, as said, we don't have any workload at all for CentOS running on that cluster, as we just provide namespaces for CentOS SIGs willing to deploy some apps for CI pipelines (mostly Jenkins-related in the past).
We can try to discuss that later if you want, but I have zero time to invest in this with other tasks on my plate :D

So let me know if you're unblocked, and if so, we can close this ticket?

No update to this ticket, but we sincerely believe that the initial request is now fulfilled, so we'll close this ticket.
Should you have other needs (that weren't expressed initially), you're welcome to open a new ticket with detailed requirements/pointers to what you'd like to see configured/changed/added.

Metadata Update from @arrfab:
- Issue close_status updated to: Fixed with Explanation
- Issue status updated to: Closed (was: Open)

Thank you very much for fulfilling the request, apologies for the delay in response. We will be fine now and open a new ticket should we need anything.
