diff --git a/docs/book/src/topics/rosa/creating-a-cluster.md b/docs/book/src/topics/rosa/creating-a-cluster.md index f196afddb4..3c1fc49f5c 100644 --- a/docs/book/src/topics/rosa/creating-a-cluster.md +++ b/docs/book/src/topics/rosa/creating-a-cluster.md @@ -1,27 +1,55 @@ # Creating a ROSA HCP cluster +## Prerequisites + +1. Install the required tools and set up the prerequisite infrastructure using the [ROSA Setup guide](https://docs.aws.amazon.com/rosa/latest/userguide/set-up.html). + + +2. Export the following: + ```shell + export EXP_ROSA=true + export EXP_MACHINE_POOL=true + ``` + +3. Create a management cluster using the [Quick Start Guide.](https://cluster-api-aws.sigs.k8s.io/quick-start) + +## IAM Role Configuration + +**Note:** This step is only required when using a ROSA HCP or OCP cluster as the management cluster. + +Configure the IAM role authentication for the CAPA controller following the directions [here](specify-management-iam-role.md). + +## Authentication + +The CAPA controller requires service account credentials to provision ROSA HCP clusters. + +**Note:** If you already have a service account, you can skip these steps. + +1. Create a service account by visiting [https://console.redhat.com/iam/service-accounts](https://console.redhat.com/iam/service-accounts). +2. For every newly created service account, make sure to activate the account using the [ROSA command line tool](https://github.com/openshift/rosa). + First, log in using your newly created service account: + ```shell + rosa login --client-id ... --client-secret ... + ``` +3. Then activate your service account: + ```shell + rosa whoami + ``` + ## Permissions -### Authentication using service account credentials -CAPA controller requires service account credentials to be able to provision ROSA HCP clusters: -1. Visit [https://console.redhat.com/iam/service-accounts](https://console.redhat.com/iam/service-accounts) and create a service account. 
If you already have a service account, you can skip this step. - For every newly created service account, make sure to activate the account using the [ROSA command line tool](https://github.com/openshift/rosa). First, log in using your newly created service account - ```shell - rosa login --client-id ... --client-secret ... - ``` - Then activate your service account - ```shell - rosa whoami - ``` -1. Create a new kubernetes secret with the service account credentials to be referenced later by `ROSAControlPlane` +1. Create a new kubernetes secret with the service account credentials to be referenced later by the `ROSAControlPlane` + ```shell kubectl create secret generic rosa-creds-secret \ --from-literal=ocmClientID='....' \ --from-literal=ocmClientSecret='eyJhbGciOiJIUzI1NiIsI....' \ --from-literal=ocmApiUrl='https://api.openshift.com' ``` - Note: to consume the secret without the need to reference it from your `ROSAControlPlane`, name your secret as `rosa-creds-secret` and create it in the CAPA manager namespace (usually `capa-system`) + + **Note:** The secret must be created in the same namespace where your ROSA resources will be deployed. Alternatively, to consume the secret without the need to reference it from your `ROSAControlPlane`, name your secret `rosa-creds-secret` and create it in the CAPA manager namespace (usually `capa-system`) + ```shell kubectl -n capa-system create secret generic rosa-creds-secret \ --from-literal=ocmClientID='....' \ @@ -30,146 +58,303 @@ CAPA controller requires service account credentials to be able to provision ROS ``` -### Authentication using SSO offline token (DEPRECATED) -The SSO offline token is being deprecated and it is recommended to use service account credentials instead, as described above. +## Creating the cluster + +1. 
Prepare the environment: + ```bash + export OPENSHIFT_VERSION="4.20.11" # check available versions with: rosa list versions --hosted-cp + export AWS_REGION="us-west-2" + export AWS_AVAILABILITY_ZONE="us-west-2a" + export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) + export AWS_CREATOR_ARN=$(aws sts get-caller-identity --query Arn --output text) + ``` + +1. Create the `ROSARoleConfig` and `ROSANetwork` resources. + + The `ROSARoleConfig` automates the creation of the AWS IAM resources required by ROSA HCP clusters: + - **Account roles**: Installer, Support, and Worker IAM roles (e.g. `-HCP-ROSA-Installer-Role`) + - **Operator roles**: IAM roles for cluster operators including ingress, image registry, storage, network, kube cloud controller, node pool management, control plane operator, and KMS provider + - **OIDC provider**: A managed OpenID Connect provider used for operator role authentication + + The `ROSANetwork` automates the creation of the VPC networking infrastructure via an AWS CloudFormation stack, including: + + - A VPC with the specified CIDR block + - Public and private subnet pairs for each availability zone + - Associated networking resources (internet gateway, NAT gateways, route tables) -1. Visit https://console.redhat.com/openshift/token to retrieve your SSO offline authentication token + **Note:** The `prefix` field has a maximum length of 4 characters. 
+ + + ```shell + cat <<EOF > rosa-role-network.yaml + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: ROSARoleConfig + metadata: + name: "role-config" + namespace: "capa-system" + spec: + accountRoleConfig: + prefix: "rosa" + version: "4.20.11" + operatorRoleConfig: + prefix: "rosa" + credentialsSecretRef: + name: rosa-creds-secret + oidcProviderType: Managed + --- + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: ROSANetwork + metadata: + name: "rosa-vpc" + namespace: "capa-system" + spec: + region: "us-west-2" + stackName: "rosa-hcp-net" + availabilityZones: + - "us-west-2a" + - "us-west-2b" + - "us-west-2c" + cidrBlock: 10.0.0.0/16 + EOF + ``` -1. Visit https://console.redhat.com/openshift/token to retrieve your SSO offline authentication token -1. Create a credentials secret within the target namespace with the token to be referenced later by `ROSAControlePlane` ```shell - kubectl create secret generic rosa-creds-secret \ - --from-literal=ocmToken='eyJhbGciOiJIUzI1NiIsI....' \ - --from-literal=ocmApiUrl='https://api.openshift.com' + kubectl apply -f rosa-role-network.yaml ``` - Alternatively, you can edit the CAPA controller deployment to provide the credentials + + Verify the `ROSARoleConfig` was successfully created. The status should contain the `accountRolesRef`, `oidcID`, `oidcProviderARN` and `operatorRolesRef`: + ```shell - kubectl edit deployment -n capa-system capa-controller-manager + kubectl get rosaroleconfig role-config -o yaml ``` - and add the following environment variables to the manager container + + Example expected status: + ```yaml - env: - - name: OCM_TOKEN - value: "" - - name: OCM_API_URL - value: "https://api.openshift.com" # or https://api.stage.openshift.com + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: ROSARoleConfig + metadata: + name: "role-config" + namespace: "capa-system" + spec: + ... 
+ status: + accountRolesRef: + installerRoleARN: arn:aws:iam::123456789012:role/rosa-HCP-ROSA-Installer-Role + supportRoleARN: arn:aws:iam::123456789012:role/rosa-HCP-ROSA-Support-Role + workerRoleARN: arn:aws:iam::123456789012:role/rosa-HCP-ROSA-Worker-Role + conditions: + - lastTransitionTime: "2025-11-03T18:12:09Z" + status: "True" + type: Ready + - lastTransitionTime: "2025-11-03T18:12:09Z" + message: RosaRoleConfig is ready + reason: Created + severity: Info + status: "True" + type: RosaRoleConfigReady + oidcID: anyoidcanyoidctuq4b + oidcProviderARN: arn:aws:iam::123456789012:oidc-provider/oidc.os1.devshift.org/anyoidcanyoidctuq4b + operatorRolesRef: + controlPlaneOperatorARN: arn:aws:iam::123456789012:role/rosa-kube-system-control-plane-operator + imageRegistryARN: arn:aws:iam::123456789012:role/rosa-openshift-image-registry-installer-cloud-credentials + ingressARN: arn:aws:iam::123456789012:role/rosa-openshift-ingress-operator-cloud-credentials + kmsProviderARN: arn:aws:iam::123456789012:role/rosa-kube-system-kms-provider + kubeCloudControllerARN: arn:aws:iam::123456789012:role/rosa-kube-system-kube-controller-manager + networkARN: arn:aws:iam::123456789012:role/rosa-openshift-cloud-network-config-controller-cloud-credentials + nodePoolManagementARN: arn:aws:iam::123456789012:role/rosa-kube-system-capa-controller-manager + storageARN: arn:aws:iam::123456789012:role/rosa-openshift-cluster-csi-drivers-ebs-cloud-credentials ``` -### Migration from offline token to service account authentication + Verify the `ROSANetwork` was successfully created. The status should contain the created subnets: -1. Visit [https://console.redhat.com/iam/service-accounts](https://console.redhat.com/iam/service-accounts) and create a new service account. - -1. 
If you previously used kubernetes secret to specify the OCM credentials secret, edit the secret: ```shell - kubectl edit secret rosa-creds-secret + kubectl get rosanetwork rosa-vpc -o yaml ``` - where you will remove the `ocmToken` credentials and add base64 encoded `ocmClientID` and `ocmClientSecret` credentials like so: + + Example expected status: + ```yaml - apiVersion: v1 - data: - ocmApiUrl: aHR0cHM6Ly9hcGkub3BlbnNoaWZ0LmNvbQ== - ocmClientID: Y2xpZW50X2lk... - ocmClientSecret: Y2xpZW50X3NlY3JldA==... - kind: Secret - type: Opaque + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: ROSANetwork + metadata: + name: "rosa-vpc" + namespace: "capa-system" + spec: + ... + status: + conditions: + - lastTransitionTime: "2025-11-03T18:15:05Z" + reason: Created + severity: Info + status: "True" + type: ROSANetworkReady + subnets: + - availabilityZone: us-west-2a + privateSubnet: subnet-084ebac3893fc14ff + publicSubnet: subnet-0ec9fa706a26519ee + - availabilityZone: us-west-2b + privateSubnet: subnet-07727689065612f6e + publicSubnet: subnet-0bb2220505b16f606 + - availabilityZone: us-west-2c + privateSubnet: subnet-002e071b9624727f3 + publicSubnet: subnet-049fa2a528d896356 ``` -1. If you previously used capa manager deployment to specify the OCM offline token as environment variable, edit the manager deployment +1. Create the `AWSClusterControllerIdentity` resource: + ```shell - kubectl -n capa-system edit deployment capa-controller-manager + cat <<EOF > aws-cluster-controller-identity.yaml + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: AWSClusterControllerIdentity + metadata: + name: "default" + spec: + allowedNamespaces: {} # matches all namespaces + EOF ``` - and remove the `OCM_TOKEN` and `OCM_API_URL` variables, followed by `kubectl -n capa-system rollout restart deploy capa-controller-manager`. 
Then create the new default secret in the `capa-system` namespace with + ```shell - kubectl -n capa-system create secret generic rosa-creds-secret \ - --from-literal=ocmClientID='....' \ - --from-literal=ocmClientSecret='eyJhbGciOiJIUzI1NiIsI....' \ - --from-literal=ocmApiUrl='https://api.openshift.com' + kubectl apply -f aws-cluster-controller-identity.yaml ``` -## Prerequisites +1. Create the `Cluster`, `ROSACluster`, and `ROSAControlPlane` resources: -Follow the guide [here](https://docs.aws.amazon.com/ROSA/latest/userguide/getting-started-hcp.html) up until ["Create a ROSA with HCP Cluster"](https://docs.aws.amazon.com/ROSA/latest/userguide/getting-started-hcp.html#create-hcp-cluster-cli) to install the required tools and setup the prerequisite infrastructure. Once Step 3 is done, you will be ready to proceed with creating a ROSA HCP cluster using cluster-api. + ```shell + cat <<EOF > rosa-cluster.yaml + apiVersion: cluster.x-k8s.io/v1beta2 + kind: Cluster + metadata: + name: "rosa-hcp-1" + spec: + clusterNetwork: + pods: + cidrBlocks: ["192.168.0.0/16"] + infrastructureRef: + apiGroup: infrastructure.cluster.x-k8s.io + kind: ROSACluster + name: "rosa-hcp-1" + controlPlaneRef: + apiGroup: controlplane.cluster.x-k8s.io + kind: ROSAControlPlane + name: "rosa-hcp-1-control-plane" + --- + apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 + kind: ROSACluster + metadata: + name: "rosa-hcp-1" + spec: {} + --- + apiVersion: controlplane.cluster.x-k8s.io/v1beta2 + kind: ROSAControlPlane + metadata: + name: "rosa-hcp-1-control-plane" + spec: + credentialsSecretRef: + name: rosa-creds-secret + rosaClusterName: rosa-hcp-1 + domainPrefix: rosa-hcp + rosaRoleConfigRef: + name: role-config # reference to the ROSARoleConfig created above + version: "4.20.11" + region: "us-west-2" + rosaNetworkRef: + name: "rosa-vpc" # reference to the ROSANetwork created above + network: + machineCIDR: "10.0.0.0/16" + podCIDR: "10.128.0.0/14" + serviceCIDR: "172.30.0.0/16" + 
defaultMachinePoolSpec: + instanceType: "m5.xlarge" + autoscaling: + maxReplicas: 6 + minReplicas: 3 + additionalTags: + env: "demo" + EOF + ``` -Note; Skip the "Create the required IAM roles and OpenID Connect configuration" step from the prerequisites url above and use the templates/cluster-template-rosa-role-config.yaml to generate a ROSARoleConfig CR to create the required account roles, operator roles & managed OIDC provider. + ```shell + kubectl apply -f rosa-cluster.yaml + ``` -## Creating the cluster -1. Prepare the environment: - ```bash - export OPENSHIFT_VERSION="4.19.0" - export AWS_REGION="us-west-2" - export AWS_AVAILABILITY_ZONE="us-west-2a" - export AWS_ACCOUNT_ID="" - export AWS_CREATOR_ARN="" # can be retrieved e.g. using `aws sts get-caller-identity` - # Note: if using templates/cluster-template-rosa.yaml set the below env variables - export OIDC_CONFIG_ID="" # OIDC config id creating previously with `rosa create oidc-config` - export ACCOUNT_ROLES_PREFIX="ManagedOpenShift-HCP" # prefix used to create account IAM roles with `rosa create account-roles` - export OPERATOR_ROLES_PREFIX="capi-rosa-quickstart" # prefix used to create operator roles with `rosa create operator-roles --prefix ` +1. Check the `ROSAControlPlane` status: - # Note: if using templates/cluster-template-rosa-role-config.yaml set the below env variables - export ACCOUNT_ROLES_PREFIX="capa" # prefix can be change to preferable prefix with max 4 chars - export OPERATOR_ROLES_PREFIX="capa" # prefix can be change to preferable prefix with max 4 chars + ```shell + kubectl get ROSAControlPlane rosa-hcp-1-control-plane - # subnet IDs created earlier - export PUBLIC_SUBNET_ID="subnet-0b54a1111111111111" - export PRIVATE_SUBNET_ID="subnet-05e72222222222222" + NAME CLUSTER READY + rosa-hcp-1-control-plane rosa-hcp-1 true ``` -1. Render the cluster manifest using the ROSA HCP cluster template: + The ROSA HCP cluster can take around 40 minutes to be fully provisioned. - a. 
Using templates/cluster-template-rosa.yaml +1. Check the `ROSAControlPlane` status: - Note: The AWS role name must be no more than 64 characters in length. Otherwise an error will be returned. Truncate values exceeding 64 characters. ```shell - clusterctl generate cluster --from templates/cluster-template-rosa.yaml > rosa-capi-cluster.yaml - ``` + kubectl get ROSAControlPlane rosa-hcp-1-control-plane - b. Using templates/cluster-template-rosa-role-config.yaml - ```shell - clusterctl generate cluster --from templates/cluster-template-rosa-role-config.yaml > rosa-capi-cluster.yaml + NAME CLUSTER READY + rosa-hcp-1-control-plane rosa-hcp-1 true ``` + The ROSA HCP cluster can take around 40 minutes to be fully provisioned. -1. If a credentials secret was created earlier, edit `ROSAControlPlane` to reference it: +1. After provisioning has completed, verify the `ROSAMachinePool` resources were successfully created: - ```yaml - apiVersion: controlplane.cluster.x-k8s.io/v1beta2 - kind: ROSAControlPlane - metadata: - name: "capi-rosa-quickstart-control-plane" - spec: - credentialsSecretRef: - name: rosa-creds-secret - ... - ``` ```shell kubectl get ROSAMachinePool NAME READY REPLICAS workers-0 true 1 workers-1 true 1 workers-2 true 1 ``` + **Note:** The number of default `ROSAMachinePool` resources corresponds to the number of availability zones configured. -1. Provide an AWS identity reference +1. To add an additional `ROSAMachinePool`, save the following to a file `rosa-machinepool-extra.yaml`: - ```yaml - apiVersion: controlplane.cluster.x-k8s.io/v1beta2 - kind: ROSAControlPlane + ```shell + cat <<EOF > rosa-machinepool-extra.yaml + apiVersion: cluster.x-k8s.io/v1beta2 + kind: MachinePool metadata: - name: "capi-rosa-quickstart-control-plane" + name: "rosa-hcp-1-workers-extra" spec: - identityRef: - kind: - name: - ... 
- ``` - - Otherwise, make sure the following `AWSClusterControllerIdentity` singleton exists in your management cluster: - ```yaml + clusterName: "rosa-hcp-1" + replicas: 2 + template: + spec: + clusterName: "rosa-hcp-1" + bootstrap: + dataSecretName: "" + infrastructureRef: + apiGroup: infrastructure.cluster.x-k8s.io + kind: ROSAMachinePool + name: "workers-extra" + --- apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 - kind: AWSClusterControllerIdentity + kind: ROSAMachinePool metadata: - name: "default" + name: "workers-extra" spec: - allowedNamespaces: {} # matches all namespaces + nodePoolName: "workers-extra" + version: "4.20.11" + instanceType: "m5.xlarge" + autoRepair: true + EOF ``` - see [Multi-tenancy](../multitenancy.md) for more details - -1. Finally apply the manifest to create your ROSA cluster: ```shell - kubectl apply -f rosa-capi-cluster.yaml + kubectl apply -f rosa-machinepool-extra.yaml ``` +## Deleting a ROSA HCP cluster + +To delete a ROSA HCP cluster, delete the `Cluster` and `ROSAControlPlane` resources. This will also clean up the associated `ROSACluster`, `MachinePool`, and `ROSAMachinePool` resources: + +```shell +kubectl delete -n <namespace> cluster/rosa-hcp-1 --wait=false +kubectl delete -n <namespace> rosacontrolplane/rosa-hcp-1-control-plane +``` + +After the cluster has been fully deleted, you can clean up the `ROSARoleConfig` and `ROSANetwork` resources: + +```shell +kubectl delete rosaroleconfig role-config +kubectl delete rosanetwork rosa-vpc +``` + see [ROSAControlPlane CRD Reference](https://cluster-api-aws.sigs.k8s.io/crd/#controlplane.cluster.x-k8s.io/v1beta2.ROSAControlPlane) for all possible configurations. 
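+
+As a sketch of a common variation (the `autoscaling` field names follow the `defaultMachinePoolSpec` example above; treat the exact shape as an assumption to verify against the CRD reference), an additional node pool can use autoscaling bounds instead of a fixed `replicas` count:
+
+```yaml
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
+kind: ROSAMachinePool
+metadata:
+  name: "workers-autoscaled"   # hypothetical pool name for illustration
+spec:
+  nodePoolName: "workers-autoscaled"
+  version: "4.20.11"
+  instanceType: "m5.xlarge"
+  autoRepair: true
+  autoscaling:                 # scales between min and max instead of a fixed replica count
+    minReplicas: 2
+    maxReplicas: 4
+```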
diff --git a/docs/book/src/topics/rosa/specify-management-iam-role.md b/docs/book/src/topics/rosa/specify-management-iam-role.md new file mode 100644 index 0000000000..0ed88d5ea8 --- /dev/null +++ b/docs/book/src/topics/rosa/specify-management-iam-role.md @@ -0,0 +1,86 @@ +# Specifying the IAM Role for ROSA HCP Management Components + +When using an OpenShift or ROSA-HCP cluster as the management cluster, you can configure the CAPA controller to use IAM roles instead of storing AWS credentials. This uses OIDC federation to allow the CAPA controller service account to assume an IAM role. + +## Prerequisites + +- A management cluster (OpenShift or ROSA-HCP) with CAPI and CAPA installed. + Follow the [Quick Start Guide](https://cluster-api-aws.sigs.k8s.io/quick-start) to install CAPI and CAPA using `clusterctl init --infrastructure aws`. For the initial installation, you can use temporary AWS credentials (e.g. via `aws sts get-session-token` or environment variables). Once the IAM role is configured below, the CAPA controller will use the role instead of stored credentials. 
+
+  **Note:** The ROSA and MachinePool feature gates must be enabled before running `clusterctl init`:
+  ```shell
+  export EXP_ROSA=true
+  export EXP_MACHINE_POOL=true
+  ```
+- The management cluster must have an OIDC provider configured
+
+## Retrieve the OIDC Provider
+
+Extract the OIDC provider from the management cluster and set your AWS account ID:
+
+```shell
+export OIDC_PROVIDER=$(kubectl get authentication.config.openshift.io cluster -ojson | jq -r .spec.serviceAccountIssuer | sed 's/https:\/\///')
+export AWS_ACCOUNT_ID=<your-aws-account-id>
+```
+
+## Create the Trust Policy
+
+Create a trust policy that allows the `capa-controller-manager` service account to assume the IAM role:
+
+```shell
+cat <<EOF > trust.json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
+      },
+      "Action": "sts:AssumeRoleWithWebIdentity",
+      "Condition": {
+        "StringEquals": {
+          "${OIDC_PROVIDER}:sub": "system:serviceaccount:capa-system:capa-controller-manager"
+        }
+      }
+    }
+  ]
+}
+EOF
+```
+
+## Create the IAM Role
+
+Create the IAM role and attach the required AWS policies:
+
+```shell
+aws iam create-role --role-name "capa-manager-role" \
+  --assume-role-policy-document file://trust.json \
+  --description "IAM role for CAPA to assume"
+
+aws iam attach-role-policy --role-name capa-manager-role \
+  --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
+
+aws iam attach-role-policy --role-name capa-manager-role \
+  --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
+```
+
+## Annotate the Service Account
+
+Retrieve the IAM role ARN and annotate the CAPA controller service account:
+
+```shell
+export APP_IAM_ROLE_ARN=$(aws iam get-role --role-name=capa-manager-role --query Role.Arn --output text)
+
+kubectl annotate serviceaccount -n capa-system capa-controller-manager \
+  eks.amazonaws.com/role-arn=$APP_IAM_ROLE_ARN
+```
+
+Restart the CAPA controller to pick up the new 
role: + +```shell +kubectl rollout restart deployment capa-controller-manager -n capa-system +``` + +After this configuration, the CAPA controller will use the IAM role to manage AWS resources, and you can provision ROSA HCP clusters without storing AWS credentials in the management cluster.
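Because the trust policy is rendered from two environment variables, a typo in either produces a malformed document that `aws iam create-role` rejects. A minimal offline sanity check (a sketch: the account ID and issuer below are placeholders, not real values, and `python3` is assumed to be available):

```shell
# Render the trust policy with placeholder values (NOT a real account ID or issuer).
export AWS_ACCOUNT_ID="123456789012"
export OIDC_PROVIDER="oidc.example.openshiftapps.com/abc123"
cat <<EOF > /tmp/trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:capa-system:capa-controller-manager"
        }
      }
    }
  ]
}
EOF
# Parsing the file back proves the substituted JSON is well formed and echoes the principal.
python3 -c 'import json; print(json.load(open("/tmp/trust.json"))["Statement"][0]["Principal"]["Federated"])'
# prints: arn:aws:iam::123456789012:oidc-provider/oidc.example.openshiftapps.com/abc123
```

If the parse step fails, fix the variables before running the `aws iam create-role` command above.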