Install Che on the virtual Kubernetes cluster

If the host Kubernetes cluster version is incompatible with Che, or the host cluster lacks support for an external OIDC provider, deploy Che on a virtual Kubernetes cluster (vCluster) to work around these constraints.

Prerequisites

  • An active kubectl session with administrative permissions on the host Kubernetes cluster.

  • The helm, chectl, and vcluster command-line tools are installed.

  • The kubectl oidc-login plugin is installed.

  • A registered domain name for which you can create DNS records.
Procedure
  1. Define the cluster domain name:

    DOMAIN_NAME=<kubernetes_cluster_domain_name>
  2. Install an Ingress Controller. See your Kubernetes provider documentation for installation instructions.

    Use the following command to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx \
        --create-namespace \
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
        --set controller.service.externalTrafficPolicy=Cluster
  3. Install cert-manager:

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
    helm install cert-manager jetstack/cert-manager \
      --wait \
      --create-namespace \
      --namespace cert-manager \
      --set installCRDs=true
  4. Define the Keycloak host:

    KEYCLOAK_HOST=keycloak.${DOMAIN_NAME}

    If you use a registrar such as GoDaddy, add the following DNS record at your registrar and point it to the IP address of the ingress controller:

    • type: A

    • name: keycloak

    Run the following command to find the external IP address of the NGINX Ingress Controller:

    kubectl get services ingress-nginx-controller \
      --namespace ingress-nginx \
      --output jsonpath="{.status.loadBalancer.ingress[0].ip}"

    Use the following command to wait until the Keycloak host resolves:

    until ping -c1 ${KEYCLOAK_HOST} >/dev/null 2>&1; do :; done
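    The ping loop above polls forever if the DNS record never propagates. A bounded variant, purely as a sketch (the wait_for_dns helper name and the use of getent are my own choices, assuming a GNU/Linux shell), could look like:

```shell
# Poll DNS until the host resolves or the deadline passes.
# Usage: wait_for_dns HOST TIMEOUT_SECONDS; returns 1 on timeout.
wait_for_dns() {
    deadline=$(( $(date +%s) + $2 ))
    until getent hosts "$1" >/dev/null 2>&1; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            return 1
        fi
        sleep 2
    done
}

# Example: wait up to 10 minutes for the Keycloak DNS record
# wait_for_dns "${KEYCLOAK_HOST}" 600 || echo "keycloak host did not resolve in time"
```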
  5. Install Keycloak with a self-signed certificate:

    kubectl apply -f - <<EOF
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: keycloak
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: keycloak-selfsigned
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: keycloak-selfsigned
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      isCA: true
      commonName: keycloak-selfsigned-ca
      privateKey:
        algorithm: ECDSA
        size: 256
      issuerRef:
        name: keycloak-selfsigned
        kind: Issuer
        group: cert-manager.io
      secretName: ca.crt
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      ca:
        secretName: ca.crt
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      isCA: false
      commonName: keycloak
      dnsNames:
        - ${KEYCLOAK_HOST}
      privateKey:
        algorithm: RSA
        encoding: PKCS1
        size: 4096
      issuerRef:
        kind: Issuer
        name: keycloak
        group: cert-manager.io
      secretName: keycloak.tls
      subject:
        organizations:
          - Local Eclipse Che
      usages:
        - server auth
        - digital signature
        - key encipherment
        - key agreement
        - data encipherment
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      ports:
      - name: http
        port: 8080
        targetPort: 8080
      selector:
        app: keycloak
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: keycloak
      template:
        metadata:
          labels:
            app: keycloak
        spec:
          containers:
          - name: keycloak
            image: quay.io/keycloak/keycloak:24.0.2
            args: ["start-dev"]
            env:
            - name: KEYCLOAK_ADMIN
              value: "admin"
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: "admin"
            - name: KC_PROXY
              value: "edge"
            ports:
            - name: http
              containerPort: 8080
            readinessProbe:
              httpGet:
                path: /realms/master
                port: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: keycloak
      namespace: keycloak
      annotations:
        nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
        nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
        nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - ${KEYCLOAK_HOST}
          secretName: keycloak.tls
      rules:
      - host: ${KEYCLOAK_HOST}
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
    EOF
  6. Wait until the Keycloak pod is ready:

    kubectl wait --for=condition=ready pod -l app=keycloak -n keycloak --timeout=120s
  7. Configure Keycloak to create the che realm:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create realms \
            -s realm='che' \
            -s displayName='Eclipse Che' \
            -s enabled=true \
            -s registrationAllowed=false \
            -s resetPasswordAllowed=true"
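    Steps 7 through 12 all begin with the same kcadm.sh login preamble. Purely as a sketch, that preamble could be factored into a shell helper (the keycloak_admin name is mine; the credentials match the Deployment in step 5):

```shell
# Run a kcadm.sh command inside the Keycloak pod, logging in first.
# The admin/admin credentials match the Keycloak Deployment from step 5.
keycloak_admin() {
    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin \
            --password admin >/dev/null && \
        /opt/keycloak/bin/kcadm.sh $*"
}

# Example (requires the Keycloak pod from step 5):
# keycloak_admin get realms/che
```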
  8. Configure Keycloak to create the che-public client:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create clients \
            -r 'che' \
            -s name=che-public \
            -s clientId=che-public \
            -s id=che-public \
            -s redirectUris='[\"*\"]' \
            -s webOrigins='[\"*\"]' \
            -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
            -s standardFlowEnabled=true \
            -s publicClient=true \
            -s frontchannelLogout=true \
            -s directAccessGrantsEnabled=true && \
        /opt/keycloak/bin/kcadm.sh create clients/che-public/protocol-mappers/models \
            -r 'che' \
            -s name=groups \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-group-membership-mapper \
            -s consentRequired=false \
            -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}'"
  9. Configure Keycloak to create the che user and the vcluster group:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create users \
            -r 'che' \
            -s enabled=true \
            -s username=che \
            -s email=\"che@che\" \
            -s emailVerified=true \
            -s firstName=\"Eclipse\" \
            -s lastName=\"Che\" && \
        /opt/keycloak/bin/kcadm.sh set-password \
            -r 'che' \
            --username che \
            --new-password che && \
        /opt/keycloak/bin/kcadm.sh create groups \
            -r 'che' \
            -s name=vcluster"
  10. Configure Keycloak to add the che user to the vcluster group:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        USER_ID=\$(/opt/keycloak/bin/kcadm.sh get users \
            -r 'che' \
            -q 'username=che' \
                    |  sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
        GROUP_ID=\$(/opt/keycloak/bin/kcadm.sh get groups \
            -r 'che' \
            -q 'name=vcluster' \
                    |  sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
        /opt/keycloak/bin/kcadm.sh update users/\$USER_ID/groups/\$GROUP_ID \
            -r 'che'"
  11. Configure Keycloak to create the che-private client:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create clients \
            -r 'che' \
            -s name=che-private \
            -s clientId=che-private \
            -s id=che-private \
            -s redirectUris='[\"*\"]' \
            -s webOrigins='[\"*\"]' \
            -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
            -s standardFlowEnabled=true \
            -s publicClient=false \
            -s frontchannelLogout=true \
            -s serviceAccountsEnabled=true \
            -s directAccessGrantsEnabled=true && \
        /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
            -r 'che' \
            -s name=groups \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-group-membership-mapper \
            -s consentRequired=false \
            -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}' && \
        /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
            -r 'che' \
            -s name=audience \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-audience-mapper \
            -s config='{\"included.client.audience\" : \"che-public\", \"access.token.claim\" : \"true\", \"id.token.claim\" : \"true\"}'"
  12. Print the che-private client secret and save the value for a later step:

    kubectl exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh get clients/che-private/client-secret \
            -r che"
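    kcadm.sh prints the secret as JSON, for example { "type" : "secret", "value" : "…" }. As a sketch, the value alone can be captured in a shell variable for the CheCluster patch later (the extract_client_secret helper and the CHE_PRIVATE_CLIENT_SECRET variable name are my own):

```shell
# Pull the "value" field out of kcadm.sh client-secret JSON output.
extract_client_secret() {
    sed -n 's|.*"value" : "\([^"]*\)".*|\1|p'
}

# Example usage (requires the Keycloak pod; pipe the command from this step into it):
# CHE_PRIVATE_CLIENT_SECRET=$(kubectl exec deploy/keycloak -n keycloak -- bash -c "..." \
#     | extract_client_secret)

# Local demonstration on sample kcadm output:
printf '{ "type" : "secret", "value" : "example-secret" }\n' | extract_client_secret
# prints: example-secret
```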
  13. Prepare the values file for the vCluster Helm chart:

    cat > /tmp/vcluster-values.yaml << EOF
    api:
      image: registry.k8s.io/kube-apiserver:v1.27.1
      extraArgs:
        - --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che
        - --oidc-client-id=che-public
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups
        - --oidc-ca-file=/tmp/certificates/keycloak-ca.crt
    
    init:
      manifestsTemplate: |-
        ---
        kind: ClusterRoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: oidc-cluster-admin
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: Group
          name: vcluster
    service:
      type: LoadBalancer
    EOF
  14. Install vCluster:

    helm repo add loft-sh https://charts.loft.sh
    helm repo update
    
    helm install vcluster loft-sh/vcluster-k8s \
      --create-namespace \
      --namespace vcluster \
      --values /tmp/vcluster-values.yaml
  15. Mount the Keycloak CA certificate into the vcluster pod:

    kubectl get secret ca.crt \
        --output "jsonpath={.data['ca\.crt']}" \
        --namespace keycloak \
          | base64 -d > /tmp/keycloak-ca.crt
    
    kubectl create configmap keycloak-cert \
        --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
        --namespace vcluster
    
    kubectl patch deployment vcluster -n vcluster --type json -p='[
      {
        "op": "add",
        "path": "/spec/template/spec/volumes/-",
        "value": {
          "name": "keycloak-cert",
          "configMap": {
            "name": "keycloak-cert"
          }
        }
      },
      {
        "op": "add",
        "path": "/spec/template/spec/containers/0/volumeMounts/-",
        "value": {
          "name": "keycloak-cert",
          "mountPath": "/tmp/certificates"
        }
      }
    ]'
  16. Wait until the vc-vcluster secret is created:

    timeout 120 bash -c 'while :; do kubectl get secret vc-vcluster -n vcluster && break || sleep 5; done'
  17. Verify the vCluster status:

    vcluster list
  18. Update the kubeconfig file:

    kubectl config set-credentials vcluster \
        --exec-api-version=client.authentication.k8s.io/v1beta1 \
        --exec-command=kubectl \
        --exec-arg=\
    oidc-login,\
    get-token,\
    --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che,\
    --certificate-authority=/tmp/keycloak-ca.crt,\
    --oidc-client-id=che-public,\
    --oidc-extra-scope="email offline_access profile openid"
    
    kubectl get secret vc-vcluster -n vcluster -o jsonpath="{.data.certificate-authority}" | base64 -d > /tmp/vcluster-ca.crt
    kubectl config set-cluster vcluster \
        --server=https://$(kubectl get svc vcluster-lb \
                        --namespace vcluster \
                        --output jsonpath="{.status.loadBalancer.ingress[0].ip}"):443 \
        --certificate-authority=/tmp/vcluster-ca.crt
    
    kubectl config set-context vcluster \
        --cluster=vcluster \
        --user=vcluster
  19. Switch to the vcluster kubeconfig context:

    kubectl config use-context vcluster
  20. View the pods in the cluster. Running this command redirects you to the authentication page:

    kubectl get pods --all-namespaces
  21. Install an Ingress Controller on the virtual Kubernetes cluster.

    Use the following command to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx \
        --create-namespace \
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
        --set controller.service.externalTrafficPolicy=Cluster

    If you use a registrar such as GoDaddy, add the following two DNS records at your registrar and point them to the IP address of the ingress controller:

    • type: A

    • name: @ and *

    Run the following command to find the external IP address of the NGINX Ingress Controller:

    kubectl get services ingress-nginx-controller \
      --namespace ingress-nginx \
      --output jsonpath="{.status.loadBalancer.ingress[0].ip}"

    Use the following command to wait until the Kubernetes cluster host resolves:

    until ping -c1 ${DOMAIN_NAME} >/dev/null 2>&1; do :; done
  22. Create the CheCluster patch YAML file, replacing CHE_PRIVATE_CLIENT_SECRET with the client secret saved in step 12:

    cat > /tmp/che-patch.yaml << EOF
    kind: CheCluster
    apiVersion: org.eclipse.che/v2
    spec:
      networking:
        ingressClassName: nginx
        auth:
          oAuthClientName: che-private
          oAuthSecret: CHE_PRIVATE_CLIENT_SECRET
          identityProviderURL: https://$KEYCLOAK_HOST/realms/che
          gateway:
            oAuthProxy:
              cookieExpireSeconds: 300
            deployment:
              containers:
              - name: oauth-proxy
                env:
                - name: OAUTH2_PROXY_BACKEND_LOGOUT_URL
                  value: "https://$KEYCLOAK_HOST/realms/che/protocol/openid-connect/logout?id_token_hint={id_token}"
      components:
        cheServer:
          extraProperties:
            CHE_OIDC_USERNAME__CLAIM: email
    EOF
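    If the secret from step 12 is held in a shell variable (here assumed to be CHE_PRIVATE_CLIENT_SECRET), the placeholder can be substituted with sed instead of editing the file by hand. A local sketch; in practice, run the sed line against /tmp/che-patch.yaml:

```shell
# Substitute the CHE_PRIVATE_CLIENT_SECRET placeholder with the real client secret.
# This demo uses a temp file; run the sed line against /tmp/che-patch.yaml instead.
CHE_PRIVATE_CLIENT_SECRET="example-secret"   # assumed to hold the value from step 12
patch_file=$(mktemp)
printf 'oAuthSecret: CHE_PRIVATE_CLIENT_SECRET\n' > "$patch_file"

sed -i "s|CHE_PRIVATE_CLIENT_SECRET|${CHE_PRIVATE_CLIENT_SECRET}|" "$patch_file"

cat "$patch_file"   # prints: oAuthSecret: example-secret
```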
  23. Create the eclipse-che namespace:

    kubectl create namespace eclipse-che
  24. Copy the Keycloak CA certificate into the eclipse-che namespace:

    kubectl create configmap keycloak-certs \
            --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
            --namespace eclipse-che
    
    kubectl label configmap keycloak-certs \
            app.kubernetes.io/part-of=che.eclipse.org \
            app.kubernetes.io/component=ca-bundle \
            --namespace eclipse-che
  25. Deploy Che:

    chectl server:deploy \
            --platform k8s \
            --domain $DOMAIN_NAME \
            --che-operator-cr-patch-yaml /tmp/che-patch.yaml
Verification
  1. Verify that all pods are in the running state:

    kubectl get pods --all-namespaces
  2. Verify the Che instance status:

    chectl server:status
  3. Navigate to the Che cluster instance:

    chectl dashboard:open
  4. Log in to the Che instance with username che and password che.