Install Che on the virtual Kubernetes cluster
To deploy Che when the host cluster version is incompatible or lacks external OIDC support, use a virtual Kubernetes cluster (vCluster) to bypass these constraints.
Prerequisites

- You have `helm` installed. See Installing Helm.
- You have the `vcluster` CLI installed. See Installing vCluster CLI.
- You have `kubectl` installed. See Installing kubectl.
- You have `kubelogin` installed. See Installing kubelogin.
- You have `chectl` installed. See Installing the chectl management tool.
- You have an active `kubectl` session with administrative permissions to the destination Kubernetes cluster.
Procedure

- Define the cluster domain name:

```shell
DOMAIN_NAME=<kubernetes_cluster_domain_name>
```
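Before creating DNS records for this domain, it can help to sanity-check the value. The following is an optional sketch; the `is_dns_name` helper is illustrative and not part of the official procedure:

```shell
# Optional sketch: check that a value looks like a DNS name before pointing
# records at it. is_dns_name is an illustrative helper, not part of the
# official Che procedure.
is_dns_name() {
  echo "$1" | grep -Eq '^([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}$'
}

# is_dns_name "${DOMAIN_NAME}" || echo "DOMAIN_NAME does not look like a DNS name"
is_dns_name "example.com" && echo "looks valid"
```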
- Install an Ingress Controller. Check your Kubernetes provider documentation for how to install it. For example, use the following commands to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Cluster
```
- Install cert-manager:

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --wait \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true
```
- Define the Keycloak host:

```shell
KEYCLOAK_HOST=keycloak.${DOMAIN_NAME}
```

  If you use a registrar such as GoDaddy, add the following DNS record in your registrar and point it to the IP address of the ingress controller:

  - type: `A`
  - name: `keycloak`

  Run the following command to find the external IP address of the NGINX Ingress Controller:

```shell
kubectl get services ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
```

  Use the following command to wait until the Keycloak host resolves:

```shell
until ping -c1 ${KEYCLOAK_HOST} >/dev/null 2>&1; do :; done
```
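The bare `until ping …` loop above spins forever if the DNS record was never created. A hedged variant (a sketch, not from the official steps) adds a timeout so a misconfigured record fails fast:

```shell
# Sketch: retry a command until it succeeds or a timeout expires.
# wait_until is an illustrative helper, not part of the official procedure.
wait_until() {
  # $1 = timeout in seconds; remaining arguments = command to retry
  local end=$(( $(date +%s) + $1 ))
  shift
  until "$@" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$end" ] && return 1
    sleep 1
  done
}

# wait_until 300 ping -c1 "${KEYCLOAK_HOST}" || echo "timed out waiting for DNS"
```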
- Install Keycloak with a self-signed certificate:

```shell
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: keycloak-selfsigned
  namespace: keycloak
  labels:
    app: keycloak
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak-selfsigned
  namespace: keycloak
  labels:
    app: keycloak
spec:
  isCA: true
  commonName: keycloak-selfsigned-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: keycloak-selfsigned
    kind: Issuer
    group: cert-manager.io
  secretName: ca.crt
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ca:
    secretName: ca.crt
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  isCA: false
  commonName: keycloak
  dnsNames:
    - ${KEYCLOAK_HOST}
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 4096
  issuerRef:
    kind: Issuer
    name: keycloak
    group: cert-manager.io
  secretName: keycloak.tls
  subject:
    organizations:
      - Local Eclipse Che
  usages:
    - server auth
    - digital signature
    - key encipherment
    - key agreement
    - data encipherment
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:24.0.2
          args: ["start-dev"]
          env:
            - name: KEYCLOAK_ADMIN
              value: "admin"
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: "admin"
            - name: KC_PROXY
              value: "edge"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ${KEYCLOAK_HOST}
      secretName: keycloak.tls
  rules:
    - host: ${KEYCLOAK_HOST}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
EOF
```
- Wait until the Keycloak pod is ready:

```shell
kubectl wait --for=condition=ready pod -l app=keycloak -n keycloak --timeout=120s
```
- Configure Keycloak to create the `che` realm:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create realms \
    -s realm='che' \
    -s displayName='Eclipse Che' \
    -s enabled=true \
    -s registrationAllowed=false \
    -s resetPasswordAllowed=true"
```
- Configure Keycloak to create the `che-public` client:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create clients \
    -r 'che' \
    -s name=che-public \
    -s clientId=che-public \
    -s id=che-public \
    -s redirectUris='[\"*\"]' \
    -s webOrigins='[\"*\"]' \
    -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
    -s standardFlowEnabled=true \
    -s publicClient=true \
    -s frontchannelLogout=true \
    -s directAccessGrantsEnabled=true && \
  /opt/keycloak/bin/kcadm.sh create clients/che-public/protocol-mappers/models \
    -r 'che' \
    -s name=groups \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-group-membership-mapper \
    -s consentRequired=false \
    -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}'"
```
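To confirm that the group-membership mapper actually places a `groups` claim into issued tokens, you can decode a token's payload segment. The following is a sketch (base64url decoding without signature verification; the `jwt_payload` helper is illustrative, not part of the official steps) — obtain a real token from Keycloak's token endpoint first:

```shell
# Sketch: print the claims (payload) of a JWT without verifying its signature.
# jwt_payload is an illustrative helper, not part of the official procedure.
jwt_payload() {
  cut -d. -f2 \
    | tr '_-' '/+' \
    | awk '{ l = length($0) % 4; if (l == 2) $0 = $0 "=="; else if (l == 3) $0 = $0 "="; print }' \
    | base64 -d
}

# Example with a toy token whose payload is {"groups":["vcluster"]}:
echo "header.eyJncm91cHMiOlsidmNsdXN0ZXIiXX0.signature" | jwt_payload
```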
- Configure Keycloak to create the `che` user and the `vcluster` group:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create users \
    -r 'che' \
    -s enabled=true \
    -s username=che \
    -s email=\"che@che\" \
    -s emailVerified=true \
    -s firstName=\"Eclipse\" \
    -s lastName=\"Che\" && \
  /opt/keycloak/bin/kcadm.sh set-password \
    -r 'che' \
    --username che \
    --new-password che && \
  /opt/keycloak/bin/kcadm.sh create groups \
    -r 'che' \
    -s name=vcluster"
```
- Configure Keycloak to add the `che` user to the `vcluster` group:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  USER_ID=\$(/opt/keycloak/bin/kcadm.sh get users \
    -r 'che' \
    -q 'username=che' \
    | sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
  GROUP_ID=\$(/opt/keycloak/bin/kcadm.sh get groups \
    -r 'che' \
    -q 'name=vcluster' \
    | sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
  /opt/keycloak/bin/kcadm.sh update users/\$USER_ID/groups/\$GROUP_ID \
    -r 'che'"
```
- Configure Keycloak to create the `che-private` client:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create clients \
    -r 'che' \
    -s name=che-private \
    -s clientId=che-private \
    -s id=che-private \
    -s redirectUris='[\"*\"]' \
    -s webOrigins='[\"*\"]' \
    -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
    -s standardFlowEnabled=true \
    -s publicClient=false \
    -s frontchannelLogout=true \
    -s serviceAccountsEnabled=true \
    -s directAccessGrantsEnabled=true && \
  /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
    -r 'che' \
    -s name=groups \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-group-membership-mapper \
    -s consentRequired=false \
    -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}' && \
  /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
    -r 'che' \
    -s name=audience \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-audience-mapper \
    -s config='{\"included.client.audience\" : \"che-public\", \"access.token.claim\" : \"true\", \"id.token.claim\" : \"true\"}'"
```
- Print and save the `che-private` client secret; it is needed later for the CheCluster patch:

```shell
kubectl exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh get clients/che-private/client-secret \
    -r che"
```
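The command above prints JSON containing a `value` field with the secret. To capture only that value in a variable, a sed-based sketch can be used; the `extract_secret` helper and the exact JSON spacing are assumptions based on kcadm's usual output, not part of the official steps:

```shell
# Sketch: pull the "value" field out of kcadm.sh client-secret JSON output.
# extract_secret is an illustrative helper, not part of the official procedure.
extract_secret() {
  sed -n 's|.*"value"[[:space:]]*:[[:space:]]*"\([^"]*\)".*|\1|p'
}

# Usage against the live cluster (pipe the command from the step above):
# CHE_PRIVATE_CLIENT_SECRET=$(kubectl exec ... | extract_secret)
echo '{ "type" : "secret", "value" : "s3cr3t" }' | extract_secret
```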
- Prepare values for the vCluster Helm chart:

```shell
cat > /tmp/vcluster-values.yaml << EOF
api:
  image: registry.k8s.io/kube-apiserver:v1.27.1
  extraArgs:
    - --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che
    - --oidc-client-id=che-public
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - --oidc-ca-file=/tmp/certificates/keycloak-ca.crt
init:
  manifestsTemplate: |-
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: oidc-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: Group
        name: vcluster
service:
  type: LoadBalancer
EOF
```
- Install vCluster:

```shell
helm repo add loft-sh https://charts.loft.sh
helm repo update
helm install vcluster loft-sh/vcluster-k8s \
  --create-namespace \
  --namespace vcluster \
  --values /tmp/vcluster-values.yaml
```
- Mount the Keycloak CA certificate into the vCluster pod:

```shell
kubectl get secret ca.crt \
  --output "jsonpath={.data['ca\.crt']}" \
  --namespace keycloak \
  | base64 -d > /tmp/keycloak-ca.crt

kubectl create configmap keycloak-cert \
  --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
  --namespace vcluster

kubectl patch deployment vcluster -n vcluster --type json -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "keycloak-cert",
      "configMap": {
        "name": "keycloak-cert"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "keycloak-cert",
      "mountPath": "/tmp/certificates"
    }
  }
]'
```
- Wait until the `vc-vcluster` secret is created:

```shell
timeout 120 bash -c 'while :; do kubectl get secret vc-vcluster -n vcluster && break || sleep 5; done'
```
- Verify the vCluster status:

```shell
vcluster list
```
- Update the kubeconfig file:

```shell
kubectl config set-credentials vcluster \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=\
oidc-login,\
get-token,\
--oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che,\
--certificate-authority=/tmp/keycloak-ca.crt,\
--oidc-client-id=che-public,\
--oidc-extra-scope="email offline_access profile openid"

kubectl get secret vc-vcluster -n vcluster \
  -o jsonpath="{.data.certificate-authority}" \
  | base64 -d > /tmp/vcluster-ca.crt

kubectl config set-cluster vcluster \
  --server=https://$(kubectl get svc vcluster-lb \
    --namespace vcluster \
    --output jsonpath="{.status.loadBalancer.ingress[0].ip}"):443 \
  --certificate-authority=/tmp/vcluster-ca.crt

kubectl config set-context vcluster \
  --cluster=vcluster \
  --user=vcluster
```
- Use the `vcluster` kubeconfig context:

```shell
kubectl config use-context vcluster
```
- View the pods in the cluster. Running this command redirects you to the authentication page:

```shell
kubectl get pods --all-namespaces
```
- Install an Ingress Controller on the virtual Kubernetes cluster. For example, use the following commands to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Cluster
```

  If you use a registrar such as GoDaddy, add the following two DNS records in your registrar and point them to the IP address of the ingress controller:

  - type: `A`
  - name: `@` and `*`

  Run the following command to find the external IP address of the NGINX Ingress Controller:

```shell
kubectl get services ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
```

  Use the following command to wait until the Kubernetes host resolves:

```shell
until ping -c1 ${DOMAIN_NAME} >/dev/null 2>&1; do :; done
```
- Create a CheCluster patch YAML file, replacing `CHE_PRIVATE_CLIENT_SECRET` with the `che-private` client secret saved earlier:

```shell
cat > /tmp/che-patch.yaml << EOF
kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  networking:
    ingressClassName: nginx
    auth:
      oAuthClientName: che-private
      oAuthSecret: CHE_PRIVATE_CLIENT_SECRET
      identityProviderURL: https://$KEYCLOAK_HOST/realms/che
      gateway:
        oAuthProxy:
          cookieExpireSeconds: 300
        deployment:
          containers:
            - env:
                - name: OAUTH2_PROXY_BACKEND_LOGOUT_URL
                  value: "http://$KEYCLOAK_HOST/realms/che/protocol/openid-connect/logout?id_token_hint={id_token}"
              name: oauth-proxy
  components:
    cheServer:
      extraProperties:
        CHE_OIDC_USERNAME__CLAIM: email
EOF
```
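Rather than editing the placeholder by hand, the substitution can be done with sed. This is a minimal sketch; the `fill_secret` helper is illustrative and not part of the official steps:

```shell
# Sketch: replace the CHE_PRIVATE_CLIENT_SECRET placeholder in the patch YAML.
# fill_secret is an illustrative helper, not part of the official procedure.
fill_secret() {
  # $1 = the che-private client secret; reads YAML on stdin, writes it patched
  sed "s|CHE_PRIVATE_CLIENT_SECRET|$1|"
}

# fill_secret "<secret>" < /tmp/che-patch.yaml > /tmp/che-patch-filled.yaml
echo "oAuthSecret: CHE_PRIVATE_CLIENT_SECRET" | fill_secret "s3cr3t"
```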
- Create the `eclipse-che` namespace:

```shell
kubectl create namespace eclipse-che
```
- Copy the Keycloak CA certificate into the `eclipse-che` namespace:

```shell
kubectl create configmap keycloak-certs \
  --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
  --namespace eclipse-che

kubectl label configmap keycloak-certs \
  app.kubernetes.io/part-of=che.eclipse.org \
  app.kubernetes.io/component=ca-bundle \
  --namespace eclipse-che
```
- Deploy Che:

```shell
chectl server:deploy \
  --platform k8s \
  --domain $DOMAIN_NAME \
  --che-operator-cr-patch-yaml /tmp/che-patch.yaml
```
Verification

- Verify that all pods are in the running state:

```shell
kubectl get pods --all-namespaces
```
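To turn that listing into a quick pass/fail check, you can count the pods that are not yet Running or Completed. A sketch, assuming the default kubectl column layout (`not_ready` is an illustrative helper, not part of the official steps):

```shell
# Sketch: count pods whose STATUS column (the 4th with --all-namespaces) is
# neither Running nor Completed. not_ready is an illustrative helper.
not_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# kubectl get pods --all-namespaces | not_ready   # 0 means everything is up
printf 'NAMESPACE NAME READY STATUS RESTARTS AGE\neclipse-che che-0 1/1 Running 0 5m\n' | not_ready
```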
- Verify the Che instance status:

```shell
chectl server:status
```
- Navigate to the Che cluster instance:

```shell
chectl dashboard:open
```
- Log in to the Che instance with username `che` and password `che`.