GitLab Error

$ kubectl get pods
NAME                                             READY   STATUS             RESTARTS      AGE
gitlab-gitaly-0                                  1/1     Running            0             17h
gitlab-gitlab-exporter-768777d547-mhmjg          1/1     Running            0             32h
gitlab-gitlab-runner-94df89d55-k2hh6             0/1     ImagePullBackOff   0             11h
gitlab-gitlab-shell-74747bd668-ct4f9             1/1     Running            0             18h
gitlab-gitlab-shell-74747bd668-mhfqk             1/1     Running            0             10h
gitlab-issuer-1-q4mgp                            0/1     Completed          0             6h39m
gitlab-kas-9c5cd4b46-rgtp5                       1/1     Running            0             16h
gitlab-kas-9c5cd4b46-vrvd9                       1/1     Running            0             11h
gitlab-nginx-ingress-controller-874b4674-2rrn2   0/1     ImagePullBackOff   0             7h7m
gitlab-nginx-ingress-controller-874b4674-6nh6g   0/1     ImagePullBackOff   0             7h7m
gitlab-registry-5fbd86d77c-v7ps6                 1/1     Running            0             11h
gitlab-registry-5fbd86d77c-x8c5j                 1/1     Running            0             32h
gitlab-shared-secrets-1-ckf-vjbwt                0/1     Completed          0             7h29m
gitlab-shared-secrets-1-cvj-qwnhv                0/1     Completed          0             7h29m
gitlab-shared-secrets-1-ir1-7lfn6                0/1     Completed          0             7h29m
gitlab-shared-secrets-1-nbe-f5rwk                0/1     Completed          0             7h29m
gitlab-sidekiq-all-in-1-v2-5776d4d95-jjt2d       0/1     Init:2/3           9 (55m ago)   10h
gitlab-sidekiq-all-in-1-v2-6cb657dfbf-pgcd9      0/1     Init:0/3           0             11h
gitlab-toolbox-8994fb94b-c9kg2                   1/1     Running            0             11h
gitlab-webservice-default-577c7b7f85-2p66v       0/2     Init:0/3           0             24m
gitlab-webservice-default-577c7b7f85-g4v7g       0/2     Init:0/3           0             24m
gitlab-webservice-default-5db576c9-84wrr         0/2     Init:2/3           0             24m
$ kubectl describe pod gitlab-webservice-default-5db576c9-84wrr
Name:         gitlab-webservice-default-5db576c9-84wrr
Namespace:    default
Priority:     0
Node:         ip-10-33-28-93.ap-southeast-1.compute.internal/10.33.28.93
Start Time:   Thu, 09 Mar 2023 00:47:35 +0530
Labels:       app=webservice
chart=webservice-6.7.3
gitlab.com/webservice-name=default
heritage=Helm
pod-template-hash=5db576c9
release=gitlab
Annotations:  checksum/config: 0623e51eaf412de358d23d12b40cd33ee794c40362489545cea01a56532b690b
cluster-autoscaler.kubernetes.io/safe-to-evict: true
gitlab.com/prometheus_path: /metrics
gitlab.com/prometheus_port: 8083
gitlab.com/prometheus_scrape: true
iam.amazonaws.com/role: tcb-gitlab-eks-role
kubernetes.io/psp: eks.privileged
prometheus.io/path: /metrics
prometheus.io/port: 8083
prometheus.io/scrape: true
Status:       Pending
IP:           172.169.46.147
IPs:
IP:           172.169.46.147
Controlled By:  ReplicaSet/gitlab-webservice-default-5db576c9
Init Containers:
certificates:
Container ID:   containerd://43ae901f4a55fd9fb3ab345a31e2de260c050ab797cfda4d5379762c952394ca
Image:          dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/alpine-certificates:20191127-r2@sha256:367d437d024d7647432d67fb2442e3e5723af5930bad77d3535f4f8f4f8630d9
Image ID:       dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/alpine-certificates@sha256:367d437d024d7647432d67fb2442e3e5723af5930bad77d3535f4f8f4f8630d9
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Completed
Exit Code:    0
Started:      Thu, 09 Mar 2023 00:47:36 +0530
Finished:     Thu, 09 Mar 2023 00:47:36 +0530
Ready:          True
Restart Count:  0
Requests:
cpu:        50m
Environment:  <none>
Mounts:
/etc/pki/ca-trust/extracted/pem from etc-pki-ca-trust-extracted-pem (rw)
/etc/ssl/certs from etc-ssl-certs (rw)
configure:
Container ID:  containerd://f96144101f4b8251e755004a24ea7b7f279fcaf4c2bd5090005712b358f0861e
Image:         dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/cloud-native/mirror/images/busybox:latest
Image ID:      dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/cloud-native/mirror/images/busybox@sha256:6bdd92bf5240be1b5f3bf71324f5e371fe59f0e153b27fa1f1620f78ba16963c
Port:          <none>
Host Port:     <none>
Command:
sh
Args:
-c
sh -x /config-webservice/configure ; sh -x /config-workhorse/configure ; mkdir -p -m 3770 /tmp/gitlab
State:          Terminated
Reason:       Completed
Exit Code:    0
Started:      Thu, 09 Mar 2023 00:47:41 +0530
Finished:     Thu, 09 Mar 2023 00:47:41 +0530
Ready:          True
Restart Count:  0
Requests:
cpu:        50m
Environment:  <none>
Mounts:
/config-webservice from webservice-config (ro)
/config-workhorse from workhorse-config (ro)
/init-config from init-webservice-secrets (ro)
/init-secrets from webservice-secrets (rw)
/init-secrets-workhorse from workhorse-secrets (rw)
/tmp from shared-tmp (rw)
dependencies:
Container ID:  containerd://c8595a856d07db35cd20c0718c41a7f174135d34af43707df1e377748db2a240
Image:         dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/gitlab-webservice-ee:v15.7.3
Image ID:      dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/gitlab-webservice-ee@sha256:d3e95851b137d5867254b842469dce2c27d272a40b0de9c16c03fe911750e3b2
Port:          <none>
Host Port:     <none>
Args:
/scripts/wait-for-deps
State:          Running
Started:      Thu, 09 Mar 2023 00:47:42 +0530
Ready:          False
Restart Count:  0
Requests:
cpu:  50m
Environment:
GITALY_FEATURE_DEFAULT_ON:         1
CONFIG_TEMPLATE_DIRECTORY:         /var/opt/gitlab/templates
CONFIG_DIRECTORY:                  /srv/gitlab/config
WORKHORSE_ARCHIVE_CACHE_DISABLED:  1
ENABLE_BOOTSNAP:                   1
Mounts:
/etc/gitlab from webservice-secrets (ro)
/etc/pki/ca-trust/extracted/pem from etc-pki-ca-trust-extracted-pem (ro)
/etc/ssl/certs/ from etc-ssl-certs (ro)
/srv/gitlab/config/secrets.yml from webservice-secrets (ro,path="rails-secrets/secrets.yml")
/var/opt/gitlab/templates from webservice-config (rw)
Containers:
webservice:
Container ID:
Image:          dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/gitlab-webservice-ee:v15.7.3
Image ID:
Ports:          8080/TCP, 8083/TCP
Host Ports:     0/TCP, 0/TCP
State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Requests:
cpu:      300m
memory:   2500M
Liveness:   http-get http://:8080/-/liveness delay=20s timeout=30s period=60s #success=1 #failure=3
Readiness:  http-get http://:8080/-/readiness delay=0s timeout=2s period=5s #success=1 #failure=2
Environment:
GITLAB_WEBSERVER:                  puma
TMPDIR:                            /tmp/gitlab
GITALY_FEATURE_DEFAULT_ON:         1
CONFIG_TEMPLATE_DIRECTORY:         /var/opt/gitlab/templates
CONFIG_DIRECTORY:                  /srv/gitlab/config
prometheus_multiproc_dir:          /metrics
ENABLE_BOOTSNAP:                   1
WORKER_PROCESSES:                  2
WORKER_TIMEOUT:                    60
INTERNAL_PORT:                     8080
PUMA_THREADS_MIN:                  4
PUMA_THREADS_MAX:                  4
PUMA_WORKER_MAX_MEMORY:
DISABLE_PUMA_WORKER_KILLER:        true
SHUTDOWN_BLACKOUT_SECONDS:         10
WORKHORSE_ARCHIVE_CACHE_DISABLED:  true
Mounts:
/etc/gitlab from webservice-secrets (ro)
/etc/krb5.conf from webservice-config (rw,path="krb5.conf")
/etc/pki/ca-trust/extracted/pem from etc-pki-ca-trust-extracted-pem (ro)
/etc/ssl/certs/ from etc-ssl-certs (ro)
/metrics from webservice-metrics (rw)
/srv/gitlab/INSTALLATION_TYPE from webservice-config (rw,path="installation_type")
/srv/gitlab/config/initializers/smtp_settings.rb from webservice-config (rw,path="smtp_settings.rb")
/srv/gitlab/config/secrets.yml from webservice-secrets (rw,path="rails-secrets/secrets.yml")
/srv/gitlab/public/uploads/tmp from shared-upload-directory (rw)
/tmp from shared-tmp (rw)
/var/opt/gitlab/templates from webservice-config (rw)
gitlab-workhorse:
Container ID:
Image:          dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/gitlab-workhorse-ee:v15.7.3
Image ID:
Port:           8181/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Requests:
cpu:      100m
memory:   100M
Liveness:   exec [/scripts/healthcheck] delay=20s timeout=30s period=60s #success=1 #failure=3
Readiness:  exec [/scripts/healthcheck] delay=0s timeout=2s period=10s #success=1 #failure=3
Environment:
TMPDIR:                         /tmp/gitlab
GITLAB_WORKHORSE_AUTH_BACKEND:  http://localhost:8080
GITLAB_WORKHORSE_EXTRA_ARGS:
GITLAB_WORKHORSE_LISTEN_PORT:   8181
GITLAB_WORKHORSE_LOG_FORMAT:    json
CONFIG_TEMPLATE_DIRECTORY:      /var/opt/gitlab/templates
CONFIG_DIRECTORY:               /srv/gitlab/config
Mounts:
/etc/gitlab from workhorse-secrets (ro)
/etc/pki/ca-trust/extracted/pem from etc-pki-ca-trust-extracted-pem (ro)
/etc/ssl/certs/ from etc-ssl-certs (ro)
/srv/gitlab/public/uploads/tmp from shared-upload-directory (rw)
/tmp from shared-tmp (rw)
/var/opt/gitlab/templates from workhorse-config (rw)
Conditions:
Type              Status
Initialized       False
Ready             False
ContainersReady   False
PodScheduled      True
Volumes:
shared-tmp:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:  <unset>
webservice-metrics:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     Memory
SizeLimit:  <unset>
webservice-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      gitlab-webservice
Optional:  false
workhorse-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      gitlab-workhorse-default
Optional:  false
init-webservice-secrets:
Type:                Projected (a volume that contains injected data from multiple sources)
SecretName:          gitlab-rails-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-gitlab-shell-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-gitaly-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-postgresql-password
SecretOptionalName:  <nil>
SecretName:          gitlab-registry-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-registry-notification
SecretOptionalName:  <nil>
SecretName:          gitlab-gitlab-workhorse-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-gitlab-kas-secret
SecretOptionalName:  <nil>
SecretName:          gitlab-gitlab-suggested-reviewers
SecretOptionalName:  <nil>
SecretName:          gitlab-rails-storage
SecretOptionalName:  <nil>
SecretName:          gitlab-rails-storage
SecretOptionalName:  <nil>
SecretName:          gitlab-rails-storage
SecretOptionalName:  <nil>
SecretName:          gitlab-rails-storage
SecretOptionalName:  <nil>
webservice-secrets:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     Memory
SizeLimit:  <unset>
workhorse-secrets:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     Memory
SizeLimit:  <unset>
shared-upload-directory:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:  <unset>
etc-ssl-certs:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     Memory
SizeLimit:  <unset>
etc-pki-ca-trust-extracted-pem:
Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:      Memory
SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  27m   default-scheduler  Successfully assigned default/gitlab-webservice-default-5db576c9-84wrr to ip-10-33-28-93.ap-southeast-1.compute.internal
Normal  Pulled     27m   kubelet            Container image "dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/alpine-certificates:20191127-r2@sha256:367d437d024d7647432d67fb2442e3e5723af5930bad77d3535f4f8f4f8630d9" already present on machine
Normal  Created    27m   kubelet            Created container certificates
Normal  Started    27m   kubelet            Started container certificates
Normal  Pulling    27m   kubelet            Pulling image "dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/cloud-native/mirror/images/busybox:latest"
Normal  Pulled     27m   kubelet            Successfully pulled image "dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/cloud-native/mirror/images/busybox:latest" in 4.22685282s
Normal  Created    27m   kubelet            Created container configure
Normal  Started    27m   kubelet            Started container configure
Normal  Pulled     27m   kubelet            Container image "dockerhXXXXXXXXXXXXXXXXXXX/gitlab-org/build/cng/gitlab-webservice-ee:v15.7.3" already present on machine
Normal  Created    27m   kubelet            Created container dependencies
Normal  Started    27m   kubelet            Started container dependencies
$ kubectl logs gitlab-webservice-default-5db576c9-84wrr -c dependencies
Begin parsing .erb templates from /var/opt/gitlab/templates
Writing /srv/gitlab/config/cable.yml
Writing /srv/gitlab/config/database.yml
Writing /srv/gitlab/config/gitlab.yml
Writing /srv/gitlab/config/resque.yml
Begin parsing .tpl templates from /var/opt/gitlab/templates
Copying other config files found in /var/opt/gitlab/templates to /srv/gitlab/config
Copying smtp_settings.rb into /srv/gitlab/config
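
The webservice pods are stuck because the dependencies init container (the /scripts/wait-for-deps entrypoint shown above) blocks until the database and other backing services are ready. To see which init container a pod is waiting on and follow its progress (pod name reused from the listing above):

kubectl get pod gitlab-webservice-default-5db576c9-84wrr --template '{{.status.initContainerStatuses}}'
kubectl logs gitlab-webservice-default-5db576c9-84wrr -c dependencies -f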

Kubernetes EKS Error: Readiness probe failed: /app/grpc-health-probe -addr=:50051

Error

$ kubectl describe pod aws-node-8f2tp -n=kube-system
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Pulled     2m59s                kubelet            Container image "602401143452.dkr.ecr.ap-southeast-1.amazonaws.com/amazon-k8s-cni-init:v1.12.2-eksbuild.1" already present on machine
Normal   Created    2m59s                kubelet            Created container aws-vpc-cni-init
Normal   Started    2m59s                kubelet            Started container aws-vpc-cni-init
Normal   Scheduled  2m59s                default-scheduler  Successfully assigned kube-system/aws-node-8f2tp to ip-10-33-28-136.ap-southeast-1.compute.internal
Normal   Started    2m58s                kubelet            Started container aws-node
Warning  Unhealthy  2m52s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:24:51.160Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m47s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:24:56.225Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m42s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:01.297Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m34s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:09.884Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m24s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:19.886Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m14s                kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:29.873Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  2m4s                 kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:39.880Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  114s                 kubelet            Readiness probe failed: {"level":"info","ts":"2023-02-28T03:25:49.876Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Warning  Unhealthy  104s                 kubelet            Liveness probe failed: {"level":"info","ts":"2023-02-28T03:25:59.877Z","caller":"/root/sdk/go1.19.2/src/runtime/proc.go:250","msg":"timeout: failed to connect service \":50051\" within 5s"}
Normal   Created    84s (x2 over 2m58s)  kubelet            Created container aws-node
Normal   Pulled     84s (x2 over 2m58s)  kubelet            Container image "602401143452.dkr.ecr.ap-southeast-1.amazonaws.com/amazon-k8s-cni:v1.12.2-eksbuild.1" already present on machine
Warning  Unhealthy  84s (x7 over 104s)   kubelet            (combined from similar events): Readiness probe errored: rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task d5365d25417ee26de951d45fa2e1bb04007b61b98c3bbb54a149334131568fdc not found: not found
Normal   Killing    84s                  kubelet            Container aws-node failed liveness probe, will be restarted
Ready:          False
Restart Count:  1
Requests:
cpu:      25m
Liveness:   exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=60s timeout=10s period=10s #success=1 #failure=3
Readiness:  exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=1s timeout=10s period=10s #success=1 #failure=3

Solution

C:\Users\tdg.rajesh>kubectl exec -it aws-node-8f2tp -n kube-system -- /bin/bash
Defaulted container "aws-node" out of: aws-node, aws-vpc-cni-init (init)
error: unable to upgrade connection: container not found ("aws-node")
C:\Users\tdg.rajesh>kubectl exec -it aws-node-bdd52 -n kube-system -- /bin/bash cat /host/var/log/aws-routed-eni/ipamd.log
Defaulted container "aws-node" out of: aws-node, aws-vpc-cni-init (init)
error: unable to upgrade connection: container not found ("aws-node")
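
Because the failing liveness probe keeps restarting the aws-node container, kubectl exec cannot attach (and the second command above passes cat as a script argument to /bin/bash, which would not run as intended even if the container were up). A sketch of an alternative, reading the CNI log at its default path directly on the node (SSH or SSM access to the node assumed):

kubectl get pod aws-node-8f2tp -n kube-system -o wide   # find which node the pod runs on
# then, on that node:
sudo tail -n 100 /var/log/aws-routed-eni/ipamd.log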

SSL Error: no alternative certificate subject name matches target host name

Error

ERROR: cURL error 51: SSL: no alternative certificate subject name matches target host name 'www.myhospitalnow.com' (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) {"exception":"[object] (GuzzleHttp\\Exception\\RequestException(code: 0): cURL error 51: SSL: no alternative certificate subject name matches target host name 'myhospitalnow.com' (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) at /opt/lampp/htdocs/myhospitalnow/mhn-admin-ms/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php:201

Solution

Check whether the certificate was generated for www.myhospitalnow.com or for myhospitalnow.com, and make sure the Apache SSL configuration serves the certificate that matches the host name being requested.
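
To verify which names the served certificate actually covers, a standard openssl check against the host from the error (no assumptions beyond the host name):

echo | openssl s_client -connect www.myhospitalnow.com:443 -servername www.myhospitalnow.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'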

AWS Error: UnauthorizedOperation: You are not authorized to perform this operation

Error

AWS Error: UnauthorizedOperation: You are not authorized to perform this operation
Error: creating EC2 Instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 7L60g1jFi-K2ybPOg0W9t7QBb6yHSS-Lh9xXDHFDNfLXsY9YBajilGnpFpcYkSclbmvmoGyVOEPU0eL_QCyNVlYcKqWiStTDaMx8MZc7kKacbROSqmuSznJTv8POdMbUdLMLTEOYXq8q3dybheadY8umCIbEHgsCJGmeJ2-TjwbSP_oH3giYk4i8_gi8Q6vS27iABInlcPropoHNZ-DsPVYqwNVuDYqFNFies_e_6JLdIEGiJHr55o2PANm5FHC0A14i7D_ELvNzUEiavcXi8tBkPT-X_99xKZASjkvNN3OVVeHzArnPDZA2_0BHa46pV2t171Jph2idjU2kkSD5OX1J8yOlyrzeqAoMQXoadh609TmcWGDUooph8xcw9uRaQ5el4IrGDzqEC39wTVO4D8z91A9rxHbbksA9XsuZlnKTrEdWvgeBnFldcmEDzFmBLLXwuVukWiCQiauv3peLem0qWCZZlHud5Jz8Q3InQCmVn9DhTmdsnR1_9x_-SYn3nfYmNEzXTst9pLW8Pm-xTWs4gaaj24FVmuOZKY9_Bl3KBPXihQFam-x_fD3pyJ4-b-6LkcH4CQ89e657dDecvV-t55B4b_ftq_2QFwTyI1AHm9mcb0Ld3LCo9Xlxk3DJN4ZL7x_sXYz9xCgseQEIsqEy8FckbRlUbarrt8ahjfKCiPSZl8RhPJ7iaqYSOCjjXkA0yaeAT8tnIOhB1sTTMK-OPWo4L6IPz86gClmN89oFsNgiH1K6aO9uf9f45knDrs0-qPlL7zPsJmVe8BO06Tcd8fCjfZIs9jy9PBws5q2GTOUrKvdyI7b0FfRsgBwPY7gXhiMXPn9MpKvoPr8_yziQ_C9Tq9VrMzCXyx7SinpKR2Q1uiNWyyNjN2H-acKd9U47XW762t6MmxwjFqP_ogekoG8UwtPiv0SWdSu11pskPvrkG5lu0wcZhfyBupI3bd6uK0F-gSv-5Z5sC22gh1-Lu0QG4ZhBErWpctUp74mNOFcoCnz_BzI-uPA
│       status code: 403, request id: 227b4ac9-db3d-497a-b085-bb34e794d7d1
│
│   with aws_instance.string,
│   on string.tf line 8, in resource "aws_instance" "string":
│    8: resource "aws_instance" "string" {

Solution
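
The encoded authorization failure message in the error above can be decoded to reveal exactly which action and resource were denied; this requires the sts:DecodeAuthorizationMessage permission:

aws sts decode-authorization-message --encoded-message '<encoded-message-from-the-error>' --query DecodedMessage --output text

The decoded output names the missing permission (here, typically ec2:RunInstances), which can then be granted to the IAM user or role running Terraform.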

Kubernetes EKS GitLab Notes


helm install gitlab gitlab/gitlab --set global.hosts.domain=gitlab.digitaldevops.in --set certmanager.install=false --set global.ingress.configureCertmanager=false
helm install gitlab gitlab/gitlab --set global.hosts.domain=gitlab.digitaldevops.in --set certmanager-issuer.email=devops@rajeshkumar.xyz
helm install gitlab gitlab/gitlab \
--set certmanager.install=false \
--set global.ingress.configureCertmanager=false \
--set gitlab-runner.install=false
helm install gitlab gitlab/gitlab \
--set global.hosts.domain=gitlab.site.com \
--set certmanager.install=false \
--set global.ingress.configureCertmanager=false 
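
These helm install variants assume the GitLab chart repository is already configured; if not, it can be added first (repo URL per the GitLab charts documentation):

helm repo add gitlab https://charts.gitlab.io/
helm repo update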
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: CUSTOM_STORAGE_CLASS_NAME
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
parameters:
  type: gp2
  zone: '*AWS_ZONE*'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-027da2b8974bf4726
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID:
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID:
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv4
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-01bb15c5ebd8cf0fe
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv5
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-063ce825bcd5f2bfc
    fsType: ext4
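
A minimal PVC sketch that would bind to one of the pre-created PVs above (the claim name is illustrative; the GitLab chart normally creates claims such as data-gitlab-postgresql-0 itself):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim    # hypothetical name, for illustration only
spec:
  storageClassName: ""   # empty string disables dynamic provisioning
  volumeName: pv1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi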
gitlab-postgresql
data-gitlab-postgresql-0

https://docs.gitlab.com/ee/install/requirements.html

oidc_id=$(aws eks describe-cluster --name eks-cluster1 --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

Pod and Persistent volume with existing EBS in EKS

ubuntu@ip-172-31-16-250:~/rajesh$ eksctl utils associate-iam-oidc-provider --cluster eks-cluster1 --approve
2023-02-09 04:22:52 [ℹ] will create IAM Open ID Connect provider for cluster "eks-cluster1" in "ap-northeast-1"
2023-02-09 04:22:52 [✔] created IAM Open ID Connect provider for cluster "eks-cluster1" in "ap-northeast-1"

What is OIDC
What is aws-iam-authenticator
aws-iam-authenticator, EKS, EC2, VPC, ROUTE53

List of prerequisites before setting up an EKS cluster

Before setting up an Amazon Elastic Kubernetes Service (EKS) cluster, there are several prerequisites that must be met:

  • AWS Account: You need an AWS account to access AWS services, including EKS.
  • AWS CLI and AWS IAM Authenticator: You need the AWS CLI installed and configured on your machine to create and manage an EKS cluster, and the AWS IAM Authenticator for Kubernetes to manage authentication between your local machine and the cluster.
  • VPC and Subnets: You need a Virtual Private Cloud (VPC) and subnets in which to run your EKS cluster.
  • Security Groups: You need security groups that control access to the nodes in your EKS cluster and to the cluster itself.
  • IAM Roles: You need IAM roles that allow the EKS control plane to manage the nodes in your cluster.
  • Kubernetes CLI (kubectl): You need kubectl installed on your local machine to manage the cluster.
  • AWS Resources: You may need additional AWS resources, such as an S3 bucket, to store configuration data for your cluster.

Kubernetes troubleshooting with volumes
(from https://stackoverflow.com/questions/72262623/kubernetes-pod-fails-with-unable-to-attach-or-mount-volumes)

The AWS EBS CSI controllers were running on nodes whose IAM role had insufficient permissions, which produced logs like the following (the full log, including the encoded authorization failure message, is reproduced in the "Unable to attach or mount volumes" section below):

$ kubectl logs deployment/ebs-csi-controller -n kube-system -c ebs-plugin
status code: 403, request id: f4bdbecb-40d5-4eeb-bcef-d0b734a94c2a
E0212 21:04:38.366854       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0b10c235246e76523" to node "i-0bceabf074ee5f7c7": UnauthorizedOperation: You are not authorized to perform this operation.
status code: 403, request id: c6f0488d-0a45-4e70-bb99-35c3635418a6

kubectl describe pvc data-gitlab-postgresql-0 -n <namespace>

Release the pre-created PVs so they can be claimed again:

kubectl patch pv pv1 -p '{"spec":{"claimRef": null}}'
kubectl patch pv pv2 -p '{"spec":{"claimRef": null}}'
kubectl patch pv pv3 -p '{"spec":{"claimRef": null}}'
kubectl patch pv pv4 -p '{"spec":{"claimRef": null}}'
kubectl patch pv pv5 -p '{"spec":{"claimRef": null}}'

Inspect the init containers of stuck pods:

kubectl get pod gitlab-sidekiq-all-in-1-v2-544b887df7-glbh7 --template '{{.status.initContainerStatuses}}'
kubectl get pod gitlab-webservice-default-64568bbf56-8mst6 --template '{{.status.initContainerStatuses}}'
kubectl get pod gitlab-webservice-default-64568bbf56-wkcrc --template '{{.status.initContainerStatuses}}'
kubectl logs gitlab-webservice-default-64568bbf56-8mst6 -c certificates
kubectl logs gitlab-webservice-default-64568bbf56-wkcrc -c certificates
kubectl get deployment -lapp=webservice -ojsonpath='{.items[0].spec.template.spec.initContainers[0].image}'

ubuntu@ip-172-31-16-250:~/rajesh$ kubectl logs gitlab-sidekiq-all-in-1-v2-544b887df7-fs8wz
Defaulted container "sidekiq" out of: sidekiq, certificates (init), configure (init), dependencies (init)
Error from server (BadRequest): container "sidekiq" in pod "gitlab-sidekiq-all-in-1-v2-544b887df7-fs8wz" is waiting to start: PodInitializing

Warning  FailedScheduling  3m15s  default-scheduler  0/2 nodes are available: 2 Too many pods. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

https://github.com/aws/karpenter/issues/1775

Make a storage class the default:

kubectl patch storageclass gitlab -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Default gp3 storage class (aws-ebs-csi-driver Helm values fragment, us-east-2):

annotations:
  storageclass.kubernetes.io/is-default-class: "true"
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  type: gp3
  fsType: ext4
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.ebs.csi.aws.com/zone
        values:
          - us-east-2a
          - us-east-2b
          - us-east-2c

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central-1a
          - us-central-1b

https://docs.gitlab.com/charts/installation/storage.html
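
Several of the volume failures collected in these notes come down to the EBS CSI driver lacking IAM permissions. A sketch of wiring the driver to a dedicated role via IRSA, following the AWS EBS CSI documentation (cluster name from the notes above; the role name is illustrative):

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster eks-cluster1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole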

Kubernetes EKS GitLab Database Issues

Error

ubuntu@ip-172-31-16-250:~/rajesh$ kubectl logs gitlab-postgresql-0
Defaulted container "gitlab-postgresql" out of: gitlab-postgresql, metrics
postgresql 00:13:21.83
postgresql 00:13:21.83 Welcome to the Bitnami postgresql container
postgresql 00:13:21.84 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 00:13:21.84 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 00:13:21.85
postgresql 00:13:21.87 INFO  ==> ** Starting PostgreSQL setup **
postgresql 00:13:21.91 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 00:13:21.92 INFO  ==> Loading custom pre-init scripts...
postgresql 00:13:21.93 INFO  ==> Loading user's custom files from /docker-entrypoint-preinitdb.d ...
postgresql 00:13:21.94 INFO  ==> Initializing PostgreSQL database...
postgresql 00:13:21.96 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 00:13:21.97 INFO  ==> Generating local authentication configuration
postgresql 00:13:23.46 INFO  ==> Starting PostgreSQL in background...
postgresql 00:13:23.70 INFO  ==> Changing password of postgres
postgresql 00:13:23.71 INFO  ==> Creating user gitlab
postgresql 00:13:23.73 INFO  ==> Granting access to "gitlab" to the database "gitlabhq_production"
postgresql 00:13:23.76 INFO  ==> Setting ownership for the 'public' schema database "gitlabhq_production" to "gitlab"
postgresql 00:13:23.79 INFO  ==> Configuring replication parameters
postgresql 00:13:23.84 INFO  ==> Configuring fsync
postgresql 00:13:23.88 INFO  ==> Loading custom scripts...
postgresql 00:13:23.89 INFO  ==> Loading user's custom files from /docker-entrypoint-initdb.d ...
postgresql 00:13:23.89 INFO  ==> Starting PostgreSQL in background...
CREATE EXTENSION
postgresql 00:13:24.07 INFO  ==> Enabling remote connections
postgresql 00:13:24.09 INFO  ==> Stopping PostgreSQL...
waiting for server to shut down.... done
server stopped
postgresql 00:13:24.21 INFO  ==> ** PostgreSQL setup finished! **
postgresql 00:13:24.26 INFO  ==> ** Starting PostgreSQL **
2023-02-13 00:13:24.294 GMT [1] LOG:  pgaudit extension initialized
2023-02-13 00:13:24.294 GMT [1] LOG:  starting PostgreSQL 12.7 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2023-02-13 00:13:24.295 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2023-02-13 00:13:24.295 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2023-02-13 00:13:24.301 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-02-13 00:13:24.328 GMT [171] LOG:  database system was shut down at 2023-02-13 00:13:24 GMT
2023-02-13 00:13:24.341 GMT [1] LOG:  database system is ready to accept connections
2023-02-13 00:13:41.037 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.037 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'audit_events')) LIMIT 1
2023-02-13 00:13:41.039 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.039 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'web_hook_logs')) LIMIT 1
2023-02-13 00:13:41.042 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.042 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'loose_foreign_keys_deleted_records')) LIMIT 1
2023-02-13 00:13:41.044 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.044 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'batched_background_migration_job_transition_logs')) LIMIT 1
2023-02-13 00:13:41.046 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.046 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'incident_management_pending_alert_escalations')) LIMIT 1
2023-02-13 00:13:41.048 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.048 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'incident_management_pending_issue_escalations')) LIMIT 1
2023-02-13 00:13:41.050 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.050 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'security_findings')) LIMIT 1
2023-02-13 00:13:41.051 GMT [214] ERROR:  relation "postgres_partitioned_tables" does not exist at character 85
2023-02-13 00:13:41.051 GMT [214] STATEMENT:  /*application:web,db_config_name:main*/ SELECT "postgres_partitioned_tables".* FROM "postgres_partitioned_tables" WHERE (identifier = concat(current_schema(), '.', 'verification_codes')) LIMIT 1
2023-02-13 00:14:14.950 GMT [284] ERROR:  duplicate key value violates unique constraint "index_shards_on_name"
2023-02-13 00:14:14.950 GMT [284] DETAIL:  Key (name)=(default) already exists.
2023-02-13 00:14:14.950 GMT [284] STATEMENT:  /*application:web,db_config_name:main*/ INSERT INTO "shards" ("name") VALUES ('default') RETURNING "id"
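
The "relation ... does not exist" errors above are usually transient: the Rails pods query these tables before the chart's migrations job has finished, and the duplicate-key INSERT into shards is typically a harmless retry. Migration progress can be checked from the toolbox pod (deployment name as in the pod listing earlier; gitlab-rake is available inside the toolbox image):

kubectl exec -it deploy/gitlab-toolbox -- gitlab-rake db:migrate:status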

Kubernetes EKS Error: Unable to attach or mount volumes

Error


Events:
Type     Reason       Age                    From               Message
----     ------       ----                   ----               -------
Normal   Scheduled    12m                    default-scheduler  Successfully assigned default/gitlab-postgresql-0 to ip-192-168-159-35.ap-northeast-1.compute.internal
Warning  FailedMount  10m                    kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-glj46 custom-init-scripts postgresql-password dshm data]: timed out waiting for the condition
Warning  FailedMount  6m13s (x2 over 8m28s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-glj46 custom-init-scripts postgresql-password dshm]: timed out waiting for the condition
Warning  FailedMount  98s (x2 over 3m55s)    kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[dshm data kube-api-access-glj46 custom-init-scripts postgresql-password]: timed out waiting for the condition
$ kubectl logs deployment/ebs-csi-controller -n kube-system -c ebs-plugin
status code: 403, request id: f4bdbecb-40d5-4eeb-bcef-d0b734a94c2a
E0212 21:04:38.366854       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0b10c235246e76523" to node "i-0bceabf074ee5f7c7": could not attach volume "vol-0b10c235246e76523" to node "i-0bceabf074ee5f7c7": UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 1rf720y-vwEYGFNwphni8ZXNr42fkuH3Vx7GWJgExmOd58-tN0S4nkAG6RHWPjHCl_ODo4ripUzogFRKRyPbFOROFCzl7uyTgs3RcWrVVWX0Ug6scvyKRvO7SPMhXsWH0HpDPXWJhqo1_9hJzgP13hE1ecfqCsN204zQQNYziNf3dmELgHnW24XQMdDEF_TOzY0u82xBRJUIVvb7W-w7E1PWbYCW0pT_D8AuEIeoRY-fXfmGZb11-SqY35GB1wFBt-06s0tqphQbthMuRLT5ios33FcyJE3PqI2o6FHF09CGnbFcoxCR1BaDKZ7RAIxM_qHP87JuOSZvQxk3lYa45rlqhj3p0dI4ByTVO1sNX6EJFLkffAnLa0-GSbRhWubUlj1bPQ_UqYnkK5iII2h4IBIUvrPu0vHR0tAkdb2BIM1r7vl1vx9KPFUfjXMhu_KA7thujWYwb7_9N3pj-VC4nn8SL5gmtWqB9NdUziSLh76WlA9xmuB59fJOoFVFdsvmawMxFM3rKCrmHFJUiot9-ZcrC9adZe6wPu4CVqA_Coqm_IIuPc6haySr6P_EylT4k51Bo08eUWCaSQilRFYwEh0GlN4cqOSaiEJ6hGhRg1ID_Qgxt1Iz3kM00hlRBPO3JIYzQY3k-24vvhBZShUmO8fa2MkAIhBArdSwTVnhb0kt3R-unLNkyguWJ8A
status code: 403, request id: c6f0488d-0a45-4e70-bb99-35c3635418a6

Solution

I figured out what my issue was: my AWS EBS CSI controllers were running on nodes with IAM roles that had insufficient permissions, which produced the log messages shown above.

So I had to:

  • add the AmazonEBSCSIDriverPolicy managed policy to the IAM role
  • adjust my Helm chart values and include the controller.nodeSelector.ops="true" option to
    make the controller run on the nodes with that IAM role.

So my aws-ebs-csi-driver Helm chart values:

# https://github.com/kubernetes-sigs/aws-ebs-csi-driver
node:
  # tolerateAllTaints: true
  tolerations:
    - effect: NoSchedule
      operator: Exists
controller:
  nodeSelector:
    ops: "true"
storageClasses:
  - allowVolumeExpansion: true
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.ebs.csi.aws.com/zone
            values:
              - us-west-2a
              - us-west-2b
              - us-west-2c
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
    name: gp3
    parameters:
      csi.storage.k8s.io/fstype: ext4
      type: gp3
    provisioner: ebs.csi.aws.com
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
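
To apply these values (chart repository and release name per the aws-ebs-csi-driver project README):

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system -f values.yaml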

Reference

https://stackoverflow.com/questions/72262623/kubernetes-pod-fails-with-unable-to-attach-or-mount-volumes

Kubernetes EKS Error: attachdetach-controller AttachVolume.Attach failed for volume

Error

 Warning  FailedAttachVolume  3m36s (x324 over 21h)  attachdetach-controller  AttachVolume.Attach failed for volume "pv1" : timed out waiting for external-attacher of ebs.csi.aws.com CSI driver to attach volume vol-0b10c235246e76523

Solution

Enable Amazon EBS CSI Driver
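
One way to do that is as an EKS managed add-on (cluster name as used elsewhere in these notes; the account ID and service-account role, created with eksctl above, are placeholders):

aws eks create-addon \
  --cluster-name eks-cluster1 \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::<account-id>:role/AmazonEKS_EBS_CSI_DriverRole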

Kubernetes EKS Error: UnauthorizedOperation: You are not authorized to perform this operation

Error

E0210 02:24:14.855368       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0b10c235246e76523" to node "i-0bceabf074ee5f7c7": could not attach volume "vol-0b10c235246e76523" to node "i-0bceabf074ee5f7c7": UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: bFaNJKlYxeXP9SlR_9-UMSEieUnzjW-vLPMtSHm5z-GHuNq-0DSHJqyhmR_Q2XcpuStriPlRilXQmz2jU81DY4x-kBt7_PmZAU00jMwQ9iNPydIFY2TgP_dThewzg2XMAdH4gpbcsPmetiXgPEz4BJi4F-3xHubv23fM1UkUG0HWk3qjWHap6KibT6LWt4ZLV6-Vlid4RmracQx9jnzaYe0de9ob_JEhwhtWgBpcnyC6AUyez12Zp2DvKLn61BS7r7OfuimnN38vK3GKKVG_96_SklWqJnGSuBUMbaCi5Tn2xBqQ4nJTvgIingNSv7as777ruU8tdOdm3xeiI40wX8LFI-PacjRgDHWEmHKUH76nAbId7r_VM-Ia3S8wPgdclg939T7uARLS87Jv3CB0j0P_39uxDVevmgOoamSyV4ZdmP4F2MZVR_ta2uf4GsMYZoQ99vTHZkxDVr_eF05HG85No08oi4lxU6J4cTkp44IzWUiwrv_M7Gpk7jKa2Rg-bVDfhcrb2VYVavW0ZtBIOBD3mpwAj7tn-SAfCZhqMt6iJOLXNSr_c_1enK9SkdIaL9rIOiXGvoWvuyqW6skLv5kJfcEdo3fqYAY3LYN7HU-ScOpKpJGMojCgWwmq4ER8ElQQdSWuvwXH6dEX1X8YokELZAq03Ficj-uae0sT65ppLsw1CkDuitQCgXHR
status code: 403, request id: 6b547416-cc5b-447b-9241-09fe32944100
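
Solution

Same root cause as the attach failure above: the role used by the EBS CSI controller (or the node instance role) lacks the EC2 volume permissions. Attaching the managed policy is a sketch fix (role name assumed):

aws iam attach-role-policy \
  --role-name <ebs-csi-or-node-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy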

GitHub Actions: repository element was not specified in the POM inside distributionManagement


Error

Error: Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project email-picker: Deployment failed: repository element was not specified in the POM inside distributionManagement element or in -DaltDeploymentRepository=id::layout::url parameter -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Error: Process completed with exit code 1.

Solution

# Step 1 - make sure you have the following entry in pom.xml
<distributionManagement>
  <repository>
    <id>github</id>
    <name>Releases</name>
    <url>https://maven.pkg.github.com/org1/repo1</url>
  </repository>
</distributionManagement>
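
# Step 2 - a sketch of the matching deploy step in the GitHub Actions workflow, assuming
# GitHub Packages and the built-in GITHUB_TOKEN (actions/setup-java generates a settings.xml
# whose default server id is "github", matching the <id> above):
- uses: actions/setup-java@v3
  with:
    java-version: '17'
    distribution: 'temurin'
- name: Publish to GitHub Packages
  run: mvn --batch-mode deploy
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}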