Kubernetes EKS Error: ConfigurationConflict Conflicts found when trying to apply

Error

ConfigurationConflict: Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts:
ClusterRole.rbac.authorization.k8s.io ebs-external-attacher-role - .metadata.labels.app.kubernetes.io/managed-by
ClusterRole.rbac.authorization.k8s.io ebs-external-attacher-role - .metadata.labels.app.kubernetes.io/version
ClusterRole.rbac.authorization.k8s.io ebs-csi-node-role - .rules
ClusterRole.rbac.authorization.k8s.io ebs-csi-node-role - .metadata.labels.app.kubernetes.io/managed-by
ClusterRole.rbac.authorization.k8s.io ebs-csi-node-role - .metadata.labels.app.kubernetes.io/version
ClusterRole.rbac.authorization.k8s.io ebs-external-provisioner-role - .rules
ClusterRole.rbac.authorization.k8s.io ebs-external-provisioner-role - .metadata.labels.app.kubernetes.io/managed-by
ClusterRole.rbac.authorization.k8s.io ebs-external-provisioner-role - .metadata.labels.app.kubernetes.io/version
ClusterRole.rbac.authorization.k8s.io ebs-external-resizer-role - .metadata.labels.app.kubernetes.io/managed-by
ClusterRole.rbac.authorization.k8s.io ebs-external-resizer-role - .metadata.labels.app.kubernetes.io/version
ClusterRole.rbac.authorization.k8s.io ebs-external-snapshotter-role - .rules
ClusterRole.rbac.authorization.k8s.io ebs-external-snapshotter-role - .metadata.labels.app.kubernetes.io/managed-by
ClusterRole.rbac.authorization.k8s.io ebs-external-snapshotter-role - .metadata.labels.app.kubernetes.io/version
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-attacher-binding - .subjects
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-attacher-binding - .metadata.labels.app.kubernetes.io/managed-by
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-attacher-binding - .metadata.labels.app.kubernetes.io/version
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-node-getter-binding - .subjects
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-node-getter-binding - .metadata.labels.app.kubernetes.io/managed-by
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-node-getter-binding - .metadata.labels.app.kubernetes.io/version
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-provisioner-binding - .subjects
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-provisioner-binding - .metadata.labels.app.kubernetes.io/managed-by
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-provisioner-binding - .metadata.labels.app.kubernetes.io/version
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-resizer-binding - .subjects
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-resizer-binding - .metadata.labels.app.kubernetes.io/managed-by
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-resizer-binding - .metadata.labels.app.kubernetes.io/version
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-snapshotter-binding - .subjects
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-snapshotter-binding - .metadata.labels.app.kubernetes.io/managed-by
ClusterRoleBinding.rbac.authorization.k8s.io ebs-csi-snapshotter-binding - .metadata.labels.app.kubernetes.io/version
CSIDriver.storage.k8s.io ebs.csi.aws.com - .metadata.labels.app.kubernetes.io/managed-by
CSIDriver.storage.k8s.io ebs.csi.aws.com - .metadata.labels.app.kubernetes.io/version

Solution

The provided log indicates a series of configuration conflicts when trying to apply certain changes to the Kubernetes cluster. Specifically, the conflicts seem to be centered around AWS EBS (Elastic Block Store) CSI (Container Storage Interface) driver resources.

Here’s a brief breakdown:

  1. Nature of Conflicts: The conflicts you are encountering seem to be related to metadata labels (app.kubernetes.io/managed-by and app.kubernetes.io/version) and, in some cases, to specific rules or subjects of the Kubernetes resources.
  2. Reason for Conflicts: These conflicts typically arise when:
    • The resources are already defined in the cluster with certain properties, and the definitions you are trying to apply conflict with the existing state.
    • Two or more processes or users are trying to manage or modify the same set of resources, leading to divergent configurations.
  3. How to Address the Conflicts:
    a. Inspect Current State: Before making any changes, understand the current state of the resources. You can describe or get the resources to check their current configurations (see the commands sketched after this list): kubectl describe clusterrole ebs-external-attacher-role
    b. Determine the Desired State: Figure out which version or configuration is the desired one. If the existing state in the cluster is the desired state, update your local configurations to match. Alternatively, if your local configurations are correct, you'll need to change the resources in the cluster.
    c. Resolve Conflicts Manually: You may have to edit the resources and resolve the conflicts by hand. For example, if certain labels or rules differ between the cluster's state and your configurations, decide which one should be retained and update accordingly.
    d. Backup: Before making sweeping changes, take backups of your current configurations, especially if they are production-related. This lets you revert if something goes wrong.
    e. Reapply: Once the conflicts are resolved, try to apply the changes again.
  4. Avoid Future Conflicts: To prevent such conflicts in the future:
    • Ensure clear ownership of resources. If possible, have specific teams or individuals responsible for certain resources to avoid overlaps.
    • Use a GitOps workflow where infrastructure changes are made through version-controlled repositories. This provides an audit trail and a clear history of changes.
    • Regularly synchronize your local configurations with the cluster’s state.
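
As a starting point for step (a), a few read-only commands show what is actually in the cluster and which field manager owns the conflicting fields before you decide what to change. This is a sketch; ebs-csi-rbac.yaml is a placeholder for whatever manifest you are applying:

$ kubectl get clusterrole ebs-external-attacher-role -o yaml
$ kubectl get clusterrole ebs-external-attacher-role --show-managed-fields -o yaml   # shows which manager set each field
$ kubectl diff -f ebs-csi-rbac.yaml   # compare your local manifest against the live objects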

Kubernetes: How to change pvc storage class from one to another?

To change the storage class of a Persistent Volume Claim (PVC) in Kubernetes, for example from efs to gp2, you'll need to follow a series of steps. Kubernetes doesn't allow you to change the storage class of an existing PVC directly; instead, you create a new PVC with the desired storage class and copy the data from the old PVC to the new one.

Here’s a general approach:

  1. Backup Data: Before making any changes, ensure you have a backup of your data.
  2. Create a New PVC: Create a new PVC with the gp2 storage class. Here's an example manifest (new-pvc.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-new-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: <SIZE>  # Replace <SIZE> with the desired storage size
  3. Apply the New PVC: kubectl apply -f new-pvc.yaml
  4. Copy Data: One way to copy data between PVCs is to use a temporary pod that mounts both the old and new PVCs.
    • Launch a temporary pod with both PVCs attached:
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
    volumeMounts:
      - name: old-pvc
        mountPath: /old
      - name: new-pvc
        mountPath: /new
  volumes:
    - name: old-pvc
      persistentVolumeClaim:
        claimName: <OLD_PVC_NAME>  # Replace with the name of your old PVC
    - name: new-pvc
      persistentVolumeClaim:
        claimName: my-new-pvc
  • Use kubectl exec with cp to copy the data, wrapping the command in a shell so the wildcard is expanded inside the pod: kubectl exec temp-pod -- sh -c 'cp -r /old/* /new/'

Remember to monitor the data transfer process and verify that all data has been correctly transferred to the new PVC before deleting the old one.
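
A rough sanity-check and cleanup sequence is sketched below (<OLD_PVC_NAME> is your old claim; make sure your workload has been repointed at my-new-pvc before the old claim is deleted):

$ kubectl exec temp-pod -- du -s /old /new   # compare total sizes of the two trees
$ kubectl delete pod temp-pod
$ kubectl delete pvc <OLD_PVC_NAME>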

Kubernetes Error: unknown field “phase” in io.k8s.api.core.v1.PersistentVolumeClaimSpec

Error

kubectl apply -f deployable-pvc.yaml
error: error validating "deployable-pvc.yaml": error validating data: ValidationError(PersistentVolumeClaim.spec): unknown field "phase" in io.k8s.api.core.v1.PersistentVolumeClaimSpec; if you choose to ignore these errors, turn validation off with --validate=false

Solution

The error you're seeing indicates that there's an incorrect or extraneous field phase within the spec section of your PVC definition in the deployable-pvc.yaml file.

The phase field belongs under the status section of a PVC, not under spec; it is set by the control plane and must not appear in a manifest you apply. You'll need to remove it to make the PVC definition valid.

Here's what you should do:

Open the deployable-pvc.yaml file in an editor.

Look for the phase field inside the spec section and remove it.
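
For reference, the offending part of the manifest typically looks something like this (the values here are illustrative; the phase line is the one to remove):

spec:
  accessModes:
    - ReadWriteOnce
  phase: Bound        # remove this line; phase belongs under status, not spec
  resources:
    requests:
      storage: 10Gi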

Save the file and try applying it again with:


$ kubectl apply -f deployable-pvc.yaml

If you want to automate the removal of the phase field (assuming you have yq v4, the mikefarah Go version, installed), you can do:


$ yq eval 'del(.spec.phase)' deployable-pvc.yaml -i

This will modify the file in place, removing the phase field from the spec section. After that, you should be able to apply the file without any issues.

Kubernetes Error: failed to provision volume with StorageClass “gp2”: rpc error: code = InvalidArgument

Error

Normal   WaitForPodScheduled   3m5s                  persistentvolume-controller                                                              waiting for pod kube-prometheus-stack-grafana-59d698c77f-kk74w to be scheduled
  Normal   Provisioning          49s (x8 over 2m57s)   ebs.csi.aws.com_ebs-csi-controller-676b6876c-t4wz8_1d14b6af-c9fe-4d6f-a6e0-2dd8e3766296  External provisioner is provisioning volume for claim "management/kube-prometheus-stack-grafana"
  Warning  ProvisioningFailed    49s (x8 over 2m57s)   ebs.csi.aws.com_ebs-csi-controller-676b6876c-t4wz8_1d14b6af-c9fe-4d6f-a6e0-2dd8e3766296  failed to provision volume with StorageClass "gp2": rpc error: code = InvalidArgument desc = Volume capabilities MULTI_NODE_MULTI_WRITER not supported. Only AccessModes[ReadWriteOnce] supported.
  Normal   ExternalProvisioning  10s (x15 over 2m57s)  persistentvolume-controller                                                              waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system adm
$ cat deployable-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"finalizers":["kubernetes.io/pvc-protection"],"labels":{"app.kubernetes.i
o/instance":"kube-prometheus-stack","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"grafana","app.kubernetes.io/version":"8.0.1","helm.sh/char
t":"grafana-6.12.1"},"name":"kube-prometheus-stack-grafana","namespace":"management"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storag
e":"10Gi"}},"storageClassName":"efs"}}
    volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
    volume.kubernetes.io/storage-provisioner: efs.csi.aws.com
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.0.1
    helm.sh/chart: grafana-6.12.1
  name: kube-prometheus-stack-grafana
  namespace: management
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2
  volumeMode: Filesystem

Solution

The error indicates that the EBS CSI driver does not support the ReadWriteMany access mode. An EBS volume can only be attached to a single node, so only ReadWriteOnce is supported.

Here’s the main problem:

In your PVC definition, you have:

accessModes:
  - ReadWriteMany

With the storage class gp2, which is backed by EBS, the ReadWriteMany access mode is not valid. EBS volumes can only support ReadWriteOnce.

To fix this issue:

  1. Update accessModes in your PVC to use ReadWriteOnce (accessModes: ["ReadWriteOnce"]).
  2. Recreate the PVC: accessModes on an existing PVC are immutable, so a plain kubectl apply over the existing claim will be rejected. Delete the claim and reapply the corrected manifest (kubectl apply -f deployable-pvc.yaml), as sketched below.
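
A minimal sketch of that flow, using the names from the manifest above (make sure nothing is still mounting the volume, and note that deleting a dynamically provisioned claim deletes its data unless the reclaim policy is Retain):

$ kubectl delete pvc kube-prometheus-stack-grafana -n management
$ kubectl apply -f deployable-pvc.yaml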

If you truly need a shared filesystem that supports ReadWriteMany, consider using Amazon EFS. However, ensure that you have an appropriate StorageClass set up that provisions EFS volumes. In this case, your original storage class efs would have been appropriate, but ensure you have all the necessary components in place to provision EFS volumes using the EFS CSI driver.
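
If you do go the EFS route, the StorageClass would look roughly like the sketch below, based on the EFS CSI driver's dynamic provisioning mode; the fileSystemId value is a placeholder you must replace with your own EFS file system ID:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder - use your EFS file system ID
  directoryPerms: "700"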