Want help with Lab 4.1

Hello all,
I would like some help: I cannot get the "bitnami/wordpress" chart running when following the instructions. Here is the technical information:

I am running a Kubernetes cluster set up with kubeadm on 2 nodes (master and worker), in 2 VMs on Google Cloud Platform.

I do have a StorageClass set as default:

dac@master:~$ k get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  3h51m
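
For reference, a quick way to double-check that the default annotation is really set on the "standard" class (a small sketch; the class name is taken from the output above, and the command should print "true"):

dac@master:~$ k get storageclass standard \
    -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'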

kube-dns is enabled:

dac@master:~$ k get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   37d

But then, when I try to install the chart with

helm install wordpress bitnami/wordpress --version=9.2.5 --set=service.type=NodePort --set=service.nodePorts.http=30001

both pods are stuck in the Pending state:

dac@master:~$ k get all
NAME                            READY   STATUS    RESTARTS   AGE
...
pod/wordpress-5c8cc7769-pg8mg   0/1     Pending   0          3h42m
pod/wordpress-mariadb-0         0/1     Pending   0          3h42m
...

I tried describing them and got

Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  3h40m (x4 over 3h43m)  default-scheduler  running "VolumeBinding" filter plugin for pod "wordpress-mariadb-0": pod has unbound immediate PersistentVolumeClaims

and indeed, the PVCs are PENDING:

dac@master:~$ k get pvc
NAME                       STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-wordpress-mariadb-0   Pending                                                         4h8m
wordpress                  Pending                                          standard       3h42m

🤔 What did I miss?

Comments

  • chrispokorni Posts: 2,366

    Hi @dtnguyen,

    It seems your PVs may not have been created. When PVCs are not bound to their respective PVs, the Pods remain in a Pending state until the storage dependency is resolved.

    Can you describe the PVC after you try to deploy the chart to see if there are any warnings or errors there?
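
    You can also check whether the default class provisions anything at all with a throwaway claim (a minimal sketch; the test-pvc name, the file name, and the 1Gi size are just examples). If this claim also stays Pending, the gce-pd provisioner itself is not creating volumes, which would point away from the chart:

    # test-pvc.yaml (hypothetical file)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: standard
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    $ k apply -f test-pvc.yaml
    $ k get pvc test-pvc -w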

    Regards,
    -Chris

  • Hello @chrispokorni,
    Thank you for your answer. Here is the output:

    dac@master:~$ k get pvc
    NAME                       STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    data-wordpress-mariadb-0   Pending                                                         3d21h
    registry-claim0            Bound     registryvm   200Mi      RWO                           36d
    dac@master:~$ k describe pvc data-wordpress-mariadb-0
    Name:          data-wordpress-mariadb-0
    Namespace:     default
    StorageClass:  
    Status:        Pending
    Volume:        
    Labels:        app=mariadb
                   component=master
                   heritage=Helm
                   release=wordpress
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Mounted By:    <none>
    Events:
      Type    Reason         Age                      From                         Message
      ----    ------         ----                     ----                         -------
      Normal  FailedBinding  3d21h (x102 over 3d21h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
      Normal  FailedBinding  3d16h (x181 over 3d17h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
    ...
    

    I was aware of this "no available PV" issue. I lowered the PVC's storage request to fit my VMs (a little more than 10 Gi of free disk space):

    spec: 
      accessModes: 
      - "ReadWriteOnce" 
      resources: 
        requests: 
          storage: "1Gi"
    

    and launched the app from the generated .yaml template file, but that did not work.
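
    Another thing that might be worth trying is pointing the chart at the default class explicitly. Newer bitnami charts expose a global.storageClass value; whether the 9.2.5 chart honors it is an assumption that can be checked with helm show values first:

    dac@master:~$ helm show values bitnami/wordpress --version 9.2.5 | grep -i storageclass
    dac@master:~$ helm install wordpress bitnami/wordpress --version=9.2.5 \
      --set global.storageClass=standard \
      --set service.type=NodePort --set service.nodePorts.http=30001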

  • dtnguyen Posts: 4
    edited August 2020

    PS: thanks to dynamic provisioning, I do not have to create PVs myself, do I? (Anyway, PV creation is not mentioned in the lab instructions.)
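
    For reference: dynamic provisioning only removes the need for hand-made PVs when the provisioner behind the StorageClass can actually create volumes, and the in-tree kubernetes.io/gce-pd provisioner needs the GCE cloud provider configured when the cluster is bootstrapped. If it cannot provision, a manually created PV will still bind a pending claim. A minimal hostPath sketch, where the PV name, path, and size are placeholders (the size must cover what the chart requests); since the pending claim shows no StorageClass, a PV without a storageClassName such as this one can bind it:

    # pv-mariadb.yaml (hypothetical; hostPath volumes are only suitable for single-node testing)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-mariadb
    spec:
      capacity:
        storage: 8Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /opt/mariadb-data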

  • chrispokorni Posts: 2,366

    Hi @dtnguyen,

    It seems that you are not quite following the instructions provided in the lab manual; you have deviated from them beginning with the Kubernetes cluster setup. This has implications for most subsequent lab exercises, starting as early as Lab 4.

    The hostpath storage class used in the lab exercise fits the needs of the microk8s cluster. Using a gce-pd provisioner introduces new requirements, and it may or may not work depending on how you bootstrapped your Kubernetes cluster on GCE.

    I would recommend following the lab book instructions from the first lab exercise, so you can focus on learning the new concepts instead of trying to retrofit the exercise to work on your specific environment. Once you have mastered the new concepts, then move to an environment that you may be more familiar with, such as Kubernetes on GCE and possibly gce-pd for storage provisioning.
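
    That said, if you want to stay on your kubeadm cluster, one way to approximate the lab's hostpath class is a hostPath-style dynamic provisioner such as rancher/local-path-provisioner (a sketch, not part of the lab instructions; note that the existing "standard" class would need its default annotation removed so that only one class is the default):

    $ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    $ kubectl patch storageclass local-path \
      -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'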

    Regards,
    -Chris

  • dtnguyen Posts: 4
    edited August 2020

    Once again, I sincerely thank you for your help, and I do think your recommendation is right and I'll follow it.

    I just want to point out that my cluster setup, though it deviates somewhat from the lab, still conforms to the instructions:
    In Section 3 / Introduction /Section overview:
    "If you already have a cluster to use (cloud-based or local), you are more than welcome to use that instead"
    In Lab 3.1 - Creating a New Cluster Using MicroK8s:
    "If you already have an existing cluster, you may opt to skip this section and move on."
    And as you understand, when one does things correctly, one expects them to work :smiley: Here they don't, and I still do not know why. But what you said is absolutely right; let's focus on the point of this course.

  • proliant Posts: 10
    edited November 2021

    Here is one option:

    1. Set up an NFS server (a minimal export sketch for this step is included at the end of this post)
      $ showmount -e 192.168.122.64
      Export list for 192.168.122.64:
      /vdb1 *

    2. Set up the nfs-subdir-external-provisioner using Helm
      Ref: https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs
      Ref: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/charts/nfs-subdir-external-provisioner
      $ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
      $ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=192.168.122.64 \
      --set nfs.path=/vdb1 \
      --set storageClass.defaultClass=true \
      --set storageClass.accessModes=ReadWriteMany

    3. Check the container
      $ k get pods nfs-subdir-external-provisioner-6879c5c6c-6bc9v
      NAME                                              READY   STATUS    RESTARTS   AGE
      nfs-subdir-external-provisioner-6879c5c6c-6bc9v   1/1     Running   0          11m

    4. Check the storageclass
      $ k get storageclasses.storage.k8s.io
      NAME                   PROVISIONER                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      nfs-client (default)   cluster.local/nfs-subdir-external-provisioner    Delete          Immediate           true                   12m

    5. Install wordpress and check the pvc
      $ k get pvc
      NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      data-wordpress-mariadb-0   Bound    pvc-a9ac1231-5235-4c04-a8f2-e5fdb66ceab6   8Gi        RWO            nfs-client     12m
      wordpress                  Bound    pvc-c89c558c-2760-4670-a4b8-3323cb64545d   10Gi       RWO            nfs-client     12m

    6. Check the pods
      $ k get pods -A | egrep -e '^NAME|wordpress'
      NAMESPACE   NAME                         READY   STATUS    RESTARTS   AGE
      default     wordpress-5cbcfd4954-plsl9   1/1     Running   0          14m
      default     wordpress-mariadb-0          1/1     Running   0          14m

    My setup (Ubuntu 20.04.3)
    192.168.122.61 control plane
    192.168.122.62 worker
    192.168.122.63 worker
    192.168.122.64 nfs server
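
    For completeness, a minimal export setup for step 1 on the NFS host (a sketch assuming Ubuntu; the package name, export options, and path are assumptions, and the worker nodes additionally need an NFS client package such as nfs-common installed):

      # on 192.168.122.64
      $ sudo apt-get install -y nfs-kernel-server
      $ sudo mkdir -p /vdb1
      $ echo '/vdb1 *(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
      $ sudo exportfs -ra
      $ showmount -e localhost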

  • k0dard Posts: 115
    edited August 2022

    @proliant Thanks!

    However, my wordpress container is unhealthy; I get:

    Normal   Scheduled  4m59s                  default-scheduler  Successfully assigned default/wordpress-7c5694dc6f-b4n57 to k8s-w1
    Normal   Pulling    4m58s                  kubelet            Pulling image "docker.io/bitnami/wordpress:6.0.1-debian-11-r20"
    Normal   Pulled     4m21s                  kubelet            Successfully pulled image "docker.io/bitnami/wordpress:6.0.1-debian-11-r20" in 36.685555391s
    Normal   Created    3m18s (x2 over 4m19s)  kubelet            Created container wordpress
    Normal   Started    3m18s (x2 over 4m19s)  kubelet            Started container wordpress
    Normal   Pulled     3m18s                  kubelet            Container image "docker.io/bitnami/wordpress:6.0.1-debian-11-r20" already present on machine
    Warning  Unhealthy  60s (x2 over 70s)      kubelet            Liveness probe failed: Get "http://192.168.228.98:8080/wp-admin/install.php": dial tcp 192.168.228.98:8080: connect: connection refused
    Warning  Unhealthy  52s (x16 over 3m40s)   kubelet            Readiness probe failed: Get "http://192.168.228.98:8080/wp-login.php": dial tcp 192.168.228.98:8080: connect: connection refused

    It looks like this is not related to the StorageClass and PVCs.
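
    Right, failing liveness/readiness probes point at the container itself rather than at storage. A few hedged things to check (the probe value names below are assumptions about the chart; confirm them with helm show values first): the container may still be initializing on first boot, it may be running out of memory, or the probes may simply fire too early:

    $ k logs deploy/wordpress --tail=50
    $ k describe pod wordpress-7c5694dc6f-b4n57 | grep -iA3 'limits\|last state'
    $ helm show values bitnami/wordpress | grep -iA5 'livenessProbe\|readinessProbe'
    $ helm upgrade wordpress bitnami/wordpress --reuse-values \
      --set livenessProbe.initialDelaySeconds=300 \
      --set readinessProbe.initialDelaySeconds=60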
