
PV / PVC mapping issue in lab 3.2

bryonbaker Posts: 28
edited January 16 in LFD259 Class Forum

I experienced an annoying issue in lab 3.2 where the allocation of persistent volumes to persistent volume claims was the opposite of what the lab intends. Functionally it has no impact, but the output in step 18 is back-to-front.

NAME                                    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-claim0      Bound    task-pv-volume   200Mi      RWO                           13m
persistentvolumeclaim/registry-claim0   Bound    registryvm       200Mi      RWO                           13m

There is a solution to this, though. The PersistentVolumeClaim API has a volumeName field that lets you specify which persistent volume a claim should bind to. This would be a useful addition to the labs.
The YAML for one of the claims follows:

    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: registry-claim0
        name: registry-claim0
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
        volumeName: task-pv-volume
      status: {} 
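For reference, the PersistentVolume being targeted would look something like the sketch below. The name and 200Mi capacity come from the lab output above; the hostPath location is an assumption, since it depends on your lab setup:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
    spec:
      capacity:
        storage: 200Mi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /tmp/task-pv    # assumption: actual path varies by lab setup

With volumeName set on the claim, and the claim's 100Mi request no larger than this volume's capacity, the claim will bind only to this volume.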

Comments

• serewicz Posts: 502

    Hello,

    Thank you for the feedback.

    Indeed, one could hard-code the volume into the claim, but I think this would reduce administrative flexibility. I do think it is helpful to understand that the matching of claim to volume doesn't always work the way one might want. Without a name, the matcher first looks for a volume in the requested storage class, then checks that the access modes match, then finds a volume at least big enough to fulfill the request. But if you had a 200Mi volume and a 2Ti volume that both otherwise met the request, there would be no way other than forcing the name to guarantee which one was chosen, which could result in wasted resources.
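    To illustrate the point with a hypothetical sketch (these names and sizes are not from the lab): suppose two volumes both satisfy a 100Mi ReadWriteOnce claim with no storage class set:

        # Both PVs match the claim's access mode and size request.
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: small-pv
        spec:
          capacity:
            storage: 200Mi
          accessModes:
          - ReadWriteOnce
          hostPath:
            path: /tmp/small-pv
        ---
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: huge-pv
        spec:
          capacity:
            storage: 2Ti
          accessModes:
          - ReadWriteOnce
          hostPath:
            path: /tmp/huge-pv

    Without volumeName, the claim could bind to either. If it lands on huge-pv, nearly all of the 2Ti is wasted, because a claim binds the entire volume regardless of how much it requested.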

    If you declare the volumeName but some other claim bound that volume first, without using the name, then you may get an error and no access to storage. It would also add a lot of administrative overhead to keep track of all the names and ensure no one else was using the volume.
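    One alternative worth knowing about (not part of the lab): the reservation can also be made from the volume side with spec.claimRef, which points the PV at one specific claim by namespace and name. A sketch, with the namespace and hostPath as assumptions:

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: registryvm
        spec:
          capacity:
            storage: 200Mi
          accessModes:
          - ReadWriteOnce
          claimRef:
            namespace: default        # assumption: the lab uses the default namespace
            name: registry-claim0
          hostPath:
            path: /tmp/registryvm     # assumption: path depends on your setup

    This reserves the volume for that one claim, but it carries the same bookkeeping burden: someone has to keep the pairings consistent.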

    In the end we want the space, and which particular 200Mi volume a claim binds to does not affect functionality.

    Regards,
