
Lab Exercise 5.5

Hatofmonkeys Posts: 5
edited January 19 in LFS260 Class Forum

Hello,

I'm a little confused by Lab 5.5, related to Pod Security Policy. I'm hoping someone has been through it already and can point out where I've gone awry.

From my understanding, the lab relies on the PSP admission controller being active, but the standard lab cluster setup doesn't activate it, as this would block all pod creation. Adding PodSecurityPolicy to --enable-admission-plugins on the apiserver seems to create the desired behaviour, although the policy in nopriv.yaml is never bound to a user/serviceaccount in the lab, so I'm not clear how this is expected to work.

The example with the mariadb image is also confusing. If I follow the example, the pod is failing to start because MariaDB needs an environment variable, either specifying the root password, or allowing operation with no password. The following command will start MariaDB in the lab, independent of whether the pod security policy exists:
kubectl run mariadb --image=mariadb --env=MYSQL_ALLOW_EMPTY_PASSWORD=true

I think I'm missing something fundamental on this lab; perhaps I've missed a setup step or similar. Does anyone have any advice?

Thanks in advance

Comments

  • fcioanca Posts: 869

    @Hatofmonkeys The lab pdf attachment has been removed from your post, as the forum is public, while the lab material is paid/copyrighted and should not be attached to forum posts. Instructors moderating the forum have access to the course content, and will assist you. Referencing the chapter/lab number, and a section or step number, along with the issue encountered, is sufficient context when asking for help. Thank you!

  • @fcioanca Thanks, I'd presumed the forum was private.

  • fcioanca Posts: 869

    @Hatofmonkeys Anyone can see the posts, but you need to log in with LF ID to be able to post.

  • serewicz Posts: 918

    Hello!

    Which step in exercise 5.5 diverged from the lab — only step 9? Did any earlier steps fail to match what is shown in the exercise?

    Perhaps there is some other difference. What version of Kubernetes are you using? OS version? Did you use kubeadm or the included scripts to build the cluster?

    Regards,

  • Hello,

    Thanks for getting back to me.

    In the strictest sense, the expected inputs/outputs of the lab deviate at step 10. The expected output is that the pod is running; however, the observed pod immediately enters CrashLoopBackOff.

    As mentioned above, this can be remedied by supplying environment variables to the MariaDB container. However, I believe the intent of the lab is to illustrate how Pod Security Policies work, not how to use kubectl logs and environment variables, so I think I've misunderstood.

    Regarding cluster setup, I am using two Ubuntu 18.04 vbox servers, installed via the k8sMaster/k8sSecond scripts in LFS260/SOLUTIONS/s_04/, which in turn call out to kubeadm init.
    NAME       STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    osboxes    Ready    master   5d22h   v1.19.0   192.168.86.54   <none>        Ubuntu 18.04.3 LTS   4.15.0-130-generic   docker://19.3.6
    osboxes2   Ready    <none>   5d22h   v1.19.0   192.168.86.56   <none>        Ubuntu 18.04.3 LTS   4.15.0-130-generic   docker://19.3.6

    As also mentioned above, this install creates an apiserver static pod with --enable-admission-plugins=NodeRestriction. I was able to add the PSP controller to the apiserver command line and validate that PSPs worked correctly, but my actions seemed to deviate a long way from the steps in the lab.

    Thanks

  • chrispokorni Posts: 1,062

    Hi @Hatofmonkeys,

    From your output it seems your Node IP addresses are 192.168.86.54 and 192.168.86.56.

    If you are using Calico, and it is configured with the default Pod network 192.168.0.0/16, then you may be running into DNS issues with your cluster. The Node IP addresses should not overlap with the Pod network.

    The fix is to either provision your VMs with different IP addresses, which would not overlap the default Pod network, or to leave the VM IPs alone but reconfigure Calico and the kubeadm init command with a different private Pod network.
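    The second option can be sketched as follows. This is only an illustration: the 10.200.0.0/16 range and the calico.yaml filename are assumptions, not values from the lab.

```shell
# Initialize the control plane with a Pod network that does not
# overlap the 192.168.0.0/16 LAN (CIDR chosen for illustration)
sudo kubeadm init --pod-network-cidr=10.200.0.0/16

# Before applying Calico, set CALICO_IPV4POOL_CIDR in its manifest
# to the same range, then apply it
kubectl apply -f calico.yaml
```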

    Regards,
    -Chris

  • Hi Chris,

    Thanks for the note; in my install Calico is using a pod CIDR of 192.169.0.0/16 (although k8s itself seems to remove the supernet /16 and return to the classful /24) to avoid the conflict with the LAN's /16. From kubectl describe node:

    System Info:
    Machine ID: d34d8283f3ec49858389e19b8bf0746f
    System UUID: 9D4CEB92-A203-6F41-B01C-E37296F8B745
    Boot ID: 21dd4a28-ed35-4151-bb3f-146171b75d22
    Kernel Version: 4.15.0-130-generic
    OS Image: Ubuntu 18.04.3 LTS
    Operating System: linux
    Architecture: amd64
    Container Runtime Version: docker://19.3.6
    Kubelet Version: v1.19.0
    Kube-Proxy Version: v1.19.0
    PodCIDR: 192.169.0.0/24
    PodCIDRs: 192.169.0.0/24

    As an aside, if you ever do set Calico up with an overlapping pod CIDR of your nodes' gateway network, it routes all outbound traffic to the IP tunnel device and nothing can get in or out from your k8s nodes. Fun times.

    Thanks

  • Hi Hatofmonkeys, I worked through exercise 5.5 as below.

    Before step 4, I enabled Pod Security Policy:

    • Add PodSecurityPolicy to the --enable-admission-plugins argument in /etc/kubernetes/manifests/kube-apiserver.yaml.
    • Kill the kube-apiserver process; the kubelet restarts the static pod with the new flags.
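    The two bullets above can be sketched like this. It is a minimal sketch assuming the flag currently reads --enable-admission-plugins=NodeRestriction, as it does on the lab cluster, and uses kubeadm's default manifest path:

```shell
# Append PodSecurityPolicy to the apiserver's admission plugins
sudo sed -i \
  's/enable-admission-plugins=NodeRestriction/enable-admission-plugins=NodeRestriction,PodSecurityPolicy/' \
  /etc/kubernetes/manifests/kube-apiserver.yaml

# The kubelet notices the manifest change and restarts kube-apiserver;
# killing the apiserver process/container has the same effect.
```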

    Instead of kubectl create deployment, I executed kubectl run mariadb --image=mariadb --env=MYSQL_ALLOW_EMPTY_PASSWORD=true as you pointed out.

    Instead of the result shown in step 5, I got the following.

    $ kubectl get pod
    NAME      READY   STATUS                       RESTARTS   AGE
    mariadb   0/1     CreateContainerConfigError   0          6s

    I am not sure this follows the content creator's intention.

    Thanks,
    Hidekazu Nakamura

  • Hatofmonkeys Posts: 5
    edited January 23

    Hi Hidekazu,

    If you enable the PodSecurityPolicy controller without enabling any policies (and binding those policies to your users/serviceaccounts) then no pods will be able to start in your cluster.

    Once you've enabled the PSP controller admission plugin you will need to create a policy and bind it to your user (or to the serviceaccount of the replicaset if you're using a deployment). This is outlined at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies .

    With these steps in place, plus the specified environment variable when launching mariadb, you should be able to observe the mariadb container being allowed/disallowed depending on the runAsUser PSP setting mentioned in the lab. I believe this is the intent of the lab, although I am very interested to hear from the lab's author as to whether I've misunderstood.
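    A minimal sketch of such a binding, assuming the policy in nopriv.yaml is named nopriv and granting it to all authenticated users; the resource names here are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-nopriv-user        # hypothetical name
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['nopriv']    # assumes the PSP is named nopriv
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-nopriv-all         # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-nopriv-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```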

    Regards

  • serewicz Posts: 918

    Hello,

    Now that the community has decided to deprecate PSPs, even before choosing a replacement, I will most likely remove this section from the lab. Hopefully the community will settle on what to use next so I can swap in new material. In any case, I will revisit the steps when time allows.

    Regards,

  • @serewicz If PSPs have been deprecated by the community (likely in favor of OPA Gatekeeper based on the chatter I've been seeing), does this have any effect on the exam material? If you can't answer because of the firewalling between training and exam teams, that's fine, I just want to make sure I understand CNCF's position on PSPs for CKS going forward.

  • serewicz Posts: 918

    Hello,

    I am limited in what I can say. At the moment PSPs remain on the list of skills and knowledge. The responsible SIGs have decided to deprecate PSP, but have not yet decided what will replace it. The overall community seems to back OPA/Gatekeeper at the moment. Once that is formalized, I am pretty sure the skills and knowledge list will be updated with the choice soon after. I will be adding a quick OPA lab soon; I'm working on it now, among lots of other updates.

    Regards,

  • @serewicz I appreciate the answer. Also, just saw the latest version of the PSP lab, really helps with the issues I was having with that lab. Thanks!

  • pblaas Posts: 2

    Although PodSecurityPolicy might be deprecated, I too found 5.5 steps 14 and 15 a bit cryptic.
    Also, the ReplicaSet error message doesn't really say what is wrong:

    Error creating: pods "db-two-6fd7fc85c9-" is forbidden: PodSecurityPolicy: unable to admit pod: []
    

    @Hatofmonkeys put me on the right track, and I found the solution by adding a new ServiceAccount dbtwo to the deployment object and creating a new Role and RoleBinding.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: run-db-two
      namespace: default
    rules:
    - apiGroups: ['policy']
      resources: ['podsecuritypolicies']
      verbs: ['use']
      resourceNames:
      - no-priv
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: dbtwobinding
      namespace: default
    roleRef:
      kind: Role
      name: run-db-two
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: dbtwo
      namespace: default
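    The accompanying steps can be sketched as follows. The deployment name db-two comes from the error message above; the manifest filename is hypothetical:

```shell
# Create the ServiceAccount referenced by the RoleBinding
kubectl create serviceaccount dbtwo

# Apply the Role and RoleBinding (filename is illustrative)
kubectl apply -f dbtwo-rbac.yaml

# Point the deployment's pod template at the new ServiceAccount
kubectl patch deployment db-two \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"dbtwo"}}}}'
```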
    