
Docs version 2023-09-14: you should check them out

It is a paid certification, and one is supposed to study from the docs provided by LF. Did you take the time to proofread the document before sending it out? Even a simple pass through OOWrite's spell checker would have flagged that things didn't match.
Also, you keep claiming that the labs are "platform agnostic", but you fail to mention that you expect a certain configuration (IPs, subnets, etc.) and, ultimately, your Cilium installation simply does not work. I invite you to follow the steps to create a control plane and a worker node exactly as written in the docs, using two VMs (yes, Ubuntu 20.04, memory and CPU set as per the specs), and then see whether Cilium comes up nicely and you're able to join the worker node to the control plane.
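For anyone reproducing this, the documented flow boils down to roughly the following (file names are the ones the labs ship, and the token/hash are placeholders printed by kubeadm init, so treat this as a sketch rather than the exact lab text):

    # on the control plane VM
    sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
    kubectl apply -f cilium-cni.yaml

    # on the worker VM, using the token and hash printed by kubeadm init
    sudo kubeadm join k8scp:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>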

Comments

  • I created 3 nodes on GCP, followed the documents, installed Cilium as described
    (kubectl apply -f ~/SOLUTIONS/s_03/cilium-cni.yaml), and was able to set up the cluster and join the worker nodes without any trouble.

    Then I used my VMware Workstation Pro, created 3 Ubuntu 20.04 VMs, followed the exact same steps, and was able to join my nodes to the cluster...

  • Well, you're one lucky guy.
    The only way I was able to get Cilium up and running was by following the installation process from the Cilium website.
    How did you configure your network adapters (VMs)?

  • OK @fazlur.khan, I figured out how you did it.
    You probably have a network setting that assigns your adapters something other than 192.168.x.x, so your configuration does not clash with the cilium-cni.yaml provided with the labs.
    In my case (VirtualBox) I had to create the VMs with a bridged network only, check the assigned IP address (in my case 192.168.0.x), and modify both cilium-cni.yaml and kubeadm-config.yaml as follows:

    kubeadm-config.yaml:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: 1.27.1
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      podSubnet: 10.128.0.0/16
    cilium-cni.yaml:

    ...
    cluster-pool-ipv4-cidr: "10.128.0.0/16"
    ...
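
    A quick way to tell whether you're in the clashing case before you even run kubeadm init (a sketch, assuming the file names above):

    # address the bridged adapter actually got
    ip -4 addr show

    # pod pool the lab manifest hands to Cilium
    grep cluster-pool-ipv4-cidr cilium-cni.yaml

    # if the two ranges overlap, switch the pool (and podSubnet) to a free
    # range such as 10.128.0.0/16 before initializing the cluster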

    At this point I have what appears to be a running cluster:

    Every 2.0s: kubectl get nodes                                controlplane: Sun Dec 17 11:02:40 2023

    NAME           STATUS   ROLES           AGE   VERSION
    controlplane   Ready    control-plane   51m   v1.27.1
    node-1         Ready    worker          14m   v1.27.1

    Every 2.0s: kubectl get po -o wide --all-namespaces          controlplane: Sun Dec 17 11:03:31 2023

    NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE   IP             NODE           NOMINATED NODE   READINESS GATES
    kube-system   cilium-4v4z7                              1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   cilium-mjfgb                              1/1     Running   1 (42m ago)   48m   192.168.0.22   controlplane   <none>           <none>
    kube-system   cilium-operator-788c7d7585-dhbjb          1/1     Running   0             48m   192.168.0.24   node-1         <none>           <none>
    kube-system   cilium-operator-788c7d7585-lvnjm          1/1     Running   2 (15m ago)   48m   192.168.0.22   controlplane   <none>           <none>
    kube-system   coredns-5d78c9869d-br2tg                  1/1     Running   1 (42m ago)   52m   10.128.0.43    controlplane   <none>           <none>
    kube-system   coredns-5d78c9869d-hkzxj                  1/1     Running   1 (42m ago)   52m   10.128.0.155   controlplane   <none>           <none>
    kube-system   etcd-controlplane                         1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   etcd-node-1                               1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-apiserver-controlplane               1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-apiserver-node-1                     1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-controller-manager-controlplane      1/1     Running   2 (15m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-controller-manager-node-1            1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-proxy-bxq6f                          1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-proxy-lp2jk                          1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-scheduler-controlplane               1/1     Running   2 (15m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-scheduler-node-1                     1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
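
    To double-check that pods really draw their addresses from the new 10.128.0.0/16 pool rather than the host's 192.168.0.x range, a quick throwaway test (the deployment name is arbitrary):

    kubectl create deployment web --image=nginx --replicas=2
    kubectl get pods -o wide -l app=web    # IPs should be 10.128.x.x
    kubectl delete deployment web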
