
Docs version 2023-09-14: you should check them out

It is a paid certification, and one is supposed to study from the docs provided by LF. Did you take the time to proofread the document before sending it out? I mean, even a simple OOWrite proofread would have flagged that things didn't match.
Also, you keep claiming that the labs are "platform agnostic", yet you fail to mention that you expect a certain configuration (IPs, subnets, etc.) and, ultimately, your Cilium installation simply does not work. I invite you to follow the steps to create a control plane and a worker node as written in the docs, using two VMs (yes, Ubuntu 20.04, memory and CPU set per the specs), and then see whether Cilium comes up nicely and you are able to join the worker node to the control plane.
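
For reference, here is roughly the flow I am testing on the two VMs (a minimal sketch of the documented steps; kubeadm-config.yaml and cilium-cni.yaml are the files shipped with the labs, and k8scp is the /etc/hosts alias the docs have you create):

    # on the control-plane VM
    sudo kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.out
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f ~/SOLUTIONS/s_03/cilium-cni.yaml

    # on the worker VM, using the token and hash printed by kubeadm init
    sudo kubeadm join k8scp:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>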

Comments

  • I created 3 nodes on GCP, followed the documents, and installed Cilium as mentioned in the document
    (kubectl apply -f ~/SOLUTIONS/s_03/cilium-cni.yaml), and was able to set up the cluster and join worker nodes without any trouble.

    Then I used my VMware Workstation Pro, created 3 Ubuntu 20.04 VMs, followed the exact same steps, and was able to join my nodes to the cluster...

  • Well, you're one lucky guy.
    The only way I was able to get Cilium up and running was by following the Cilium installation process from the Cilium website.
    How did you configure your network cards (on the VMs)?

  • OK @fazlur.khan, I figured out how you did it.
    You probably have a network setting that assigns your cards something other than 192.168.x.x, so your configuration does not clash with the cilium-cni.yaml provided with the labs.
    In my case (VirtualBox) I had to create the VMs with bridged networking only, check the assigned IP address (in my case 192.168.0.x), and modify both cilium-cni.yaml and kubeadm-config.yaml as follows:

    kubeadm-config.yaml:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: 1.27.1
    controlPlaneEndpoint: "k8scp:6443"
    networking:
      podSubnet: 10.128.0.0/16
    

    cilium-cni.yaml:

    ...
      cluster-pool-ipv4-cidr: "10.128.0.0/16"
    ...
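
    A quick sanity check before running kubeadm init (a rough sketch, assuming the bridged VirtualBox NIC described above):

    ip -4 addr show                                      # which subnet did the bridged NIC get? (192.168.0.x here)
    grep -n "cluster-pool-ipv4-cidr" cilium-cni.yaml     # must match podSubnet and must not overlap the NIC subnet
    grep -n "podSubnet" kubeadm-config.yaml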
    

    At this point I have what appears to be a running cluster:

    Every 2.0s: kubectl get nodes              controlplane: Sun Dec 17 11:02:40 2023
    
    NAME           STATUS   ROLES           AGE   VERSION
    controlplane   Ready    control-plane   51m   v1.27.1
    node-1         Ready    worker          14m   v1.27.1
    
    Every 2.0s: kubectl get po -o wide --all-namespaces                                                                  controlplane: Sun Dec 17 11:03:31 2023
    
    NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE   IP             NODE           NOMINATED NODE   READINESS GATES
    kube-system   cilium-4v4z7                           1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   cilium-mjfgb                           1/1     Running   1 (42m ago)   48m   192.168.0.22   controlplane   <none>           <none>
    kube-system   cilium-operator-788c7d7585-dhbjb       1/1     Running   0             48m   192.168.0.24   node-1         <none>           <none>
    kube-system   cilium-operator-788c7d7585-lvnjm       1/1     Running   2 (15m ago)   48m   192.168.0.22   controlplane   <none>           <none>
    kube-system   coredns-5d78c9869d-br2tg               1/1     Running   1 (42m ago)   52m   10.128.0.43    controlplane   <none>           <none>
    kube-system   coredns-5d78c9869d-hkzxj               1/1     Running   1 (42m ago)   52m   10.128.0.155   controlplane   <none>           <none>
    kube-system   etcd-controlplane                      1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   etcd-node-1                            1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-apiserver-controlplane            1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-apiserver-node-1                  1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-controller-manager-controlplane   1/1     Running   2 (15m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-controller-manager-node-1         1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-proxy-bxq6f                       1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
    kube-system   kube-proxy-lp2jk                       1/1     Running   1 (42m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-scheduler-controlplane            1/1     Running   2 (15m ago)   52m   192.168.0.22   controlplane   <none>           <none>
    kube-system   kube-scheduler-node-1                  1/1     Running   0             15m   192.168.0.24   node-1         <none>           <none>
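
    To double-check that Cilium itself is healthy (a quick sketch; the cilium CLI bundled inside the agent pod provides a status command, at least in the Cilium version the labs ship):

    kubectl -n kube-system get ds cilium                              # every agent should be READY
    kubectl -n kube-system exec ds/cilium -- cilium status --brief    # should print "OK" when the agent is healthy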
    
