Welcome to the Linux Foundation Forum!

Can't launch pods on worker

I'm getting these kubelet errors and calico/kube-proxy won't start on the worker.

Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.006517   17289 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.006958   17289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 16 20:58:57 worker-2 kubelet[17289]: W1116 20:58:57.006972   17289 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.006992   17289 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.017460   17289 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to mount container
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.017550   17289 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to mount container k8s_
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.017648   17289 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to mount container k8s_
Nov 16 20:58:57 worker-2 kubelet[17289]: E1116 20:58:57.018089   17289 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-node-5srkp_kube-system(0893d801
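For reference, the excerpt above comes from the kubelet unit log on the worker; it can be reproduced with something along these lines (flags are a typical choice, not the exact command used here):

```shell
# View the most recent kubelet service log entries on the worker node
sudo journalctl -u kubelet --no-pager | tail -n 50
```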

Comments

  • Hi @tiffanyfay,

    Depending on the exercise you are on, several earlier steps may have led up to this behavior.

    What exercise are you currently working on, and what steps did you run prior to this behavior? Were all prior steps successful? Were there any discrepancies between your outputs and the ones presented in the lab guide?

    Regards,
    -Chris

  • samude
    samude Posts: 8
    edited December 2023

    I'm currently facing the same issue, but with Cilium, while working on Exercise 2.2: Deploy a New Cluster. A prompt response would be appreciated.

  • Hi @samude,

    In order to better assist you, please provide more context for the reported issue: What type of infrastructure hosts your cluster? What are the sizes of the VMs and the OS running on them? Is your firewall allowing all ingress traffic to the VMs?

    Also, what is the output of kubectl get nodes -o wide, and which command is producing unexpected output? Please provide both the command and its output, capturing the prompt to show the node where the command fails.
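    For example (run from the control plane node; node and pod names will differ in your cluster):

```shell
# List all nodes with their internal/external IPs, OS, and runtime versions
kubectl get nodes -o wide

# Check the status of the CNI and kube-proxy pods, and which node each runs on
kubectl -n kube-system get pods -o wide
```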

    Regards,
    -Chris

  • samude
    samude Posts: 8
    edited December 2023

    Hi @chrispokorni, I am running two droplets (VMs) of the same size:
    4 CPUs / 8 GB memory / 160 GB SSD disk / Ubuntu 20.04 (LTS) x64, on the Digital Ocean cloud. All ingress traffic is allowed.
    As I mentioned earlier in my reply, the problem is that the curl command fails when I execute it from pods scheduled on the worker, but works from pods scheduled on the cp node. Can you point me to a video lesson for deploying a cluster, if one exists? I am only following the steps provided in the documentation, since no video is provided.
    Thanks, and I am waiting for your reply, as I am really stuck and cannot make any further progress.



  • Hi @support team, is there any hope of getting a response?

  • Hi @samude,

    Can you check for the existence of the /var/lib/kubelet/config.yaml file on both nodes, and provide its content?
    Also, check the status of the kubelet service on both nodes with sudo systemctl status kubelet.
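    For example (run on each node; the path assumes a default kubeadm setup):

```shell
# Verify the kubelet configuration file exists, then inspect it
ls -l /var/lib/kubelet/config.yaml
sudo cat /var/lib/kubelet/config.yaml

# Check whether the kubelet service is active and see its recent log lines
sudo systemctl status kubelet
```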

    Typically, curl does not work across nodes when the VM networking is improperly configured at the VPC or firewall level. The basic requirement is that both VMs are in the same VPC and protected by the same firewall rule. Private (and public) IP addresses of droplets in the same VPC tend to be very close, even sequential in most cases. The Kubernetes cluster bootstrapping should be done on the private IP, not the public IP, as seems to be the case above.
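    A quick way to test cross-node pod connectivity is something like this (the deployment name and images are just examples):

```shell
# Run two replicas of a web server (spread across nodes) behind a service,
# then fetch it from a throwaway pod; if this only fails when the client or
# server pod lands on the worker, inter-node pod traffic is being blocked
kubectl create deployment nettest --image=nginx --replicas=2
kubectl expose deployment nettest --port=80
kubectl run tester --image=busybox --restart=Never -it --rm -- \
  wget -qO- http://nettest
```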

    There are two configuration video guides in the introductory chapter, one for Google Cloud GCE VMs and one for AWS EC2 VMs. You may find the GCP video more helpful, because its naming convention closely resembles Digital Ocean's. Both cover helpful details around VPC configuration, firewall definition, and VM provisioning.

    I recommend provisioning a new set of VMs in a custom-created VPC with an all-open firewall rule, followed by bootstrapping a new cluster, this time on the private IPs.
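    A bootstrapping sketch, assuming kubeadm; the private IP and pod CIDR below are placeholders, so substitute your control plane droplet's private address and the CIDR from your lab guide:

```shell
# On the control plane node: advertise the API server on the private IP
sudo kubeadm init \
  --apiserver-advertise-address=10.116.0.2 \
  --pod-network-cidr=192.168.0.0/16

# On the worker node: join via the private IP, using the token and CA cert
# hash printed by kubeadm init
sudo kubeadm join 10.116.0.2:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```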

    Regards,
    -Chris

  • samude
    samude Posts: 8
    edited December 2023

    Thanks, it seems to be OK now.
