
Lab 4.2. Could not get pod logs - "Error from server (NotFound)"

I'm trying to complete "Lab 4.2. Working with CPU and Memory Constraints" and have run into the following problem.

I've deployed "hog" successfully:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       hog-775c7c858f-c2nmk                       1/1     Running   0          10s
kube-system   calico-kube-controllers-69496d8b75-knwzg   1/1     Running   1          3d14h
kube-system   calico-node-cnj4n                          1/1     Running   1          3d14h
kube-system   calico-node-cw5wb                          1/1     Running   1          3d14h
kube-system   coredns-f9fd979d6-w77l2                    1/1     Running   1          3d14h
kube-system   coredns-f9fd979d6-wzmmq                    1/1     Running   1          3d14h
kube-system   etcd-k8smaster                             1/1     Running   1          3d14h
kube-system   kube-apiserver-k8smaster                   1/1     Running   1          3d14h
kube-system   kube-controller-manager-k8smaster          1/1     Running   1          3d14h
kube-system   kube-proxy-srth7                           1/1     Running   1          3d14h
kube-system   kube-proxy-xwnhc                           1/1     Running   1          3d14h
kube-system   kube-scheduler-k8smaster                   1/1     Running   1          3d14h

But I could not get its logs:

$ kubectl --namespace default logs hog-775c7c858f-c2nmk
Error from server (NotFound): the server could not find the requested resource ( pods/log hog-775c7c858f-c2nmk)

Could anybody help?

Comments

  • chrispokorni Posts: 2,372

    Hi @Gim6626,

    Can you try running kubectl logs hog-775c7c858f-c2nmk instead? Specifying the default namespace is not necessary for kubectl commands. If desired, however, you can add the namespace after the logs command, e.g. kubectl logs hog-775c7c858f-c2nmk --namespace default.
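
    If a non-default namespace were the culprit, you could also check which namespace your current context points at; a quick sketch, assuming a standard kubeconfig:

    $ kubectl config get-contexts
    $ kubectl config view --minify -o jsonpath='{..namespace}'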

    Regards,
    -Chris

  • Gim6626 Posts: 27

    Hi @chrispokorni,

    I've tried what you suggested. Same result:

    $ kubectl logs hog-775c7c858f-c2nmk
    Error from server (NotFound): the server could not find the requested resource ( pods/log hog-775c7c858f-c2nmk)
    
  • chrispokorni Posts: 2,372

    Hi @Gim6626,

    After creating the hog deployment, can you run kubectl describe pod hog-<TAB> and provide its output?

    Also, what are the specs of your nodes (CPU and memory)? What memory requests and limits are specified for the hog application?
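
    For reference, both values can be read straight from the cluster; a sketch (the jsonpath assumes the lab's single-container hog deployment):

    $ kubectl get deployment hog -o jsonpath='{.spec.template.spec.containers[0].resources}'
    $ kubectl describe node <node-name> | grep -A 5 Allocatable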

    Regards,
    -Chris

  • Gim6626 Posts: 27

    @chrispokorni, thank you for trying to help me. I really appreciate it.

    Here is the requested describe output (it includes both the requests and the limits):

    $ kubectl describe pod hog-775c7c858f-c2nmk
    Name:         hog-775c7c858f-c2nmk
    Namespace:    default
    Priority:     0
    Node:         ubuntu-training-server-2/10.0.2.15
    Start Time:   Tue, 06 Apr 2021 06:41:43 +0000
    Labels:       app=hog
                  pod-template-hash=775c7c858f
    Annotations:  cni.projectcalico.org/podIP: 192.168.0.224/32
                  cni.projectcalico.org/podIPs: 192.168.0.224/32
    Status:       Running
    IP:           192.168.0.224
    IPs:
      IP:           192.168.0.224
    Controlled By:  ReplicaSet/hog-775c7c858f
    Containers:
      stress:
        Container ID:   docker://1a32f6fb31acf401856e52e01f33c104a4b0b239aae9378c8bbcd961937c6825
        Image:          vish/stress
        Image ID:       docker-pullable://vish/stress@sha256:b6456a3df6db5e063e1783153627947484a3db387be99e49708c70a9a15e7177
        Port:           <none>
        Host Port:      <none>
        State:          Running
          Started:      Wed, 14 Apr 2021 01:24:59 +0000
        Last State:     Terminated
          Reason:       Error
          Exit Code:    2
          Started:      Tue, 13 Apr 2021 06:42:30 +0000
          Finished:     Tue, 13 Apr 2021 11:02:33 +0000
        Ready:          True
        Restart Count:  5
        Limits:
          memory:  4Gi
        Requests:
          memory:     2500Mi
        Environment:  <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-ckqmd (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      default-token-ckqmd:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-ckqmd
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                     node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason          Age                   From                               Message
      ----     ------          ----                  ----                               -------
      Warning  FailedMount     4m5s                  kubelet, ubuntu-training-server-2  MountVolume.SetUp failed for volume "default-token-ckqmd" : failed to sync secret cache: timed out waiting for the condition
      Normal   SandboxChanged  3m31s (x2 over 4m4s)  kubelet, ubuntu-training-server-2  Pod sandbox changed, it will be killed and re-created.
      Normal   Pulling         3m30s                 kubelet, ubuntu-training-server-2  Pulling image "vish/stress"
      Normal   Pulled          3m27s                 kubelet, ubuntu-training-server-2  Successfully pulled image "vish/stress" in 2.621269997s
      Normal   Created         3m27s                 kubelet, ubuntu-training-server-2  Created container stress
      Normal   Started         3m27s                 kubelet, ubuntu-training-server-2  Started container stress
    

    I'm running two VirtualBox nodes (master and worker), each with MemTotal: 4039204 kB according to /proc/meminfo.

  • Gim6626 Posts: 27

    I noticed that the limits might be too high, so I recreated hog with a memory limit of 1Gi and a memory request of 500Mi. Same thing:

    $ kubectl get pods
    NAME                   READY   STATUS    RESTARTS   AGE
    hog-6b46648d4f-wxxdf   1/1     Running   0          4m6s
    $ kubectl logs hog-6b46648d4f-wxxdf
    Error from server (NotFound): the server could not find the requested resource ( pods/log hog-6b46648d4f-wxxdf)
    $ kubectl describe pod hog-6b46648d4f-wxxdf | grep -B 1 memory
        Limits:
          memory:  1Gi
        Requests:
          memory:     500Mi
    

    Funny fact, actually: the pod exists in the list, and its name even autocompletes, but the API still answers "NotFound".
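
    (Side note: instead of recreating the deployment, the same change can also be made in place; a sketch, assuming the container is named stress as in the describe output above:)

    $ kubectl set resources deployment hog -c stress --limits=memory=1Gi --requests=memory=500Mi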

  • chrispokorni Posts: 2,372

    Hi @Gim6626,

    The failed volume mount from the warning message may be the result of server-2's kubelet not being able to resolve a required dependency.

    With memory at a bare minimum, how much CPU is assigned to each VM?
    Have you noticed any changes after the VMs/nodes are rebooted?

    Regards,
    -Chris

  • Gim6626 Posts: 27

    My host is a 6-core/12-thread MacBook Pro, and I've assigned 2 CPUs to each node (master and worker).
    After a reboot or a fresh container start, nothing changes.

    Now I'm at Lab 8.1 and have created the pod shell-demo. It shows up in the list:

    $ kubectl get pods
    NAME                   READY   STATUS    RESTARTS   AGE
    busybox                0/1     Error     0          2d18h
    curlpod                1/1     Running   6          7d
    hog-6b46648d4f-bwjw2   1/1     Running   3          42h
    shell-demo             1/1     Running   0          3m30s
    

    but:

    $ kubectl exec shell-demo -- /bin/bash -c 'echo $ilike'
    error: unable to upgrade connection: pod does not exist
    

    Looks like a related issue.
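
    (A quick sanity check for this kind of symptom would be whether the master can even reach the kubelet that serves logs/exec on the worker; a sketch using the worker IP from the describe output above and the kubelet's default port 10250:)

    $ nc -vz 10.0.2.15 10250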

  • Gim6626 Posts: 27

    Got it after more googling. It was a networking error.

    Here is how I found that something was wrong:

    $ kubectl get nodes -o wide
    NAME                       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
    k8smaster                  Ready    master   16d   v1.19.1   192.168.56.104   <none>        Ubuntu 18.04.5 LTS   4.15.0-141-generic   docker://19.3.6
    ubuntu-training-server-2   Ready    <none>   16d   v1.19.1   10.0.2.15        <none>        Ubuntu 18.04.5 LTS   4.15.0-141-generic   docker://19.3.6
    

    The first IP is fine (the real internal IP), but the second one is the NAT IP, which is the same on both machines. That explains the symptom: kubectl logs and kubectl exec are served by the API server connecting to the kubelet at the node's reported INTERNAL-IP. Each VM sees itself at 10.0.2.15, so the API server ends up asking the wrong kubelet, which knows nothing about the pod, even though the pod lists fine.

    So I manually set the internal IPs (192.168.56.104 for the master and 192.168.56.105 for the worker) via KUBELET_EXTRA_ARGS=--node-ip=192.168.56.104 in /etc/kubernetes/kubelet.conf and restarted the kubelet with systemctl restart kubelet.service.
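
    In full, the change looked roughly like this on each node (a sketch; note that on stock kubeadm installs the KUBELET_EXTRA_ARGS line is usually sourced from /etc/default/kubelet, so put it wherever your kubelet actually reads it):

    # master node (use --node-ip=192.168.56.105 on the worker)
    KUBELET_EXTRA_ARGS=--node-ip=192.168.56.104

    $ sudo systemctl restart kubelet.service
    $ kubectl get nodes -o wide    # INTERNAL-IP should now show both host-only addresses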

  • chrispokorni Posts: 2,372

    Hi @Gim6626,

    When there is more than one network interface on a node, Kubernetes tends to misbehave. A single bridged adapter on each node should provide all the required networking: node-to-node, node-to-internet, and node-to-host/host-to-node connectivity.
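
    (For completeness, a sketch of switching a VM to a single bridged adapter with VirtualBox's CLI; the VM name and host interface below are placeholders, and the VM must be powered off first:)

    $ VBoxManage modifyvm "k8smaster" --nic1 bridged --bridgeadapter1 "en0"
    $ VBoxManage modifyvm "k8smaster" --nic2 none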

    Regards,
    -Chris
