
LFD259 Paragraph 6 on Page 49

I tried running the command to show the ilike environment variable, but it gives me this error:

error: unable to upgrade connection: container not found ("simpleapp")
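
The command I was running was along these lines (the exec step from the lab; the pod name is just one of the try1 pods from my cluster, and the exact wording may differ slightly from the book):

kubectl exec -c simpleapp try1-7845d5b88-jddt8 -- /bin/bash -c 'echo $ilike'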

This is what I get in terms of the pod state:

kubectl get pods
NAME                        READY   STATUS             RESTARTS      AGE
init-tester                 1/1     Running            1 (57m ago)   5d21h
nginx-79ff9c85db-t5qmq      1/1     Running            9 (57m ago)   24d
registry-7d7db4bd8f-fk8lx   1/1     Running            9 (57m ago)   24d
try1-7845d5b88-jddt8        1/2     ErrImagePull       0             16m
try1-7845d5b88-jvtbg        1/2     ImagePullBackOff   0             16m
try1-7845d5b88-ndbzp        1/2     ErrImagePull       0             16m
try1-7845d5b88-rh4vv        1/2     ImagePullBackOff   0             16m
try1-7845d5b88-rz8zt        1/2     ImagePullBackOff   0             16m
try1-7845d5b88-v42hl        1/2     ImagePullBackOff   0             16m

I ran the command against both a pod with ErrImagePull status and one with ImagePullBackOff. Neither one finds simpleapp, which kind of makes sense if the image was not pulled correctly.
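
I assume the next step is to check which image the try1 deployment is actually trying to pull. Something like this should show it (the jsonpath is my best guess at the right field), and then the Events section of kubectl describe on one of the try1 pods should have the actual pull error:

kubectl get deployment try1 -o jsonpath='{.spec.template.spec.containers[*].image}'
kubectl describe pod try1-7845d5b88-jddt8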

This is what I get when I use kubectl describe for the first pod:

Name:             init-tester
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-172-31-5-160/172.31.5.160
Start Time:       Sun, 20 Aug 2023 23:38:57 +0000
Labels:           app=inittest
Annotations:      cni.projectcalico.org/containerID: aaa6f94cf5853f28c04bd15ccf98136adf2d7b82f8a4355072e7defe3c111a5b
                  cni.projectcalico.org/podIP: 192.168.72.34/32
                  cni.projectcalico.org/podIPs: 192.168.72.34/32
Status:           Running
IP:               192.168.72.34
IPs:
  IP:  192.168.72.34
Init Containers:
  failed:
    Container ID:  containerd://dfef68b8fc23835ddfb79c6181da8ca34b171be9e7d24301fb4322626d96f354
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/true
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 26 Aug 2023 19:55:25 +0000
      Finished:     Sat, 26 Aug 2023 19:55:25 +0000
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9rjk6 (ro)
Containers:
  webservice:
    Container ID:   containerd://f42e45bb62d366f75746433b01dbd87041a7c9c75fcad0f3be35d7b03e36d2bc
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 26 Aug 2023 19:55:26 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Sun, 20 Aug 2023 23:38:59 +0000
      Finished:     Sat, 26 Aug 2023 19:54:22 +0000
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9rjk6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-9rjk6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

Answers

  • chrispokorni

    Hi @rasputin312,

    Can you describe one of the try1-7845d5b88-xxxxx pods? Those are the pods we need to troubleshoot in this case. Recreate the try1 deployment so the events are regenerated, then describe one try1 pod with the ErrImagePull status and one with the ImagePullBackOff status.
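
    For example, something along these lines (the manifest name below is just a placeholder for whatever file you originally used to create the try1 deployment in the lab):

    kubectl delete deployment try1
    kubectl create -f <your-try1-deployment-manifest>.yaml
    kubectl describe pod <new-try1-pod-name>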

    Also, what is the output of the following command:

    kubectl get pods -A -o wide

    The init-tester pod seems to be running as expected, so it does not help us in troubleshooting the try1 deployment.

    Regards,
    -Chris
