Welcome to the Linux Foundation Forum!

Lab 3.2 - Why does the simpleapp image not persist in the local Docker registry?

Hello!

Lab 3.2 - Why does the simpleapp tagged image not persist in the local Docker registry after rebooting the master node?
curl server-ip:5000/v2/ is also not responding, so the registry server probably does not persist either. Is there any way to fix this, so that all
services keep running after rebooting the node and I can continue the lab where I left off?

~$ kubectl get pods,svc,pvc,pv,deployment
NAME                           READY   STATUS             RESTARTS   AGE
pod/nginx-595f85746d-w8khf     1/1     Running            4          2d23h
pod/registry-cbc9b4779-nhf5f   1/1     Running            4          2d23h
pod/try1-7f766ff65-7x79x       0/1     ImagePullBackOff   0          19h
pod/try1-7f766ff65-fpzpx       0/1     ImagePullBackOff   0          19h
pod/try1-7f766ff65-hq7ql       0/1     ImagePullBackOff   0          19h
pod/try1-7f766ff65-j7nh2       0/1     ImagePullBackOff   0          19h
pod/try1-7f766ff65-vzcsk       0/1     ImagePullBackOff   0          19h
pod/try1-7f766ff65-x646d       0/1     ImagePullBackOff   0          19h

Stefan

Best Answers

  • suser Posts: 67
    Accepted Answer

    Chris
    All services ran correctly.
    (Copied now, but the same IPs were shown when the problem existed):
    service/kubernetes   ClusterIP   10.96.0.1        443/TCP    7d16h
    service/nginx        ClusterIP   10.96.242.252    443/TCP    3d17h
    service/registry     ClusterIP   10.107.241.131   5000/TCP   3d17h

    Surprisingly, after re-running the "sudo docker-compose up" command and recreating the images and tags, the pods pulled the image successfully, but this looks like random luck rather than a fix. (Before re-running docker-compose up, the Docker daemon apparently wasn't running, or wasn't running correctly.) The YAML files and daemon.json files were left unchanged.
    The IP shown for the registry service was the same one it was initially given.

    Is there a way to improve the Docker daemon's behavior for my issue?
    Is there a way to configure Calico to use a static IP?

    Thanks again.

    Stefan

Answers

  • suser Posts: 67

    More details regarding my question:
    The description of one failed pod:

    $ kubectl describe pod try1-7f766ff65-x646d
    Name:           try1-7f766ff65-x646d
    Namespace:      default
    Priority:       0
    Node:           kw1/10.1.10.31
    Start Time:     Tue, 07 Apr 2020 01:46:05 +0000
    Labels:         app=try1
                    pod-template-hash=7f766ff65
    Annotations:    cni.projectcalico.org/podIP: 192.168.159.102/32
    Status:         Pending
    IP:             192.168.159.102
    IPs:
      IP:           192.168.159.102
    Controlled By:  ReplicaSet/try1-7f766ff65
    Containers:
      simpleapp:
        Container ID:
        Image:          10.107.241.131:5000/simpleapp:latest
        Image ID:
        Port:
        Host Port:
        State:          Waiting
          Reason:       ImagePullBackOff
        Ready:          False
        Restart Count:  0
        Readiness:      exec [cat /tmp/healthy] delay=0s timeout=1s period=5s #success=1 #failure=3
        Environment:
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      default-token-zqbsw:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-zqbsw
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                  Age                    From               Message
      ----     ------                  ----                   ----               -------
      Normal   Scheduled               25h                    default-scheduler  Successfully assigned default/try1-7f766ff65-x646d to kw1
      Normal   Pulling                 25h                    kubelet, kw1       Pulling image "10.107.241.131:5000/simpleapp:latest"
      Normal   Pulled                  25h                    kubelet, kw1       Successfully pulled image "10.107.241.131:5000/simpleapp:latest"
      Normal   Created                 25h                    kubelet, kw1       Created container simpleapp
      Normal   Started                 25h                    kubelet, kw1       Started container simpleapp
      Warning  Unhealthy               25h (x120 over 25h)    kubelet, kw1       Readiness probe failed: cat: /tmp/healthy: No such file or directory
      Warning  FailedCreatePodSandBox  4m48s                  kubelet, kw1       Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3c107122748a2d99f9f89e92d960d939c29392fbf94f28761c1cdce66240b844" network for pod "try1-7f766ff65-x646d": networkPlugin cni failed to set up pod "try1-7f766ff65-x646d_default" network: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout
      Normal   SandboxChanged          4m47s                  kubelet, kw1       Pod sandbox changed, it will be killed and re-created.
      Warning  Failed                  4m32s                  kubelet, kw1       Failed to pull image "10.107.241.131:5000/simpleapp:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://10.107.241.131:5000/v2/: dial tcp 10.107.241.131:5000: connect: no route to host
      Normal   BackOff                 3m15s (x5 over 4m31s)  kubelet, kw1       Back-off pulling image "10.107.241.131:5000/simpleapp:latest"
      Normal   Pulling                 3m2s (x4 over 4m45s)   kubelet, kw1       Pulling image "10.107.241.131:5000/simpleapp:latest"
      Warning  Failed                  3m2s (x4 over 4m32s)   kubelet, kw1       Error: ErrImagePull
      Warning  Failed                  3m2s (x3 over 4m19s)   kubelet, kw1       Failed to pull image "10.107.241.131:5000/simpleapp:latest": rpc error: code = Unknown desc = Error response from daemon: manifest for 10.107.241.131:5000/simpleapp:latest not found: manifest unknown: manifest unknown
      Warning  Failed                  2m47s (x6 over 4m31s)  kubelet, kw1       Error: ImagePullBackOff

  • chrispokorni Posts: 982
    edited April 2020

    Hi Stefan,

    Curling the node-IP:5000/v2/ will not do you any good.

    What you are experiencing is normal, expected Kubernetes behavior. The local registry is exposed via a Service, which is assigned a dynamic virtual private IP address. That is the address you retrieved in lab exercise 3.2 step 20, and then used to set up the /etc/docker/daemon.json files on both nodes (steps 22 and 29).

    Since that is a dynamic IP, once you restart the node a new IP may be assigned to the same Service, while your daemon.json files still hold the old IP, so Docker is not able to reach the new registry service.
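
    For reference, the daemon.json from those steps looks roughly like this (a sketch; the address below is the registry ClusterIP from the output earlier in this thread, and it is exactly the value that goes stale after a reboot):

    ```json
    {
        "insecure-registries": ["10.107.241.131:5000"]
    }
    ```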

    I suggest revisiting steps 20-23 and 29 to update the local registry address for the docker daemons.
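
    As a rough sketch of that update after a reboot (assuming the lab's names: a Service called registry in the default namespace, listening on port 5000), something like this on each node:

    ```shell
    #!/bin/sh
    # Sketch: point docker at the registry Service's current ClusterIP
    # after a reboot. Assumes the lab's names: service "registry",
    # namespace "default", registry port 5000 -- adjust to your cluster.

    # Look up the current virtual IP of the registry service
    REGIP=$(kubectl get svc registry -o jsonpath='{.spec.clusterIP}')

    # Rewrite daemon.json so docker trusts the (possibly new) address
    printf '{ "insecure-registries": ["%s:5000"] }\n' "$REGIP" \
      | sudo tee /etc/docker/daemon.json

    # Restart docker to pick up the new configuration
    sudo systemctl restart docker
    ```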

    This assumes that all other artifacts are restarted as expected: volumes, claims, deployments, pods, services.
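
    On the static-IP question raised above: there is no need to touch Calico for this; a Service can request a fixed virtual IP by setting spec.clusterIP to a free address inside the cluster's service CIDR. A minimal sketch (the address and labels are illustrative, not taken from the lab):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: registry
    spec:
      clusterIP: 10.97.40.62   # must be unused and inside the service CIDR
      selector:
        app: registry
      ports:
      - port: 5000
        targetPort: 5000
    ```

    Note that spec.clusterIP is immutable on an existing Service, so the Service would have to be deleted and recreated to pin the address.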

    Regards,
    -Chris

  • Thank you for the detailed output.

    If you look carefully at the output, it tells you that there is an i/o timeout. Something may be blocking some of the traffic between your worker and the master nodes.

    Are both kubelet services running after the reboot?

    Regards,
    -Chris

  • suser Posts: 67

    Hello,

    All my services are running after the reboot. Apparently only the docker daemon fails.

    Stefan
