ImagePullBackOff

Hello all.
I need some help on this problem.
So, I'm using Ubuntu 16.04 and have one master and one worker.
My master node ip is 192.168.0.35
My worker node ip is 192.168.0.36
Both have a static bridged enp0s3 connection, gateway 192.168.0.1, DNS 8.8.8.8.

I changed the CoreDNS ConfigMap earlier.

There was an error with ImagePullBackOff status when I tried to pull an image from Docker to create a pod, but this only happened on my master node. On my worker node everything worked fine: no errors, and the pods reached Running status. The problem goes away when I restart the master node one or more times; after that, all pods reach Running status.
Does anybody know why this happens? Is there a problem with my network configuration?

Comments

  • serewicz

    Hello,

    If you can share the error and the command you ran to generate it, it will help with the troubleshooting.

    Changing the network configuration after the cluster has been initialized is tricky at best. I note you are using an IP in the 192.168.0.0 range, which is the default range used by Calico to provide IP addresses to the pods. This can cause network issues.

    To troubleshoot, I would first look at the IP of the master node when this problem occurs, then use docker run to see whether the container can be pulled and run outside of Kubernetes. This will narrow down where to look for the issue.
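
    For example, a minimal sketch (nginx is only a placeholder for whichever image your pod fails to pull):

        # On the master node, confirm which IP the node is using
        ip addr show enp0s3

        # Pull and run the image directly through Docker, outside of Kubernetes
        sudo docker pull nginx
        sudo docker run --rm nginx echo "pull and run OK"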

    I think a better solution is to rebuild the cluster and either change the IP range of the nodes, or edit the calico.yaml file and the kubeadm init command to use a pod range without overlap. If you reference the lab exercise, you'll note there are steps covering exactly this issue, including where to find the settings in the calico.yaml file and in the kubeadm init command to avoid overlapping IPs.
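
    As a rough sketch of those two settings (10.0.0.0/16 is only an example range that does not overlap your 192.168.0.x node network):

        # In calico.yaml, change the default pool before applying the manifest:
        #   - name: CALICO_IPV4POOL_CIDR
        #     value: "10.0.0.0/16"

        # Initialize the cluster with the matching pod range
        sudo kubeadm init --pod-network-cidr=10.0.0.0/16

        # Apply the edited manifest
        kubectl apply -f calico.yaml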

    Regards,

  • chrispokorni

    Several master node restarts forced your pods to receive new IP addresses, probably until there was no more overlap with your nodes' IPs. In multi-node clusters running calico with its default configuration, the network plugin assigns pod IPs from the 192.168.0.x subnet on the first node, from 192.168.1.x on the second node, from 192.168.2.x on a potential third node, and so on. Keeping this in mind, I would look at how the IPs of pods running on the master node overlap with the IPs of your nodes, since both ranges are in 192.168.0.x, causing DNS confusion.
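
    One way to see the overlap (assuming kubectl works on the master) is to compare the two sets of addresses:

        # Node IPs: 192.168.0.35 and 192.168.0.36 in your case
        kubectl get nodes -o wide

        # Pod IPs assigned by calico, also in 192.168.0.x on the first node
        kubectl get pods --all-namespaces -o wide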

    Regards,
    -Chris

  • neirkate

    @serewicz @chrispokorni

    I will provide more details.
    Master node IP 192.168.0.35/24, hostname master
    Worker node IP 192.168.0.36/24, hostname ubuntu
    Both are static IPs, bridged enp0s3, gateway 192.168.0.1, DNS 8.8.8.8
    Calico is using the default, which is 192.168.0.0/16; I did not change anything
    Also, the master node and the worker node both have the same docker0 bridge IP, which is 172.17.0.1/16
    Actually, I'm wondering: is it okay if both of my nodes have the same docker0 bridge with the same IP?
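
    For reference, the bridge address can be checked on each node with:

        # Shows the docker0 bridge and its address (172.17.0.1/16 on both nodes)
        ip addr show docker0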

    This is my cluster info

    This is my /etc/hosts on master

    This is my /etc/hosts on worker

    When I ran docker run on my worker, it worked fine. But when I ran docker run on my master, I got this error:

    The commands I ran before I got the error are:

    • kubectl apply -n sock-shop -f complete-demo.yaml
    • kubectl get pods -n sock-shop -o wide

    These are the pods that got the error:
    1. payment-7d4d4bf9hb4-hgbrx



    2. queue-master-6b5b5c7658-rr5gb
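
    The full error text for these pods can also be read with kubectl describe, using the names above:

        # Prints the pod events, including the exact image pull failure
        kubectl describe pod -n sock-shop payment-7d4d4bf9hb4-hgbrx
        kubectl describe pod -n sock-shop queue-master-6b5b5c7658-rr5gb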


    Are you referring to Exercise 3.1, steps 8 through 10?
    Am I having network issues because of how I configured my network?

  • chrispokorni

    Hi @neirkate,
    Thank you for all the details provided. There seem to be several issues related to a misconfigured cluster. As mentioned earlier, the default pod IP range of 192.168.0.0/16, used by calico and the kubeadm init command, overlaps the node IPs configured by your hypervisor. To fix this issue, the suggestion was to rebuild the cluster and use different IP ranges: either change the IPs assigned to your nodes by the hypervisor, or change the calico.yaml file and issue the kubeadm init command with a different IP range for pods.
    There also seems to be some naming confusion between the ubuntu node and the worker node. It seems that pods are scheduled on a node named ubuntu when they should be running on the worker.
    I agree with the suggestion of starting clean with 2 new nodes, re-installing all the components, and re-creating the cluster, making sure there is no overlap between the nodes' IP range and the pods' IP range. Both nodes should have the same type of networking interface, with promiscuous mode set to allow all traffic.
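
    If the nodes are VirtualBox VMs (the bridged enp0s3 interface suggests so), promiscuous mode can be set from the host while a VM is powered off; the VM names below are placeholders:

        # Set allow-all promiscuous mode on the first network adapter
        VBoxManage modifyvm "k8s-master" --nicpromisc1 allow-all
        VBoxManage modifyvm "k8s-worker" --nicpromisc1 allow-all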

    Good luck,
    -Chris

  • neirkate

    @chrispokorni
    Sorry for the late reply.
    I just realized that I had configured my cluster wrongly, especially the network part.
    Okay, I will do that. It seems like that is the best solution for this problem.
    Thank you for your help.

    Have a nice day.
