Problem with upgrade cluster lab 4.1

These are my pods before upgrading the cluster:

students@cp:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5f6cfd688c-b7jfp 1/1 Running 0 21m
kube-system calico-node-bbtns 1/1 Running 0 76s
kube-system calico-node-lhxc4 1/1 Running 0 21m
kube-system coredns-74ff55c5b-c4g2v 1/1 Running 0 23m
kube-system coredns-74ff55c5b-pkfd4 1/1 Running 0 23m
kube-system etcd-cp 1/1 Running 0 24m
kube-system kube-apiserver-cp 1/1 Running 0 24m
kube-system kube-controller-manager-cp 1/1 Running 0 24m
kube-system kube-proxy-c4cbg 1/1 Running 0 76s
kube-system kube-proxy-x7kkv 1/1 Running 0 23m
kube-system kube-scheduler-cp 1/1 Running 0 24m
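
For context, the control-plane upgrade itself followed the standard kubeadm sequence from the lab. The commands below are only a rough sketch; the exact version (1.21.1 here) is an assumption and may differ from the lab guide:

# upgrade kubeadm first, then plan and apply the control-plane upgrade
sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubeadm=1.21.1-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.21.1
# drain the CP node, upgrade kubelet and kubectl, restart the kubelet, then uncordon
kubectl drain cp --ignore-daemonsets
sudo apt-get install -y --allow-change-held-packages kubelet=1.21.1-00 kubectl=1.21.1-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon cp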

After I upgraded the CP node, calico-kube-controllers and calico-node just got stuck:

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5f6cfd688c-pgm6x 0/1 Error 5 4m34s
kube-system calico-node-bbtns 1/1 Running 0 7m43s
kube-system calico-node-lhxc4 1/1 Running 0 28m
kube-system coredns-558bd4d5db-g2l9k 0/1 Running 0 76s
kube-system coredns-558bd4d5db-z84b5 0/1 Running 0 76s
kube-system coredns-74ff55c5b-d2v8d 0/1 Running 0 4m34s
kube-system etcd-cp 1/1 Running 1 48s
kube-system kube-apiserver-cp 1/1 Running 1 47s
kube-system kube-controller-manager-cp 1/1 Running 0 47s
kube-system kube-proxy-95lcf 1/1 Running 0 22s
kube-system kube-proxy-x7kkv 0/1 Terminating 0 30m
kube-system kube-scheduler-cp 1/1 Running 0 48s
kube-system upgrade-health-check-8bws2 0/1 Completed 0 22s

When I inspect the CoreDNS logs, I get this:

pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
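
The 10.96.0.1 address is the ClusterIP of the kubernetes Service that CoreDNS uses to reach the API server. A quick way to check whether that Service and its endpoint exist, and whether the VIP actually answers from a node, is something like this (10.96.0.1 is the default service address; adjust if your cluster differs):

kubectl get svc kubernetes
kubectl get endpoints kubernetes
# an immediate response (even an unauthorized one) means the VIP is reachable;
# a hang followed by a timeout matches the CoreDNS error above
curl -k -m 5 https://10.96.0.1:443/healthz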

I just followed the instructions and ignored those errors. But when I drain the worker node, this happens:

error when evicting pods/"calico-kube-controllers-5f6cfd688c-pgm6x" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget
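
The eviction is blocked because a PodDisruptionBudget covers calico-kube-controllers and its only Pod is not Ready, so zero disruptions are allowed. The budget can be inspected like this (the exact PDB name is whatever the first command lists):

kubectl -n kube-system get pdb
kubectl -n kube-system describe pdb <name-from-the-list>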

I checked the pods' state and saw that calico-kube-controllers does not start on the CP node. I tried a few things, but it was only solved when I deleted the CoreDNS pods.
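
For reference, recreating them can be done with a label selector; k8s-app=kube-dns is the label the kubeadm-deployed CoreDNS Pods normally carry, so check yours first if it differs:

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system delete pods -l k8s-app=kube-dns

The coredns Deployment recreates them right away.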

This has happened to me twice and I don't know why.

CP node interfaces:

students@cp:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:c4:61:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.72.130/24 brd 192.168.72.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fec4:6144/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b8:b7:05:57 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: cali421ca4a708d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
5: calif9316858326@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
6: calib74c228e8b5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.242.64/32 scope global tunl0
valid_lft forever preferred_lft forever

Worker node interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:4c:83:e6 brd ff:ff:ff:ff:ff:ff
inet 192.168.72.131/24 brd 192.168.72.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe4c:83e6/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:60:03:97:5c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.171.64/32 scope global tunl0
valid_lft forever preferred_lft forever
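
Note that the tunl0 addresses above (192.168.242.64 on the CP and 192.168.171.64 on the worker) are allocated from Calico's Pod IP pool, which sits in the same 192.168.0.0/16 range as the node addresses. The pool Calico is actually using can be checked with something like this (default-ipv4-ippool is the name the stock manifest usually creates):

kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml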

Comments

  • chrispokorni

    Hi @leonardo2021,

    What are the IP addresses of your two nodes? What type of infrastructure are you using to provision your node VMs? Cloud or a local hypervisor? What are the sizes (CPU, mem, disk) of your VMs?

    Also, what is the Pod network used to configure your cluster? Is it the default 192.168.0.0/16 ?

    Regards,
    -Chris

  • leonardo2021

    Hi Chris,

    I'm using VMware Workstation.

    CP: 2 cores, 4 GB memory, 25 GB disk
    Worker: 2 cores, 3 GB memory, 25 GB disk

    CP - 192.168.72.130
    Worker - 192.168.72.131

    Yes, it is the default 192.168.0.0/16

    Is the problem that my node network (192.168.72.130) falls within 192.168.0.0/16?

  • chrispokorni

    Hi @leonardo2021,

    Such an overlap of node IP addresses with the Pod network is known to cause many issues with the cluster. Also, ensuring that the nodes can see each other and can access the outside world is important - there should be no firewalls between the CP and Worker nodes.
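
    One quick way to confirm which Pod network the cluster was initialized with, and the Pod CIDR allocated to each node, is something like:

    kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podsubnet
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'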

    Regards,
    -Chris

  • leonardo2021

    Hi Chris,

    I thought it might be a problem with firewalls or something similar, but I don't see any rules that could affect the communication between the CP node and the worker node.

    I also noticed that this problem only happens with kube-system pods. I tried creating a deployment and it is running fine.

    For example:

    students@cp:~$ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    nginx-6799fc88d8-7sxbx 1/1 Running 0 2m16s

    students@cp:~$ kubectl describe pods nginx-6799fc88d8-7sxbx
    Name: nginx-6799fc88d8-7sxbx
    Namespace: default
    Priority: 0
    Node: worker/192.168.72.131

    So I don't think the problem is iptables rules. I am also aware of the problems caused by overlapping Pod network IP addresses, but that is not obvious from the pod logs.
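
    One way to see whether Pod-to-Service traffic works at all (which is what the CoreDNS timeout points at) is a throwaway client Pod; the image and the 5s timeout here are just an illustration:

    kubectl run netcheck --image=curlimages/curl --restart=Never --rm -it --command -- curl -k -m 5 https://10.96.0.1/healthz

    If that hangs and times out the same way the CoreDNS log did, the problem is in the Pod network itself rather than in the individual kube-system Pods.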

  • chrispokorni

    Hi @leonardo2021,

    You can read more about Pod network plugin configuration tips in the official documentation:

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
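
    If you rebuild the cluster, one way to avoid the overlap is to pick a Pod network that does not contain the node subnet, pass it at init time, and match it in the Calico manifest. For example (10.244.0.0/16 is only an illustration):

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # and in calico.yaml, uncomment and adjust:
    #   - name: CALICO_IPV4POOL_CIDR
    #     value: "10.244.0.0/16"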

    Regards,
    -Chris

  • rsaavedraf
    edited October 2021

    Edited, troubleshooting more before posting just in case
