Unable to curl
I have a 3 node Ubuntu 18.04 cluster on physical hardware created using the k8sMaster.sh and k8sSecond.sh scripts.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi-controller Ready master 6d22h v1.18.1 10.0.0.230 Ubuntu 18.04.5 LTS 5.4.0-1016-raspi docker://19.3.6
rpi-worker1 Ready 4h2m v1.18.1 10.0.0.231 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6
rpi-worker2 Ready 110m v1.18.1 10.0.0.232 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6
Since I have 3 nodes I skipped step 2.13 (remove taint on controller)
When I get to Exercise 2.3 step 7 I am only able to curl (192.168.1.x) from the node where the pod was deployed, not from the controller or the other node. Similarly, when I get to step 10 I cannot curl the 10.96.x.x cluster IP. Same for step 13 and the NodePort 10.100.139.x.
I've been searching these forums and noticed other students have similar threads that were resolved via firewall or iptables problems. I checked that the Ubuntu firewall ufw is turned off on all nodes. However, when it comes to iptables I don't know where to start, and previous threads did not contain details of how to correct it.
Can anyone please help me get the various ClusterIP and NodePort curls working?
Comments
-
Additional info in case it is helpful in troubleshooting:
rpi-controller is Raspberry Pi 4 with 8GB memory
rpi-workers are both Raspberry Pi 4's with 4GB memory

RPI-Controller login stats:
Usage of /: 15.5% of 29.04GB
Memory usage: 21%
Swap usage: 0%
IP address for eth0: 10.0.0.230
IP address for docker0: 172.17.0.1
IP address for tunl0: 192.168.65.128

wade@rpi-controller:~$ k get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default basicpod 1/1 Running 0 12h 192.168.41.4 rpi-worker2
kube-system calico-kube-controllers-854c58bf56-s9np4 1/1 Running 1 7d10h 192.168.65.134 rpi-controller
kube-system calico-node-h55kz 0/1 Running 0 15h 10.0.0.231 rpi-worker1
kube-system calico-node-p88dw 0/1 Running 1 13h 10.0.0.232 rpi-worker2
kube-system calico-node-qjnz9 1/1 Running 1 7d10h 10.0.0.230 rpi-controller
kube-system coredns-66bff467f8-7q8sz 1/1 Running 1 7d10h 192.168.65.132 rpi-controller
kube-system coredns-66bff467f8-xx2l2 1/1 Running 1 7d10h 192.168.65.133 rpi-controller
kube-system etcd-rpi-controller 1/1 Running 1 7d10h 10.0.0.230 rpi-controller
kube-system kube-apiserver-rpi-controller 1/1 Running 1 7d10h 10.0.0.230 rpi-controller
kube-system kube-controller-manager-rpi-controller 1/1 Running 2 7d10h 10.0.0.230 rpi-controller
kube-system kube-proxy-bsmg4 1/1 Running 1 13h 10.0.0.232 rpi-worker2
kube-system kube-proxy-lj95d 1/1 Running 0 15h 10.0.0.231 rpi-worker1
kube-system kube-proxy-sp7bd 1/1 Running 1 7d10h 10.0.0.230 rpi-controller
kube-system kube-scheduler-rpi-controller 1/1 Running 1 7d10h 10.0.0.230 rpi-controller

wade@rpi-controller:~$ k get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default basicservice NodePort 10.97.92.72 80:31039/TCP 13h
default kubernetes ClusterIP 10.96.0.1 443/TCP 7d10h
kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 7d10h

Here is a thread where the poster is experiencing the same curl problems I am:
https://forum.linuxfoundation.org/discussion/comment/23280#Comment_23280 where the poster mentions:
"After the installation, the calico-node pods where 0/1 ready. Seems that the bird-ready readiness probe was failing, so I tried commenting it at the line 657 of calico.yaml. After that they started correctly, but sometimes one of them fall in an error state. It happened after hours of operations. Deleting and redeploying it seems to restore its readiness."
I am seeing that the calico-node worker pods are not running. Does this matter?
Further down in the thread chrispokorni advises: "infrastructure networking configuration plays a key role in the behavior of your Kubernetes cluster" and then goes on to give some generic pointers for an AWS or hypervisor cluster. In my case the cluster is bare metal, and I have confirmed the Ubuntu firewall ufw is disabled on all nodes - "sudo ufw status" shows inactive.
Further in the same thread bkclements indicates he is running on bare metal and appears to have everything working but doesn't list the steps to get to that point. He links to here https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82 which appears to be a helpful resource that I am still slogging through. But as LFD259 is a developer course I would rather not have to get this far into the weeds.
Further down gfalaska mentions that he also was unable to get this working on his bare-metal Ubuntu cluster until "I noticed that the TCP port 179 (which should be open in every node for calico to work properly) was opened on the master node but not on the worker. Once I opened it on the worker, the calico-node pods reached the ready state and everything started working properly."
Perhaps then I just need to open TCP port 179 on my worker nodes. Can anyone assist me with the steps needed to do so?
Any assistance appreciated.
-
Hi @WadeBee,
As you were able to discover during your research, Kubernetes is very sensitive to node networking configuration, which may be impacted by node guest OS firewalls, and/or by infrastructure level firewalls.
For raspberry-pi networking I would check their documentation on how to set firewall rules, and more specifically how to open all ports, for all incoming/ingress traffic, from all sources, all protocols.
Your calico agents may not be running for several reasons: a firewall blocking their communication, or limited CPU, memory, or storage. You could find out what is going on with your calico agents by running the following command and studying the events at the bottom of the output:
kubectl -n kube-system describe pod calico-node-<one-of-the-two-pods>
If events are no longer shown, you may try to delete that pod and allow the controller to re-deploy another one for you, and then run the describe command against the newer pod.
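For example, to delete one of the not-ready pods (the DaemonSet controller will recreate it automatically):
kubectl -n kube-system delete pod calico-node-<one-of-the-two-pods>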
Regards,
-Chris
-
Thanks Chris. Even though these are RPI's, they are running a plain ubuntu-18.04.5-server+raspi4.img.xz image in a headless configuration, so there aren't any special firewalls (just ufw, which is inactive).
I checked the failed calico node according to your instructions and found this:
Warning Unhealthy 23s (x677 over 113m) kubelet, rpi-worker1 (combined from similar events): Readiness probe failed: 2020-09-14 03:39:02.843 [INFO][18831] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.0.0.230

With this better error message I found this: https://github.com/projectcalico/calico/issues/2561 where a bunch of folks report similar issues with varying solutions.
Any idea on the best approach from here? Do I somehow need to open TCP port 179 to allow this BGP traffic to pass, or is this something else?
-
Thanks for your feedback. For the purpose of this course, we recommend opening ALL ports, to all ingress traffic, all protocols, from all sources. Although your ufw is inactive at the guest OS level on each node, RPI performs some network configuration for your nodes - this is your infrastructure level.
Also, commenting out the probe is like applying a band-aid over a deep cut - it really does not fix your networking issue. I would recommend fixing the networking issue by opening all ports and then re-enabling the probes - you would want the cluster to monitor the health-checks of the calico agents.
Regards,
-Chris
-
Hi Chris,
I appreciate your attempts to help. Although I linked to the article above because the symptoms were similar, I did not follow the poster's workaround. I understand your point about the health check being a band-aid, but I have not disabled my probes.
I see over and over again that I need to open all ports but nowhere have I found instructions for doing so. With fear of sounding like an idiot - how do I do open all my ports?
-
I've been Googling to try to figure this out and see a lot of discussion around IPTables. I ran the command 'sudo iptables -L' to see what is currently in place and see 5-6 pages of DROP, ACCEPT, and RETURN rules for various K8S services, docker, calico, etc... I have no idea where to go with this.
-
Hi @WadeBee,
You could start by adding a rule to your iptables for the port needed by calico, but that is no guarantee that once you start deploying services to your Kubernetes cluster they will all work as expected. Helpful iptables documentation and usage examples can be found here.
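As a rough sketch, Calico's BGP peering uses TCP port 179, so a rule like the following on each node would allow that traffic in. Note that such a rule is not persistent across reboots unless you save it (for example with the iptables-persistent package):
sudo iptables -I INPUT -p tcp --dport 179 -j ACCEPT   # allow inbound BGP for Calico/BIRD
sudo iptables -L INPUT -n --line-numbers | head       # verify the rule landed at the top of the chain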
You could also check calico documentation for any installation notes or tips for RPI, and search for any installation samples or tutorials for Kubernetes with calico on RPI.
I have not used Kubernetes on RPI, so I would have to rely on research, trial and error to figure out a way to successfully bootstrap my cluster on RPI. From what I can see, however, it does seem like a lot of work, and as you said yourself, for a developers course this may not be the most productive approach.
Regards,
-Chris
-
Hello,
How do your raspberry pi nodes physically connect to each other - a crossover cable, or through a switch?
You could run wireshark, or save the output of tcpdump and use wireshark on a different node, to see if the traffic is leaving one node and then follow where it ends up going.
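As a rough sketch (the interface name and pod IP below are placeholders for your own values), you could capture on the wired interface of one node while running the curl, then copy the capture file to a machine with a GUI and open it in Wireshark:
sudo tcpdump -i eth0 -w /tmp/curl-test.pcap host <pod-ip>   # capture traffic to/from the pod IP
# stop with Ctrl-C, then copy /tmp/curl-test.pcap off the node and open it in Wireshark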
If BIRD isn't working then Calico can't communicate between nodes. Could also be something tied to the chip architecture. Instead of Calico you could use Flannel, which runs almost everywhere. If Flannel works then you know it's something about the RPi and the devices used.
Regards,
-
Thank you for your suggestions serewicz. My 3 nodes are connected to my primary home network 10.0.0.x via switch.
I think it's important to point out to chrispokorni that (up to now) I haven't had to do anything non-standard to get the PI's working with the LFD259 courseware. I am installing Ubuntu 18 (the official courseware distro) directly on rpi arm64 hardware. There is no other OS on the PI's - no VM's - just plain old headless Ubuntu like you would find on an x64 machine. The k8sMaster and k8sSecond shell scripts are running without changes or issues. The calico pod on the master is in a ready state, but not on the 2 workers.
I am starting to think serewicz has a point about calico being a problem with the arm64 architecture of the rpi. In trying to diagnose this I found a few posts about using the calicoctl tool to run troubleshooting diagnostics. The calicoctl instruction page lists 2 different ways of installing; I couldn't get either to work, and the binary install error message is pretty much pointing right at an incompatibility with arm64.
So I have two routes I can go. One is to replace calico with flannel as you suggested. I see that around line 77 of the k8sMaster shell script there is a 'kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml' which installs calico. Do you have any guidance on what this should be changed to for a flannel network config? Also - will I be able to complete the rest of the coursework with flannel? Is there anything calico specific in the CKAD exam?
The second route I could go is to switch over to k3sup. A coworker recommended it as a really simple way to set up a lightweight kubernetes on an rpi cluster. My concern is that the environment might differ from the traditional k8s environment and I might not be able to complete the courseware or be prepared for the exam.
I think shifting to flannel or k3sup might be my next step - but would like to know if either of you (or anyone else monitoring this thread) have experience with either approach.
Thanks
-
I was able to find the flannel install instructions here:
I added the following to k8sMaster (replacing the default calico creation)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Everything init'd without error and the Flannel pods are showing running/ready:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default basicpod 1/1 Running 0 3m58s 192.168.1.3 rpi-worker1
kube-system coredns-f9fd979d6-hfb2d 1/1 Running 0 59m 192.168.0.3 rpi-controller
kube-system coredns-f9fd979d6-rz4mh 1/1 Running 0 59m 192.168.0.2 rpi-controller
kube-system etcd-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-apiserver-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-controller-manager-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-89j7c 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-sxkn4 1/1 Running 0 16m 10.0.0.231 rpi-worker1
kube-system kube-proxy-65p56 1/1 Running 0 16m 10.0.0.231 rpi-worker1
kube-system kube-proxy-cxnhk 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-scheduler-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller

I am still having the same problem with Exercise 2.3 section 7:
curl http://192.168.1.3 fails from rpi-controller but works from rpi-worker1 where the pod is running.

I am still having the same problem with Exercise 2.3 section 11:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default basicservice ClusterIP 10.107.189.192 80/TCP 13m type=webserver
default kubernetes ClusterIP 10.96.0.1 443/TCP 69m
kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 69m k8s-app=kube-dns

curl http://10.107.189.192 fails from rpi-controller but works from rpi-worker1 where the pod is running.
And most crucially I still have the same problem with Exercise 2.3 section 14
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
basicservice NodePort 10.104.58.193 80:32533/TCP 17s type=webserver
kubernetes ClusterIP 10.96.0.1 443/TCP 76m

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi-controller Ready master 81m v1.19.0 10.0.0.230 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6
rpi-worker1 Ready 37m v1.19.0 10.0.0.231 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6

curl http://10.0.0.230:32533 fails from anywhere (controller, worker, or a non-cluster PC on the same 10.0.0.x subnet).
Appreciate any suggestions you may have as to why this is still failing.
-
Hello,
Using non-standard hardware often introduces the unknown. You are finding issues and bugs no one has seen because no one is using what you are using with the same combination of OS, firmware, software, and configuration.
Flannel was the first network plugin to gain wide acceptance, but it lacks features. We used to use flannel in the labs, but switched to Calico for network security options.
I think you are encountering a routing and networking issue, not Kubernetes:
What is the primary IP of your nodes?
What is the pod network you are using?
What does tcpdump/wireshark show when you curl from another node?
Does your switch pass traffic to all ports, any configuration on the switch?

I encourage you to use a more typical environment, unless it's Raspberry Pi you are trying to learn.
Regards,
-
Hi @WadeBee,
Once you reach Lab Exercise 6.5 you will notice that Flannel does not enforce the network policy covered in that exercise. Otherwise Flannel should not cause any issues - at least I can't remember anything else from when we used Flannel as the plugin.
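As a rough illustration (not the exercise's own manifest; the name is just a placeholder), a minimal NetworkPolicy - the kind of object Flannel will accept but not actually enforce - looks like:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
  - Ingress              # with no ingress rules listed, all ingress to those pods is denied
EOF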
Regardless of the CNI plugin used, the expected behavior is still the same in the Kubernetes cluster - all pods should be able to access all pods from all nodes, and all services should be accessible from all nodes.

Regards,
-Chris
-
Thanks for your continued assistance. I have tried to anticipate the troubleshooting information you might need and included that in my posts.
You ask:
What is the primary IP of your nodes?
ANSWER
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi-controller Ready master 81m v1.19.0 10.0.0.230 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6
rpi-worker1 Ready 37m v1.19.0 10.0.0.231 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi docker://19.3.6
What is the pod network you are using?
ANSWER
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default basicpod 1/1 Running 0 3m58s 192.168.1.3 rpi-worker1
kube-system coredns-f9fd979d6-hfb2d 1/1 Running 0 59m 192.168.0.3 rpi-controller
kube-system coredns-f9fd979d6-rz4mh 1/1 Running 0 59m 192.168.0.2 rpi-controller
kube-system etcd-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-apiserver-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-controller-manager-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-89j7c 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-sxkn4 1/1 Running 0 16m 10.0.0.231 rpi-worker1
kube-system kube-proxy-65p56 1/1 Running 0 16m 10.0.0.231 rpi-worker1
kube-system kube-proxy-cxnhk 1/1 Running 0 59m 10.0.0.230 rpi-controller
kube-system kube-scheduler-rpi-controller 1/1 Running 0 59m 10.0.0.230 rpi-controller
What does tcpdump/wireshark show when you curl from another node?
ANSWER
I've never done this before but this link https://opensource.com/article/18/10/introduction-tcpdump gave me some tips.
When I run tcpdump -D I find that I have a whole lot of network interfaces I can capture from. Maybe this is normal for K8S but I was surprised at how many there are:
1.eth0 [Up, Running]
2.cni0 [Up, Running]
3.flannel.1 [Up, Running]
4.vethb026bb91 [Up, Running]
5.veth3f283841 [Up, Running]
6.any (Pseudo-device that captures on all interfaces) [Up, Running]
7.lo [Up, Running, Loopback]
8.docker0 [Up]
9.wlan0
10.nflog (Linux netfilter log (NFLOG) interface)
11.nfqueue (Linux netfilter queue (NFQUEUE) interface)
12.usbmon1 (USB bus number 1)
13.usbmon2 (USB bus number 2)

For Exercise 2.3 section 7: curl http://192.168.1.3 from rpi-controller to the pod on rpi-worker1. sudo tcpdump -i any host 192.168.1.3 from rpi-controller shows:
23:27:36.209253 IP rpi-controller.33060 > 192.168.1.3.http: Flags [S], seq 4165775283, win 64860, options [mss 1410,sackOK,TS val 3038397281 ecr 0,nop,wscale 7], length 0
For Exercise 2.3 section 11: curl http://10.104.58.193 from rpi-controller to the basicservice endpoint. sudo tcpdump -i any host 10.104.58.193 from rpi-controller shows nothing at all.

For Exercise 2.3 section 14: curl http://10.0.0.230:32533 from rpi-controller to the host IP and NodePort 32533. sudo tcpdump -i any port 32533 from rpi-controller shows nothing at all. However, I also tried the same curl http://10.0.0.230:32533 from my client PC on the main subnet (10.0.0.243) and did receive a response:

23:47:33.252587 IP 10.0.0.243.43456 > rpi-controller.32533: Flags [S], seq 23451321, win 64240, options [mss 1460,sackOK,TS val 1543955129 ecr 0,nop,wscale 7], length 0
Does your switch pass traffic to all ports, any configuration on the switch?
ANSWER
It is a non-managed Netgear 8 port hub hard-wired into my main home network distributed throughout the house. There are no special firewalls or traffic rules.
-
No. This is a lot of output but buried inside is the information I asked for.
Let's go one step at a time.
What is the primary IP address of your nodes?
-
rpi-controller 10.0.0.230
rpi-worker1 10.0.0.231
-
Thanks!
What was the full **kubeadm init** command you used to create the cluster?
-
I am using the default k8sMaster.sh from the September courseware s_02 folder:
sudo kubeadm init --kubernetes-version 1.19.0 --pod-network-cidr 192.168.0.0/16
-
Perfect. Just wanted to be sure. Let's take a look at the node and pod conditions.
Please run and show the output for kubectl get pod -o wide --all-namespaces and kubectl get node.0 -
wade@rpi-controller:~$ kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default basicpod 1/1 Running 1 2d2h 192.168.1.6 rpi-worker1
kube-system coredns-f9fd979d6-hfb2d 1/1 Running 2 4d22h 192.168.0.6 rpi-controller
kube-system coredns-f9fd979d6-rz4mh 1/1 Running 2 4d22h 192.168.0.7 rpi-controller
kube-system etcd-rpi-controller 1/1 Running 2 4d22h 10.0.0.230 rpi-controller
kube-system kube-apiserver-rpi-controller 1/1 Running 2 4d22h 10.0.0.230 rpi-controller
kube-system kube-controller-manager-rpi-controller 1/1 Running 3 4d22h 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-89j7c 1/1 Running 3 4d22h 10.0.0.230 rpi-controller
kube-system kube-flannel-ds-sxkn4 1/1 Running 2 4d21h 10.0.0.231 rpi-worker1
kube-system kube-proxy-65p56 1/1 Running 2 4d21h 10.0.0.231 rpi-worker1
kube-system kube-proxy-cxnhk 1/1 Running 2 4d22h 10.0.0.230 rpi-controller
kube-system kube-scheduler-rpi-controller 1/1 Running 4 4d22h 10.0.0.230 rpi-controller

wade@rpi-controller:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
rpi-controller Ready master 4d22h v1.19.0
rpi-worker1 Ready 4d22h v1.19.0Thanks for the help.
-
Hello,
Thanks. From this output we can tell that there is some communication between nodes, as there are pods running and showing a ready status on both of the nodes. That is a good sign: both nodes can get to the outside world to download their docker images, and can communicate with each other. Now to figure out why only curl seems not to work.
On your RPis, is there only one network interface? Some have a wired NIC and a wifi chip. If you do have a wifi chip is it fully disabled, or do you see that interface when you run ip a?
From the rpi-controller, assuming the pod is running on the worker, please check the path and then verify which interface is in use. For example, on my cluster the pod IP is 192.168.171.82, running on the worker. From the master I ran tracepath:
student@master:~$ tracepath 192.168.171.82 -p 80
1?: [LOCALHOST] pmtu 1440
1: 192.168.171.64 1.556ms
1: 192.168.171.64 0.401ms
2: 192.168.171.82 0.285ms reached
Resume: pmtu 1440 hops 2 back 2

I can see that the packet went from my master's localhost interface to tunl0@NONE on my worker node, which has the 192.168.171.64 IP. I checked this via ip a on my worker, which would be rpi-worker1 for you:
student@worker:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
link/ether 42:01:0a:80:00:3c brd ff:ff:ff:ff:ff:ff
inet 10.128.0.60/32 scope global dynamic ens4
valid_lft 65666sec preferred_lft 65666sec
inet6 fe80::4001:aff:fe80:3c/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:79:9d:28:e9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.171.64/32 brd 192.168.171.64 scope global tunl0
valid_lft forever preferred_lft forever
^^^^^^^^^^^^^^^^ Here is the IP and interface the packet went to
24: cali1d89ab8ca26@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever

If the packet is not going via tunl0 by default, attempt to force it and see if it works:
curl --interface tunl0 192.168.171.82
Please run tracepath from your master, and also show the interface configuration on your worker.
Regards,
-
From ip a on rpi-controller it looks like wifi is DOWN. Does that mean fully disabled, or just unable to connect?
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether dc:a6:32:bd:6e:c4 brd ff:ff:ff:ff:ff:ff

The tracepath from rpi-controller to the basicpod running on worker1 is failing. I am not great with networking, but it seems to me that 192.168.1.0 is not a valid client IP address?
wade@rpi-controller:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
basicpod 1/1 Running 2 2d20h 192.168.1.7 rpi-worker1
wade@rpi-controller:~$ tracepath 192.168.1.7 -p 80
1?: [LOCALHOST] pmtu 1450
1: 192.168.1.0 0.768ms
1: 192.168.1.0 0.432ms
2: no reply

ip a from rpi-worker1 below:
wade@rpi-worker1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:86:87:41 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.231/24 brd 10.0.0.255 scope global dynamic eth0
valid_lft 79950sec preferred_lft 79950sec
inet6 fe80::dea6:32ff:fe86:8741/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether dc:a6:32:86:87:42 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a5:8b:49:5e brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 5a:e9:68:70:57:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.0/32 brd 192.168.1.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::58e9:68ff:fe70:5785/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 4a:36:73:27:b9:1f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::4836:73ff:fe27:b91f/64 scope link
valid_lft forever preferred_lft forever
7: veth9d504c4a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether e6:35:6c:a4:d7:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e435:6cff:fea4:d7b7/64 scope link
valid_lft forever preferred_lft forever

For your last requested step, 'curl --interface tunl0 192.168.x.y': I don't have a tunl0 interface or anything bound to the 192.168.1.0 tracepath starting hop, so I am not sure what to send you here. I did try curl from the various interfaces I do have but get the same timeout.
-
Hello,
I would fully disable the wifi if you can on all the nodes, just to be sure. It should be okay without an IP, as there shouldn't be a route associated with that interface.
From the output of your commands it looks like the curl is trying to go to 192.168.1.0, which appears to be the IP of flannel.1 on the worker. Which is good. But the traffic is either not going across the wire properly, or is not being handled properly on the worker. I would suppose there is something about the RPi hardware that is not handing traffic across. Let's try to narrow it down:
Please use the flannel.1 interface from your master, such as curl --interface flannel.1 192.168.x.y
Before you run the curl on your master, run tcpdump on your worker node: sudo tcpdump dst 192.168.x.y
When I ran it I saw no traffic until the curl from the master, then I saw the following (you would use a different IP). Note on the second line of output that it uses the tunl0 interface; yours should use flannel.1:
worker$ sudo tcpdump dst 192.168.171.82
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
19:00:03.760064 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [S], seq 1359678280, win 64400, options [mss 1400,sackOK,TS val 4122292972 ecr 0,nop,wscale 7], length 0
19:00:03.761219 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [.], ack 3070323334, win 504, options [nop,nop,TS val 4122292974 ecr 1590422118], length 0
19:00:03.761243 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [P.], seq 0:78, ack 1, win 504, options [nop,nop,TS val 4122292974 ecr 1590422118], length 78: HTTP: GET / HTTP/1.1
19:00:03.764394 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [.], ack 239, win 503, options [nop,nop,TS val 4122292977 ecr 1590422122], length 0
19:00:03.764549 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [.], ack 851, win 502, options [nop,nop,TS val 4122292977 ecr 1590422123], length 0
19:00:03.765032 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [F.], seq 78, ack 851, win 502, options [nop,nop,TS val 4122292978 ecr 1590422123], length 0
19:00:03.765320 IP 192.168.219.64.57641 > 192.168.171.82.http: Flags [.], ack 852, win 502, options [nop,nop,TS val 4122292978 ecr 1590422123], length 0
From this output on the worker we can see traffic come in to port 80 including the GET as it goes by.
If you don't see this traffic on flannel.1, then I would move over to the master again and try to narrow down what interface the traffic is leaving upon.
If the traffic is leaving the master on flannel.1, but not showing up on flannel.1 on the worker, and you are sure there is no firewall in place, it's something with the hardware you are using.
Regards,
-
To completely remove the WLAN0 interface I added the following 2 lines to /etc/modprobe.d/raspi-blacklist.conf
blacklist brcmfmac
blacklist brcmutil
I reran ip a and no longer have a wlan0 interface on either rpi.

After rebooting for the above change, my basicpod is now running on 192.168.1.9.
Something seems to be wrong with tcpdump (or I am doing something wrong) when running on either rpi. I tried the scenario you requested first (tcpdump running on the worker while curling from the controller) using sudo tcpdump dst 192.168.1.9. I was not picking up any traffic, but what seemed more peculiar to me is that when running this command from either rpi it says:
wade@rpi-worker1:~$ sudo tcpdump dst 192.168.1.9
**listening on eth0**, link-type EN10MB (Ethernet), capture size 262144 bytes
Should it be listening on eth0 when eth0 is bound to the primary IP address in the 10.0.0.x subnet?
I even went so far as running tcpdump on rpi-worker and curling (from a separate SSH session) from rpi-worker. I know this works because it returns the NGINX root html. Even in that scenario I wasn't getting any traffic from tcpdump.
Lastly, I noticed tcpdump has an option to just watch an interface. So I ran the following on rpi-worker while curling from rpi-controller. After about a minute I cancelled the failed curl and didn't see the HTTP GET that you did in your trace.
wade@rpi-worker1:~$ sudo tcpdump -i flannel.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
23:19:26.200012 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840327739 ecr 0,nop,wscale 7], length 0
23:19:27.206321 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840328746 ecr 0,nop,wscale 7], length 0
23:19:29.222275 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840330762 ecr 0,nop,wscale 7], length 0
23:19:33.446231 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840334986 ecr 0,nop,wscale 7], length 0
23:19:41.638261 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840343178 ecr 0,nop,wscale 7], length 0
23:19:57.766173 IP 192.168.0.0.45370 > 192.168.1.9.http: Flags [S], seq 3329324080, win 64860, options [mss 1410,sackOK,TS val 1840359306 ecr 0,nop,wscale 7], length 0
-
Not sure if this is at all helpful but I just spun up an alternate k8s cluster using k3s https://rancher.com/docs/k3s/latest/en/quick-start/
I know Rancher is using a different internal stack than the LFD259 courseware, but my NGINX pod curls without any issue on K3s (using the exact same Ubuntu 18 OS).
wade@rpi-controller:~$ k get nodes -A -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rpi-worker1 Ready 68m v1.18.8+k3s1 10.0.0.231 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi containerd://1.3.3-k3s2
rpi-controller Ready master 72m v1.18.8+k3s1 10.0.0.230 Ubuntu 18.04.5 LTS 5.4.0-1015-raspi containerd://1.3.3-k3s2

wade@rpi-controller:~$ k get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system metrics-server-7566d596c8-5cw85 1/1 Running 0 73m 10.42.0.3 rpi-controller
kube-system local-path-provisioner-6d59f47c7-qgx86 1/1 Running 0 73m 10.42.0.5 rpi-controller
kube-system coredns-7944c66d8d-2gn2h 1/1 Running 0 73m 10.42.0.2 rpi-controller
kube-system helm-install-traefik-xbrjd 0/1 Completed 2 73m 10.42.0.4 rpi-controller
kube-system svclb-traefik-88v2w 2/2 Running 0 72m 10.42.0.7 rpi-controller
kube-system traefik-758cd5fc85-9wsmx 1/1 Running 0 72m 10.42.0.6 rpi-controller
kube-system svclb-traefik-txkh4 2/2 Running 0 69m 10.42.1.2 rpi-worker1
default basicpod 1/1 Running 0 6m10s 10.42.0.9 rpi-controller

wade@rpi-controller:~$ curl 10.42.0.9
<!DOCTYPE html>
Welcome to nginx!

One thing that stands out to me, looking at what works and what doesn't, is the subnets assigned across the pods. It may be nothing, but all the above pods (system and workload) are in the 10.42.x.x range, whereas in the non-working LFD259 cluster the system pods are scattered across the 10.0.0.x primary IP addresses and the 192.168.0.x subnet, and the workload pod is on the 192.168.1.x subnet.
Does CNI know how to route across all these subnets?
I have the LFD259 cluster on a separate set of SD cards so I can swap back and forth between both clusters if there are any checks you want me to run.
-
Hello,
Flannel is a flat network, which is why you are seeing all the pods in a single range. Flannel is also responsible for getting traffic from one node to another; as a result, when you ran curl on the same node as the pod, you wouldn't see that traffic go over flannel.
The only thing I can think of, other than the RPi being hardware that won't work for the class, is that packet forwarding is not on. If you run cat /proc/sys/net/ipv4/ip_forward you should see a 1 returned. If you see a zero instead, then the kernel is the reason the packet isn't making it from the inbound interface to a different interface.
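If it did show 0, a quick sketch of enabling it would be (the filename under /etc/sysctl.d/ is arbitrary):
sudo sysctl -w net.ipv4.ip_forward=1                                        # enable immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf   # persist across reboots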
Regards,
-
I have run cat /proc/sys/net/ipv4/ip_forward on both controller and worker1 - both are properly set to 1.
You mention "Flannel is a flat network, which is why you are seeing all the pods in a single range", but that is the point of my above concern. The failing LFD259 cluster has service and workload pods scattered across different IP ranges, while the working Rancher cluster has all pods in a single range.
-
Hi @WadeBee,
Both calico and flannel are using private IP subnets for all client workload Pods and very few of the control plane Pods.
The majority of the control plane Pods are exposed directly on the Node IP (master or worker).
The Service ClusterIPs are managed by the cluster, from a different private virtual network that does not overlap with Pod IPs or Node IPs.
Are you experiencing different behavior in either of the two clusters?
Regards,
-Chris
-
Hi Chris - I just loaded back up the LFD259 cluster and it is sub-netted the way you describe. I think I confused it with the Rancher setup that seems spread across 3 different subnets.
For anyone who is following along with this thread - I did get the RPi cluster curling and working as expected for lesson 2.3 using the excellent instructions here: https://opensource.com/article/20/6/kubernetes-raspberry-pi
If you go down this path (as you can probably tell from the instructors' assistance above) you are kinda going off the reservation, and you may not get much support from this forum. The working solution uses Ubuntu v20, k8s v1.19, Docker v19 and Flannel, so it's not too far off the reservation, but...
I expect there are going to be additional hurdles for me. No sooner had I gotten curl working than, in the very next lab (multi-container pod), the fluentd pod refused to load. I was able to work through that (image: fluent/fluentd:edge-debian-arm64), but the main problem appears to be related to the underlying hardware architecture and support for it among various vendors.
I am now up to lab 3.2, and there I came across another issue. The courseware asks you to use Kompose to convert Docker Compose YAML to k8s YAML. The Kompose install will fail if you try to do it on the rpi-controller as the courseware suggests. However, Kompose can just be installed on your workstation, and after it has done its conversion, the resulting YAML can be moved to the cluster.
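For anyone else hitting this, a rough sketch of what I mean (the release version and URL may differ - check the Kompose releases page for your workstation's architecture):
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
chmod +x kompose && sudo mv kompose /usr/local/bin/
kompose convert -f docker-compose.yaml        # writes the per-service Deployment/Service YAML files
scp *.yaml wade@rpi-controller:~/             # copy the generated manifests over to the cluster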
Is it a lot of work - fer sure. Is it worth it? I think so. Hitting these roadblocks is forcing me to a deeper understanding of the environment. There is something really satisfying about having 9 SD cards representing 3 completely different cluster configurations and not having to worry if I forgot to shut down all my resources before logging off a cloud provider. And as data centers realize more and more that their primary costs are electricity and cooling, they are embracing ARM architectures at an increasing rate.
-
Please complete the steps using x86-based hardware. Do the steps work using GCE, AWS, Digital Ocean, VirtualBox, or VMware on x86 laptops?
If so then the issue is with RPi, which we do not support for the labs.