Pod network across nodes does not work
I followed the installation procedure of labs 3.1 to 3.3 closely. Everything looks fine, but whenever I try to establish a network connection from a pod on one node to a pod on another node, it fails. The calico-node pods are up and running, and I don't see any error messages in their logs. On the cp node, calicoctl node status returns:
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.7     | node-to-node mesh | up    | 13:15:03 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
For the worker node, I get:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.6     | node-to-node mesh | up    | 13:15:03 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
On the cp node, ip route returns:

default via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.6
168.63.129.16 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
169.254.169.254 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
blackhole 192.168.74.128/26 proto bird
192.168.74.136 dev calie739583d8fa scope link
192.168.74.137 dev cali9270933bb0b scope link
192.168.74.138 dev cali73bd7dd6478 scope link
192.168.74.139 dev cali3344860a0ad scope link
192.168.189.64/26 via 10.0.0.7 dev tunl0 proto bird onlink
On the worker node I see:

default via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.7
168.63.129.16 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
169.254.169.254 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
192.168.74.128/26 via 10.0.0.6 dev tunl0 proto bird onlink
blackhole 192.168.189.64/26 proto bird
192.168.189.76 dev cali3140fc1dafd scope link
192.168.189.77 dev calia35b901ce89 scope link
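The routes above send cross-node pod traffic through tunl0, i.e. IP-in-IP encapsulation (IP protocol 4). Whether those encapsulated packets actually reach the other node can be checked with tcpdump; a minimal sketch, assuming eth0 is each node's primary interface:

# On the worker node: watch for IP-in-IP packets arriving from the cp
sudo tcpdump -ni eth0 ip proto 4

If a request from the cp side produces no packets here, the encapsulated traffic is being dropped by the underlying network rather than by Calico.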
calicoctl get workloadendpoints -A returns:

NAMESPACE     WORKLOAD                                   NODE      NETWORKS            INTERFACE
accounting    nginx-one-575f648647-j2rwh                 worker2   192.168.189.77/32   calia35b901ce89
accounting    nginx-one-575f648647-x5c5c                 worker2   192.168.189.76/32   cali3140fc1dafd
default       bb2                                        k8scp     192.168.74.137/32   cali9270933bb0b
kube-system   calico-kube-controllers-5f6cfd688c-h29qd   k8scp     192.168.74.136/32   calie739583d8fa
kube-system   coredns-74ff55c5b-69n8g                    k8scp     192.168.74.139/32   cali3344860a0ad
kube-system   coredns-74ff55c5b-bngtf                    k8scp     192.168.74.138/32   cali73bd7dd6478
The example from lab 9.1 is deployed. In addition, I used the pod bb2, containing busybox, for debugging purposes. The problem became obvious when I tried to curl the nginx pods: this only works when I am logged into the worker node.
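A minimal reproduction, using the pod IPs from the workload endpoint list above (wget here is the busybox applet, hence the short flags):

# From the cp node: try to reach an nginx pod on worker2 from the bb2 pod
kubectl exec bb2 -- wget -qO- -T 2 http://192.168.189.76

This times out, while the same request from a shell on the worker node succeeds.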
This is my second cluster. I called the cp node k8scp and the worker worker2; in my first cluster they are still named master and worker. The issue occurs in both clusters. The first one was set up with Docker, the second one with CRI-O.
The whole setup runs on VMs on Azure.
Is there anything obvious I missed?
One thing that appears odd to me is that the pods do not get addresses from the PodCIDR range of their node. If I run kubectl describe node k8scp | grep PodCIDR, I get:

PodCIDR:   192.168.0.0/24
PodCIDRs:  192.168.0.0/24

The pods on that node are in 192.168.74.128/26, though, as ip route shows. Is that normal?
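From what I read, this may be expected: by default Calico's IPAM allocates addresses from its own IP pool rather than from the node PodCIDR, handing each node /26 blocks such as 192.168.74.128/26. The pool can be inspected with:

calicoctl get ippool -o wide
# columns: NAME, CIDR, NAT, IPIPMODE, VXLANMODE, DISABLED, SELECTOR;
# the CIDR column shows the range pod addresses are actually drawn from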
Comments
Hi @deissnerk,
Azure is not a recommended or supported environment for labs in this course. However, there are learners who ran lab exercises on Azure and shared their findings in the forum. You may use the search option of the forum to locate them for reference.
Regards,
Chris
Thanks for the quick response @chrispokorni. I suppose I'm running into issues similar to those @luis-garza has been describing here.
At the beginning of lab 3.1 it is stated: "The labs were written using Ubuntu instances running on Google Cloud Platform (GCP). They have been written to be vendor-agnostic so could run on AWS, local hardware, or inside of virtualization to give you the most flexibility and options."
I didn't read this as a clear recommendation. After all, it should just be about two Ubuntu VMs in an IP subnet. I was prepared to figure out some Azure specifics on my own, but an incompatibility at this level comes as a surprise. A warning in section 3.1 that the components used in the lab might have compatibility issues with other cloud providers would be helpful.
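One Azure-specific lead I found: the ip route output above shows cross-node pod traffic going through tunl0, i.e. IP-in-IP encapsulation, and Azure's network fabric apparently does not forward IP-in-IP packets. Switching the Calico IP pool to VXLAN encapsulation might work around this; a sketch, assuming the default pool name default-ipv4-ippool:

# Turn off IPIP and use VXLAN for cross-node pod traffic instead
calicoctl patch ippool default-ipv4-ippool \
  --patch '{"spec": {"ipipMode": "Never", "vxlanMode": "Always"}}'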
Regards,
Klaus
Got the same problem on AWS:
- all firewalls on the cp and worker nodes disabled
- all inbound/outbound traffic enabled
Any help?
Hi @joov,
On AWS the VPC and Security Group configurations directly impact the cluster networking. If you have not done so already, I would invite you to watch the video "Using AWS to set up labs" found in the introductory chapter of this course. The video outlines important settings needed to enable the networking of your cluster.
Also, when provisioning the second EC2 instance, make sure it is placed in the same VPC subnet, and under the same SG as the first instance.
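For reference, a sketch of an equivalent AWS CLI call; the group ID below is a hypothetical placeholder, and the rule allows all traffic between instances that share the security group:

# Allow all protocols and ports between instances in the same SG
# (sg-0123456789abcdef0 is a placeholder; use your cluster's SG ID)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol all \
    --source-group sg-0123456789abcdef0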
Regards,
-Chris
I followed the video and got it working already. Thank you.