Pod network across nodes does not work
I followed the installation procedure of labs 3.1 to 3.3 closely. Everything looks fine, but whenever I try to establish a network connection between a pod on one node and a pod on another node, it does not work. The calico-node pods are up and running, and I don't see any error messages in their logs.
Running calicoctl node status on the cp node results in:
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.7     | node-to-node mesh | up    | 13:15:03 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
For the worker node, I get:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.6     | node-to-node mesh | up    | 13:15:03 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
On the cp node, ip route returns:

default via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.6
168.63.129.16 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
169.254.169.254 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.6 metric 100
blackhole 192.168.74.128/26 proto bird
192.168.74.136 dev calie739583d8fa scope link
192.168.74.137 dev cali9270933bb0b scope link
192.168.74.138 dev cali73bd7dd6478 scope link
192.168.74.139 dev cali3344860a0ad scope link
192.168.189.64/26 via 10.0.0.7 dev tunl0 proto bird onlink
On the worker node I see:

default via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.7
168.63.129.16 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
169.254.169.254 via 10.0.0.1 dev eth0 proto dhcp src 10.0.0.7 metric 100
192.168.74.128/26 via 10.0.0.6 dev tunl0 proto bird onlink
blackhole 192.168.189.64/26 proto bird
192.168.189.76 dev cali3140fc1dafd scope link
192.168.189.77 dev calia35b901ce89 scope link
calicoctl get workloadendpoints -A returns:

NAMESPACE     WORKLOAD                                   NODE      NETWORKS            INTERFACE
accounting    nginx-one-575f648647-j2rwh                 worker2   192.168.189.77/32   calia35b901ce89
accounting    nginx-one-575f648647-x5c5c                 worker2   192.168.189.76/32   cali3140fc1dafd
default       bb2                                        k8scp     192.168.74.137/32   cali9270933bb0b
kube-system   calico-kube-controllers-5f6cfd688c-h29qd   k8scp     192.168.74.136/32   calie739583d8fa
kube-system   coredns-74ff55c5b-69n8g                    k8scp     192.168.74.139/32   cali3344860a0ad
kube-system   coredns-74ff55c5b-bngtf                    k8scp     192.168.74.138/32   cali73bd7dd6478
The example deployment from lab 9.1 is running. In addition, I use the pod bb2, containing busybox, for debugging. The problem became obvious to me when I tried to curl the nginx pods: this only works when logged into the worker node.
This is my second cluster. I called the cp node k8scp and the worker worker2, while in my first cluster they are still named master and worker. The issue occurs in both clusters. The first one was set up with Docker, the second one with CRI-O.
The whole setup runs on VMs on Azure.
Is there anything obvious I missed?
One thing that appears odd to me is that the pods do not get addresses from the PodCIDR range of their node. If I run kubectl describe node k8scp | grep PodCIDR, I get:

PodCIDR:  192.168.0.0/24
PodCIDRs: 192.168.0.0/24

The pods on that node are in 192.168.74.128/26, though, as ip route shows. Is that normal?
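The mismatch between the two ranges can be illustrated with a quick check using Python's ipaddress module; the addresses below are copied from the outputs above:

```python
import ipaddress

# Values taken from "kubectl describe node k8scp" and "ip route" above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")         # node.spec.podCIDR of k8scp
calico_block = ipaddress.ip_network("192.168.74.128/26")  # blackhole route block on k8scp
pod_ip = ipaddress.ip_address("192.168.74.137")           # the bb2 pod's address

print(pod_ip in calico_block)  # True  - the pod IP comes from Calico's block
print(pod_ip in pod_cidr)      # False - kubelet's PodCIDR is not used
```

This part is expected: Calico's default IPAM allocates per-node /26 blocks from its own IP pool (CALICO_IPV4POOL_CIDR in the install manifest, 192.168.0.0/16 by default) and does not use node.spec.podCIDR unless the host-local IPAM plugin is configured, so the mismatch by itself is normal.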
Comments
Hi @deissnerk,
Azure is not a recommended or supported environment for labs in this course. However, there are learners who ran lab exercises on Azure and shared their findings in the forum. You may use the search option of the forum to locate them for reference.
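For reference, one cause learners have frequently reported on Azure: the tunl0 routes in the question indicate Calico's IP-in-IP (IPIP, IP protocol 4) encapsulation, which Azure's network typically does not forward between VMs. A commonly used workaround is switching the Calico IP pool to VXLAN encapsulation; a sketch, assuming the default pool name and CIDR from a standard Calico install:

```yaml
# Sketch only: switch the default Calico IP pool from IPIP to VXLAN,
# since Azure drops IP-in-IP (protocol 4) traffic between VMs.
# Apply with calicoctl, then restart the calico-node pods.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # assumed default pool name
spec:
  cidr: 192.168.0.0/16        # assumed default CALICO_IPV4POOL_CIDR
  vxlanMode: Always
  ipipMode: Never
  natOutgoing: true
```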
Regards,
Chris
Thanks for the quick response @chrispokorni. I suppose I'm running into issues similar to those @luis-garza has been describing here.
At the beginning of lab 3.1 it is stated: "The labs were written using Ubuntu instances running on Google Cloud Platform (GCP). They have been written to be vendor-agnostic so could run on AWS, local hardware, or inside of virtualization to give you the most flexibility and options."
I didn't read this as a clear recommendation. After all, it should just be about two Ubuntu VMs in an IP subnet. I was prepared to figure out some Azure specifics on my own, but an incompatibility on this level comes as a surprise to me. A warning in section 3.1 that the components used in the lab might have compatibility issues with other cloud providers would be helpful.
Regards,
Klaus
Got the same problem on AWS:
- all firewalls on the cp and worker nodes disabled
- all inbound/outbound traffic enabled
Any help?
Hi @joov,
On AWS the VPC and Security Group configurations directly impact the cluster networking. If you have not done so already, I would invite you to watch the video "Using AWS to set up labs" found in the introductory chapter of this course. The video outlines important settings needed to enable the networking of your cluster.
Also, when provisioning the second EC2 instance, make sure it is placed in the same VPC subnet, and under the same SG as the first instance.
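As a rough illustration of the kind of Security Group setup the video describes, a rule that allows all traffic between instances sharing one SG can be created by making the SG reference itself as the source. The group ID below is a placeholder; follow the video for the actual steps:

```shell
# Sketch only: allow all protocols and ports between cluster nodes that
# share a single Security Group, by letting the SG reference itself.
# sg-0123456789abcdef0 is a placeholder for your cluster's SG ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --source-group sg-0123456789abcdef0
```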
Regards,
-Chris
I followed the video and got it working already. Thank you.