Lab 6.1 External Egress not working (No policies defined yet)
I am trying to follow the instructions in Lab 6.1 to test egress before I set up any policies. I am not able to reach www.linux.com as shown below:
Now, test egress from a container to the outside world. We’ll use the netcat command to verify
access to a running webserver on port 80. First, test local access to nginx, then a remote server.
student@ckad-1:~/app2$ kubectl exec -it -c busy secondapp sh
/ $ nc -vz 127.0.0.1 80
127.0.0.1 (127.0.0.1:80) open
/ $ nc -vz www.linux.com 80
www.linux.com (151.101.185.5:80) open
I have re-installed kubeadm but still cannot reach outside. I am running in VirtualBox, with one master and three nodes. What could I check to enable external access?
Thanks
-Ashish
Comments
-
Hi Ashish, do you have your vNICs open for all traffic in VirtualBox? I had a similar issue and that solved it for me.
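For reference, the same setting can be changed from the command line with VBoxManage; a sketch (the VM name and adapter number here are just examples):
VBoxManage modifyvm "ashish-ubuntu0" --nicpromisc1 allow-all   # while the VM is powered off
VBoxManage controlvm "ashish-ubuntu0" nicpromisc1 allow-all    # on a running VM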
-Chris
-
Hi Chris,
Thanks for responding. I am using bridged networking, so all VM instances get IP addresses from the router. I can ping external hosts:
ashish@ashish-ubuntu1:~$ ping www.linux.com
PING n.ssl.fastly.net (151.101.21.5) 56(84) bytes of data.
64 bytes from 151.101.21.5: icmp_seq=1 ttl=57 time=462 ms
64 bytes from 151.101.21.5: icmp_seq=2 ttl=57 time=482 ms
^Cashish@ashish-ubuntu1:~$ kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
ashish-ubuntu0   Ready     <none>    15h       v1.11.2
ashish-ubuntu1   Ready     master    16h       v1.11.1
ashish-ubuntu2   Ready     <none>    16h       v1.11.1
ashish-ubuntu3   Ready     <none>    16h       v1.11.1
ashish@ashish-ubuntu0:~/app2$ kubectl get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP                NODE             NOMINATED NODE
secondapp   2/2       Running   2          2h        192.168.239.133   ashish-ubuntu0   <none>
-Ashish
-
I see that pings to the IP address of www.linux.com are working, but nc with the IP address doesn't work.
-
My bad while testing the IP address with nc: egress was blocked by the policy, so it now seems it is really a DNS issue. The egress/ingress blocks are working, so I can carry on with my lab. I am just not sure what to look at to resolve the DNS issue.
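A quick way to separate DNS from raw connectivity is to test both forms from inside the pod; a sketch (the IP is just the one www.linux.com resolved to earlier and may differ for you):
nc -vz 151.101.185.5 80      # by IP: tests egress/connectivity only
nc -vz www.linux.com 80      # by name: tests DNS resolution plus egress
If the IP form works but the name form fails, the problem is DNS rather than egress.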
-
Hello,
Could you paste the commands and output which aren't working? Also the output of nslookup to whatever site you are trying to connect to. This will show the DNS server being used and may help narrow down the issue.
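For example, from inside the pod; a sketch (the second argument tells busybox nslookup to query a specific server, here the usual kubeadm cluster DNS ClusterIP):
nslookup www.linux.com             # uses the server from /etc/resolv.conf
nslookup www.linux.com 10.96.0.10  # queries the cluster DNS service directly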
Regards,
-
nc -vz www.linux.com 80
/ # nslookup www.linux.com
;; connection timed out; no servers could be reached
It looks like no DNS configuration is being applied in the busybox container.
-
Okay. Yes, it is not able to reach the DNS server. First let's see if all the pods are running. Could you please show the output of these two commands: kubectl get pods -o wide --all-namespaces and kubectl get svc --all-namespaces. Hopefully your kube-dns service is running. Mine says:
$ kubectl get svc --all-namespaces
NAMESPACE     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP         4d
kube-system   calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP        4d
kube-system   kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   4d
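It is also worth confirming the service has endpoints behind it; a sketch:
kubectl get endpoints kube-dns -n kube-system
# no addresses listed here would mean the DNS pods are not Running or not selected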
-
Thanks for helping me. Here is the output:
ashish@ashish-ubuntu0:~/app2$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default secondapp 2/2 Running 3 3h 192.168.173.1 ashish-ubuntu2
kube-system calico-etcd-5hh72 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system calico-kube-controllers-74b888b647-9vzcc 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system calico-node-4dn2q 2/2 Running 1 20h 192.168.0.121 ashish-ubuntu3
kube-system calico-node-8dtts 2/2 Running 1 20h 192.168.0.156 ashish-ubuntu2
kube-system calico-node-j5rc2 2/2 Running 1 19h 192.168.0.183 ashish-ubuntu0
kube-system calico-node-kgxk7 2/2 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system coredns-78fcdf6894-59lv8 1/1 Running 0 21h 192.168.231.1 ashish-ubuntu1
kube-system coredns-78fcdf6894-rphc2 1/1 Running 0 21h 192.168.231.2 ashish-ubuntu1
kube-system etcd-ashish-ubuntu1 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system kube-apiserver-ashish-ubuntu1 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system kube-controller-manager-ashish-ubuntu1 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system kube-proxy-8xh9m 1/1 Running 0 19h 192.168.0.183 ashish-ubuntu0
kube-system kube-proxy-8xmnd 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system kube-proxy-s6rng 1/1 Running 0 20h 192.168.0.156 ashish-ubuntu2
kube-system kube-proxy-xlzbw 1/1 Running 0 20h 192.168.0.121 ashish-ubuntu3
kube-system kube-scheduler-ashish-ubuntu1 1/1 Running 0 21h 192.168.0.193 ashish-ubuntu1
kube-system kubernetes-dashboard-9b67c5f9f-lwhpw 1/1 Running 0 20h 192.168.231.5 ashish-ubuntu1
metallb-system controller-558b9b86cd-gn7nx 1/1 Running 0 19h 192.168.231.6 ashish-ubuntu1
metallb-system speaker-t48z2 1/1 Running 0 19h 192.168.0.193 ashish-ubuntu1
ashish@ashish-ubuntu0:~/app2$
ashish@ashish-ubuntu0:~/app2$
ashish@ashish-ubuntu0:~/app2$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP         21h
default       secondapp              NodePort    10.106.147.238   <none>        80:32000/TCP    6h
kube-system   calico-etcd            ClusterIP   10.96.232.136    <none>        6666/TCP        21h
kube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   21h
kube-system   kubernetes-dashboard   NodePort    10.109.145.247   <none>        443:32116/TCP   20h
ashish@ashish-ubuntu0:~/app2$
-
Here is the busybox resolv.conf file:
ashish@ashish-ubuntu0:~/app2$ kubectl exec -it -c busy secondapp sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ #
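That nameserver should match the ClusterIP of the kube-dns service; a quick sanity check could look like this (a sketch):
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
# should print 10.96.0.10, matching the nameserver line above
-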
Are you able to use netcat and connect to your DNS service? Try nc 10.96.0.10 53; if it hangs, it has made a connection, which is good. You'd have to send some hex code with xxd to get a proper response.
If that works, then it could be that your kube-dns is unable to forward the request to an outside source. My first thought is firewalls. If you have opened all the ports inside AWS/GCE (all ports to all instances), then they shouldn't stop anything, and the request will use your node or cluster DNS settings. I assume you can ping/nslookup/dig from the host, and only the pod is not able to use DNS.
I note you are running metallb-system, a load balancer. This could be intercepting the returning DNS forward query and sending it somewhere else, or blocking it. Does it work if you don't have that running?
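For example, a sketch (dig is not in busybox, so the full query assumes a host or an image that has it installed):
nc 10.96.0.10 53                # hangs if the TCP connection succeeds
dig @10.96.0.10 www.linux.com   # full query directly against the cluster DNS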
Regards,
-
It hangs on the nc command. I see coredns has errors in it. I have disabled the firewall on all nodes and removed metallb. Just FYI: after installing kubeadm, I couldn't run the dashboard until it was deployed on the master node; the API server was unreachable from the other nodes for the dashboard pod. Same with metallb. But running busybox on the master node doesn't help here.
2018/09/08 12:37:54 [ERROR] 2 www.linux.com. A: unreachable backend: read udp 192.168.231.1:36874->192.168.0.1:53: i/o timeout
2018/09/08 12:39:27 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:50580->192.168.0.1:53: i/o timeout
2018/09/08 16:18:04 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:57479->192.168.0.1:53: i/o timeout
2018/09/08 16:18:09 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:58589->192.168.0.1:53: i/o timeout
2018/09/08 16:18:14 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:53425->192.168.0.1:53: i/o timeout
2018/09/08 19:11:40 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:53883->192.168.0.1:53: i/o timeout
2018/09/08 19:11:40 [ERROR] 2 www.linux.com. A: unreachable backend: read udp 192.168.231.1:38421->192.168.0.1:53: i/o timeout
2018/09/08 19:11:42 [ERROR] 2 www.linux.com. A: unreachable backend: read udp 192.168.231.1:51105->192.168.0.1:53: i/o timeout
2018/09/08 19:11:42 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:45913->192.168.0.1:53: i/o timeout
2018/09/08 20:15:04 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:35092->192.168.0.1:53: i/o timeout
2018/09/08 20:43:19 [ERROR] 2 www.linux.com. AAAA: unreachable backend: read udp 192.168.231.1:42572->192.168.0.1:53: i/o timeout
ashish@ashish-ubuntu0:~/app2$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default secondapp 2/2 Running 0 48m 192.168.231.7 ashish-ubuntu1
kube-system calico-etcd-5hh72 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system calico-kube-controllers-74b888b647-9vzcc 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system calico-node-4dn2q 2/2 Running 1 21h 192.168.0.121 ashish-ubuntu3
kube-system calico-node-8dtts 2/2 Running 1 21h 192.168.0.156 ashish-ubuntu2
kube-system calico-node-j5rc2 2/2 Running 1 20h 192.168.0.183 ashish-ubuntu0
kube-system calico-node-kgxk7 2/2 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system coredns-78fcdf6894-59lv8 1/1 Running 0 22h 192.168.231.1 ashish-ubuntu1
kube-system coredns-78fcdf6894-rphc2 1/1 Running 0 22h 192.168.231.2 ashish-ubuntu1
kube-system etcd-ashish-ubuntu1 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system kube-apiserver-ashish-ubuntu1 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system kube-controller-manager-ashish-ubuntu1 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system kube-proxy-8xh9m 1/1 Running 0 20h 192.168.0.183 ashish-ubuntu0
kube-system kube-proxy-8xmnd 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system kube-proxy-s6rng 1/1 Running 0 21h 192.168.0.156 ashish-ubuntu2
kube-system kube-proxy-xlzbw 1/1 Running 0 21h 192.168.0.121 ashish-ubuntu3
kube-system kube-scheduler-ashish-ubuntu1 1/1 Running 0 22h 192.168.0.193 ashish-ubuntu1
kube-system kubernetes-dashboard-9b67c5f9f-lwhpw 1/1 Running 0 21h 192.168.231.5 ashish-ubuntu1
ashish@ashish-ubuntu0:~/app2$
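Those [ERROR] lines show CoreDNS forwarding queries upstream to 192.168.0.1:53 (the router) and timing out, so it is the pod-to-upstream path that is broken. The upstream comes from the CoreDNS Corefile; a sketch of how to inspect it (in the CoreDNS shipped with v1.11 the relevant plugin is proxy, in later versions forward):
kubectl -n kube-system get configmap coredns -o yaml
# look for a line like: proxy . /etc/resolv.conf
# CoreDNS forwards to whatever upstream the node's resolv.conf points at
-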
I have re-installed kubeadm after changing k8sMaster.sh, adding --feature-gates=CoreDNS=false to the kubeadm init line. This uses kube-dns instead of CoreDNS.
Still, DNS didn't work. Later I found a resolution at the link below; after applying it, DNS started to work:
https://github.com/coreos/flannel/issues/983
Copied from above link:
Found the problem, which exists in DNS (IP addresses work OK). This ConfigMap for DNS helped me:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
Thanks for letting us know about the fix. You are/were running flannel instead of Calico then?
-
No, I was running Calico, but the DNS was CoreDNS. I just tried kube-dns to see if that would make a difference. I am not concerned with how the infrastructure is set up; I mainly want to test and learn the functionality. Even MetalLB is running, so I am able to test the LoadBalancer service type. This is all set up in VirtualBox.
BTW. Thanks for guiding me towards resolution.
-
Glad it's working!
-
Yes, it's enabled by following the instructions in the documentation. But today I have another problem: I turned off my router and started it back up, and now it seems my router doesn't provide any internet connection. I have already talked to the Belkin support expert team, but I am still facing the same issue. If you have any assistance with this, please share it with me.
-
Not sure which aspect you were asking about this time. One note: ensure that your VirtualBox network is set to allow all traffic, as it defaults to deny. This can be found under the Advanced tab for the network adapter.
Regards,