coredns issue on Ubuntu 16.04 on VirtualBox

Hi,
I decided to practice for the exam now that I have a bit of time.
I created a VM using Ubuntu 16.04 and created a NAT Network on 10.0.2.0.
I then did a fresh install of Kubernetes using the manual from the class.
I see coredns stuck in Pending when I do a kubectl get pods -A.
I didn't add rbac or calico yet; I just ran the kubeadm commands.
I also didn't add the worker node or do the taint step, since I figured this should be up and running right out of the box on a fresh install.
I also upgraded kubectl, kubeadm, etc. to the latest version, but that didn't help.
Here is my output (note: when I tried to use the code function it put a special character at the start and end of every line; is there an easier way to post a block of text?):
kubectl describe pod coredns-5c98db65d4-q5zbd -n kube-system

    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-68lsj (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-68lsj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-68lsj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  16s (x54 over 5m21s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
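For reference, the taint the scheduler is complaining about can be checked directly on the node; something along these lines should list whatever taints are currently set:

    kubectl describe nodes | grep -i taint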
Comments
Hi @btanoue ,
You need to remove taints from the master for the scheduler to be able to place the coredns pods on the master (since you did not attach a worker node) OR have at least a worker node join the cluster, AND you need calico started on the cluster for coredns pods to run successfully - as they receive their IPs from calico.
The installation steps are in a particular sequence for a good reason.
Regards,
-Chris
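For context, the taint-removal and Calico steps Chris is referring to look roughly like this; the exact manifest file names (rbac-kdd.yaml, calico.yaml) depend on the course version and are only assumptions here:

    kubectl apply -f rbac-kdd.yaml
    kubectl apply -f calico.yaml
    kubectl taint nodes --all node-role.kubernetes.io/master-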
Thanks, chrispokorni. I wasn't sure if the default install was supposed to start coredns on the k8smaster. I'm still learning how it all connects, and debugging this stuff is fun.
So Calico is like DHCP for the pods, based on what you said. I wasn't sure how that was working, but now I do.
I guess my checking what's running where all the time did more harm than good, since I was trying to really understand how it is all connected. In this case, I was shooting myself in the foot for no reason, LOL.
OK, I installed rbac and calico.
Installed the second node.
Followed the directions and removed the taints. coredns stays in ContainerCreating:
Warning FailedCreatePodSandBox 78s (x4 over 81s) kubelet, kubemaster (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "54394195e138ac17122a338a6e25ad6de0b1ba544f0bd8560439c5b95aad1cdb" network for pod "coredns-5c98db65d4-4zwlq": NetworkPlugin cni failed to set up pod "coredns-5c98db65d4-4zwlq_kube-system" network: no podCidr for node kubemaster
Normal Scheduled default-scheduler Successfully assigned kube-system/coredns-5c98db65d4-4zwlq to kubemaster
I then created the nginx deployment and it stays in that same ContainerCreating state.
I feel it has something to do with Calico. My network is on 10.0.2.0.
Calico and kubeadm init were set to 10.0.1.0. Any thoughts?
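One way to check whether the nodes ever received a pod CIDR, and what subnet kubeadm recorded at init time, is something like the following (kubeadm-config is the ConfigMap name kubeadm uses by default; adjust if your setup differs):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i podsubnet

If the first command prints nothing next to a node name, that matches the "no podCidr for node" error above.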
@btanoue, did you get to Step 6 in Exercise 3.3? It may help with the coredns pods.
-Chris
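For reference, that step amounts to deleting the coredns pods so the Deployment recreates them once networking is in place; assuming the usual k8s-app=kube-dns label that kubeadm puts on them, something like:

    kubectl -n kube-system delete pods -l k8s-app=kube-dns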
Yes, I did delete the coredns containers, and when they respawned they went back to the ContainerCreating state.
I'm starting to think it has something to do with the podCIDR and CNI. I just don't know how to fix it.
OK, so I fixed it but I'm not sure exactly how this works.
kubectl patch node kubemaster -p '{"spec":{"podCIDR":"10.0.1.0/16"}}'
kubectl patch node kubeworker -p '{"spec":{"podCIDR":"10.0.1.0/16"}}'
I understand that I pushed the CIDR to the nodes.
But what I don't understand is why the kubeadm init file and calico didn't set this up. Any ideas? I'd like to understand why it didn't work, and also understand how the patch works a little better.
But it did create pods and deployments now. I can scale nginx to 3 replicas and they are Running.
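For the record, a node's spec.podCIDR is normally populated at cluster creation rather than patched in afterwards; a rough sketch of the usual kubeadm approach (192.168.0.0/16 is Calico's default pod network and only an example value):

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16

    # or equivalently, in a kubeadm config file passed via --config:
    # networking:
    #   podSubnet: 192.168.0.0/16

The kubectl patch commands above simply write that same podCIDR field directly onto the node objects, which is why the CNI stopped complaining once it was set.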
Understanding IP network sizes would help in this case. More specifically, understanding what the minimum and maximum IP addresses in such a range are. Understand the size of the default calico pod network, 192.168.0.0/16, then the size of 10.0.1.0/16 and its relationship with 10.0.2.0.
After you have this part figured out, keep in mind that IP blocks should not overlap: node IPs with pod IPs, and with service IPs.
Regards,
-Chris
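To make that concrete, the arithmetic looks like this (a rough worked example, not from the course material):

    10.0.1.0/16     -> 16 network bits, 16 host bits, 2^16 = 65,536 addresses, i.e. 10.0.0.0 through 10.0.255.255
    10.0.2.0        -> sits inside that range, so the pod network overlaps the VirtualBox NAT network the nodes use
    192.168.0.0/16  -> Calico's default, 192.168.0.0 through 192.168.255.255, which stays clear of 10.0.2.0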