About label "system" in Lab Exercise 8.1
The nginx-one.yaml file has three places with the label "system: secondary" and one place with "nodeSelector: system: secondOne".
After I ran "kubectl label node lfs458-worker system=secondOne", the two pods are still not running.
At step 9, we need to run "kubectl get pods -l system=secondary --all-namespaces".
I think there is a mistake with the "system" label.
Could you please check it and tell me which one is right?
Thanks,
Wei
Answers
Hi Wei / @zhangwe,
The solution is explained in step 8 of exercise 8.1. It instructs you what to do next when the pods are not running.
Otherwise, the labels and selectors are correct in the exercise, provided no typos are introduced when the commands were issued at the terminal.
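As a rough sketch, assuming step 8 has you label the worker node so that the Deployment's nodeSelector (system: secondOne) can be satisfied, the commands look something like this (lfs458-worker is the node name from your question; substitute your own):
kubectl label node lfs458-worker system=secondOne
kubectl get nodes --show-labels
Both the key and the value are case-sensitive, so the label must read exactly system=secondOne.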
Regards,
-Chris
Hello,
I followed everything in step 8, and I still could not get the pods running. I even deleted the nginx-one deployment and re-created it, but it did not work, so I cannot do the endpoint lab.
As I mentioned before, the system label has two values in nginx-one.yaml, "secondary" and "secondOne". Which one is right?
Thanks,
Wei
Hello,
Please ensure you are using case-sensitive values.
Please use the diff command to figure out what is different between your yaml file and the nginx-one.yaml file included in the tarball. Paste the output here so we can see which parameters are different. I have a feeling you have a typo, or are editing the incorrect file, or perhaps are not looking in the accounting namespace.
Please also show the output of kubectl get nodes --show-labels and kubectl get pods -l system=secondary --all-namespaces after having completed exercise 8.1, step 9.
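For example (the path to the solutions copy of nginx-one.yaml inside the extracted tarball is a guess; adjust it to wherever your tarball landed):
diff nginx-one.yaml LFS258/SOLUTIONS/s_08/nginx-one.yaml
kubectl get nodes --show-labels
kubectl get pods -l system=secondary --all-namespaces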
Regards,
Hello,
The first node is the master node, and the second node is the worker node.
student@zw-instance-group2-dbk4:~$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
zw-instance-group2-dbk4 Ready master 4d18h v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=zw-instance-group2-dbk4,kubernetes.io/os=linux,node-role.kubernetes.io/master=,sytem=secondary
zw-instance-group2-x732 Ready 4d17h v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=zw-instance-group2-x732,kubernetes.io/os=linux,sytem=secondOne
student@zw-instance-group2-dbk4:~$ kubectl -n accounting delete pod nginx-one-6f9597b9f4-flsnn
pod "nginx-one-6f9597b9f4-flsnn" deleted
student@zw-instance-group2-dbk4:~$ kubectl get pods -l system=secondary --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
accounting nginx-one-6f9597b9f4-25tkx 0/1 Pending 0 18s
accounting nginx-one-6f9597b9f4-kth9v 0/1 Pending 0 3m51s
student@zw-instance-group2-dbk4:~$ cat nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one
  labels:
    system: secondary
  namespace: accounting
spec:
  selector:
    matchLabels:
      system: secondary
  replicas: 2
  template:
    metadata:
      labels:
        system: secondary
    spec:
      containers:
      - image: nginx:1.11.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 8080
          protocol: TCP
      nodeSelector:
        system: secondOne
student@zw-instance-group2-dbk4:~$
Hello,
First, why are you still using 1.16.1 as the version?
Second, notice the pods show Pending. This indicates something is not quite right: either the scheduler or the kubelet cannot find a required resource. I have a feeling that if you run kubectl describe on the pods (sketched below) you will find an error which may indicate a typo in your yaml. Did you run the diff command against the tarball?
Third, did the formatting not get pasted from your cat output? There is a way to paste code into the window using the drop-down in the post menu bar. This allows us to see if there is an indentation error. Remember that yaml is sensitive to indentation.
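For the second point, a minimal sketch of the describe check (the pod name is taken from your earlier output):
kubectl -n accounting describe pod nginx-one-6f9597b9f4-25tkx
With a mislabeled node you would expect a FailedScheduling event in the output, something like "0/2 nodes are available: 2 node(s) didn't match node selector."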
Regards,
I used the lab material "a16ltxxopobd-LFS258-labs_V2019-12-03.pdf", so I am still using 1.16.1 as the version.
Is there newer lab material?
Hi zhangwe,
Yes, there is a new lab version updated to 1.17.1. In the course, please navigate to the Table of Contents located on the left-hand side of the screen and select Resources > Files > LFS258 - Lab Exercises (2.7.2020).
Thank you,
Magda
OK. I will download the new material, and re-do the Lab.
Thanks,
Wei
Hello Wei,
Your detailed output is appreciated. Check carefully the labels on your nodes. Each label instance has a typo in the system key parameter.
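A minimal sketch of how to correct it, assuming the mistyped key is sytem as shown in your --show-labels output (a trailing dash removes a label):
kubectl label node zw-instance-group2-dbk4 sytem-
kubectl label node zw-instance-group2-x732 sytem-
kubectl label node zw-instance-group2-x732 system=secondOne
Re-add whichever labels the exercise expects with the correct key; at minimum the worker needs system=secondOne so the nodeSelector can be satisfied.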
Regards,
-Chris
Also, keep in mind that the secondary label is intended for the deployment object and its managed resources (replicaset and pods). The secondOne label is intended for the node.
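In other words, roughly (a sketch of how the two keys line up, not the full lab file): in nginx-one.yaml the Deployment metadata, the selector, and the pod template all carry system: secondary, while spec.template.spec.nodeSelector carries system: secondOne, so the node itself must be labeled to match:
kubectl label node <worker-node> system=secondOne
kubectl get pods -l system=secondary -n accounting -o wide
The second command should then show the pods scheduled onto the labeled node.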
I downloaded the new lab material and deleted the two VM nodes on GCP. I created two new VM nodes by following the video instructions, and installed the master node and a worker node. I got the following errors:
student@master:~$ sudo kubeadm config print init-defaults
W0310 18:09:16.492656 22717 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0310 18:09:16.492737 22717 validation.go:28] Cannot validate kubelet config - no validator is available
student@master:~$ sudo kubeadm token create
W0310 18:14:39.820271 27833 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0310 18:14:39.820330 27833 validation.go:28] Cannot validate kubelet config - no validator is available
root@worker1:~# kubeadm join \
--token 53gnpy.frzk9xxue4g932e5 \
k8smaster:6443 \
--discovery-token-ca-cert-hash \
sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
Command 'kubeadm' not found, but can be installed with:
snap install kubeadm
student@master:~$ history
1 wget https://training.linuxfoundation.org/cm/LFS258/LFS258_V2020-02-07_SOLUTIONS.tar.bz2 --user=xxxxx --password=xxxxxx
2 tar -xvf LFS258_V2020-02-07_SOLUTIONS.tar.bz2
3 sudo -i
4 history
5 sudo -i
6 mkdir -p $HOME/.kube
7 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
8 sudo chown $(id -u):$(id -g) $HOME/.kube/config
9 chmod
10 chmod --help
11 less .kube/config
12 sudo cp /root/rbac-kdd.yaml .
13 ls
14 ls -l
15 cd LFS258/
16 ls -l
17 cd ..
18 kubectl apply -f rbac-kdd.yaml
19 cat rbac-kdd.yaml
20 sudo cp /root/calico.yaml .
21 kubectl apply -f calico.yaml
22 sudo apt-get install bash-completion -y
23 source <(kubectl completion bash)
24 ls -l
25 $source
26 source
27 kubectl completion bash
28 echo "source <(kubectl completion bash)" >> ~/.bashrc
29 ls -a
30 cd .bashrc
31 ls -a -l
32 less .bashrc
33 kubectl get nodes
34 kubectl describe nodes master
35 kubectl -n kube-system get po
36 sudo kubeadm config print init-defaults
37 ip addr show ens4 | grep inet
38 ip a
39 sudo kubeadm token list
40 sudo kubeadm token create
41 sudo kubeadm token list
42 history
root@master:~# history
1 apt-get update && apt-get upgrade -y
2 apt-get install -y vim
3 apt-get install -y docker.io
4 vim /etc/apt/sources.list.d/kubernetes.list
5 cat /etc/apt/sources.list.d/kubernetes.list
6 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
7 apt-get update
8 apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
9 history
10 apt-mark hold kubelet kubeadm kubectl
11 wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
12 wget wget https://docs.projectcalico.org/manifests/calico.yaml
13 less calico.yaml
14 ip -a
15 ip a
16 ip addr show
17 root@lfs458-node-1a0a:~# vim /etc/hosts
18 vim /etc/hosts
19 cat /etc/hosts
20 vim kubeadm-config.yaml
21 cat kubeadm-config.yaml
22 kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
23 exit
24 history
root@master:~#
Hi Wei,
The messages prefixed with a "W" are warnings, not errors.
From your history, it seems that maybe your calico.yaml file did not get downloaded, as the download (wget) command seems to be incomplete. Also, revise the wget command and make sure you apply it only once.
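For example, a corrected download would look something like this (using the URL already in your history), applied once on the master:
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml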
Without seeing the output generated by each command, it is hard to tell what else may be the issue here.
Regards,
-Chris
For the worker node:
root@worker1:~# kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
W0310 18:46:41.066900 19913 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher
root@worker1:~# history
1 apt-get update && apt-get upgrade -y
2 apt-get install -y docker.io
3 kubeadm join k8smaster:6443 --token 53gnpy.frzk9xxue4g932e5 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
4 kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
5 apt-get install -y vim
6 vim /etc/apt/sources.list.d/kubernetes.list
7 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
8 apt-get update
9 apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
10 apt-mark hold kubeadm kubelet kubectl
11 kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
12 history
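The join failure above ("couldn't validate the identity of the API Server ... timeout of 5m0s") typically means the worker cannot reach k8smaster:6443. A minimal check, noting that the worker history above shows no /etc/hosts edit, so the k8smaster alias may not resolve on that machine (10.2.0.2 is only a placeholder; use the master's actual internal IP from ip addr show on the master):
root@worker1:~# grep k8smaster /etc/hosts
root@worker1:~# echo "10.2.0.2 k8smaster" >> /etc/hosts
Once k8smaster resolves and port 6443 is reachable, re-run the kubeadm join command.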