About label "system" in Lab Exercise 8.1

The nginx-one.yaml file has three places with the label "system: secondary" and one place with "nodeSelector: system: secondOne".
After I ran "kubectl label node lfs458-worker system=secondOne", the two pods are still not running.
At step 9, we need to run "kubectl get pods -l system=secondary --all-namespaces".
I think there is a mistake with the "system" label.
Could you please check it and tell me which one is right?
Thanks,
Wei
Answers
-
Hi Wei / @zhangwe,
The solution is explained in step 8 of exercise 8.1. It instructs you what to do next when the pods are not running.
Otherwise, the labels and selectors are correct in the exercise, provided no typos are introduced when the commands were issued at the terminal.
Regards,
-Chris
-
Hello,
I followed everything in step 8, and I still could not get the pods running. I even deleted the nginx-one deployment and re-created it. It did not work, so I cannot do the endpoint lab.
As I mentioned before, the "system" label has two values, "secondary" and "secondOne", in nginx-one.yaml. Which one is right? Thanks,
Wei
-
Hello,
Please ensure you are using case-sensitive values.
Please use the diff command to figure out what is different between your yaml file and the nginx-one.yaml file included in the tarball. Paste the output here so we can see which parameters differ. I have a feeling you have a typo, are editing the incorrect file, or perhaps are not looking in the accounting namespace.
Please also show the output of kubectl get nodes --show-labels and kubectl get pods -l system=secondary --all-namespaces after having completed exercise 8.1, step 9.
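As an illustration of that diff check, here is a minimal self-contained sketch; the two files are hypothetical stand-ins for your edited manifest and the tarball copy (substitute the real paths in the lab):

```shell
# Stand-in files showing how diff exposes a one-character label typo.
# In the lab, diff your edited nginx-one.yaml against the copy shipped
# in the SOLUTIONS tarball instead of these throwaway files.
printf 'labels:\n  sytem: secondary\n'  > mine.yaml      # hypothetical typo
printf 'labels:\n  system: secondary\n' > original.yaml
diff mine.yaml original.yaml || true    # diff exits non-zero when files differ
```

diff prints the differing lines prefixed with < (your file) and > (the reference copy), which makes a misspelled key stand out immediately.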
Regards,
-
Hello,
The first node is the master node; the second node is a worker node.
[email protected]:~$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
zw-instance-group2-dbk4 Ready master 4d18h v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=zw-instance-group2-dbk4,kubernetes.io/os=linux,node-role.kubernetes.io/master=,sytem=secondary
zw-instance-group2-x732 Ready <none> 4d17h v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=zw-instance-group2-x732,kubernetes.io/os=linux,sytem=secondOne
[email protected]:~$ kubectl -n accounting delete pod nginx-one-6f9597b9f4-flsnn
pod "nginx-one-6f9597b9f4-flsnn" deleted
[email protected]:~$ kubectl get pods -l system=secondary --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
accounting nginx-one-6f9597b9f4-25tkx 0/1 Pending 0 18s
accounting nginx-one-6f9597b9f4-kth9v 0/1 Pending 0 3m51s
[email protected]:~$ cat nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one
  labels:
    system: secondary
  namespace: accounting
spec:
  selector:
    matchLabels:
      system: secondary
  replicas: 2
  template:
    metadata:
      labels:
        system: secondary
    spec:
      containers:
      - image: nginx:1.11.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 8080
          protocol: TCP
      nodeSelector:
        system: secondOne
[email protected]:~$
-
Hello,
First, why are you still using 1.16.1 as the version?
Second, notice the pods show Pending. This indicates something is not quite right - either the scheduler or the kubelet cannot find a required resource. I have a feeling that if you run kubectl describe on the pods you will find an error which may indicate a typo in your yaml. Did you run the diff command against the tarball?
Third, the formatting did not get pasted from your cat output. There is a way to paste code into the window using the drop-down in the post menu bar. This allows us to see if there is an indentation error. Remember that yaml is sensitive to indentation.
Regards,
-
I used the lab material "a16ltxxopobd-LFS258-labs_V2019-12-03.pdf", so I am still using 1.16.1 as the version.
Is there new lab material?
-
Hi zhangwe,
Yes, there is a new lab version updated to 1.17.1. In the course, please navigate to the Table of Contents located on the left-hand side of the screen and select Resources > Files > LFS258 - Lab Exercises (2.7.2020).
Thank you,
Magda
-
OK. I will download the new material, and re-do the Lab.
Thanks,
Wei
-
Hello Wei,
Your detailed output is appreciated. Check carefully the labels on your nodes. Each label instance has a typo in the system key parameter.
Regards,
-Chris
-
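Based on the sytem=... keys visible in the --show-labels output above, a sketch of the fix; the node names are taken from that output, the commands assume the running lab cluster, and a trailing - after a key deletes that label:

```shell
# Remove the misspelled "sytem" key from each node, then re-apply the
# correctly spelled labels. Assumes the master was meant to carry
# system=secondary, as the original (typoed) labels suggest.
kubectl label node zw-instance-group2-x732 sytem-
kubectl label node zw-instance-group2-x732 system=secondOne
kubectl label node zw-instance-group2-dbk4 sytem-
kubectl label node zw-instance-group2-dbk4 system=secondary
```

Once the worker carries a correctly spelled system=secondOne label, the scheduler can satisfy the pod's nodeSelector and the Pending pods should start.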
Also, keep in mind that the secondary label is intended for the deployment object and its managed resources (replicaset and pods). The secondOne label is intended for the node.
-
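To confirm that split, each side can be listed with its intended selector (a sketch; requires the exercise's running cluster):

```shell
# Deployment, replicaset and pods carry system=secondary in accounting...
kubectl -n accounting get deployment,replicaset,pods -l system=secondary
# ...while the node carries system=secondOne, matched by the pod's nodeSelector.
kubectl get nodes -l system=secondOne
```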
I downloaded the new lab material and deleted the two VM nodes on GCP. I created two new VM nodes by following the video instructions, and installed a master node and a worker node. I got the following errors:
[email protected]:~$ sudo kubeadm config print init-defaults
W0310 18:09:16.492656 22717 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0310 18:09:16.492737 22717 validation.go:28] Cannot validate kubelet config - no validator is available
[email protected]:~$ sudo kubeadm token create
W0310 18:14:39.820271 27833 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0310 18:14:39.820330 27833 validation.go:28] Cannot validate kubelet config - no validator is available
[email protected]:~# kubeadm join \
--token 53gnpy.frzk9xxue4g932e5 \
k8smaster:6443 \
--discovery-token-ca-cert-hash \
sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
Command 'kubeadm' not found, but can be installed with:
snap install kubeadm
[email protected]:~$ history
1 wget https://training.linuxfoundation.org/cm/LFS258/LFS258_V2020-02-07_SOLUTIONS.tar.bz2 --user=xxxxx --password=xxxxxx
2 tar -xvf LFS258_V2020-02-07_SOLUTIONS.tar.bz2
3 sudo -i
4 history
5 sudo -i
6 mkdir -p $HOME/.kube
7 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
8 sudo chown $(id -u):$(id -g) $HOME/.kube/config
9 chmod
10 chmod --help
11 less .kube/config
12 sudo cp /root/rbac-kdd.yaml .
13 ls
14 ls -l
15 cd LFS258/
16 ls -l
17 cd ..
18 kubectl apply -f rbac-kdd.yaml
19 cat rbac-kdd.yaml
20 sudo cp /root/calico.yaml .
21 kubectl apply -f calico.yaml
22 sudo apt-get install bash-completion -y
23 source <(kubectl completion bash)
24 ls -l
25 $source
26 source
27 kubectl completion bash
28 echo "source <(kubectl completion bash)" >> ~/.bashrc
29 ls -a
30 cd .bashrc
31 ls -a -l
32 less .bashrc
33 kubectl get nodes
34 kubectl describe nodes master
35 kubectl -n kube-system get po
36 sudo kubeadm config print init-defaults
37 ip addr show ens4 | grep inet
38 ip a
39 sudo kubeadm token list
40 sudo kubeadm token create
41 sudo kubeadm token list
42 history
[email protected]:~# history
1 apt-get update && apt-get upgrade -y
2 apt-get install -y vim
3 apt-get install -y docker.io
4 vim /etc/apt/sources.list.d/kubernetes.list
5 cat /etc/apt/sources.list.d/kubernetes.list
6 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
7 apt-get update
8 apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
9 history
10 apt-mark hold kubelet kubeadm kubectl
11 wget https://tinyurl.com/yb4xturm -O rbac-kdd.yaml
12 wget wget https://docs.projectcalico.org/manifests/calico.yaml
13 less calico.yaml
14 ip -a
15 ip a
16 ip addr show
17 [email protected]:~# vim /etc/hosts
18 vim /etc/hosts
19 cat /etc/hosts
20 vim kubeadm-config.yaml
21 cat kubeadm-config.yaml
22 kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
23 exit
24 history
[email protected]:~#
-
Hi Wei,
The messages prefixed with a "W" are warnings, not errors.
From your history, it seems that maybe your calico.yaml file did not get downloaded, as the download (wget) command seems to be incomplete. Also, revise the wget command and make sure you apply it only once.
Without seeing the output generated by each command, it is hard to tell what else may be the issue here.
Regards,
-Chris
-
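For reference, the pasted history shows "wget wget https://docs.projectcalico.org/manifests/calico.yaml", where the stray second wget is treated as the first download target. The intended single invocation (URL copied from that history) would be:

```shell
# Fetch the Calico manifest exactly once; repeated runs leave copies
# such as calico.yaml.1, which are easy to apply by mistake.
wget https://docs.projectcalico.org/manifests/calico.yaml
```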
For the worker node:
[email protected]:~# kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
W0310 18:46:41.066900 19913 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher
[email protected]:~# history
1 apt-get update && apt-get upgrade -y
2 apt-get install -y docker.io
3 kubeadm join k8smaster:6443 --token 53gnpy.frzk9xxue4g932e5 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
4 kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
5 apt-get install -y vim
6 vim /etc/apt/sources.list.d/kubernetes.list
7 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
8 apt-get update
9 apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
10 apt-mark hold kubeadm kubelet kubectl
11 kubeadm join --token 53gnpy.frzk9xxue4g932e5 k8smaster:6443 --discovery-token-ca-cert-hash sha256:18d6ccbe2874afe0544f82b566342222966f4fef9798f4caa6bea3ffdb7e09d1
12 history
-
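The preflight failure above ("couldn't validate the identity of the API Server ... timeout") usually means the worker cannot reach the control plane endpoint at all; notably, the worker's history shows no edit of /etc/hosts, so the k8smaster alias may not resolve there. A hedged first check from the worker (hostname and port taken from the join command; nc may need to be installed):

```shell
# Can the worker resolve k8smaster, and open TCP 6443 to it?
grep k8smaster /etc/hosts   # should show the master's private IP, as on the master
nc -vz k8smaster 6443       # TCP connectivity test to the API server port
```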