error execution phase preflight Lab 3.2
I got the following error in step 15 of lab 3.2:
"error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get "https://k8scp:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp: lookup k8scp on 127.0.0.53:53: server misbehaving
To see the stack trace of this error execute with --v=5 or higher"
How can I solve this issue? Any help is welcome.
Answers
Hi @elenabc,
First, I'd recommend verifying that the
/etc/hosts
files on both nodes are set with the correct control plane node private IP and the k8scp alias, together with the 127.0.0.1 localhost entry (step 14 and lab 3.1 step 21).
Second, create a new token and extract the hash (steps 12 and 13) to build a new join command. However, before running the join command again on the worker node, please run
kubeadm reset
on the worker to clear any possibly incomplete configuration.
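For reference, a rough sketch of what that looks like (the 10.0.0.10 address is only a placeholder for your control plane node's private IP, and the exact commands in steps 12 and 13 of the lab guide may differ slightly):
# /etc/hosts on BOTH nodes
127.0.0.1   localhost
10.0.0.10   k8scp
# On the control plane node: create a new token and extract the CA cert hash
sudo kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# On the worker node: reset, then join with the new values
sudo kubeadm reset
sudo kubeadm join k8scp:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>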
Regards,
-Chris
Similar to the experience described here, I get stuck on the preflight checks: the join starts but then hangs without resolution for a long period of time.
What should I do?
The worker node is unable to connect to the control plane. Ensure that you can reach the control plane node from the worker node (firewalls, ports, etc.).
The kubeadm join command includes the API server address, for example 192.168.10.100:6443, which is missing in your screenshot; I am assuming you masked it for security purposes. If not, ensure the join command references the API server, and also ensure that you are able to connect to the control plane:
"kubeadm join 192.168.10.100:6443 --token xyz --discovery-token-ca-cert-hash abcxyz"
Hi @nicocerquera,
I would still, however, for the purposes of this course, stick with the recommended approach from the lab guide and run both
kubeadm init
and
kubeadm join
with the
k8scp
alias instead of a specific node IP address.
What type of infrastructure do you use for the lab VMs? Cloud or a local hypervisor? What about firewalls, VPCs, subnets? Are the
/etc/hosts
files on both nodes populated with the private IP address of the control plane node and the
k8scp
alias?
Regards,
-Chris
Hi All,
I use the cloud (AWS). I have two nodes on AWS: one is the CP and the other is the worker node. Both of them have ufw disabled.
The error that I get is:
kubeadm join --token pum9dm.3m2y93x9a98j4lvn k8scp:6443 --discovery-token-ca-cert-hash sha256:763941a24.......e1d929c73e82c5d8a9109a6428
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://k8scp:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup k8scp on 8.8.4.4:53: no such host
To see the stack trace of this error execute with --v=5 or higher
Then I made sure I had the correct IP address on both nodes for /etc/hosts and I got:
kubeadm join --token pum9dm.3m2y93x9a98j4lvn k8scp:6443 --discovery-token-ca-cert-hash sha256:763941a2426dbd98b41b1daa......a9109a6428
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "pum9dm"
To see the stack trace of this error execute with --v=5 or higher
I did steps 12 and 13 again, since the token had to be renewed, and the command is working well now. Thanks!
Now, in section 3.3 of the lab, I am seeing that the coredns pods are not running; they remain in a Pending state:
~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d4b75cb6d-5992r   0/1     Pending   0          42m
kube-system   coredns-6d4b75cb6d-g5hhf   0/1     Pending   0          42m
Even after being deleted, the pods are recreated in a Pending status and never reach Running. What can I do to get them into a Running state?
Hi @nicocerquera,
Are these the only two pods that are not in a Running state?
Before deciding what to do, we need to determine what prevents them from reaching the desired Running state. Can you run
kubectl -n kube-system describe pod coredns-6d4b75cb6d-5992r
and provide the output?
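In the meantime, a quick way to check whether a node-level condition is what blocks scheduling (not required by the lab, just a common check) is:
kubectl get nodes
kubectl describe nodes | grep -i taint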
Regards,
-Chris
Yes, those are the only ones not running; the rest are OK.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d4b75cb6d-5992r              0/1     Pending   0          4d3h
kube-system   coredns-6d4b75cb6d-g5hhf              0/1     Pending   0          4d3h
kube-system   etcd-dev-node128                      1/1     Running   0          6d22h
kube-system   kube-apiserver-dev-node128            1/1     Running   0          6d22h
kube-system   kube-controller-manager-dev-node128   1/1     Running   0          6d22h
kube-system   kube-proxy-hqtfx                      1/1     Running   0          4d3h
kube-system   kube-proxy-lbqcp                      1/1     Running   0          6d22h
kube-system   kube-scheduler-dev-node128            1/1     Running   0          6d22h
And here is the output of the command you asked for, @chrispokorni:
kubectl -n kube-system describe pod coredns-6d4b75cb6d-5992r Name: coredns-6d4b75cb6d-5992r Namespace: kube-system Priority: 2000000000 Priority Class Name: system-cluster-critical Node: <none> Labels: k8s-app=kube-dns pod-template-hash=6d4b75cb6d Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: ReplicaSet/coredns-6d4b75cb6d Containers: coredns: Image: k8s.gcr.io/coredns/coredns:v1.8.6 Ports: 53/UDP, 53/TCP, 9153/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: -conf /etc/coredns/Corefile Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: <none> Mounts: /etc/coredns from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6mkh (ro) Conditions: Type Status PodScheduled False Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: coredns Optional: false kube-api-access-t6mkh: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: CriticalAddonsOnly op=Exists node-role.kubernetes.io/control-plane:NoSchedule node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 3m23s (x1193 over 4d3h) default-scheduler 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
The warning mentions an 'untolerated taint'.
Hi @nicocerquera,
Please use the solution in the comment linked below to install the calico network plugin in your cluster. If the coredns pods do not reach a running state, please delete them and the controller will automatically recreate them for you.
https://forum.linuxfoundation.org/discussion/comment/36843/#Comment_36843
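In short, the workflow looks roughly like this (the manifest URL below is only an example of a current Calico release; use the link from the comment above, which matches the lab guide):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
# if the coredns pods remain Pending afterwards, delete them and let the ReplicaSet recreate them
kubectl -n kube-system delete pod -l k8s-app=kube-dns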
Regards,
-Chris
I have installed the calico file and applied it;
now I have the same issue: the pods are in a Pending state:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d4b75cb6d-bpgsw              0/1     Pending   0          20m
kube-system   coredns-6d4b75cb6d-rbs2q              0/1     Pending   0          20m
kube-system   etcd-dev-node128                      1/1     Running   1          29m
kube-system   kube-apiserver-dev-node128            1/1     Running   1          29m
kube-system   kube-controller-manager-dev-node128   1/1     Running   1          29m
kube-system   kube-proxy-7t8qg                      1/1     Running   0          6m2s
kube-system   kube-proxy-cq6t7                      1/1     Running   0          28m
kube-system   kube-scheduler-dev-node128            1/1     Running   1          29m
Here is the description of one of the pods that is still in the Pending state:
kubectl -n kube-system describe pod coredns-6d4b75cb6d-bpgsw Name: coredns-6d4b75cb6d-bpgsw Namespace: kube-system Priority: 2000000000 Priority Class Name: system-cluster-critical Node: <none> Labels: k8s-app=kube-dns pod-template-hash=6d4b75cb6d Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: ReplicaSet/coredns-6d4b75cb6d Containers: coredns: Image: k8s.gcr.io/coredns/coredns:v1.8.6 Ports: 53/UDP, 53/TCP, 9153/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: -conf /etc/coredns/Corefile Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: <none> Mounts: /etc/coredns from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nxv6 (ro) Conditions: Type Status PodScheduled False Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: coredns Optional: false kube-api-access-2nxv6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: CriticalAddonsOnly op=Exists node-role.kubernetes.io/control-plane:NoSchedule node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m35s default-scheduler 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Maybe it has to do with the untolerated taint warning?
I have deleted them, and the recreated pods are still in a Pending state.
Any other tests/ideas for finding out why they are not running?
Also, I noticed that there are no calico pods in the pod list.
Here is the output after I made sure the calico.yaml was applied:
kubectl apply -f calico.yaml
configmap/calico-config unchanged
serviceaccount/calico-node unchanged
resource mapping not found for name: "calico-node" namespace: "kube-system" from "calico.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalfelixconfigs.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "bgppeers.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalbgpconfigs.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ippools.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalnetworkpolicies.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
Maybe it is a version issue? Please let me know.
Hi @nicocerquera,
As recommended earlier, please use the solution in the comment linked below to install the calico network plugin in your cluster (it provides the updated link for the calico.yaml definition file). The coredns pods should become ready once all calico components are successfully installed and operational.
https://forum.linuxfoundation.org/discussion/comment/36843/#Comment_36843
Regards,
-Chris
Okay,
As mentioned earlier, that was the calico installation file I used; that is why I added the output.
It is working now. The difference was that the previous calico installation file had to be deleted before downloading the calico.yaml from that link; ideally there should be no other calico installation file in place.
Thanks!
Regarding section 3.4, step 19:
When I run the tcpdump command:
sudo tcpdump -i tun10
tcpdump: tun10: No such device exists (SIOCGIFHWADDR: No such device)
Any idea why that is the case? Maybe I should use another name.
Hi @nicocerquera,
Any idea why that is the case?, maybe I should use another name
Yes, most definitely you should use the correct name as it is presented in the lab guide. To help understand the name, I would recommend reading the description of the step as well, where the author breaks down the name of the device:
to view traffic on the tunl0, as in tunnel zero, interface
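In other words, the device name ends with the lowercase letter "l" followed by a zero:
sudo tcpdump -i tunl0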
Regards,
-Chris
I was using "1" instead of "l" in the tunl0 call.
That part is OK now, but on the next item:
20. I see that I am not able to access the cluster:
kubectl get svc nginx
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.101.35.17   <none>        80/TCP    25h
128:~$ kubectl get ep nginx
NAME    ENDPOINTS           AGE
nginx   192.168.128.72:80   25h
128:~$ curl 10.101.35.17:80
curl: (28) Failed to connect to 10.101.35.17 port 80: Connection timed out
128:~$ curl 192.168.128.72:80
curl: (28) Failed to connect to 192.168.128.72 port 80: Connection timed out
Can you please guide me on how to get access to the cluster?
kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginx-6c8b449b8f-cmc4w                     1/1     Running   0          38m
kube-system   calico-kube-controllers-55fc758c88-mjr95   1/1     Running   0          27h
kube-system   calico-node-92vnk                          0/1     Running   0          15s
kube-system   calico-node-j9nrz                          0/1     Running   0          15s
kube-system   coredns-6d4b75cb6d-bpgsw                   1/1     Running   0          47h
kube-system   coredns-6d4b75cb6d-rbs2q                   1/1     Running   0          47h
kube-system   etcd-dev-node128                           1/1     Running   1          2d
kube-system   kube-apiserver-dev-node128                 1/1     Running   1          2d
kube-system   kube-controller-manager-dev-node128        1/1     Running   1          2d
kube-system   kube-proxy-7t8qg                           1/1     Running   0          47h
kube-system   kube-proxy-cq6t7                           1/1     Running   0          2d
kube-system   kube-scheduler-dev-node128                 1/1     Running   1          2d
Hi @nicocerquera,
I would start by troubleshooting the calico-node pods, more precisely listing the events of these pods. What does the following command display?
kubectl -n kube-system describe pod calico-node-92vnk
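If it helps, you can also list just the events recorded for that pod (an optional extra, not a lab step):
kubectl -n kube-system get events --field-selector involvedObject.name=calico-node-92vnk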
And, how did you configure your AWS VPC and the SG for your EC2 instances? Did you follow the demo video from the introductory chapter? I would recommend taking a closer look to understand the networking requirements of the lab environment.
Regards,
-Chris
Hi Chris,
Here is the output of the command you suggested, pasted in two parts due to the character limit:
kubectl -n kube-system describe pod calico-node-92vnk Name: calico-node-92vnk Namespace: kube-system Priority: 2000001000 Priority Class Name: system-node-critical Node: dev-node128/10.163.0.101 Start Time: Thu, 23 Feb 2023 01:51:06 +0000 Labels: controller-revision-hash=574d4d8fcb k8s-app=calico-node pod-template-generation=1 Annotations: <none> Status: Running IP: 10.163.0.101 IPs: IP: 10.163.0.101 Controlled By: DaemonSet/calico-node Init Containers: upgrade-ipam: Container ID: containerd://0d62c8a8c3493abedf1a6877081177ddef38f7b6ebe80205a44878dd82c2017e Image: docker.io/calico/cni:v3.25.0 Image ID: docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 Port: <none> Host Port: <none> Command: /opt/cni/bin/calico-ipam -upgrade State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 23 Feb 2023 01:51:07 +0000 Finished: Thu, 23 Feb 2023 01:51:07 +0000 Ready: True Restart Count: 0 Environment Variables from: kubernetes-services-endpoint ConfigMap Optional: true Environment: KUBERNETES_NODE_NAME: (v1:spec.nodeName) CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false Mounts: /host/opt/cni/bin from cni-bin-dir (rw) /var/lib/cni/networks from host-local-net-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt5gc (ro) install-cni: Container ID: containerd://fb463a616118918e15fb7df37e5a9793f72cfdf11b5acc198d827d2e28adc6cc Image: docker.io/calico/cni:v3.25.0 Image ID: docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 Port: <none> Host Port: <none> Command: /opt/cni/bin/install State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 23 Feb 2023 01:51:08 +0000 Finished: Thu, 23 Feb 2023 01:51:09 +0000 Ready: True Restart Count: 0 Environment Variables from: kubernetes-services-endpoint ConfigMap Optional: true Environment: CNI_CONF_NAME: 10-calico.conflist CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false KUBERNETES_NODE_NAME: (v1:spec.nodeName) CNI_MTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false SLEEP: false Mounts: /host/etc/cni/net.d from cni-net-dir (rw) /host/opt/cni/bin from cni-bin-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt5gc (ro) mount-bpffs: Container ID: containerd://83ee8f9296724122a887b7a243054d84a20faccd87bbbec7c2e002de64685f28 Image: docker.io/calico/node:v3.25.0 Image ID: docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 Port: <none> Host Port: <none> Command: calico-node -init -best-effort State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 23 Feb 2023 01:51:10 +0000 Finished: Thu, 23 Feb 2023 01:51:10 +0000 Ready: True Restart Count: 0 Environment: <none> Mounts: /nodeproc from nodeproc (ro) /sys/fs from sys-fs (rw) /var/run/calico from var-run-calico (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt5gc (ro) Containers: calico-node: Container ID: containerd://237ba58b6b9588797d49ee549ab3854212398451a4b4e52d9f23ab68e71ded7d Image: docker.io/calico/node:v3.25.0 Image ID: docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 Port: <none> Host Port: <none> State: Running Started: Thu, 23 Feb 2023 01:51:11 +0000 Ready: False Restart Count: 0 Requests: cpu: 250m Liveness: exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6 
Readiness: exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3 Environment Variables from: kubernetes-services-endpoint ConfigMap Optional: true Environment: DATASTORE_TYPE: kubernetes WAIT_FOR_DATASTORE: true NODENAME: (v1:spec.nodeName) CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false CLUSTER_TYPE: k8s,bgp IP: autodetect CALICO_IPV4POOL_IPIP: Always CALICO_IPV4POOL_VXLAN: Never CALICO_IPV6POOL_VXLAN: Never FELIX_IPINIPMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false FELIX_VXLANMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false FELIX_WIREGUARDMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false CALICO_DISABLE_FILE_LOGGING: true FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT FELIX_IPV6SUPPORT: false FELIX_HEALTHENABLED: true Mounts: /host/etc/cni/net.d from cni-net-dir (rw) /lib/modules from lib-modules (ro) /run/xtables.lock from xtables-lock (rw) /sys/fs/bpf from bpffs (rw) /var/lib/calico from var-lib-calico (rw) /var/log/calico/cni from cni-log-dir (ro) /var/run/calico from var-run-calico (rw) /var/run/nodeagent from policysync (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt5gc (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True ...
Regarding the configuration of my nodes, I will review it and get back to you. In the meantime, is there a glaring issue you can see in this output?
Here is the second part:
Volumes: lib-modules: Type: HostPath (bare host directory volume) Path: /lib/modules HostPathType: var-run-calico: Type: HostPath (bare host directory volume) Path: /var/run/calico HostPathType: var-lib-calico: Type: HostPath (bare host directory volume) Path: /var/lib/calico HostPathType: xtables-lock: Type: HostPath (bare host directory volume) Path: /run/xtables.lock HostPathType: FileOrCreate sys-fs: Type: HostPath (bare host directory volume) Path: /sys/fs/ HostPathType: DirectoryOrCreate bpffs: Type: HostPath (bare host directory volume) Path: /sys/fs/bpf HostPathType: Directory nodeproc: Type: HostPath (bare host directory volume) Path: /proc HostPathType: cni-bin-dir: Type: HostPath (bare host directory volume) Path: /opt/cni/bin HostPathType: cni-net-dir: Type: HostPath (bare host directory volume) Path: /etc/cni/net.d HostPathType: cni-log-dir: Type: HostPath (bare host directory volume) Path: /var/log/calico/cni HostPathType: host-local-net-dir: Type: HostPath (bare host directory volume) Path: /var/lib/cni/networks HostPathType: policysync: Type: HostPath (bare host directory volume) Path: /var/run/nodeagent HostPathType: DirectoryOrCreate kube-api-access-xt5gc: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: :NoSchedule op=Exists :NoExecute op=Exists CriticalAddonsOnly op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 2m16s (x9027 over 22h) kubelet (combined from similar events): Readiness probe failed: 2023-02-24 00:01:16.841 [INFO][228533] confd/health.go 180: Number of node(s) with BGP peering established = 0 calico/node is not ready: BIRD is not ready: BGP not established with 10.163.0.108
Hi @nicocerquera,
The events point at a possible networking issue, so I'd take a close look at the video titled "Using AWS to Set Up the Lab Environment" from the introductory chapter.
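The "BGP not established with 10.163.0.108" message in the readiness probe usually indicates that the nodes cannot reach each other on the ports and protocols Calico needs. A rough check from the control plane node (the IP is taken from the event above; BGP uses TCP port 179, and the IPIP tunnel additionally requires IP protocol 4, which a simple port test cannot confirm):
nc -vz 10.163.0.108 179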
Regards,
-Chris
Yeah, I looked at the AWS video, and it summarizes the setup that I have at the moment.
yes
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.5 LTS
Release:        20.04
Codename:       focal
Is there a way I can share my screen and have someone walk me through my issue? I am stuck here.
Also, can I continue with the other parts of the course if this lab doesn't work for me? I want to move forward but do not know how to...
Hi @nicocerquera,
Without a properly configured Kubernetes cluster, most of the following lab exercises will produce inconsistent results, or will simply not work at all.
If you are enrolled in a boot camp, I encourage you to join the Office Hour session dedicated to the LFS258 course, where the instructor can help you troubleshoot your cluster live. The schedule of boot camp office hours (days of the week and times) and the meeting access links can be found in the Logistics course of your boot camp.
Regards,
-Chris
Thanks!
I will attend the bootcamp tomorrow.
Do I have to sign up for the live troubleshooting, or how does the hour of office hours get allocated among multiple participants?
Hi Chris,
Here is what I got with regard to the potential firewall configuration:
There is no vCenter firewall. There is a basic firewall on the ESXi host to control connections into and out of the hypervisor (e.g., NFS traffic, ICMP, web traffic on the management UI, etc.). There is no vSphere-related product in place to control traffic between guests/VMs.
Is that basic firewall what may be causing the issue?