LFS258 - Lab 3.1 - Install Kubernetes
Hello,
I'm facing this issue:
Thanks a lot.
root@ahmed-KVM:~# kubeadm init
Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock
To see the stack trace of this error execute with --v=5 or higher
root@ahmed-KVM:~#
Comments
-
Hello, any answer?
0 -
Hello,
I must have missed your earlier post. If you follow the lab instructions you'll find there are several commands prior to kubeadm init, and the kubeadm init command is also much different. Please start with the first part of the first lab and continue from there.
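The "multiple CRI sockets" message means kubeadm found both the Docker and CRI-O sockets on the node, and the lab expects only one container runtime to be installed. Something like this will show what is actually present (service and socket names taken from the error message and the packaged defaults):
systemctl status docker --no-pager | head -3   # Docker runtime
systemctl status crio --no-pager | head -3   # CRI-O, if it was installed as well
ls -l /var/run/dockershim.sock /var/run/crio/crio.sock 2>/dev/null   # the sockets kubeadm is finding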
Regards,
0 -
Hello serewicz,
I started from the beginning.
0 -
Glad we were able to sort out the issue during the office hours.
0 -
thanks TIM
0 -
Hello Tim,
I tried to initialize the master with the command from the lab, kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out, but unfortunately there is still an issue.
root@ahmed-KVM:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out # Save output for future review
W0717 00:18:32.515432 48472 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0717 00:18:33.756066 48472 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0717 00:18:33.757665 48472 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
    timed out waiting for the condition
This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
root@ahmed-KVM:~#
0 -
I tried to find the failing containers and check their logs, but it's .......
root@ahmed-KVM:~# docker ps -a | grep kube | grep -v pause
e345fcca90cd a595af0107f9 "kube-apiserver --ad…" 6 minutes ago Up 6 minutes k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
827278890b41 d1ccdd18e6ed "kube-controller-man…" 6 minutes ago Up 6 minutes k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
c0519799b6ba 303ce5db0e90 "etcd --advertise-cl…" 6 minutes ago Up 6 minutes k8s_etcd_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
f34e682347ea 6c9320041a7b "kube-scheduler --au…" 6 minutes ago Up 6 minutes k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
ea93c584861e 6c9320041a7b "kube-scheduler --au…" 4 days ago Exited (2) 4 days ago k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_363a5bee1d59c51a98e345162db75755_0
7839edb43093 303ce5db0e90 "etcd --advertise-cl…" 4 days ago Exited (0) 4 days ago k8s_etcd_etcd-ahmed-kvm_kube-system_8f85a1e7362830d40135fe27577b3b98_0
c5f5eda5eddf d1ccdd18e6ed "kube-controller-man…" 4 days ago Exited (2) 4 days ago k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_e38b41b40faba85a648fd189b91a6ae9_0
99522185c1cc a595af0107f9 "kube-apiserver --ad…" 4 days ago Exited (137) 4 days ago k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_2ea6f6485a8cf5c423901ecdd0d323a3_0
root@ahmed-KVM:~#
root@ahmed-KVM:~#
root@ahmed-KVM:~# docker logs e345fcca90cd
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0716 22:18:40.967046 1 server.go:656] external host was not specified, using 192.168.122.106
I0716 22:18:40.968313 1 server.go:153] Version: v1.18.1
I0716 22:18:41.299089 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 22:18:41.299112 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0716 22:18:41.300633 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 22:18:41.300649 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0716 22:18:41.309520 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.309702 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.319067 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.319094 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.326471 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.326497 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.361638 1 master.go:270] Using reconciler: lease
I0716 22:18:41.362158 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.362201 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.372429 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.372461 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.379924 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.379953 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.385279 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.385300 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.391293 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.391384 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.398967 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.399088 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.404717 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.404739 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.410504 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.410535 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.416673 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.416692 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.423196 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.423228 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.429130 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.429158 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.434229 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.434250 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
0 -
W0716 22:18:41.957049 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0716 22:18:41.964681 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0716 22:18:41.979135 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0716 22:18:41.989528 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0716 22:18:41.991459 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0716 22:18:41.999622 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0716 22:18:42.010926 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0716 22:18:42.010949 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0716 22:18:42.016460 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0716 22:18:42.016475 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0716 22:18:42.017588 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:42.017612 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:42.022635 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:42.022658 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:42.284833 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:42.284859 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:43.278589 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I0716 22:18:43.278612 1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key
I0716 22:18:43.278599 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0716 22:18:43.279056 1 secure_serving.go:178] Serving securely on [::]:6443
I0716 22:18:43.279111 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0716 22:18:43.279120 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0716 22:18:43.279136 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0716 22:18:43.279288 1 available_controller.go:387] Starting AvailableConditionController
I0716 22:18:43.279461 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0716 22:18:43.279683 1 autoregister_controller.go:141] Starting autoregister controller
I0716 22:18:43.279720 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0716 22:18:43.279725 1 crd_finalizer.go:266] Starting CRDFinalizer
I0716 22:18:43.279737 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0716 22:18:43.279765 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0716 22:18:43.279766 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0716 22:18:43.279712 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0716 22:18:43.279775 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0716 22:18:43.279783 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I0716 22:18:43.279807 1 controller.go:86] Starting OpenAPI controller
I0716 22:18:43.279847 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0716 22:18:43.279855 1 naming_controller.go:291] Starting NamingConditionController
I0716 22:18:43.279868 1 establishing_controller.go:76] Starting EstablishingController
I0716 22:18:43.279877 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0716 22:18:43.279886 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0716 22:18:43.279950 1 controller.go:81] Starting OpenAPI AggregationController
E0716 22:18:43.281329 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.122.106, ResourceVersion: 0, AdditionalErrorMsg:
I0716 22:18:43.379332 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0716 22:18:43.379617 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0716 22:18:43.379822 1 cache.go:39] Caches are synced for autoregister controller
I0716 22:18:43.379822 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0716 22:18:43.379830 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0716 22:18:44.278592 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0716 22:18:44.278640 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0716 22:18:44.282562 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0716 22:18:44.285902 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0716 22:18:44.285925 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0716 22:18:44.491608 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0716 22:18:44.510559 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0716 22:18:44.609597 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.122.106]
I0716 22:18:44.610122 1 controller.go:606] quota admission added evaluator for: endpoints
I0716 22:18:44.612164 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
root@ahmed-KVM:~#
0 -
kubelet is also running.
Thanks for your help.
0 -
Hello,
The two primary reasons I have seen for this error in the past are networking and not having enough resources. Near the end of the messages you'll see it is trying to start etcd. If that pod does not start, or access is blocked by some firewall or security software like AppArmor or SELinux, then it will not be able to reach a Ready state.
Are you using only Docker, and have you not run any of the cri-o steps?
An error suggests the kubelet is not running. What does the output of sudo systemctl status kubelet say? Any reasons in the details?
Are you sure the VMs have access to each other and the outside world, with no firewalls in place on the node instance or elsewhere?
Is the master node a VM with at least 2 vCPUs and 7.5G of memory dedicated to it?
Do you see any containers running on the master when you type sudo docker ps - are any of them etcd, kube-apiserver, or kube-scheduler?
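For reference, these are roughly the commands I would run on the master to check those points (unit and container names assumed to match the lab's Docker-based setup):
sudo systemctl status kubelet   # is the kubelet active, and does the status show a reason if not?
sudo journalctl -xeu kubelet | tail -50   # recent kubelet log entries
nproc && free -h   # confirm at least 2 vCPUs and roughly 7.5G of memory
sudo docker ps | grep -E 'etcd|kube-apiserver|kube-scheduler'   # are the control plane containers up?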
Regards,
0 -
Hello Tim,
Thanks for your response.
So I'm using only Docker.
root@ahmed-KVM:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e345fcca90cd a595af0107f9 "kube-apiserver --ad…" 31 minutes ago Up 31 minutes k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
827278890b41 d1ccdd18e6ed "kube-controller-man…" 31 minutes ago Up 31 minutes k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
c0519799b6ba 303ce5db0e90 "etcd --advertise-cl…" 31 minutes ago Up 31 minutes k8s_etcd_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
f34e682347ea 6c9320041a7b "kube-scheduler --au…" 31 minutes ago Up 31 minutes k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
efdf716bb5f3 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
c1f254145772 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
c8ceb6e32422 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
2b23a62e04e4 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
root@ahmed-KVM:~#
0 -
I will check the other elements that you are referring to.
0 -
What network IP ranges are you using for your host, for the VMs, and in the kubeadm init command? If they overlap, that can also cause this issue.
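Something like this will show the ranges side by side so they can be compared (kubeadm-config.yaml is the file from the lab, assuming it sets podSubnet as the lab's example does):
ip -4 addr show   # addresses in use on the host/VM interfaces
grep podSubnet kubeadm-config.yaml   # pod network range handed to kubeadm init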
0 -
I0716 22:18:41.659710 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.659776 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.665624 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.665645 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.670818 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.671022 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.677050 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.677189 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.683488 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.683528 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.689177 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.689196 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.694668 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.694817 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.700991 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.701012 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.708941 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.708973 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.714328 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.714350 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.738907 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.738927 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.744926 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.744946 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.750892 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.750915 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.755918 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.755939 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.762163 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.762184 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.767052 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.767188 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.772294 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.772319 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.777530 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.777552 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.783632 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.783660 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.789392 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.789440 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.828164 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.828192 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.833295 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.833319 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.840151 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.840175 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.849963 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.850064 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.855262 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.855280 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.861276 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.861302 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0716 22:18:41.867017 1 client.go:361] parsed scheme: "endpoint"
I0716 22:18:41.867080 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
0 -
I see a lot of calls to 127.0.0.1, not k8smaster. You may have missed an earlier step where you set the alias and called that alias from the kubeadm config file. Please review that those steps have been completed properly.
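The pieces to double-check look something like this (the IP shown is only an example; use your master's primary IP):
grep k8smaster /etc/hosts   # expect a line such as:  10.128.0.3 k8smaster
grep controlPlaneEndpoint kubeadm-config.yaml   # expect the alias, e.g.:  controlPlaneEndpoint: "k8smaster:6443"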
Regards,
0 -
OK Tim, I will check. Thanks.
0 -
I am working on Kubernetes chapter 03: Installation and Configuration > section Installation and Configuration > sub-section Installation Considerations. The link to the Picking the Right Solution article is not working; a Not Found error message is thrown.
0 -
It worked when I just clicked on it. Did you copy/paste it or use some other process? This is the link which I just used: http://kubernetes.io/docs/getting-started-guides/ which then forwards to the new page https://kubernetes.io/docs/setup/
Do you have the ability to follow links disabled in your browser?
Regards,
0 -
Hello Tim, I am David Theroux. Pleasure to meet you. I made it to item #14 on lab exercise 3-1. For item 10, what IP address are we supposed to use? Also, I can't get past item 14 without a bunch of errors. I checked the kubelet and it is running; it may be the IP address I am using, I don't know. I have been at this for a week and cannot figure it out unless I am using the wrong IP address. Please see the printout:
W1213 01:02:05.058406 11734 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs
W1213 01:03:26.445598 11926 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Everything went great until I got to item 14. It has to be the network somewhere. I went up and down the list and I just don't know what it is. Please help! Thanks.
0 -
Hi @dctheroux,
I believe there are several reasons you are getting all the errors above. Here are a few pointers to keep in mind when bootstrapping your Kubernetes cluster:
1 - STEP 4 (a). Install ONLY Docker. DO NOT install CRI-O.
2 - STEPS 10, 11. Extract the private IP address of your master node and use it to set up the k8smaster alias in the /etc/hosts file. Pay close attention to your spelling, as typos will cause future issues with your cluster!
3 - STEP 13. Pay close attention to the format and indentation of your kubeadm-config.yaml file. The last line has to be indented 2 spaces from the edge. Your file has to match what you see in the lab manual (there is no need to change anything). You can find this file ready to use in the SOLUTIONS tarball you downloaded earlier at the beginning of Lab 3.
4 - STEP 14. Run the entire command as presented in the lab. From your output it is clear that you are not running the entire command. Both lines are part of the command, so copy and paste both lines at the same time. Please pay close attention to multi-line commands; they cannot be broken into separate execution cycles.
5 - Running the same incorrect kubeadm init ... command on the master node several times in a row will not fix your issues - it is like hammering an already broken nail. You need to run kubeadm reset as root on the master node to clean up the partially misconfigured artifacts, and then run the correct/complete init command from STEP 14.
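On the master, as root, the cleanup and re-init sequence would look something like this (the init line is the same command shown in the lab, written here on a single line):
kubeadm reset   # answer 'y' to clean up the partially configured control plane
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out   # Save output for future review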
Regards,
-Chris
0 -
@dctheroux, your duplicate post has been removed.
Regards,
-Chris
0 -
Thank you for your help, Chris, I appreciate it. I went back and read the lab again and saw I had to choose between CRI-O and Docker; it said not to install both. I will be more careful next time. BTW, it worked. I re-ran the init command because I had worked a long time that day and lost track; you're right, running it over and over wasn't a good idea, I agree, LOL! Thanks for deleting the other post and I apologize for that again. We are good. Thank you again. Merry Xmas and Happy Holidays.
0 -
Chris, I am having trouble joining the cluster. Just wondering if it is because I have redone everything? I tried to do this in the morning; I looked online and it told me I have to wait 24 hours to try and add it again. It will go through the first few lines of the preflight check and then freeze. I used the command in the lab, which I will list for you, after I had generated the hashes: kubeadm join --token 27eee4.6e66ff60318da929 k8smaster:6443 --discovery-token-ca-cert-hash sha256:6d541678b05652e1fa5d43908e75e67376e994c3483d6683f2a18673e5d2a1b0
When I issue the command it says:
W1215 23:52:31.905345 14024 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Then it freezes. Did I max out a file somewhere, or do I just need to wait the 24 hours like it said online? Also, what does this error mean: error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable. Just to be clear, I have been practicing this so I can get a good working knowledge of everything, and have been deleting everything and starting over a lot just to get familiar with the process. The first time I got through the first lab from chapter 3 just fine; everything went through without a problem. Then I started the second lab from chapter 3 and am having problems joining the cluster. I just tried again and got a slightly different error:
root@worker:~# kubeadm join 10.2.0.19:6443 --token zf63bb.1stlapdvytwn33hp --discovery-token-ca-cert-hash sha256:3cb1d375a4f0e8c99071631ea9bd60915fce710aff9586f5fdbdbc327b433b3e
W1216 00:12:11.217983 17855 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://k8smaster:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: dial tcp: lookup k8smaster on 127.0.0.53:53: server misbehaving
To see the stack trace of this error execute with --v=5 or higher
Also, what does "To see the stack trace of this error execute with --v=5 or higher" mean? Thanks again and sorry for the long-winded post, LOL!
0 -
Not sure how long this process takes.
0 -
I also get -bash: seds/ˆ.* //: No such file or directory when I run the command: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed's/ˆ.* //' . I ran the command the way it is laid out in the lab. This happens when I try to do the "Create and use a Discovery Token CA Cert Hash" step. I can create the other token with no problem, or it seems that way anyway; I used sudo kubeadm token create to create that one and it is fine every time.
0 -
Is there any way to shorten the life of a token, so I can generate a new one? Or do I have to wait the 24 hours?
0 -
root@master:~# sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
1dwk5s.xoo1u7p3ivht4cer 21h 2020-12-16T23:20:02Z authentication,signing system:bootstrappers:kubeadm:default-node-token
84gqlf.m14vz5gifsljqkmr 21h 2020-12-16T23:29:30Z authentication,signing system:bootstrappers:kubeadm:default-node-token
m1ogim.eagq5w3tyapzeh32 21h 2020-12-16T22:43:54Z authentication,signing system:bootstrappers:kubeadm:default-node-token
p9wyza.me95qeslt63740x4 23h 2020-12-17T01:28:51Z authentication,signing system:bootstrappers:kubeadm:default-node-token
zf63bb.1stlapdvytwn33hp 21h 2020-12-16T23:37:06Z authentication,signing system:bootstrappers:kubeadm:default-node-token
0 -
Hi @dctheroux,
A new token can be created at any time, there is no need to wait 24 hours for a prior token to expire.
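If you want a token with a shorter lifetime, or want the matching join command printed for you, something like this works:
sudo kubeadm token create --ttl 2h --print-join-command   # new token valid for 2 hours, prints the full kubeadm join command
sudo kubeadm token delete <token>   # remove a token you no longer want
sudo kubeadm token list   # verify what is currently valid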
You have a typo in your openssl command. There should be a space between sed and 's/ˆ.* //'.
Please pay close attention to the lab exercises, especially when long/multi-line commands are involved. Syntax errors and typos will either prevent you from continuing, or they will introduce errors that will surface in a later exercise step.
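With the space added (and making sure the caret in the sed expression is a plain ASCII ^ rather than a character pasted from the PDF), the command from the lab looks like this:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'   # prints the hash to use with --discovery-token-ca-cert-hash sha256:<hash>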
Rejoining a worker node with the cluster may need some cleanup. Before running the kubeadm join ... command again on the worker node that needs to be re-added to the cluster, the worker node needs to be deleted from the cluster with the kubectl delete node <node-name> command. Then, prior to joining, the sudo kubeadm reset command will help clean up the worker node from all prior TLS artifacts, config files, etc. which were created during the previous join.
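Putting that together (the node name, token, and hash are placeholders; use the values from your own cluster):
kubectl delete node <node-name>   # on the master: remove the stale node object
sudo kubeadm reset   # on the worker: clean up artifacts from the previous join
sudo kubeadm join k8smaster:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # on the worker: rejoin the cluster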
Regards,
-Chris
0 -
Hi @dctheroux,
We would recommend posting new issues related to the LFS258 course in its dedicated forum. This particular discussion thread of the Cloud Engineer Bootcamp forum was started for lab 3.1 of LFS258, and your new comments are no longer relevant to the topic of this thread.
If you encounter new issues (which have not been reported in previous discussions) in other lab exercises of LFS258, please start a new discussion in the LFS258 forum and ensure the discussion title reflects the reported issue.
This will ensure that reported issues and their proposed solutions are kept in an organized fashion, which helps users locate relevant information and solutions in the forum faster in case they run into similar issues.
Regards,
-Chris
0 -
Hello All,
I am facing an issue with kubeadm join on the worker nodes. I am installing the Kubernetes cluster on CentOS 7. kubeadm join fails using the private port: Failed to connect to API Server "10.x.x.x:6xxx9". Please advise me how to fix it.
0