LFS258 - Lab 3.1 - Install Kubernetes

ahmedzaidi Posts: 15

Hello,
I'm facing this issue:
Thanks a lot.

root@ahmed-kvm:~# kubeadm init
Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock
To see the stack trace of this error execute with --v=5 or higher
root@ahmed-kvm:~#
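
The error message itself points to the fix: with both the dockershim and cri-o sockets present, kubeadm must be told which runtime to use via --cri-socket. A minimal sketch, assuming Docker is the runtime intended for the lab:

kubeadm init --cri-socket /var/run/dockershim.sock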

Comments

  • ahmedzaidi Posts: 15

    Hello, any answer?

  • serewicz Posts: 652

    Hello,

    I must have missed your earlier post. If you follow the lab instructions you'll find there are several commands to run before kubeadm init, and the kubeadm init command itself is quite different. Please start with the first part of the first lab and continue from there.
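
    For reference, the lab drives kubeadm from a config file rather than a bare kubeadm init. A minimal sketch of creating that kind of file follows; the exact values (the k8smaster alias, the version, the pod subnet) are assumptions here, so use what the lab specifies:

    cat <<EOF > kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: 1.18.1
    controlPlaneEndpoint: "k8smaster:6443"
    networking:
      podSubnet: 192.168.0.0/16
    EOF

    kubeadm init --config=kubeadm-config.yaml --upload-certs then consumes it.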

    Regards,

  • ahmedzaidi Posts: 15

    Hello serewicz,
    I started from the beginning.

  • serewicz Posts: 652

    Glad we were able to sort out the issue during the office hours.

  • ahmedzaidi Posts: 15

    Thanks, Tim.

  • ahmedzaidi Posts: 15

    Hello Tim,

    I tried to initialize the master with the command kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out (saving the output for future review), but unfortunately there is still an issue.

    root@ahmed-kvm:~# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out # Save output for future review
    W0717 00:18:32.515432 48472 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.1
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Using existing ca certificate authority
    [certs] Using existing apiserver certificate and key on disk
    [certs] Using existing apiserver-kubelet-client certificate and key on disk
    [certs] Using existing front-proxy-ca certificate authority
    [certs] Using existing front-proxy-client certificate and key on disk
    [certs] Using existing etcd/ca certificate authority
    [certs] Using existing etcd/server certificate and key on disk
    [certs] Using existing etcd/peer certificate and key on disk
    [certs] Using existing etcd/healthcheck-client certificate and key on disk
    [certs] Using existing apiserver-etcd-client certificate and key on disk
    [certs] Using the existing "sa" key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0717 00:18:33.756066 48472 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0717 00:18:33.757665 48472 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
        timed out waiting for the condition
    
    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    
    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
    
    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    
    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
    

    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher
    root@ahmed-kvm:~#
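
    Two items in this output are worth acting on before retrying: the preflight warning about the "cgroupfs" Docker cgroup driver, and the many "[certs] Using existing ..." lines, which show state left over from a previous attempt. A sketch of addressing both, assuming Docker reads /etc/docker/daemon.json (the default on Ubuntu):

    # switch Docker to the recommended systemd cgroup driver
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker
    # clear the half-initialized state before running kubeadm init again
    sudo kubeadm reset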

  • ahmedzaidi Posts: 15

    I tried to find the failing containers and check their logs, but here is what I got:

    root@ahmed-kvm:~# docker ps -a | grep kube | grep -v pause
    e345fcca90cd a595af0107f9 "kube-apiserver --ad…" 6 minutes ago Up 6 minutes k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
    827278890b41 d1ccdd18e6ed "kube-controller-man…" 6 minutes ago Up 6 minutes k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
    c0519799b6ba 303ce5db0e90 "etcd --advertise-cl…" 6 minutes ago Up 6 minutes k8s_etcd_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
    f34e682347ea 6c9320041a7b "kube-scheduler --au…" 6 minutes ago Up 6 minutes k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
    ea93c584861e 6c9320041a7b "kube-scheduler --au…" 4 days ago Exited (2) 4 days ago k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_363a5bee1d59c51a98e345162db75755_0
    7839edb43093 303ce5db0e90 "etcd --advertise-cl…" 4 days ago Exited (0) 4 days ago k8s_etcd_etcd-ahmed-kvm_kube-system_8f85a1e7362830d40135fe27577b3b98_0
    c5f5eda5eddf d1ccdd18e6ed "kube-controller-man…" 4 days ago Exited (2) 4 days ago k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_e38b41b40faba85a648fd189b91a6ae9_0
    99522185c1cc a595af0107f9 "kube-apiserver --ad…" 4 days ago Exited (137) 4 days ago k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_2ea6f6485a8cf5c423901ecdd0d323a3_0
    root@ahmed-kvm:~#
    root@ahmed-kvm:~# docker logs e345fcca90cd
    Flag --insecure-port has been deprecated, This flag will be removed in a future version.
    I0716 22:18:40.967046 1 server.go:656] external host was not specified, using 192.168.122.106
    I0716 22:18:40.968313 1 server.go:153] Version: v1.18.1
    I0716 22:18:41.299089 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
    I0716 22:18:41.299112 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
    I0716 22:18:41.300633 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
    I0716 22:18:41.300649 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
    I0716 22:18:41.309520 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.309702 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.319067 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.319094 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.326471 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.326497 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.361638 1 master.go:270] Using reconciler: lease
    I0716 22:18:41.362158 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.362201 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.372429 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.372461 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.379924 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.379953 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.385279 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.385300 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.391293 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.391384 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.398967 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.399088 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.404717 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.404739 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.410504 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.410535 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.416673 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.416692 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.423196 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.423228 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.429130 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.429158 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.434229 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.434250 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]

  • ahmedzaidi Posts: 15

    W0716 22:18:41.957049 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
    W0716 22:18:41.964681 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
    W0716 22:18:41.979135 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
    W0716 22:18:41.989528 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
    W0716 22:18:41.991459 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
    W0716 22:18:41.999622 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
    W0716 22:18:42.010926 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
    W0716 22:18:42.010949 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
    I0716 22:18:42.016460 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
    I0716 22:18:42.016475 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
    I0716 22:18:42.017588 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:42.017612 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:42.022635 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:42.022658 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:42.284833 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:42.284859 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:43.278589 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
    I0716 22:18:43.278612 1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key
    I0716 22:18:43.278599 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
    I0716 22:18:43.279056 1 secure_serving.go:178] Serving securely on [::]:6443
    I0716 22:18:43.279111 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
    I0716 22:18:43.279120 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
    I0716 22:18:43.279136 1 tlsconfig.go:240] Starting DynamicServingCertificateController
    I0716 22:18:43.279288 1 available_controller.go:387] Starting AvailableConditionController
    I0716 22:18:43.279461 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
    I0716 22:18:43.279683 1 autoregister_controller.go:141] Starting autoregister controller
    I0716 22:18:43.279720 1 cache.go:32] Waiting for caches to sync for autoregister controller
    I0716 22:18:43.279725 1 crd_finalizer.go:266] Starting CRDFinalizer
    I0716 22:18:43.279737 1 crdregistration_controller.go:111] Starting crd-autoregister controller
    I0716 22:18:43.279765 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
    I0716 22:18:43.279766 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
    I0716 22:18:43.279712 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
    I0716 22:18:43.279775 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
    I0716 22:18:43.279783 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
    I0716 22:18:43.279807 1 controller.go:86] Starting OpenAPI controller
    I0716 22:18:43.279847 1 customresource_discovery_controller.go:209] Starting DiscoveryController
    I0716 22:18:43.279855 1 naming_controller.go:291] Starting NamingConditionController
    I0716 22:18:43.279868 1 establishing_controller.go:76] Starting EstablishingController
    I0716 22:18:43.279877 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
    I0716 22:18:43.279886 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
    I0716 22:18:43.279950 1 controller.go:81] Starting OpenAPI AggregationController
    E0716 22:18:43.281329 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.122.106, ResourceVersion: 0, AdditionalErrorMsg:
    I0716 22:18:43.379332 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
    I0716 22:18:43.379617 1 cache.go:39] Caches are synced for AvailableConditionController controller
    I0716 22:18:43.379822 1 cache.go:39] Caches are synced for autoregister controller
    I0716 22:18:43.379822 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
    I0716 22:18:43.379830 1 shared_informer.go:230] Caches are synced for crd-autoregister
    I0716 22:18:44.278592 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
    I0716 22:18:44.278640 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
    I0716 22:18:44.282562 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
    I0716 22:18:44.285902 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
    I0716 22:18:44.285925 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
    I0716 22:18:44.491608 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
    I0716 22:18:44.510559 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
    W0716 22:18:44.609597 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.122.106]
    I0716 22:18:44.610122 1 controller.go:606] quota admission added evaluator for: endpoints
    I0716 22:18:44.612164 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
    root@ahmed-kvm:~#
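
    Notably, the tail of this log looks healthy: "Serving securely on [::]:6443" followed by the caches syncing means the apiserver did come up. A quick sketch of confirming it answers (-k skips the self-signed certificate check; on kubeadm clusters /healthz is readable anonymously):

    curl -k https://127.0.0.1:6443/healthz
    # or, with the admin kubeconfig that kubeadm writes:
    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes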

  • ahmedzaidi Posts: 15

    kubelet is also running.
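
    A sketch of how to double-check that and pull the most recent kubelet messages (systemd assumed, as on the lab's Ubuntu image):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet | tail -n 50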

    Thanks for your help.

  • serewicz Posts: 652

    Hello,

    The two primary reasons I have seen for this error in the past have been networking and not having enough resources. Near the end of the messages you'll see it is trying to start etcd. If that pod does not start, or access is blocked by a firewall or by security software like AppArmor or SELinux, then the node will not be able to report a Ready state.

    Are you using Docker only, without having run any of the cri-o steps?

    An error suggests kubelet is not running. What does the output of **sudo systemctl status kubelet** say? Any reasons in the details?

    Are you sure the VMs have access to each other and to the outside world, with no firewalls in place on the node instance or elsewhere?

    Is the master node a VM with at least 2 vCPUs and 7.5GB of memory dedicated to it?

    Do you see any containers running on the master when you type sudo docker ps? Are any of them etcd, kube-apiserver, or kube-scheduler?
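
    These questions map to quick commands, roughly as in this sketch (ufw is an assumption for the firewall on Ubuntu):

    nproc                # at least 2 vCPUs expected
    free -h              # roughly 7.5G of memory expected
    sudo ufw status      # the lab expects no firewall between the nodes
    sudo docker ps --format '{{.Names}}' | grep -E 'etcd|apiserver|scheduler'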

    Regards,

  • ahmedzaidi Posts: 15

    Hello Tim,
    Thanks for your response.
    I'm using only Docker.

    root@ahmed-kvm:~# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    e345fcca90cd a595af0107f9 "kube-apiserver --ad…" 31 minutes ago Up 31 minutes k8s_kube-apiserver_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
    827278890b41 d1ccdd18e6ed "kube-controller-man…" 31 minutes ago Up 31 minutes k8s_kube-controller-manager_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
    c0519799b6ba 303ce5db0e90 "etcd --advertise-cl…" 31 minutes ago Up 31 minutes k8s_etcd_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
    f34e682347ea 6c9320041a7b "kube-scheduler --au…" 31 minutes ago Up 31 minutes k8s_kube-scheduler_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
    efdf716bb5f3 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_etcd-ahmed-kvm_kube-system_18c6a64ce7bd0b7e0784f71ac8338858_0
    c1f254145772 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-controller-manager-ahmed-kvm_kube-system_ed98fcfdac4a8a68336cab304c1797a2_0
    c8ceb6e32422 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-scheduler-ahmed-kvm_kube-system_0cb1764a17d7be4d43b3f06a989ecaf4_0
    2b23a62e04e4 k8s.gcr.io/pause:3.2 "/pause" 31 minutes ago Up 31 minutes k8s_POD_kube-apiserver-ahmed-kvm_kube-system_4695d5bd39ab6579579152127705d0b8_0
    root@ahmed-kvm:~#

  • ahmedzaidi Posts: 15

    I will check the other elements you are referring to.

  • serewicz Posts: 652

    What network IP ranges are you using for your host, your VMs, and in the kubeadm init command? If they overlap, that can also cause this issue.
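
    A sketch of how to compare the ranges (the podSubnet value is an assumption based on Calico's default in the lab):

    ip addr show                         # host/VM addresses; the logs above show 192.168.122.106
    grep podSubnet kubeadm-config.yaml   # e.g. 192.168.0.0/16

    Note that a libvirt guest on 192.168.122.x falls inside 192.168.0.0/16, which is exactly the kind of overlap being asked about.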

  • ahmedzaidi Posts: 15

    I0716 22:18:41.659710 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.659776 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.665624 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.665645 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.670818 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.671022 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.677050 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.677189 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.683488 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.683528 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.689177 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.689196 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.694668 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.694817 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.700991 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.701012 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.708941 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.708973 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.714328 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.714350 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.738907 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.738927 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.744926 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.744946 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.750892 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.750915 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.755918 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.755939 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.762163 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.762184 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.767052 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.767188 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.772294 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.772319 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.777530 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.777552 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.783632 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.783660 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.789392 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.789440 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.828164 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.828192 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.833295 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.833319 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.840151 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.840175 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.849963 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.850064 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.855262 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.855280 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.861276 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.861302 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
    I0716 22:18:41.867017 1 client.go:361] parsed scheme: "endpoint"
    I0716 22:18:41.867080 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]

  • serewicz Posts: 652

    I see a lot of calls to 127.0.0.1, not k8smaster. You may have missed an earlier step where you set the alias and referenced that alias in the kubeadm config file. Please review that those steps have been completed properly.
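
    The alias step looks roughly like this sketch (the IP is taken from the logs above and assumed to be this VM's address):

    # point the k8smaster alias at the control plane IP in /etc/hosts
    echo "192.168.122.106 k8smaster" | sudo tee -a /etc/hosts
    # kubeadm-config.yaml should then reference the alias:
    #   controlPlaneEndpoint: "k8smaster:6443"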

    Regards,

  • ahmedzaidi Posts: 15

    OK Tim, I will check. Thanks.

  • I am working on Kubernetes Chapter 03: Installation and Configuration > section Installation and Configuration > sub-section Installation Considerations. The link to the Picking the Right Solution article is not working; a Not Found error message is thrown.

  • serewicz Posts: 652

    It worked when I just clicked on it. Did you copy/paste the URL or use some other process? This is the link I just used: http://kubernetes.io/docs/getting-started-guides/ which then forwards to the new page https://kubernetes.io/docs/setup/

    Is the ability to follow links disabled in your browser?

    Regards,
