Welcome to the Linux Foundation Forum!

lab 4.2

At the end of lab 4.2, I started getting

The connection to the server k8smaster:6443 was refused - did you specify the right host or port?

What went wrong?


  • serewicz
    serewicz Posts: 1,000


    If the kube-apiserver is not running for some reason, you may see this error. Have you changed Docker or kubelet in any way?
    Which command did you run before the issue appeared, and which one exposed it?

    If you run sudo docker ps, do you see the control plane pods running?


  • serewicz
    serewicz Posts: 1,000


    Also, what size VMs are you using (CPU and memory)?


  • I ran kubectl create -f hog2.yaml (step 10 in lab 4.4)
    and then got the error when listing deployments:

    kubectl get deployments --all-namespaces

    It happened all of a sudden.

    My VMs are n1-standard-2 (2 vCPUs, 7.5 GB memory)

  • serewicz
    serewicz Posts: 1,000

    Hmm. Well, check that you didn't over-allocate the memory. Adding an extra zero would consume too much. When the master node gets low on CPU or memory, the kube-apiserver just stops working for a while, and it takes a bit to figure out what is happening.

    Try to kill the deployment with kubectl delete -f hog2.yaml. If things get better right away, the resources were over-allocated. If you can't run the command at all, use ps -ef and kill -9 to terminate the hog containers. Then watch for the kube-apiserver to return and delete the deployment as quickly as possible (I use a watch loop) so that it doesn't just start running again and consume all the resources.
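A minimal sketch of that recovery loop, assuming the deployment file is hog2.yaml and the pods are named hog* (both taken from this thread):

```shell
# Retry the delete until the kube-apiserver answers; the moment it
# comes back, the deployment is removed before the hog pods can
# re-consume the node's memory.
until kubectl delete -f hog2.yaml; do
    sleep 2
done

# If kubectl never gets through, hunt the hog processes directly:
ps -ef | grep -i hog      # note the PIDs of the hog containers
# sudo kill -9 <PID>      # replace <PID> with the numbers found above
```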


  • chrispokorni
    chrispokorni Posts: 2,012
    edited August 2020

    Another solution would be to restart the kubelet service and try to delete the hog and hog2 deployments immediately after, but before the kube-apiserver crashes again. The kubelet restart forces the apiserver to restart as well, and you may have a short window to run kubectl commands.
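Sketched out, that sequence might look like this (the deployment names hog and hog2 are assumed from earlier in the thread):

```shell
# Restarting kubelet also restarts the static kube-apiserver pod,
# which opens a short window in which kubectl works again.
sudo systemctl restart kubelet

# Use that window to remove the resource hogs immediately:
kubectl delete deployment hog hog2
```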


  • I decided to re-create the cluster. Can you help me locate the instructions? I recall a video from Tim walking through the steps of provisioning master and worker nodes in GCP. Can't find it anymore. Thx!

  • fcioanca
    fcioanca Posts: 1,756

    @sergeizak Please check the intro chapter towards the end, marked Important. You will find the video there.

  • Make sure the name "k8smaster" resolves; try a simple ping. If it doesn't, fix that by checking that your /etc/hosts entries are still correct. If that works, next make sure the KUBECONFIG environment variable is set: echo $KUBECONFIG. If it is not set, a quick fix is export KUBECONFIG="$HOME/.kube/config". Add this to your .bashrc if it is not there already.
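Put together, those checks might look like this (the /etc/hosts address shown is just an illustrative placeholder):

```shell
# 1. Does the control plane alias resolve?
ping -c 2 k8smaster
grep k8smaster /etc/hosts   # expect a line like: 10.128.0.3 k8smaster

# 2. Is kubectl pointed at a kubeconfig?
echo $KUBECONFIG
export KUBECONFIG="$HOME/.kube/config"                      # quick fix if unset
echo 'export KUBECONFIG="$HOME/.kube/config"' >> ~/.bashrc  # persist it
```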

  • maybel
    maybel Posts: 45

    Hi guys, I wonder if I did something wrong because lab 4.3.10 should result in the following:
    I0927 21:09:23.514921 1 main.go:26] Allocating "0" memory, in "4ki" chunks, with a 1ms sleep between allocations
    I0927 21:09:23.514984 1 main.go:39] Spawning a thread to consume CPU
    I0927 21:09:23.514991 1 main.go:39] Spawning a thread to consume CPU
    I0927 21:09:23.514997 1 main.go:29] Allocated "0" memory

    and I got:
    student@cp:~$ kubectl logs hog-559d5585fd-jzvsf
    I0320 18:15:00.416686 1 main.go:26] Allocating "950.Mi" memory, in "100Mi" chunks, with a 1s sleep between allocations
    I0320 18:15:00.416797 1 main.go:39] Spawning a thread to consume CPU
    I0320 18:15:00.416815 1 main.go:39] Spawning a thread to consume CPU
    I0320 18:15:13.583421 1 main.go:29] Allocated "950.Mi" memory
    student@cp:~$ Connection to closed by remote host.

    I had modified the hog.yaml file, and my results make sense to me, but the PDF's results are different from mine.

  • maybel
    maybel Posts: 45

    I changed my mind; it doesn't make sense, given my resource limits:

    limits:
      cpu: "1"
      memory: "4Gi"
    requests:
      cpu: "0.5"
      memory: "500Mi"

    Then I don't understand how it is happening.
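If those two value pairs are the container's limits and requests (my assumption), then the 950Mi allocation is actually permitted: the request (500Mi) only influences scheduling, while the limit (4Gi) is the enforcement ceiling, so the pod can still exhaust a small node. A sketch of how they would sit in hog.yaml:

```yaml
# Sketch of the resources stanza (standard Kubernetes schema;
# field placement in your actual hog.yaml may differ).
resources:
  limits:
    cpu: "1"
    memory: "4Gi"     # hard ceiling enforced on the container
  requests:
    cpu: "0.5"
    memory: "500Mi"   # used only for scheduling decisions
```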

