
Lab 3.2 worker connecting to cp issues

Hello. I'm working on two AWS EC2 instances, and my worker node is having trouble connecting to the control plane.

It's getting hung up on the preflight checks at the moment, with the following log output:

I1207 22:26:01.963500 49517 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://k8scp:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
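
If it helps diagnose things, here's roughly how I'd check basic reachability of the API server from the worker (same hostname and port as in the error above; not a lab step, just a sanity check):

getent hosts k8scp
nc -vz k8scp 6443
curl -k https://k8scp:6443/version   # even a 401/403 response would prove the connection works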

Interestingly enough, on the cp node, I got the following when trying to run kubeadm init:

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
W1207 22:17:31.649377 100230 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://dl.k8s.io/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1207 22:17:31.649450 100230 version.go:105] falling back to the local client version: v1.27.1
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
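
From what I can tell, these preflight errors just mean a previous init (or join) already left state on this node. My understanding (not from the lab guide) is that something like the following clears it so init can be re-run cleanly:

sudo kubeadm reset -f
# kubeadm reset does not clean up CNI config or iptables rules, so possibly also:
sudo rm -rf /etc/cni/net.d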

However, I was still able to generate a join command after looking through the forums here:

kubeadm join k8scp:6443 --token "TOKEN" --discovery-token-ca-cert-hash sha256:"HASH"
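
For reference, I believe the standard way to regenerate this on the control plane is:

sudo kubeadm token create --print-join-command

which prints a fresh join command with a new token and the correct CA cert hash.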

It's possible that there are other angles to approach this from, but the instances are in the same subnet and the same security group, and SSH is open on port 22. They're also part of the same VPC. Not sure what else I can do to make the connection work, but any help would be appreciated!
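
In case it's relevant, this is roughly how I'd double-check the security group rules from the CLI (sg-xxxxxxxx is a placeholder for my group ID, and the port 6443 rule is just my guess at the minimum the join needs):

aws ec2 describe-security-groups --group-ids sg-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6443 --source-group sg-xxxxxxxx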

Comments

  • Hi @nmoy,

    You may be experiencing connectivity issues, possibly from a misconfigured SG. I'd recommend closely following the AWS infra configuration video from the introductory chapter when setting up the networking and configuring the EC2 instances.

    In addition, please ensure the cp script runs only once on the control plane VM, and the worker script runs only once on the second VM. The kubeadm init and kubeadm join commands should each be run only once as well.
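
    A quick way to tell whether kubeadm has already run on a VM (just a sanity check, not a lab step) is to look for leftover state, for example:

    ls /etc/kubernetes/manifests/     # static pod manifests exist after an init
    ls /var/lib/etcd/                 # etcd data exists after an init
    ls /etc/kubernetes/kubelet.conf   # present after either an init or a join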

    Regards,
    -Chris

  • nmoy (edited December 2023)

    Which CP script are you referring to? And which worker script? Also, my ingress and egress rules are wide open. I started from scratch to see what happens: 3.1 seems to run fine, but it's 3.2 that doesn't go well.

  • Hi @nmoy,

    I meant the sequence of installation and config commands specific to each node...

    Assuming both VMs are protected by the same inbound and outbound SGs, run Ubuntu 20.04 LTS, and have the minimum 2 CPUs, 8 GB RAM, and 15-20 GB of disk, the error indicates that the init command was executed on a VM where either an init or a join had already completed. That is what it means when a port is already in use, or when a file or directory already exists.
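
    To see exactly which processes are holding those ports (purely a diagnostic), something like this should show kube-apiserver, etcd, the kubelet and friends already listening:

    sudo ss -tlnp | grep -E '6443|10250|10257|10259|2379|2380'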

    If the init is not successful, the worker will not be able to join the cluster, even if you manage to generate a join command for it.
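
    A simple way to confirm the init actually succeeded, before generating any join command, is to query the cluster from the cp node itself, for example:

    export KUBECONFIG=/etc/kubernetes/admin.conf   # or use the copy in $HOME/.kube/config
    kubectl get nodes
    kubectl get pods -n kube-system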

    I will attempt the cluster installation on AWS once again, to see if I can reproduce your errors...

    Regards,
    -Chris

  • nmoy

    Any such luck with the repro @chrispokorni ?

  • Hi @nmoy,

    The bad news is... I cannot reproduce your issue.

    I closely followed the AWS setup video from the intro chapter and I ran through every single installation and config step starting with lab exercise 3.1... and it all works as expected.

    There are no errors during kubeadm init about ports in use or files that already exist. These errors only occur when kubeadm is run multiple times in a row on the same system. Both the init and join commands evaluate the state of the VM and produce such errors when the VM was already initialized through an init or has already joined a cluster through a join command.

    Correctly setting up the /etc/hosts file on both machines is also critical. The cp-private-IP-address k8scp entry needs to be identical in both files, set to the private IP address of the control plane node. I also made sure every single command was copied in its entirety and ran successfully, without errors in the terminal.
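
    As an illustration (10.0.0.10 is just a placeholder for the cp private IP), the /etc/hosts entry and a quick check would look like:

    10.0.0.10 k8scp          # identical line in /etc/hosts on BOTH the cp and the worker
    getent hosts k8scp       # should return the cp private IP on both nodes
    ping -c 1 k8scp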

    The AWS setup video does not call for an outbound rule, yet I see one defined on your infrastructure. Without seeing the entire history of commands run on each instance, I cannot say what causes the errors reported above, or whether a step from the lab guide's command sequence may have been modified, skipped, or added to...

    Regards,
    -Chris

  • nmoy

    It's possible that there's some mystery X factor in my company's account that's preventing it from working. But thanks for trying!
