
Lab 3.2, step 13. Set up on 2 VMs on AWS

josepmaria Posts: 81
edited December 8 in LFS258 Class Forum

Hello,

I have 2 VMs set up on AWS, t2.large (2 cores and 8 GB of memory each).

I have also read about the problems another user described on this forum in this question.

Everything went well on Lab 3.1.

I have followed the instructions for Lab 3.2, and I have even double-checked that the output of hostname -i from the CP node is saved in the /etc/hosts file on both instances (CP and Worker).

The problem is with step 13 of Lab 3.2. Perhaps I am doing something wrong here: what I do is copy the output of the command sudo kubeadm token create --print-join-command executed on the CP node and run it on the Worker node via the CLI. Note: the output of the command executed on the CP includes the private IP of the CP node.

However, when comparing with the instructions from Lab 3.2, the sha256 hash pasted into the CLI of the worker node matches the one from the CP node, but the token does not match. That is why I wonder if I could be doing something wrong here.
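For reference, the flow I am following looks roughly like this (the IP, token, and hash below are placeholders rather than my real values):

# on the CP node: print a join command with a freshly generated token
sudo kubeadm token create --print-join-command

# on the worker node: paste the command printed above
sudo kubeadm join <cp-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --node-name=worker

As far as I understand, kubeadm token create generates a new random token on every run, so perhaps a mismatch with the token shown in the lab guide is to be expected?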

[SCREENSHOTS OF THE COURSE MATERIALS REMOVED BY THE FORUM ADMINISTRATOR]

Here is the output I get when attempting step 13 on the worker node (the image showed 2 trials, with and without --node-name=worker).

I have also checked NACL and SG of both instances and they are correct.

Finally, I have entered the commands sudo kubeadm reset and sudo kubeadm init before attempting to reproduce step 13 of Lab 3.2.

I have also checked that kubelet and containerd are running with the commands below:
sudo systemctl status kubelet
sudo systemctl status containerd

Unfortunately, the output was the same.

I would appreciate it if someone could shed some light on this.

Thanks in advance for your help.

Josep Maria

Answers

  • In /etc/hosts I have also added the address of the worker node, just below the one for the CP node, as worker:

    xx.xx.xx.xx k8cp
    xx.xx.xx.xx worker
    127.0.0.1 localhost

    Even then, the outcome is still not satisfactory.
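    To rule out a basic connectivity problem, I can also run a check like this from the worker node (a sketch, assuming netcat is installed and using the k8cp name from the hosts file above) to see whether the CP's API server port is reachable:

    # from the worker node: test reachability of the API server port on the CP
    nc -zv k8cp 6443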

  • chrispokorni Posts: 2,376
    edited December 8

    Hi @josepmaria,

    One of the benefits of the EC2 host naming convention is that an instance's hostname is derived from the private IP address of the instance. This is helpful in scenarios where other commands such as hostname -i or ip a do not work.

    When working with AWS SG it is important to ensure both EC2 instances are in the same SG, and that they are not provisioned in their own dedicated SGs. The order they are provisioned is unimportant. It is essential, however, that the SG protecting your VMs allows all ingress traffic from all sources, all protocols, to all port destinations.

    Populating the hosts files on both VMs with k8scp, cp, and worker entries and their associated private IP addresses is indeed a wise choice.

    Make sure that the --node-name=cp option is appended to the full kubeadm init command as it is presented in the lab guide, and that --node-name=worker option is appended to the kubeadm join command.
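    In other words, the commands should end up looking roughly like this (the init options other than --node-name come from the lab guide, and the IP, token, and hash are placeholders):

    # on the CP node
    sudo kubeadm init <options from the lab guide> --node-name=cp

    # on the worker node, using the output of "kubeadm token create --print-join-command"
    sudo kubeadm join <cp-private-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> --node-name=worker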

    And last, but not least, please refrain from sharing copyrighted course content (lecture content or lab content) screenshots in this public forum.

    Regards,
    -Chris

  • Hi @chrispokorni ,

    Thank you very much for your detailed explanation, I appreciate it.

    I learn a lot from your courses and explanations.

    I shall follow your advice and let you know the outcome.

    As for the screenshots from the Labs that I shared, I sincerely apologize. There was no intention to violate any copyright rules. I shall ask the forum moderators to allow me to modify this post, so that the question can be reformulated without the screenshots.

    Sincerely,

    Josep Maria

  • Hi @chrispokorni ,

    I have followed your instructions and set up the EC2 nodes again. Both are on the default VPC and share the same security group (the default security group created for the CP was used when creating the EC2 instance for the worker node).

    I have verified that the configuration is correct.

    Unfortunately, when entering the output of the command sudo kubeadm token create --print-join-command (from the CP node) on the Worker node (having added --node-name=worker), I keep getting the same output as before.
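    If it helps, I can re-run the join with extra verbosity to capture more detail, along the lines of (placeholders for my real values):

    sudo kubeadm join <cp-private-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> --node-name=worker --v=5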

    I would appreciate it if you could offer any further advice.

    Thank you,

    Sincerely,

    Josep Maria
    PS: upon my request, the forum moderators deleted the copyrighted content from my earlier post in this conversation. I apologize again for the inconvenience.

  • Hi @chrispokorni ,

    Thanks for your patience, I finally found the solution.

    I added another inbound rule (custom TCP, port 6443) to the SG and it worked.
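    For anyone else hitting this, the rule I added is equivalent to something like the following AWS CLI call (a sketch; the group ID and CIDR are placeholders for my real values):

    # allow inbound TCP 6443 (the Kubernetes API server port) from within the VPC
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 6443 \
        --cidr 172.31.0.0/16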

  • chrispokorni Posts: 2,376

    Hi @josepmaria,

    The default SG of a VPC typically blocks many protocols and many ports that are required by Kubernetes and its plugins. You can add individual rules to the SG as you make progress through the lab exercises, or you can follow the instructions from the AWS video guide from the introductory chapter for a quicker way to enable all traffic to the EC2 VMs.
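    For reference, a single rule along these lines (a sketch via the AWS CLI; the group ID is a placeholder, and the same rule can be created in the console as shown in the video guide) opens all traffic between instances that are members of the same SG:

    # allow all protocols and ports from instances in the same security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol=-1 \
        --source-group sg-0123456789abcdef0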

    Regards,
    -Chris

  • Hi @chrispokorni ,

    Thanks for your message. I appreciate your time and information provided.

    Sincerely,

    Josep Maria
