
Question on Lab 3.2 Step 10

trancewu0317 Posts: 2
edited April 2018 in LFS258 Class Forum

Hi, all,

I was trying the lab on AWS. I successfully started a master and a worker node, and the node joined the cluster. However, when I create the deployment, no nginx pod ever becomes available:

=========== output ============

$ kubectl get deployments

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            0           23m

=========== output ============

The events are as follows; they show that the pod sandbox repeatedly fails to be created.

=========== output ============

26m         26m          1         nginx-8586cf59-ppkph.1526b633df971998   Pod                      Normal    Scheduled                 default-scheduler          Successfully assigned nginx-8586cf59-ppkph to ip-10-0-0-5

26m         26m          1         nginx-8586cf59-ppkph.1526b633f408be87   Pod                      Normal    SuccessfulMountVolume     kubelet, ip-10-0-0-5       MountVolume.SetUp succeeded for volume "default-token-m995p" 

26m         26m          12        nginx-8586cf59-ppkph.1526b6340363db54   Pod                      Warning   FailedCreatePodSandBox    kubelet, ip-10-0-0-5       Failed create pod sandbox.

26m         26m          12        nginx-8586cf59-ppkph.1526b6343c62c19d   Pod                      Normal    SandboxChanged            kubelet, ip-10-0-0-5       Pod sandbox changed, it will be killed and re-created.

12m         12m          1         nginx-8586cf59-ppkph.1526b6f60dca5608   Pod                      Normal    SuccessfulMountVolume     kubelet, ip-10-0-0-5       MountVolume.SetUp succeeded for volume "default-token-m995p" 

7m          12m          298       nginx-8586cf59-ppkph.1526b6f636c470e4   Pod                      Warning   FailedCreatePodSandBox    kubelet, ip-10-0-0-5       Failed create pod sandbox.

2m          12m          584       nginx-8586cf59-ppkph.1526b6f6548f8d19   Pod                      Normal    SandboxChanged            kubelet, ip-10-0-0-5       Pod sandbox changed, it will be killed and re-created.

26m         26m          1         nginx-8586cf59.1526b633ded64e2d         ReplicaSet               Normal    SuccessfulCreate          replicaset-controller      Created pod: nginx-8586cf59-ppkph

26m         26m          1         nginx.1526b633dd5d3b91                  Deployment               Normal    ScalingReplicaSet         deployment-controller      Scaled up replica set nginx-8586cf59 to 1

=========== output ============

Does anyone have an idea of what went wrong here?

Also, what are the steps to shut down the EC2 instances when I finish practicing, and then start them again later to resume? Thanks.


  • serewicz Posts: 467


    Please paste the output of kubectl get nodes, to ensure that both nodes have joined the cluster and are in good shape. Also, ensure that you have removed the taint on the master so that it is able to run non-infrastructure pods.
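    As a rough sketch, those two checks look like this (the taint key below is the kubeadm default of that era; adjust it if your setup differs):

```shell
# Confirm both nodes registered with the cluster and report Ready
kubectl get nodes

# Remove the NoSchedule taint from the master so it can run regular
# (non-infrastructure) pods. The trailing dash means "remove this taint".
kubectl taint nodes --all node-role.kubernetes.io/master-
```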

    As far as AWS goes, if you log into your console.aws.amazon.com page and navigate to Compute -> EC2 -> Instances -> Instances, you will see the running instances. Select the ones you want to work with, then use the drop-down "Actions" menu above to stop or terminate the instances. This tutorial covers the basic workflow/lifecycle of an instance: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
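    If you prefer the command line, the same stop/start cycle can be done with the AWS CLI; the instance ID below is a placeholder:

```shell
# Stop an instance when done practicing (a stopped instance is not billed
# for compute time, though EBS storage charges still apply)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0   # placeholder ID

# Start it again to resume practicing; note the public IP usually changes
# after a stop/start unless you attach an Elastic IP
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Check the current instance state
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].State.Name'
```

    Keep in mind that if the nodes' IPs change after a restart, the cluster configuration may need to be adjusted or rebuilt.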


  • chrispokorni Posts: 219
    edited May 2018


    In case you missed this earlier, I attempted the labs from Chapter 3 on 2 EC2 instances on AWS and did not run into any issues. In addition to the possible join issue or taints mentioned by @serewicz, I also researched the error you described and found several discussions of similar error/warning messages; most of them suggest it may be networking related. It may have to do with the pod network used (Flannel, Calico) and/or the networking/firewall rules (security group, VPC, network ACL) configured for the EC2 instances.
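    One way to narrow down a networking cause is to check the pod-network pods themselves; a rough sketch (pod names vary by CNI plugin, and the nginx pod name below is taken from your event output):

```shell
# See whether the pod-network (Flannel/Calico/etc.) pods in kube-system
# are running on both nodes
kubectl get pods -n kube-system -o wide

# Inspect the failing pod's events and status in detail
kubectl describe pod nginx-8586cf59-ppkph

# On the worker node, the kubelet log often shows the underlying
# sandbox/CNI error (on systemd-based installs)
sudo journalctl -u kubelet | tail -n 50
```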

    I set up my 2 EC2 instances as Ubuntu VMs on t2.micro, with public IPs, inside a default security group where I opened all inbound TCP traffic (clearly not best practice, but it works for the purpose of these exercises).

    Here is a list of ports used by Kubernetes which need to be open on each node (per the kubeadm documentation):

    Master: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 10251 (kube-scheduler), 10252 (kube-controller-manager)
    Worker: 10250 (kubelet), 30000-32767 (NodePort services)

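    For reference, opening one of these ports with the AWS CLI looks roughly like this (the security group ID and CIDR below are placeholders; substitute your own):

```shell
# Hypothetical example: allow inbound traffic to the API server port (6443)
# from within the VPC's address range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 10.0.0.0/16
```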
    Aside from providing the outputs requested by @serewicz, can you also provide some details about your EC2 setup in AWS? It may also help in identifying the source of your issues.

    Good luck!

