
Issues in lab 3.2: hostIP set without hostPort; curl fails to connect when verifying repo

Hey folks!
I'm unable to proceed beyond lab 3.2.

When creating easyregistry.yaml, I get the following warning:

Warning: spec.template.spec.containers[0].ports[0]: hostIP set without hostPort: {Name: HostPort:0 ContainerPort:5000 Protocol:TCP HostIP:127.0.0.1}

When I run kubectl get svc | grep registry, I get the following output:

registry     ClusterIP   10.97.40.62    <none>        5000/TCP   174m

When I run the next step, #3, to verify the repo, it times out:

curl 10.97.40.62:5000/v2/_catalog

I'm on AWS and my inbound and outbound security group rules are the following:

  • IP version: IPv4
  • Type: All traffic
  • Protocol: All
  • Port range: All
  • Source: 0.0.0.0/0

If I proceed to step #4 and run the following:

. $HOME/local-repo-setup.sh

The output confirms the repo was configured:

Local Repo configured, follow the next steps
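
(Sourcing the script with . keeps the variables it defines available in the current shell; a quick check of the one used below:)

echo $repo    # should print the registry address - 10.97.40.62:5000, judging by the push error further down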

No issues running the following:

sudo podman pull docker.io/library/alpine
sudo podman tag alpine $repo/tagtest

But when I run the following command, it hangs:

sudo podman push $repo/tagtest

And I get the following warning before it times out after three attempts:

Getting image source signatures
WARN[0120] Failed, retrying in 1s ... (1/3). Error: trying to reuse blob sha256:cc2447e1835a40530975ab80bb1f872fbab0f2a0faecf2ab16fbbb89b3589438 at destination: pinging container registry 10.97.40.62:5000: Get "http://10.97.40.62:5000/v2/": dial tcp 10.97.40.62:5000: i/o timeout 

I assume this is related to the first warning about the hostPort, but I'm not sure how to correct that. What am I missing? Any help is greatly appreciated. Thanks!

Comments

  • Hi @jarednielsen,

    The hostPort warning is not an error, and it can be disregarded.
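
    If you are curious where it comes from, you can print the ports stanza of the generated manifest with something along these lines (assuming the Deployment created from easyregistry.yaml is named registry):

    kubectl get deployment registry -o jsonpath='{.spec.template.spec.containers[0].ports}' ; echo

    It should show a hostIP of 127.0.0.1 with no matching hostPort, which is exactly what the warning points out and is harmless here.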

    What is the output of the following command?

    kubectl get po,svc,ep -o wide -l io.kompose.service

    Regards,
    -Chris

  • Hey @chrispokorni!
    Here's the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

    NAME                            READY   STATUS    RESTARTS      AGE     IP           NODE              NOMINATED NODE   READINESS GATES
    pod/nginx-6b47bcc6c6-97rfh      1/1     Running   2 (37s ago)   5d22h   10.0.1.240   ip-172-31-17-33   <none>           <none>
    pod/registry-66dbfdc555-qr5ss   1/1     Running   2 (37s ago)   5d22h   10.0.1.126   ip-172-31-17-33   <none>           <none>
    
    NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     SELECTOR
    service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    5d22h   io.kompose.service=nginx
    service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   5d22h   io.kompose.service=registry
    
    NAME                 ENDPOINTS         AGE
    endpoints/nginx      10.0.1.21:443     5d22h
    endpoints/registry   10.0.1.242:5000   5d22h
    
  • Hi @jarednielsen,

    Thank you for the detailed output.

    The first reason for concern is the recent restart (37s ago) of both pods - nginx and registry. Is this a recent node restart? Or a recent run of easyregistry.yaml?

    The second reason for concern is the discrepancy between the pod IP addresses and the endpoint IP addresses. The endpoint IPs should match the pod IPs, respectively.
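
    A quick way to compare them side by side (a sketch using the registry label and fields from your output):

    kubectl get po -l io.kompose.service=registry -o jsonpath='{.items[*].status.podIP}' ; echo
    kubectl get ep registry -o jsonpath='{.subsets[*].addresses[*].ip}' ; echo

    The two addresses should be identical; if they are not, the endpoint is pointing at a pod IP that no longer exists.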

    Did you happen to run the easyregistry.yaml several times in a row?
    Is node ip-172-31-17-33 your worker?
    What are the events (the very last section of the output) displayed by the following commands?
    kubectl describe po nginx-6b47bcc6c6-97rfh
    kubectl describe po registry-66dbfdc555-qr5ss

    What is the output of:
    find $HOME -name local-repo-setup.sh

    What are the outputs of the following commands (from each node - cp and worker):
    sudo cat /etc/containers/registries.conf.d/registry.conf
    grep endpoint /etc/containerd/config.toml

    Are the following commands listing multiple nginx and registry pods?
    kubectl get po nginx -o wide
    kubectl get po registry -o wide

    ... and multiple endpoints?
    kubectl get ep nginx
    kubectl get ep registry

    Regards,
    -Chris

  • Hey @chrispokorni!
    The recent restart is due to starting and stopping AWS instances. There's no discrepancy after restarting the instance:

    NAME                            READY   STATUS    RESTARTS        AGE    IP           NODE              NOMINATED NODE   READINESS GATES
    pod/nginx-6b47bcc6c6-97rfh      1/1     Running   3 (5m56s ago)   6d3h   10.0.1.174   ip-172-31-17-33   <none>           <none>
    pod/registry-66dbfdc555-qr5ss   1/1     Running   3 (5m57s ago)   6d3h   10.0.1.197   ip-172-31-17-33   <none>           <none>
    
    NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
    service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    6d3h   io.kompose.service=nginx
    service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   6d3h   io.kompose.service=registry
    
    NAME                 ENDPOINTS         AGE
    endpoints/nginx      10.0.1.174:443    6d3h
    endpoints/registry   10.0.1.197:5000   6d3h
    

    I only ran easyregistry.yaml once (as far as I recall).

    Node ip-172-31-17-33 is my control plane.

  • Here's the output of kubectl describe po nginx-6b47bcc6c6-97rfh:

    Name:             nginx-6b47bcc6c6-97rfh
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:             ip-172-31-17-33/172.31.17.33
    Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
    Labels:           io.kompose.service=nginx
                      pod-template-hash=6b47bcc6c6
    Annotations:      <none>
    Status:           Running
    IP:               10.0.1.253
    IPs:
      IP:           10.0.1.253
    Controlled By:  ReplicaSet/nginx-6b47bcc6c6
    Containers:
      nginx:
        Container ID:   containerd://a5fb3bb989266311c4b71b172c2e637e9a2c1e729bed976a88d9a85d1718125b
        Image:          nginx:1.12
        Image ID:       docker.io/library/nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
        Port:           443/TCP
        Host Port:      0/TCP
        State:          Running
          Started:      Tue, 28 Nov 2023 19:04:37 +0000
        Last State:     Terminated
          Reason:       Unknown
          Exit Code:    255
          Started:      Tue, 28 Nov 2023 18:55:17 +0000
          Finished:     Tue, 28 Nov 2023 19:04:23 +0000
        Ready:          True
        Restart Count:  4
        Environment:    <none>
        Mounts:
          /etc/nginx/conf.d from nginx-claim0 (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtwbk (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      nginx-claim0:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  nginx-claim0
        ReadOnly:   false
      kube-api-access-mtwbk:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                  From               Message
      ----     ------            ----                 ----               -------
      Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
      Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
      Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/nginx-6b47bcc6c6-97rfh to ip-172-31-17-33
      Normal   Pulled            6d3h                 kubelet            Container image "nginx:1.12" already present on machine
      Normal   Created           6d3h                 kubelet            Created container nginx
      Normal   Started           6d3h                 kubelet            Started container nginx
      Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            6d                   kubelet            Container image "nginx:1.12" already present on machine
      Normal   Created           6d                   kubelet            Created container nginx
      Normal   Started           6d                   kubelet            Started container nginx
      Normal   SandboxChanged    5h8m                 kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            5h8m                 kubelet            Container image "nginx:1.12" already present on machine
      Normal   Created           5h8m                 kubelet            Created container nginx
      Normal   Started           5h8m                 kubelet            Started container nginx
      Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            10m                  kubelet            Container image "nginx:1.12" already present on machine
      Normal   Created           10m                  kubelet            Created container nginx
      Normal   Started           10m                  kubelet            Started container nginx
      Warning  NodeNotReady      5m30s                node-controller    Node is not ready
      Normal   SandboxChanged    109s                 kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            97s                  kubelet            Container image "nginx:1.12" already present on machine
      Normal   Created           97s                  kubelet            Created container nginx
      Normal   Started           97s                  kubelet            Started container nginx
    
  • Here's the output of kubectl describe po registry-66dbfdc555-qr5ss:

    Name:             registry-66dbfdc555-qr5ss
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:             ip-172-31-17-33/172.31.17.33
    Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
    Labels:           io.kompose.service=registry
                      pod-template-hash=66dbfdc555
    Annotations:      <none>
    Status:           Running
    IP:               10.0.1.210
    IPs:
      IP:           10.0.1.210
    Controlled By:  ReplicaSet/registry-66dbfdc555
    Containers:
      registry:
        Container ID:   containerd://f1f2ca571d40c1b16bc5c210107886704e3939cf817c637a0da0855d39cb09bb
        Image:          registry:2
        Image ID:       docker.io/library/registry@sha256:8a60daaa55ab0df4607c4d8625b96b97b06fd2e6ca8528275472963c4ae8afa0
        Port:           5000/TCP
        Host Port:      0/TCP
        State:          Running
          Started:      Tue, 28 Nov 2023 19:04:37 +0000
        Last State:     Terminated
          Reason:       Unknown
          Exit Code:    255
          Started:      Tue, 28 Nov 2023 18:55:17 +0000
          Finished:     Tue, 28 Nov 2023 19:04:23 +0000
        Ready:          True
        Restart Count:  4
        Environment:
          REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY:  /data
        Mounts:
          /data from registry-claim0 (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsjz9 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      registry-claim0:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  registry-claim0
        ReadOnly:   false
      kube-api-access-dsjz9:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                  From               Message
      ----     ------            ----                 ----               -------
      Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
      Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
      Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/registry-66dbfdc555-qr5ss to ip-172-31-17-33
      Normal   Pulled            6d3h                 kubelet            Container image "registry:2" already present on machine
      Normal   Created           6d3h                 kubelet            Created container registry
      Normal   Started           6d3h                 kubelet            Started container registry
      Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            6d                   kubelet            Container image "registry:2" already present on machine
      Normal   Created           6d                   kubelet            Created container registry
      Normal   Started           6d                   kubelet            Started container registry
      Normal   SandboxChanged    5h9m                 kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            5h8m                 kubelet            Container image "registry:2" already present on machine
      Normal   Created           5h8m                 kubelet            Created container registry
      Normal   Started           5h8m                 kubelet            Started container registry
      Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            11m                  kubelet            Container image "registry:2" already present on machine
      Normal   Created           11m                  kubelet            Created container registry
      Normal   Started           11m                  kubelet            Started container registry
      Warning  NodeNotReady      5m56s                node-controller    Node is not ready
      Normal   SandboxChanged    2m16s                kubelet            Pod sandbox changed, it will be killed and re-created.
      Normal   Pulled            2m4s                 kubelet            Container image "registry:2" already present on machine
      Normal   Created           2m4s                 kubelet            Created container registry
      Normal   Started           2m4s                 kubelet            Started container registry
    
  • Here's the output of find $HOME -name local-repo-setup.sh:

    /home/ubuntu/local-repo-setup.sh
    /home/ubuntu/LFD259/SOLUTIONS/s_03/local-repo-setup.sh
    

    Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the CP respectively:

    [[registry]]
    location = "10.97.40.62:5000"
    insecure = true
    
        endpoint = ""
    endpoint = ["http://10.97.40.62:5000"]
    

    Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the WORKER respectively:

    cat: /etc/containers/registries.conf.d/registry.conf: No such file or directory
    
        endpoint = ""
    

    The following commands are not listing multiple nginx and registry pods:

    kubectl get po nginx -o wide
    kubectl get po registry -o wide
    

    On the CP:

    Error from server (NotFound): pods "nginx" not found
    Error from server (NotFound): pods "registry" not found
    

    On the worker:

    E1128 19:13:22.296309    1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1128 19:13:22.296882    1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1128 19:13:22.298355    1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1128 19:13:22.299715    1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1128 19:13:22.301115    1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    The endpoints on the control plane:
    nginx      10.0.1.253:443    6d3h
    registry   10.0.1.210:5000   6d3h
    ...

    And on the worker the connection is refused.

    Thanks for your help!

  • Hi @jarednielsen,

    It seems the local repo is not configured on the worker node, while the cp config looks correct. I suspect you may have missed the steps in the lab guide that configure the local repo on the worker node. Please revisit lab exercise 3.2 and run the steps showing "@worker" in the prompt on your worker node to correct the issue.
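
    For reference, once those @worker steps are done, the same two checks from earlier should produce matching output on the worker as well; the sketch below simply mirrors your cp output, since the lab steps create these entries for you:

    # on the worker
    sudo cat /etc/containers/registries.conf.d/registry.conf
    # expected, mirroring the cp:
    #   [[registry]]
    #   location = "10.97.40.62:5000"
    #   insecure = true

    grep endpoint /etc/containerd/config.toml
    # expected to include:
    #   endpoint = ["http://10.97.40.62:5000"]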

    Once you have completed those steps and rebooted your worker node, try the podman push command again. If it hangs on the first run, repeat it a few times.

    Regards,
    -Chris

  • Hey @chrispokorni!
    There are no instructions for configuring the worker node in lab 3, exercises 1 and 2. The last command issued on the worker was in 2.2.

  • Hi @jarednielsen,

    Perhaps steps 9, 10, 15 of lab 3.2?

    Also, what OS are your EC2 instances running?

    Regards,
    -Chris

  • Hi,
    I have the same problem.

    Here's the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

    NAME                           READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
    pod/nginx-6b47bcc6c6-2n6f8     0/1     Pending   0          24m   <none>   <none>   <none>           <none>
    pod/registry-c8d64bf8c-r5fsp   0/1     Pending   0          24m   <none>   <none>   <none>           <none>

    NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
    service/nginx      ClusterIP   10.101.11.122   <none>        443/TCP    24m   io.kompose.service=nginx
    service/registry   ClusterIP   10.97.40.62     <none>        5000/TCP   24m   io.kompose.service=registry

    NAME                 ENDPOINTS   AGE
    endpoints/nginx      <none>      24m
    endpoints/registry   <none>      24m

  • Hi @margarita.salitrennik,

    What type of infrastructure hosts your Kubernetes cluster?

    Please provide the outputs of the following commands:

    kubectl get nodes -o wide
    kubectl get pods -A -o wide

    kubectl describe pod nginx-6b47bcc6c6-2n6f8
    kubectl describe pod registry-c8d64bf8c-r5fsp

    Regards,
    -Chris
