Welcome to the Linux Foundation Forum!

Issues in lab 3.2: hostIP set without hostPort; curl fails to connect when verifying repo

Hey folks!
I'm unable to proceed beyond lab 3.2.

When creating the resources from easyregistry.yaml, I get the following warning:

  1. Warning: spec.template.spec.containers[0].ports[0]: hostIP set without hostPort: {Name: HostPort:0 ContainerPort:5000 Protocol:TCP HostIP:127.0.0.1}
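
For reference, the ports entry the warning refers to can be dumped after the fact with something like the following (assuming the Deployment created by easyregistry.yaml is named registry; that name is an assumption, not from the lab guide):

```
# Show the container ports stanza that triggers the hostIP/hostPort warning
# (Deployment name "registry" is an assumption)
kubectl get deployment registry \
  -o jsonpath='{.spec.template.spec.containers[0].ports}{"\n"}'
```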

When I run kubectl get svc | grep registry, I get the following output:

  1. registry ClusterIP 10.97.40.62 <none> 5000/TCP 174m

When I run the next step, #3, to verify the repo, it times out:

  1. curl 10.97.40.62:5000/v2/_catalog
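
(The same check can also be run from a one-off pod inside the cluster, to separate node-to-ClusterIP routing from the registry itself; the busybox image and pod name below are assumptions, not lab steps:)

```
# Fetch the catalog from inside the cluster
# (busybox image and pod name are assumptions, not part of the lab)
kubectl run regcheck --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://10.97.40.62:5000/v2/_catalog
```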

I'm on AWS and my inbound and outbound security group rules are the following:

  • IP version: IPv4
  • Type: All traffic
  • Protocol: All
  • Port range: All
  • Source: 0.0.0.0/0

When I proceed to step #4 and run the following:

  1. . $HOME/local-repo-setup.sh

The output confirms the repo was configured:

  1. Local Repo configured, follow the next steps
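
(My understanding is that sourcing the script exports the $repo variable used below; assuming that's the case, it can be checked with:)

```
# $repo is presumably set by local-repo-setup.sh to the registry address
# (this check is an assumption, not a lab step)
echo "$repo"
```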

No issues running the following:

  1. sudo podman pull docker.io/library/alpine
  2. sudo podman tag alpine $repo/tagtest

But when I run the following command, it hangs:

  1. sudo podman push $repo/tagtest

And I get the following warning before it times out after three attempts:

  1. Getting image source signatures
  2. WARN[0120] Failed, retrying in 1s ... (1/3). Error: trying to reuse blob sha256:cc2447e1835a40530975ab80bb1f872fbab0f2a0faecf2ab16fbbb89b3589438 at destination: pinging container registry 10.97.40.62:5000: Get "http://10.97.40.62:5000/v2/": dial tcp 10.97.40.62:5000: i/o timeout

I assume this is related to the first warning about the hostPort, but I'm not sure how to correct that. What am I missing? Any help is greatly appreciated. Thanks!


Comments

  • Hi @jarednielsen,

    The hostPort warning is not an error, and it can be disregarded.

    What is the output of the following command?

    kubectl get po,svc,ep -o wide -l io.kompose.service

    Regards,
    -Chris

  • Hey @chrispokorni!
    Here's the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

    1. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    2. pod/nginx-6b47bcc6c6-97rfh 1/1 Running 2 (37s ago) 5d22h 10.0.1.240 ip-172-31-17-33 <none> <none>
    3. pod/registry-66dbfdc555-qr5ss 1/1 Running 2 (37s ago) 5d22h 10.0.1.126 ip-172-31-17-33 <none> <none>
    4.  
    5. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    6. service/nginx ClusterIP 10.103.74.92 <none> 443/TCP 5d22h io.kompose.service=nginx
    7. service/registry ClusterIP 10.97.40.62 <none> 5000/TCP 5d22h io.kompose.service=registry
    8.  
    9. NAME ENDPOINTS AGE
    10. endpoints/nginx 10.0.1.21:443 5d22h
    11. endpoints/registry 10.0.1.242:5000 5d22h
  • Hi @jarednielsen,

    Thank you for the detailed output.

    The first reason for concern is the recent restart (37s ago) of both pods - nginx and registry. Is this a recent node restart? Or a recent run of easyregistry.yaml?

    The second reason for concern is the discrepancy between the pod IP addresses and the endpoint IP addresses. The endpoint IPs should match the pod IPs, respectively.
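
    For example, these two should report the same address for the registry (label and Service name taken from the lab's kompose output):

    ```
    # The registry pod's IP and the Service's endpoint address should match
    kubectl get po -l io.kompose.service=registry -o wide
    kubectl get ep registry
    ```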

    Did you happen to run the easyregistry.yaml several times in a row?
    Is the node ip-172-31-17-33 your worker?
    What are the events (the very last section of the output) displayed by the following commands?
    kubectl describe po nginx-6b47bcc6c6-97rfh
    kubectl describe po registry-66dbfdc555-qr5ss

    What is the output of:
    find $HOME -name local-repo-setup.sh

    What are the outputs of the following commands (from each node - cp and worker):
    sudo cat /etc/containers/registries.conf.d/registry.conf
    grep endpoint /etc/containerd/config.toml

    Are the following commands listing multiple nginx and registry pods?
    kubectl get po nginx -o wide
    kubectl get po registry -o wide

    ... and multiple endpoints?
    kubectl get ep nginx
    kubectl get ep registry

    Regards,
    -Chris

  • Hey @chrispokorni!
    The recent restart is due to starting and stopping AWS instances. There's no discrepancy after restarting the instance:

    1. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    2. pod/nginx-6b47bcc6c6-97rfh 1/1 Running 3 (5m56s ago) 6d3h 10.0.1.174 ip-172-31-17-33 <none> <none>
    3. pod/registry-66dbfdc555-qr5ss 1/1 Running 3 (5m57s ago) 6d3h 10.0.1.197 ip-172-31-17-33 <none> <none>
    4.  
    5. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    6. service/nginx ClusterIP 10.103.74.92 <none> 443/TCP 6d3h io.kompose.service=nginx
    7. service/registry ClusterIP 10.97.40.62 <none> 5000/TCP 6d3h io.kompose.service=registry
    8.  
    9. NAME ENDPOINTS AGE
    10. endpoints/nginx 10.0.1.174:443 6d3h
    11. endpoints/registry 10.0.1.197:5000 6d3h

    I only ran easyregistry.yaml once (as far as I recall).

    Node ip-172-31-17-33 is my control plane.

  • Here's the output of kubectl describe po nginx-6b47bcc6c6-97rfh:

    1. Name: nginx-6b47bcc6c6-97rfh
    2. Namespace: default
    3. Priority: 0
    4. Service Account: default
    5. Node: ip-172-31-17-33/172.31.17.33
    6. Start Time: Wed, 22 Nov 2023 15:50:01 +0000
    7. Labels: io.kompose.service=nginx
    8. pod-template-hash=6b47bcc6c6
    9. Annotations: <none>
    10. Status: Running
    11. IP: 10.0.1.253
    12. IPs:
    13. IP: 10.0.1.253
    14. Controlled By: ReplicaSet/nginx-6b47bcc6c6
    15. Containers:
    16. nginx:
    17. Container ID: containerd://a5fb3bb989266311c4b71b172c2e637e9a2c1e729bed976a88d9a85d1718125b
    18. Image: nginx:1.12
    19. Image ID: docker.io/library/nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
    20. Port: 443/TCP
    21. Host Port: 0/TCP
    22. State: Running
    23. Started: Tue, 28 Nov 2023 19:04:37 +0000
    24. Last State: Terminated
    25. Reason: Unknown
    26. Exit Code: 255
    27. Started: Tue, 28 Nov 2023 18:55:17 +0000
    28. Finished: Tue, 28 Nov 2023 19:04:23 +0000
    29. Ready: True
    30. Restart Count: 4
    31. Environment: <none>
    32. Mounts:
    33. /etc/nginx/conf.d from nginx-claim0 (rw)
    34. /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtwbk (ro)
    35. Conditions:
    36. Type Status
    37. Initialized True
    38. Ready True
    39. ContainersReady True
    40. PodScheduled True
    41. Volumes:
    42. nginx-claim0:
    43. Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    44. ClaimName: nginx-claim0
    45. ReadOnly: false
    46. kube-api-access-mtwbk:
    47. Type: Projected (a volume that contains injected data from multiple sources)
    48. TokenExpirationSeconds: 3607
    49. ConfigMapName: kube-root-ca.crt
    50. ConfigMapOptional: <nil>
    51. DownwardAPI: true
    52. QoS Class: BestEffort
    53. Node-Selectors: <none>
    54. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    55. node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    56. Events:
    57. Type Reason Age From Message
    58. ---- ------ ---- ---- -------
    59. Warning FailedScheduling 6d3h default-scheduler 0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
    60. Warning FailedScheduling 6d3h (x2 over 6d3h) default-scheduler 0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
    61. Normal Scheduled 6d3h default-scheduler Successfully assigned default/nginx-6b47bcc6c6-97rfh to ip-172-31-17-33
    62. Normal Pulled 6d3h kubelet Container image "nginx:1.12" already present on machine
    63. Normal Created 6d3h kubelet Created container nginx
    64. Normal Started 6d3h kubelet Started container nginx
    65. Normal SandboxChanged 6d kubelet Pod sandbox changed, it will be killed and re-created.
    66. Normal Pulled 6d kubelet Container image "nginx:1.12" already present on machine
    67. Normal Created 6d kubelet Created container nginx
    68. Normal Started 6d kubelet Started container nginx
    69. Normal SandboxChanged 5h8m kubelet Pod sandbox changed, it will be killed and re-created.
    70. Normal Pulled 5h8m kubelet Container image "nginx:1.12" already present on machine
    71. Normal Created 5h8m kubelet Created container nginx
    72. Normal Started 5h8m kubelet Started container nginx
    73. Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
    74. Normal Pulled 10m kubelet Container image "nginx:1.12" already present on machine
    75. Normal Created 10m kubelet Created container nginx
    76. Normal Started 10m kubelet Started container nginx
    77. Warning NodeNotReady 5m30s node-controller Node is not ready
    78. Normal SandboxChanged 109s kubelet Pod sandbox changed, it will be killed and re-created.
    79. Normal Pulled 97s kubelet Container image "nginx:1.12" already present on machine
    80. Normal Created 97s kubelet Created container nginx
    81. Normal Started 97s kubelet Started container nginx
  • Here's the output of kubectl describe po registry-66dbfdc555-qr5ss:

    1. Name: registry-66dbfdc555-qr5ss
    2. Namespace: default
    3. Priority: 0
    4. Service Account: default
    5. Node: ip-172-31-17-33/172.31.17.33
    6. Start Time: Wed, 22 Nov 2023 15:50:01 +0000
    7. Labels: io.kompose.service=registry
    8. pod-template-hash=66dbfdc555
    9. Annotations: <none>
    10. Status: Running
    11. IP: 10.0.1.210
    12. IPs:
    13. IP: 10.0.1.210
    14. Controlled By: ReplicaSet/registry-66dbfdc555
    15. Containers:
    16. registry:
    17. Container ID: containerd://f1f2ca571d40c1b16bc5c210107886704e3939cf817c637a0da0855d39cb09bb
    18. Image: registry:2
    19. Image ID: docker.io/library/registry@sha256:8a60daaa55ab0df4607c4d8625b96b97b06fd2e6ca8528275472963c4ae8afa0
    20. Port: 5000/TCP
    21. Host Port: 0/TCP
    22. State: Running
    23. Started: Tue, 28 Nov 2023 19:04:37 +0000
    24. Last State: Terminated
    25. Reason: Unknown
    26. Exit Code: 255
    27. Started: Tue, 28 Nov 2023 18:55:17 +0000
    28. Finished: Tue, 28 Nov 2023 19:04:23 +0000
    29. Ready: True
    30. Restart Count: 4
    31. Environment:
    32. REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    33. Mounts:
    34. /data from registry-claim0 (rw)
    35. /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsjz9 (ro)
    36. Conditions:
    37. Type Status
    38. Initialized True
    39. Ready True
    40. ContainersReady True
    41. PodScheduled True
    42. Volumes:
    43. registry-claim0:
    44. Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    45. ClaimName: registry-claim0
    46. ReadOnly: false
    47. kube-api-access-dsjz9:
    48. Type: Projected (a volume that contains injected data from multiple sources)
    49. TokenExpirationSeconds: 3607
    50. ConfigMapName: kube-root-ca.crt
    51. ConfigMapOptional: <nil>
    52. DownwardAPI: true
    53. QoS Class: BestEffort
    54. Node-Selectors: <none>
    55. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    56. node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    57. Events:
    58. Type Reason Age From Message
    59. ---- ------ ---- ---- -------
    60. Warning FailedScheduling 6d3h default-scheduler 0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
    61. Warning FailedScheduling 6d3h (x2 over 6d3h) default-scheduler 0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
    62. Normal Scheduled 6d3h default-scheduler Successfully assigned default/registry-66dbfdc555-qr5ss to ip-172-31-17-33
    63. Normal Pulled 6d3h kubelet Container image "registry:2" already present on machine
    64. Normal Created 6d3h kubelet Created container registry
    65. Normal Started 6d3h kubelet Started container registry
    66. Normal SandboxChanged 6d kubelet Pod sandbox changed, it will be killed and re-created.
    67. Normal Pulled 6d kubelet Container image "registry:2" already present on machine
    68. Normal Created 6d kubelet Created container registry
    69. Normal Started 6d kubelet Started container registry
    70. Normal SandboxChanged 5h9m kubelet Pod sandbox changed, it will be killed and re-created.
    71. Normal Pulled 5h8m kubelet Container image "registry:2" already present on machine
    72. Normal Created 5h8m kubelet Created container registry
    73. Normal Started 5h8m kubelet Started container registry
    74. Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
    75. Normal Pulled 11m kubelet Container image "registry:2" already present on machine
    76. Normal Created 11m kubelet Created container registry
    77. Normal Started 11m kubelet Started container registry
    78. Warning NodeNotReady 5m56s node-controller Node is not ready
    79. Normal SandboxChanged 2m16s kubelet Pod sandbox changed, it will be killed and re-created.
    80. Normal Pulled 2m4s kubelet Container image "registry:2" already present on machine
    81. Normal Created 2m4s kubelet Created container registry
    82. Normal Started 2m4s kubelet Started container registry
  • Here's the output of find $HOME -name local-repo-setup.sh:

    1. /home/ubuntu/local-repo-setup.sh
    2. /home/ubuntu/LFD259/SOLUTIONS/s_03/local-repo-setup.sh

    Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the CP respectively:

    1. [[registry]]
    2. location = "10.97.40.62:5000"
    3. insecure = true

    1. endpoint = ""
    2. endpoint = ["http://10.97.40.62:5000"]

    Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the WORKER respectively:

    1. cat: /etc/containers/registries.conf.d/registry.conf: No such file or directory

    1. endpoint = ""

    The following commands are not listing multiple nginx and registry pods:

    1. kubectl get po nginx -o wide
    2. kubectl get po registry -o wide

    On the CP:

    1. Error from server (NotFound): pods "nginx" not found
    2. Error from server (NotFound): pods "registry" not found

    On the worker:

    1. E1128 19:13:22.296309 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    2. E1128 19:13:22.296882 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    3. E1128 19:13:22.298355 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    4. E1128 19:13:22.299715 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    5. E1128 19:13:22.301115 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    6. The connection to the server localhost:8080 was refused - did you specify the right host or port?

    The endpoints on the control plane:
    ```
    nginx 10.0.1.253:443 6d3h
    registry 10.0.1.210:5000 6d3h
    ...
    ```

    And on the worker the connection is refused.

    Thanks for your help!

  • Hi @jarednielsen,

    It seems the local repo is not configured on the worker node, while the cp config looks correct. I suspect you may have missed the steps in the lab guide that configure the local repo on the worker node. Please revisit lab exercise 3.2 and run the steps showing "@worker" in the prompt on your worker node to correct the issue.

    Once you have completed those steps and rebooted your worker node, try the podman push command again; repeat it a few times if it hangs on the first run.
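
    Roughly, the worker needs the same two pieces of configuration you already have on the cp. A sketch using the registry address from your output (the lab guide's exact commands take precedence):

    ```
    # On the worker node (sketch only; follow the lab guide's exact steps).
    # 1) registries config, matching what the cp already has:
    sudo mkdir -p /etc/containers/registries.conf.d
    sudo tee /etc/containers/registries.conf.d/registry.conf <<'EOF'
    [[registry]]
    location = "10.97.40.62:5000"
    insecure = true
    EOF

    # 2) containerd: make the endpoint line in /etc/containerd/config.toml
    #    match the cp's ("http://10.97.40.62:5000"), then restart containerd:
    sudo systemctl restart containerd
    ```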

    Regards,
    -Chris

  • Hey @chrispokorni !
    There are no instructions for configuring the worker node in lab 3, exercises 1 and 2. The last command issued on the worker was in 2.2.

  • Hi @jarednielsen,

    Perhaps steps 9, 10, 15 of lab 3.2?

    Also, what OS is running on your EC2 instances?

    Regards,
    -Chris

  • Hi,
    I have the same problem.

    Here is the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    pod/nginx-6b47bcc6c6-2n6f8 0/1 Pending 0 24m <none> <none> <none> <none>
    pod/registry-c8d64bf8c-r5fsp 0/1 Pending 0 24m <none> <none> <none> <none>

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
    service/nginx ClusterIP 10.101.11.122 <none> 443/TCP 24m io.kompose.service=nginx
    service/registry ClusterIP 10.97.40.62 <none> 5000/TCP 24m io.kompose.service=registry

    NAME ENDPOINTS AGE
    endpoints/nginx 24m
    endpoints/registry 24m

  • Hi @margarita.salitrennik,

    What type of infrastructure hosts your Kubernetes cluster?

    Please provide the outputs of the following commands:

    kubectl get nodes -o wide
    kubectl get pods -A -o wide

    kubectl describe pod nginx-6b47bcc6c6-2n6f8
    kubectl describe pod registry-c8d64bf8c-r5fsp
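
    Since the earlier FailedScheduling events in this thread pointed at unbound PersistentVolumeClaims, checking the claims may also help (an extra check, not a lab step):

    ```
    # Pending pods with no endpoints are often waiting on unbound PVCs
    kubectl get pvc,pv
    ```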

    Regards,
    -Chris
