Issues in lab 3.2: hostIP set without hostPort; curl fails to connect when verifying repo
Hey folks!
I'm unable to proceed beyond lab 3.2.
When creating easyregistry.yaml, I get the following warning:
Warning: spec.template.spec.containers[0].ports[0]: hostIP set without hostPort: {Name: HostPort:0 ContainerPort:5000 Protocol:TCP HostIP:127.0.0.1}
When I run kubectl get svc | grep registry, I get the following output:
registry ClusterIP 10.97.40.62 <none> 5000/TCP 174m
When I run the next step, #3, to verify the repo, it times out:
curl 10.97.40.62:5000/v2/_catalog
I'm on AWS and my inbound and outbound security group rules are the following:
- IP version: IPv4
- Type: All traffic
- Protocol: All
- Port range: All
- Source: 0.0.0.0/0
If I proceed to step #4 and run the following:
. $HOME/local-repo-setup.sh
The output confirms the repo was configured:
Local Repo configured, follow the next steps
No issues running the following:
sudo podman pull docker.io/library/alpine
sudo podman tag alpine $repo/tagtest
But when I run the following command, it hangs:
sudo podman push $repo/tagtest
And I get the following warning before it times out after three attempts:
Getting image source signatures
WARN[0120] Failed, retrying in 1s ... (1/3). Error: trying to reuse blob sha256:cc2447e1835a40530975ab80bb1f872fbab0f2a0faecf2ab16fbbb89b3589438 at destination: pinging container registry 10.97.40.62:5000: Get "http://10.97.40.62:5000/v2/": dial tcp 10.97.40.62:5000: i/o timeout
I assume this is related to the first warning about the hostPort, but I'm not sure how to correct that. What am I missing? Any help is greatly appreciated. Thanks!
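One detail worth noting for this symptom: a ClusterIP like 10.97.40.62 is a cluster-internal virtual IP handled by kube-proxy, so the AWS security group rules above never see that traffic. A quick way to narrow the failure down is to test the Service from a throwaway pod; this is a sketch, where the pod name tmp and the busybox image are arbitrary choices:

# Confirm the $repo variable set by sourcing local-repo-setup.sh looks sane
echo $repo    # expect something like 10.97.40.62:5000

# Query the registry catalog from inside the cluster
kubectl run tmp --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://10.97.40.62:5000/v2/_catalog

If the in-pod request succeeds while curl from the node times out, the problem is node-to-ClusterIP routing (kube-proxy or the CNI plugin) rather than the registry itself.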
Comments
Hi @jarednielsen,
The hostPort warning is not an error, and it can be disregarded. What is the output of the following command?
kubectl get po,svc,ep -o wide -l io.kompose.service
Regards,
-Chris
Hey @chrispokorni!
Here's the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

NAME                            READY   STATUS    RESTARTS      AGE     IP           NODE              NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-97rfh      1/1     Running   2 (37s ago)   5d22h   10.0.1.240   ip-172-31-17-33   <none>           <none>
pod/registry-66dbfdc555-qr5ss   1/1     Running   2 (37s ago)   5d22h   10.0.1.126   ip-172-31-17-33   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    5d22h   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   5d22h   io.kompose.service=registry

NAME                 ENDPOINTS         AGE
endpoints/nginx      10.0.1.21:443     5d22h
endpoints/registry   10.0.1.242:5000   5d22h
Hi @jarednielsen,
Thank you for the detailed output.
The first reason for concern is the recent restart (37s ago) of both pods - nginx and registry. Is this a recent node restart? Or a recent run of easyregistry.yaml?
The second reason for concern is the discrepancy between the pod IP addresses and the endpoint IP addresses. The endpoint IPs should match the pod IPs, respectively.
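To compare the two directly, one option is a pair of jsonpath queries; a sketch assuming a single pod and a single endpoint subset:

kubectl get po -l io.kompose.service=registry -o jsonpath='{.items[0].status.podIP}{"\n"}'
kubectl get ep registry -o jsonpath='{.subsets[0].addresses[0].ip}{"\n"}'

The two values should be identical; a mismatch usually means the endpoints object is stale.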
Did you happen to run the easyregistry.yaml several times in a row?
Is the node ip-172-31-17-33 your worker?
What are the events (the very last section of the output) displayed by the following commands?
kubectl describe po nginx-6b47bcc6c6-97rfh
kubectl describe po registry-66dbfdc555-qr5ss
What is the output of:
find $HOME -name local-repo-setup.sh
What are the outputs of the following commands (from each node - cp and worker):
sudo cat /etc/containers/registries.conf.d/registry.conf
grep endpoint /etc/containerd/config.toml
Are the following commands listing multiple nginx and registry pods?
kubectl get po nginx -o wide
kubectl get po registry -o wide
... and multiple endpoints?
kubectl get ep nginx
kubectl get ep registry
Regards,
-Chris
Hey @chrispokorni!
The recent restart is due to starting and stopping AWS instances. There's no discrepancy after restarting the instance:

NAME                            READY   STATUS    RESTARTS        AGE    IP           NODE              NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-97rfh      1/1     Running   3 (5m56s ago)   6d3h   10.0.1.174   ip-172-31-17-33   <none>           <none>
pod/registry-66dbfdc555-qr5ss   1/1     Running   3 (5m57s ago)   6d3h   10.0.1.197   ip-172-31-17-33   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    6d3h   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   6d3h   io.kompose.service=registry

NAME                 ENDPOINTS         AGE
endpoints/nginx      10.0.1.174:443    6d3h
endpoints/registry   10.0.1.197:5000   6d3h
I only ran easyregistry.yaml once (as far as I recall).
Node ip-172-31-17-33 is my control plane.
Here's the output of kubectl describe po nginx-6b47bcc6c6-97rfh:

Name:             nginx-6b47bcc6c6-97rfh
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-172-31-17-33/172.31.17.33
Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
Labels:           io.kompose.service=nginx
                  pod-template-hash=6b47bcc6c6
Annotations:      <none>
Status:           Running
IP:               10.0.1.253
IPs:
  IP:  10.0.1.253
Controlled By:  ReplicaSet/nginx-6b47bcc6c6
Containers:
  nginx:
    Container ID:   containerd://a5fb3bb989266311c4b71b172c2e637e9a2c1e729bed976a88d9a85d1718125b
    Image:          nginx:1.12
    Image ID:       docker.io/library/nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
    Port:           443/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Nov 2023 19:04:37 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Tue, 28 Nov 2023 18:55:17 +0000
      Finished:     Tue, 28 Nov 2023 19:04:23 +0000
    Ready:          True
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d from nginx-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtwbk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nginx-claim0
    ReadOnly:   false
  kube-api-access-mtwbk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/nginx-6b47bcc6c6-97rfh to ip-172-31-17-33
  Normal   Pulled            6d3h                 kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           6d3h                 kubelet            Created container nginx
  Normal   Started           6d3h                 kubelet            Started container nginx
  Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            6d                   kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           6d                   kubelet            Created container nginx
  Normal   Started           6d                   kubelet            Started container nginx
  Normal   SandboxChanged    5h8m                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            5h8m                 kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           5h8m                 kubelet            Created container nginx
  Normal   Started           5h8m                 kubelet            Started container nginx
  Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            10m                  kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           10m                  kubelet            Created container nginx
  Normal   Started           10m                  kubelet            Started container nginx
  Warning  NodeNotReady      5m30s                node-controller    Node is not ready
  Normal   SandboxChanged    109s                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            97s                  kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           97s                  kubelet            Created container nginx
  Normal   Started           97s                  kubelet            Started container nginx
Here's the output of kubectl describe po registry-66dbfdc555-qr5ss:

Name:             registry-66dbfdc555-qr5ss
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-172-31-17-33/172.31.17.33
Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
Labels:           io.kompose.service=registry
                  pod-template-hash=66dbfdc555
Annotations:      <none>
Status:           Running
IP:               10.0.1.210
IPs:
  IP:  10.0.1.210
Controlled By:  ReplicaSet/registry-66dbfdc555
Containers:
  registry:
    Container ID:   containerd://f1f2ca571d40c1b16bc5c210107886704e3939cf817c637a0da0855d39cb09bb
    Image:          registry:2
    Image ID:       docker.io/library/registry@sha256:8a60daaa55ab0df4607c4d8625b96b97b06fd2e6ca8528275472963c4ae8afa0
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Nov 2023 19:04:37 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Tue, 28 Nov 2023 18:55:17 +0000
      Finished:     Tue, 28 Nov 2023 19:04:23 +0000
    Ready:          True
    Restart Count:  4
    Environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY:  /data
    Mounts:
      /data from registry-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsjz9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  registry-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry-claim0
    ReadOnly:   false
  kube-api-access-dsjz9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/registry-66dbfdc555-qr5ss to ip-172-31-17-33
  Normal   Pulled            6d3h                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           6d3h                 kubelet            Created container registry
  Normal   Started           6d3h                 kubelet            Started container registry
  Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            6d                   kubelet            Container image "registry:2" already present on machine
  Normal   Created           6d                   kubelet            Created container registry
  Normal   Started           6d                   kubelet            Started container registry
  Normal   SandboxChanged    5h9m                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            5h8m                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           5h8m                 kubelet            Created container registry
  Normal   Started           5h8m                 kubelet            Started container registry
  Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            11m                  kubelet            Container image "registry:2" already present on machine
  Normal   Created           11m                  kubelet            Created container registry
  Normal   Started           11m                  kubelet            Started container registry
  Warning  NodeNotReady      5m56s                node-controller    Node is not ready
  Normal   SandboxChanged    2m16s                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            2m4s                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           2m4s                 kubelet            Created container registry
  Normal   Started           2m4s                 kubelet            Started container registry
Here's the output of find $HOME -name local-repo-setup.sh:

/home/ubuntu/local-repo-setup.sh
/home/ubuntu/LFD259/SOLUTIONS/s_03/local-repo-setup.sh

Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the CP respectively:

[[registry]]
location = "10.97.40.62:5000"
insecure = true

endpoint = ""
endpoint = ["http://10.97.40.62:5000"]
Here's the output of sudo cat /etc/containers/registries.conf.d/registry.conf and grep endpoint /etc/containerd/config.toml, on the WORKER respectively:

cat: /etc/containers/registries.conf.d/registry.conf: No such file or directory

endpoint = ""
The following commands are not listing multiple nginx and registry pods:

kubectl get po nginx -o wide
kubectl get po registry -o wide
On the CP:
Error from server (NotFound): pods "nginx" not found
Error from server (NotFound): pods "registry" not found
On the worker:
E1128 19:13:22.296309 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.296882 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.298355 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.299715 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.301115 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
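That worker output is expected rather than registry-related: with no kubeconfig present on the node, kubectl falls back to its default server of localhost:8080. If you ever want kubectl to work from the worker, copying the admin kubeconfig over is enough; a sketch, where the cp address is a placeholder and the ubuntu user matches the home directories seen above:

mkdir -p $HOME/.kube
# copy the kubeconfig from the control plane node (<cp-ip> is a placeholder)
scp ubuntu@<cp-ip>:.kube/config $HOME/.kube/config
kubectl get nodes    # should now reach the API server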
The endpoints on the control plane:

nginx      10.0.1.253:443    6d3h
registry   10.0.1.210:5000   6d3h

And on the worker, the connection is refused.
Thanks for your help!
Hi @jarednielsen,
It seems the local repo is not configured on the worker node, while the cp config looks correct. I suspect you may have missed the steps in the lab guide that configure the local repo on the worker node. Please revisit lab exercise 3.2 and run the steps showing "@worker" in the prompt on your worker node to correct the issue.
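For reference, a worker configured the same way as the cp would be expected to end up with files along these lines; a sketch assembled from the cp outputs earlier in the thread, where the exact containerd section header depends on the containerd version:

# /etc/containers/registries.conf.d/registry.conf (used by podman)
[[registry]]
location = "10.97.40.62:5000"
insecure = true

# /etc/containerd/config.toml (used by the kubelet's runtime) - mirror entry
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.97.40.62:5000"]
  endpoint = ["http://10.97.40.62:5000"]

A reboot, or at least sudo systemctl restart containerd, is needed for the config.toml change to take effect.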
Once you have completed those steps and rebooted your worker node, try the podman push command again. Repeat it a few times if it hangs on the first run.
Regards,
-Chris
Hey @chrispokorni!
There are no instructions for configuring the worker node in lab 3, exercises 1 and 2. The last command issued on the worker was in 2.2.
Hi @jarednielsen,
Perhaps steps 9, 10, and 15 of lab 3.2?
Also, what OS are your EC2s running?
Regards,
-Chris
Hi,
I have the same problem. Here's the output of kubectl get po,svc,ep -o wide -l io.kompose.service:

NAME                           READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-2n6f8     0/1     Pending   0          24m   <none>   <none>   <none>           <none>
pod/registry-c8d64bf8c-r5fsp   0/1     Pending   0          24m   <none>   <none>   <none>           <none>

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/nginx      ClusterIP   10.101.11.122   <none>        443/TCP    24m   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62     <none>        5000/TCP   24m   io.kompose.service=registry

NAME                 ENDPOINTS   AGE
endpoints/nginx      <none>      24m
endpoints/registry   <none>      24m
What type of infrastructure hosts your Kubernetes cluster?
Please provide the outputs of the following commands:
kubectl get nodes -o wide
kubectl get pods -A -o wide
kubectl describe pod nginx-6b47bcc6c6-2n6f8
kubectl describe pod registry-c8d64bf8c-r5fsp
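Given the Pending status and the empty endpoints above, it may also be worth checking the PersistentVolumeClaims while gathering those outputs, since the describe events earlier in this thread show scheduling blocked by unbound PVCs; a quick check:

kubectl get pvc
kubectl get pv

A PVC stuck in Pending with no matching PersistentVolume would keep both pods unschedulable.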
Regards,
-Chris