Issues in lab 3.2: hostIP set without hostPort; curl fails to connect when verifying repo

Hey folks!
I'm unable to proceed beyond lab 3.2.
When creating `easyregistry.yaml`, I get the following warning:

```
Warning: spec.template.spec.containers[0].ports[0]: hostIP set without hostPort: {Name: HostPort:0 ContainerPort:5000 Protocol:TCP HostIP:127.0.0.1}
```
When I run `kubectl get svc | grep registry`, I get the following output:

```
registry   ClusterIP   10.97.40.62   <none>   5000/TCP   174m
```
When I run the next step, #3, to verify the repo, it times out:

```
curl 10.97.40.62:5000/v2/_catalog
```
I'm on AWS and my inbound and outbound security group rules are the following:
- IP version: IPv4
- Type: All traffic
- Protocol: All
- Port range: All
- Source: 0.0.0.0/0
If I proceed to step #4 and run the following:

```
. $HOME/local-repo-setup.sh
```

the output confirms the repo was configured:

```
Local Repo configured, follow the next steps
```
No issues running the following:

```
sudo podman pull docker.io/library/alpine
sudo podman tag alpine $repo/tagtest
```
But when I run the following command, it hangs:

```
sudo podman push $repo/tagtest
```

And I get the following warning before it times out after three attempts:

```
Getting image source signatures
WARN[0120] Failed, retrying in 1s ... (1/3). Error: trying to reuse blob sha256:cc2447e1835a40530975ab80bb1f872fbab0f2a0faecf2ab16fbbb89b3589438 at destination: pinging container registry 10.97.40.62:5000: Get "http://10.97.40.62:5000/v2/": dial tcp 10.97.40.62:5000: i/o timeout
```
I assume this is related to the first warning about the hostPort, but I'm not sure how to correct that. What am I missing? Any help is greatly appreciated. Thanks!
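In case it helps narrow things down, here's a sketch of a node-level versus in-cluster connectivity check (the `netcheck` pod name is arbitrary; 10.97.40.62 is the registry ClusterIP from the output above):

```bash
# From the node: can the ClusterIP be reached at all?
curl --max-time 5 http://10.97.40.62:5000/v2/_catalog

# From inside the cluster, via a throwaway pod ("netcheck" is an
# arbitrary name); busybox wget's -T sets the timeout in seconds:
kubectl run netcheck --rm -it --image=alpine --restart=Never -- \
  wget -qO- -T 5 http://10.97.40.62:5000/v2/_catalog
```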
Comments
Hi @jarednielsen,
The `hostPort` warning is not an error, and it can be disregarded. What is the output of the following command?

```
kubectl get po,svc,ep -o wide -l io.kompose.service
```

Regards,
-Chris
Hey @chrispokorni!
Here's the output of `kubectl get po,svc,ep -o wide -l io.kompose.service`:

```
NAME                            READY   STATUS    RESTARTS      AGE     IP           NODE              NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-97rfh      1/1     Running   2 (37s ago)   5d22h   10.0.1.240   ip-172-31-17-33   <none>           <none>
pod/registry-66dbfdc555-qr5ss   1/1     Running   2 (37s ago)   5d22h   10.0.1.126   ip-172-31-17-33   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    5d22h   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   5d22h   io.kompose.service=registry

NAME                 ENDPOINTS         AGE
endpoints/nginx      10.0.1.21:443     5d22h
endpoints/registry   10.0.1.242:5000   5d22h
```
Hi @jarednielsen,
Thank you for the detailed output.
The first reason for concern is the recent restart (37s ago) of both pods, nginx and registry. Is this from a recent node restart, or a recent run of `easyregistry.yaml`?
The second reason for concern is the discrepancy between the pod IP addresses and the endpoint IP addresses. The endpoint IPs should match the pod IPs, respectively.
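For example, the two can be compared directly (a sketch; the label value and service name are taken from your output above):

```bash
# Pod IPs selected by the kompose label:
kubectl get po -l io.kompose.service=registry -o jsonpath='{.items[*].status.podIP}{"\n"}'
# Endpoint IPs recorded for the registry service:
kubectl get ep registry -o jsonpath='{.subsets[*].addresses[*].ip}{"\n"}'
```

Both commands should print the same address.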
Did you happen to run the `easyregistry.yaml` several times in a row?
Is the node ip-172-31-17-33 your worker?
What are the events (the very last section of the output) displayed by the following commands?

```
kubectl describe po nginx-6b47bcc6c6-97rfh
kubectl describe po registry-66dbfdc555-qr5ss
```

What is the output of:

```
find $HOME -name local-repo-setup.sh
```

What are the outputs of the following commands (from each node - cp and worker)?

```
sudo cat /etc/containers/registries.conf.d/registry.conf
grep endpoint /etc/containerd/config.toml
```

Are the following commands listing multiple nginx and registry pods?

```
kubectl get po nginx -o wide
kubectl get po registry -o wide
```

... and multiple endpoints?

```
kubectl get ep nginx
kubectl get ep registry
```

Regards,
-Chris
Hey @chrispokorni!
The recent restart is due to starting and stopping AWS instances. There's no discrepancy after restarting the instance:

```
NAME                            READY   STATUS    RESTARTS        AGE    IP           NODE              NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-97rfh      1/1     Running   3 (5m56s ago)   6d3h   10.0.1.174   ip-172-31-17-33   <none>           <none>
pod/registry-66dbfdc555-qr5ss   1/1     Running   3 (5m57s ago)   6d3h   10.0.1.197   ip-172-31-17-33   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
service/nginx      ClusterIP   10.103.74.92   <none>        443/TCP    6d3h   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62    <none>        5000/TCP   6d3h   io.kompose.service=registry

NAME                 ENDPOINTS         AGE
endpoints/nginx      10.0.1.174:443    6d3h
endpoints/registry   10.0.1.197:5000   6d3h
```

I only ran `easyregistry.yaml` once (as far as I recall).
Node ip-172-31-17-33 is my control plane.
Here's the output of `kubectl describe po nginx-6b47bcc6c6-97rfh`:

```
Name:             nginx-6b47bcc6c6-97rfh
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-172-31-17-33/172.31.17.33
Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
Labels:           io.kompose.service=nginx
                  pod-template-hash=6b47bcc6c6
Annotations:      <none>
Status:           Running
IP:               10.0.1.253
IPs:
  IP:  10.0.1.253
Controlled By:  ReplicaSet/nginx-6b47bcc6c6
Containers:
  nginx:
    Container ID:   containerd://a5fb3bb989266311c4b71b172c2e637e9a2c1e729bed976a88d9a85d1718125b
    Image:          nginx:1.12
    Image ID:       docker.io/library/nginx@sha256:72daaf46f11cc753c4eab981cbf869919bd1fee3d2170a2adeac12400f494728
    Port:           443/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Nov 2023 19:04:37 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Tue, 28 Nov 2023 18:55:17 +0000
      Finished:     Tue, 28 Nov 2023 19:04:23 +0000
    Ready:          True
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d from nginx-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtwbk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nginx-claim0
    ReadOnly:   false
  kube-api-access-mtwbk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/nginx-6b47bcc6c6-97rfh to ip-172-31-17-33
  Normal   Pulled            6d3h                 kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           6d3h                 kubelet            Created container nginx
  Normal   Started           6d3h                 kubelet            Started container nginx
  Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            6d                   kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           6d                   kubelet            Created container nginx
  Normal   Started           6d                   kubelet            Started container nginx
  Normal   SandboxChanged    5h8m                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            5h8m                 kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           5h8m                 kubelet            Created container nginx
  Normal   Started           5h8m                 kubelet            Started container nginx
  Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            10m                  kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           10m                  kubelet            Created container nginx
  Normal   Started           10m                  kubelet            Started container nginx
  Warning  NodeNotReady      5m30s                node-controller    Node is not ready
  Normal   SandboxChanged    109s                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            97s                  kubelet            Container image "nginx:1.12" already present on machine
  Normal   Created           97s                  kubelet            Created container nginx
  Normal   Started           97s                  kubelet            Started container nginx
```
Here's the output of `kubectl describe po registry-66dbfdc555-qr5ss`:

```
Name:             registry-66dbfdc555-qr5ss
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-172-31-17-33/172.31.17.33
Start Time:       Wed, 22 Nov 2023 15:50:01 +0000
Labels:           io.kompose.service=registry
                  pod-template-hash=66dbfdc555
Annotations:      <none>
Status:           Running
IP:               10.0.1.210
IPs:
  IP:  10.0.1.210
Controlled By:  ReplicaSet/registry-66dbfdc555
Containers:
  registry:
    Container ID:   containerd://f1f2ca571d40c1b16bc5c210107886704e3939cf817c637a0da0855d39cb09bb
    Image:          registry:2
    Image ID:       docker.io/library/registry@sha256:8a60daaa55ab0df4607c4d8625b96b97b06fd2e6ca8528275472963c4ae8afa0
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 28 Nov 2023 19:04:37 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Tue, 28 Nov 2023 18:55:17 +0000
      Finished:     Tue, 28 Nov 2023 19:04:23 +0000
    Ready:          True
    Restart Count:  4
    Environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY:  /data
    Mounts:
      /data from registry-claim0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsjz9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  registry-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry-claim0
    ReadOnly:   false
  kube-api-access-dsjz9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  6d3h                 default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  6d3h (x2 over 6d3h)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..
  Normal   Scheduled         6d3h                 default-scheduler  Successfully assigned default/registry-66dbfdc555-qr5ss to ip-172-31-17-33
  Normal   Pulled            6d3h                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           6d3h                 kubelet            Created container registry
  Normal   Started           6d3h                 kubelet            Started container registry
  Normal   SandboxChanged    6d                   kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            6d                   kubelet            Container image "registry:2" already present on machine
  Normal   Created           6d                   kubelet            Created container registry
  Normal   Started           6d                   kubelet            Started container registry
  Normal   SandboxChanged    5h9m                 kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            5h8m                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           5h8m                 kubelet            Created container registry
  Normal   Started           5h8m                 kubelet            Started container registry
  Normal   SandboxChanged    11m                  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            11m                  kubelet            Container image "registry:2" already present on machine
  Normal   Created           11m                  kubelet            Created container registry
  Normal   Started           11m                  kubelet            Started container registry
  Warning  NodeNotReady      5m56s                node-controller    Node is not ready
  Normal   SandboxChanged    2m16s                kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            2m4s                 kubelet            Container image "registry:2" already present on machine
  Normal   Created           2m4s                 kubelet            Created container registry
  Normal   Started           2m4s                 kubelet            Started container registry
```
Here's the output of `find $HOME -name local-repo-setup.sh`:

```
/home/ubuntu/local-repo-setup.sh
/home/ubuntu/LFD259/SOLUTIONS/s_03/local-repo-setup.sh
```

Here's the output of `sudo cat /etc/containers/registries.conf.d/registry.conf` and `grep endpoint /etc/containerd/config.toml`, on the CP respectively:

```
[[registry]]
location = "10.97.40.62:5000"
insecure = true
```

```
endpoint = ""
endpoint = ["http://10.97.40.62:5000"]
```

Here's the output of the same two commands, on the WORKER respectively:

```
cat: /etc/containers/registries.conf.d/registry.conf: No such file or directory
```

```
endpoint = ""
```

The following commands are not listing multiple nginx and registry pods:

```
kubectl get po nginx -o wide
kubectl get po registry -o wide
```

On the CP:

```
Error from server (NotFound): pods "nginx" not found
Error from server (NotFound): pods "registry" not found
```

On the worker:

```
E1128 19:13:22.296309 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.296882 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.298355 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.299715 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1128 19:13:22.301115 1998 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

The endpoints on the control plane:

```
nginx      10.0.1.253:443    6d3h
registry   10.0.1.210:5000   6d3h
```

... and on the worker the connection is refused.
Thanks for your help!
Hi @jarednielsen,
It seems the worker node does not have the local repo configured, while the cp config appears to be correct. I suspect you may have missed the steps in the lab guide that configure the local repo on the worker node. Please revisit lab exercise 3.2 and run the steps showing "@worker" in the prompt on your worker node to correct the issue.
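For orientation, the worker should end up with the same configuration your cp already shows. Here is a minimal sketch of that end state, reconstructed from your cp output (follow the lab guide's exact steps rather than copying this verbatim):

```bash
# Run on the WORKER node. A sketch reconstructed from the cp config
# shown earlier in this thread -- the lab guide's steps are authoritative.

# 1) Mark the registry as insecure (plain HTTP) for podman:
sudo mkdir -p /etc/containers/registries.conf.d
sudo tee /etc/containers/registries.conf.d/registry.conf <<'EOF'
[[registry]]
location = "10.97.40.62:5000"
insecure = true
EOF

# 2) Edit /etc/containerd/config.toml so the registry mirror endpoint
#    matches the cp, i.e. `grep endpoint /etc/containerd/config.toml`
#    should print:
#        endpoint = ["http://10.97.40.62:5000"]

# 3) Restart containerd so the change takes effect:
sudo systemctl restart containerd
```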
Once you have completed the steps and rebooted your worker node, try the `podman push` command again. Try to repeat it a few times if it hangs on the first run.
Regards,
-Chris
Hey @chrispokorni!
There are no instructions for configuring the worker node in lab 3, exercises 1 and 2. The last command issued on the worker was in 2.2.
Hi @jarednielsen,
Perhaps steps 9, 10, and 15 of lab 3.2?
Also, what OS is running on your EC2s?
Regards,
-Chris
Hi,
I have the same problem. Output of `kubectl get po,svc,ep -o wide -l io.kompose.service`:

```
NAME                           READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod/nginx-6b47bcc6c6-2n6f8     0/1     Pending   0          24m   <none>   <none>   <none>           <none>
pod/registry-c8d64bf8c-r5fsp   0/1     Pending   0          24m   <none>   <none>   <none>           <none>

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/nginx      ClusterIP   10.101.11.122   <none>        443/TCP    24m   io.kompose.service=nginx
service/registry   ClusterIP   10.97.40.62     <none>        5000/TCP   24m   io.kompose.service=registry

NAME                 ENDPOINTS   AGE
endpoints/nginx      <none>      24m
endpoints/registry   <none>      24m
```
What type of infrastructure hosts your Kubernetes cluster?
Please provide the outputs of the following commands:

```
kubectl get nodes -o wide
kubectl get pods -A -o wide
kubectl describe pod nginx-6b47bcc6c6-2n6f8
kubectl describe pod registry-c8d64bf8c-r5fsp
```
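Since the FailedScheduling events earlier in this thread pointed at unbound PersistentVolumeClaims, the claims may also be worth checking directly (a sketch; the claim names are taken from the describe outputs above and may differ in your cluster):

```bash
# Pending pods with empty endpoints often trace back to unbound PVCs:
kubectl get pvc,pv
kubectl describe pvc nginx-claim0 registry-claim0
```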
Regards,
-Chris