Lab 10.1. "Connection refused" after Ingress setup
Hi!
I'm following the guide from "Lab 10.1. Advanced Service Exposure". All steps succeeded, but after applying ingress.rule.yaml (I needed to change it a bit after format changes like serviceName -> service.name, etc.) with kubectl create -f ingress.rule.yaml, I get the following error while trying to test my setup:
$ curl -H "Host: www.example.com" http://k8smaster/
curl: (7) Failed to connect to k8smaster port 80: Connection refused
Seems like something is not set up correctly, but I'm new to k8s and have no idea how to investigate and solve it.
Could anybody help me?
Thanks in advance.
Comments
Hi @Gim6626,
On the machine where the curl command is run, is k8smaster configured as an alias to your control-plane/master node, or to any other node where a Traefik controller is running?
Regards,
-Chris
Thank you for trying to help me!
Tried again with the original config; here is the result:
$ kubectl create -f ingress.rule.yaml
error: error validating "ingress.rule.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false
And here is the link where I found a fix: https://stackoverflow.com/questions/64125048/get-error-unknown-field-servicename-in-io-k8s-api-networking-v1-ingressbacken
Sure:
$ grep k8smaster /etc/hosts
192.168.56.104 k8smaster
$ ping k8smaster
PING k8smaster (192.168.56.104) 56(84) bytes of data.
64 bytes from k8smaster (192.168.56.104): icmp_seq=1 ttl=64 time=0.050 ms
Hi @Gim6626,
The validation error is YAML related. Just recently the Ingress API resource matured from beta to stable support, which introduced several changes into the YAML definition of the object. The first affected property is the API version, which changed from networking.k8s.io/v1beta1 to networking.k8s.io/v1. While the beta version is being deprecated (before it is no longer supported), the API will accept both versions: beta (v1beta1) and stable (v1).
The stable level of support also introduces changes in the format of the YAML manifest that defines the Ingress resource. During this transition phase, the Kubernetes API supports both Ingress resource API formats, but the API version declared at the top of the definition file and the included properties have to match.
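For example, the backend section of the rule changed shape between the two versions; a minimal side-by-side sketch, using the lab's secondapp service on port 80:

```yaml
# v1beta1 (deprecated) backend format:
# backend:
#   serviceName: secondapp
#   servicePort: 80

# v1 (stable) backend format:
backend:
  service:
    name: secondapp
    port:
      number: 80
```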
From your output it seems that your nodes may be assigned IP addresses (192.168.56.104) from the same range as the default pod network range 192.168.0.0/16 managed by the Calico plugin. Is this your case? This causes networking, routing, and DNS issues in your cluster. The network IP addresses of your nodes should not overlap with the pod network or with the Services' ClusterIP subnet.
Regards,
-Chris
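A quick way to check the overlap Chris describes, sketched in Python with the standard ipaddress module (the helper name overlaps and the CIDRs below are illustrative, taken from addresses mentioned in this thread; adjust to your cluster):

```python
import ipaddress

def overlaps(node_ip: str, pod_cidr: str) -> bool:
    """Return True if a node IP falls inside the pod network CIDR."""
    return ipaddress.ip_address(node_ip) in ipaddress.ip_network(pod_cidr)

# Node IP from the original post vs. Calico's default pod range:
print(overlaps("192.168.56.104", "192.168.0.0/16"))  # True -> overlap, problematic
# A node on a 10.1.3.0/24 subnet does not overlap:
print(overlaps("10.1.3.7", "192.168.0.0/16"))        # False -> no overlap
```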
Hi there,
I'm having the same issue for the same reason (I think!), but I'm not sure how to resolve it.
When I run kubectl create -f ingress.rule.yaml using the course tarball's ingress.rule.yaml, I get:
error: error validating "ingress.rule.orig.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false
This is due to the v1beta1 -> v1 switch; switching networking.k8s.io/v1 to networking.k8s.io/v1beta1 resolves the validation error for me.
The nginx (secondapp) is available at the ClusterIPs (curl -H "Host: www.example.com" http://192.168.226.65:80/ and curl -H "Host: www.example.com" http://10.97.198.188:80/ both return the service), and the service is available on my external IP (via the NodePort on the high default port 30713). However, I can't seem to allow external access via port 80, which leads me to think this is still an Ingress configuration issue.
For reference - my master node IP is 10.2.0.7 (and this is the alias set in /etc/hosts for k8smaster)
A couple of other things I have noticed:
1) kubectl get services shows no external IP set:
elf@master-1:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 443/TCP 109m
secondapp NodePort 10.97.198.188 80:30713/TCP 84m
2) kubectl get ingress shows no address for the ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-test www.example.com 80 12m
So I'm reasonably certain there's some configuration issue with the ingress, even though it is being accepted, but it could also be the configuration of the backend service.
Any pointers/suggestions would be much appreciated!
Cheers
As a follow-up: kubectl describe ingress ended up showing that this was due to an error with the default-http-backend (which is why the backend wasn't pointing to realistic cluster IPs). Output:
kubectl describe ingress
Name: ingress-test
Namespace: default
Address:
Default backend: default-http-backend:80 ()
Rules:
Host Path Backends
---- ---- --------
www.example.com
/ secondapp:80 (192.168.226.65:80)
thirdpage.org
/ thirdpage:80 (192.168.226.66:80)
Annotations: kubernetes.io/ingress.class: traefik
Events:
Hi @kaiwhata,
The Ingress resource just graduated to stable level of support, and the format of its yaml manifest changed slightly. If you take a look at the documentation on Ingress, you will find in this section the most up-to-date yaml format that can be used for this lab exercise.
If you pay close attention to the lab steps where the ingress is tested,
curl
is run against the IP addresses of the Nodes (both public and private), not the Pod IPs, and not the Service ClusterIPs.
Regards,
-Chris
Thanks Chris. For clarity on my part: my node IP in this instance is 10.2.0.7 (which is the master, the same as the setup given in Lab 10.1). Attempting to curl this (or the k8smaster alias) yields:
curl -H "Host: www.example.com" http://10.2.0.7/
curl: (7) Failed to connect to 10.2.0.7 port 80: Connection refused
My curling of the other IPs was simply to demonstrate that the service appears to be running correctly. For reference (and to help anyone else stuck with this issue), the two ingress.rule.yaml files used for the v1 and v1beta1 API versions are:
cat ingress.rule.v1betav1.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: secondapp
          servicePort: 80
cat ingress.rule.v1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: "www.example.com"
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/"
        backend:
          service:
            name: secondapp
            port:
              number: 80
If you have any more debugging advice, that would be much appreciated.
Ah ok, I think I understand now. From Kubernetes v1.18, alongside defining the Ingress you must also define an IngressClass (https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation). Here's my IngressClass file:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: example.com/traefik-ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: traefik
No errors upon creation, but it still doesn't appear to resolve the issue. I'll attempt to use a default IngressClass instead of traefik.
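For anyone else trying this: marking an IngressClass as the cluster default is done with an annotation, not a separate resource type. A sketch based on my class above (the controller string is from my manifest and may differ in your setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
  annotations:
    # Marks this class as the cluster default, used by Ingress
    # resources that do not specify a class themselves.
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/traefik-ingress-controller
```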
Hi @kaiwhata,
The API seems to support the annotation of the Ingress resource in Kubernetes v1.20, without producing any errors or validation warnings.
The curl header inserted with the -H flag is helpful when testing the Ingress; curling Pod IPs and Service ClusterIPs directly does not require it.
After deploying the Ingress resource and the Traefik ingress controller, how many traefik-ingress-controller-... Pod instances do you have running in your cluster? Is there an instance running on your master node (IP 10.2.0.7)?
Regards,
-Chris
Hi @chrispokorni and all,
I am running into exactly the same issue as @kaiwhata: Connection refused when curling the k8smaster alias or the master node's "public" IP to reach secondapp.
First, I confirm that the API accepts both versions, v1 and v1beta1. Using v1beta1 will only lead to a warning, and the yaml output will be reformatted to the new standard. Therefore the current issue is not related to the version of the API.
To answer your question, as expected there is one traefik-ingress-controller-... pod running in the kube-system namespace, on the **worker** node. Should this instance run on the master node? If so, I'll need to untaint the master node to let it run there, but I don't believe this is the issue.
Still new to Kubernetes, so any ideas to troubleshoot this issue would be greatly appreciated.
Also, has anyone tested the nginx ingress controller so far, instead of the traefik one?
Thanks and Regards,
Ben
Note that I updated the Traefik image from 1.7.13 to 1.7.24 as suggested in another thread. Still the same issue.
Also, note that there is no IP overlap between the Node and Pod subnets, as I've seen reported for others: the nodes are on 10.1.3.0/24 and the Pods on the 192.168.0.0/16 CIDR.
Hi @benrio,
The behavior of your ingress is expected, based on the details you shared. With an ingress controller instance running only on the worker node, you'd need to test your ingress with the worker node's IP address. This may be the result of a missed step in Lab 3, if the taint still exists on your master node.
Regards,
-Chris
Master untainted, Ingress controller now running on the master node: everything now works as expected.
Thank you very much!