Lab 7.2 <error: endpoints "default-http-backend" not found>
I was working through exercise 7.2 and failed at Step 8:
curl -H "Host: www.example.com" http://10.128.0.7/
I did my check by running:
kubectl describe ing ingress-test -n default
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name:             ingress-test
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host             Path  Backends
  ----             ----  --------
  www.example.com  /     secondapp:80 (192.168.235.153:80)
Annotations:       kubernetes.io/ingress.class: traefik
Events:            <none>
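As the warning notes, extensions/v1beta1 Ingress is deprecated. For reference, a roughly equivalent manifest under networking.k8s.io/v1 (a sketch assembled from the names and ports in the describe output above, not the lab's own file) would be:

```yaml
# Hypothetical v1 rewrite of the ingress-test resource shown above;
# adjust the annotation/ingressClassName to match your traefik install.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: secondapp
            port:
              number: 80
```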
I also checked the svc in all namespaces:
kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                ClusterIP      10.96.0.1        <none>        443/TCP                  19d
default       nginx                     ClusterIP      10.108.189.140   <none>        443/TCP                  17d
default       registry                  ClusterIP      10.96.74.244     <none>        5000/TCP                 17d
default       secondapp                 LoadBalancer   10.100.192.227   <pending>     80:32000/TCP             96m
kube-system   kube-dns                  ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   19d
kube-system   traefik-ingress-service   ClusterIP      10.110.163.198   <none>        80/TCP,8080/TCP          35m
multitenant   shopping                  NodePort       10.110.156.71    <none>        80:30381/TCP             4d4h
How do I create the default-http-backend svc? Is it a simple Service with port 80?
fyi, I am using GCP.
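For what it's worth, a default backend is typically just a small Deployment plus a ClusterIP Service named default-http-backend. A minimal sketch (the image and its container port are assumptions based on the commonly used defaultbackend image, not part of the lab material):

```yaml
# Sketch of a minimal default backend. The Service name must match what
# the Ingress expects; the image is an assumption, not a lab requirement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: registry.k8s.io/defaultbackend-amd64:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080
```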
OK, I solved the problem by installing a new traefik and configuring the ingress controller again using the following guide:
The error endpoints "default-http-backend" not found remains, but the command

curl -H "Host: www.example.com" http://10.2.0.6/

did return 404 page not found.
Glad you were able to get it working. The 404 error is an indication the ingress controller is working but is not able to associate traffic with an existing service. The most common issue I find is a typo with the service or the pod labels. Should you revisit the lab I would work out from using the pod IP to view the default web page. Then use the service IP, and finally the ingress controller.
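The progression described above can be sketched as follows (the IPs are placeholders taken from this thread's output; substitute your own):

```shell
# 1. Pod IP -- verifies the web server itself
kubectl get pod secondapp -o wide        # note the pod IP
curl http://192.168.56.9/

# 2. Service ClusterIP -- verifies the label selector and endpoints
kubectl get svc secondapp                # note the cluster IP
kubectl get ep secondapp                 # should list the pod IP
curl http://10.104.151.51/

# 3. Ingress controller -- verifies host-based routing
curl -H "Host: www.example.com" http://<node-ip>/
```

If a step fails, the mismatch is between that layer and the one before it (e.g. a Service with no endpoints usually means the selector does not match the pod labels).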
NAME            READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
pod/secondapp   2/2     Running   3          3h48m   192.168.56.9   instance-2   <none>           <none>

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        47h   <none>
service/secondapp    LoadBalancer   10.104.151.51   <pending>     80:32000/TCP   94m   example=second
Yes, the 404 does come from the ingress controller, and I am happy that it works on GCE.
The strange part is that after the ingress has been activated, the individual pod and svc IPs are no longer accessible; they do not return the nginx default html response. Hence this might be why it returns 404. However, I do not know why this happens after the traefik ingress activation. Any advice?
The terminal also hangs when I take a deeper look at why the container responds strangely:

kubectl exec -it secondapp -c busy -- sh
/ $ nslookup secondapp
;; connection timed out; no servers could be reached

Any clue? I have been stuck for 3 days already and can't proceed with the lab exercise.
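A few generic checks that can narrow down an nslookup timeout like this one (the label and Service name below are the standard CoreDNS conventions, an assumption about this cluster):

```shell
# Which resolver is the pod actually using? It should point at the
# kube-dns ClusterIP (10.96.0.10 in the svc listing earlier).
kubectl exec secondapp -c busy -- cat /etc/resolv.conf

# Are the CoreDNS pods healthy, and does the Service IP match?
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system get svc kube-dns
```

If resolv.conf points at the right IP but lookups still time out, pod-to-pod networking (the CNI) is the usual suspect rather than DNS itself.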
Can you try running the same two curl commands from the worker node?
Hi Chris, they both work in instance-2 (worker node). Both return the html content.
Also from my local machine:

curl -H "Host: www.example.com" http://[instance2-ip] does return the html content too.
curl -H "Host: www.example.com" http://[instance1-ip] fails as usual.
Thank you for checking. This behavior indicates that the issue is not with the Kubernetes cluster or any of its components (Pods, Services, and Ingress), but with the way the networking was configured on the underlying infrastructure.
Where did you provision your nodes? In the cloud? Local VMs (which hypervisor)?
For your nodes, did you allow traffic to all ports, all protocols, from all sources - either through a custom VPC and firewall rule or through local hypervisor config options?
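For GCP specifically, an open-everything rule for a lab VPC might look like this (the rule and network names here are made up; the rule is deliberately wide open and only appropriate for a throwaway lab):

```shell
# Allow all protocols and ports, from anywhere, on the lab VPC.
# "lab-net" is a placeholder -- substitute your VPC network's name.
gcloud compute firewall-rules create lab-allow-all \
  --network=lab-net \
  --allow=all \
  --source-ranges=0.0.0.0/0
```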
Thanks @chrispokorni for your prompt reply.
I am using GCE and have been following the tutorial and guides suggested in LFD259.
I was able to do the curl on both the secondapp pod and svc on
But the problem arises after I set up traefik 1.7.13 on the
I followed the traefik setup using the link below:
I am wondering if I should use AWS instead...
And, for the sake of completing the course, can I continue Lab 7.2 part 8 using Instance-2 (for the steps using curl commands)?
Could you look at the output of kubectl get pods --all-namespaces and ensure that all pods on both systems are running? You should see an ingress controller pod running on both nodes.
Also please check that both nodes are running properly and have enough resources. If you're using 2cpu/8G nodes you should be fine with what the exercises ask you to run.
If the ingress controller is working on one node, but not another it indicates the issue is not with the ingress controller, but some other configuration or issue. If the issue were with Kubernetes or an improper setting of a rule or the ingress controller it would not work anywhere.
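One quick way to verify this per node (a generic check, not a lab step):

```shell
# List the traefik pods with their node placement; with a DaemonSet there
# should be one per node, and each should be Ready.
kubectl -n kube-system get pods -o wide | grep -i traefik

# Cross-reference against the node list and their internal IPs.
kubectl get nodes -o wide
```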
Yes, I now see that there are issues with the calico-node pods.
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       secondapp                                  2/2     Running   21         21h
kube-system   calico-kube-controllers-744cfdf676-mm5wq   1/1     Running   0          2d17h
kube-system   calico-node-9b6zs                          0/1     Running   0          2d17h
kube-system   calico-node-lpfg2                          0/1     Running   0          2d17h
kube-system   coredns-f9fd979d6-2frqp                    1/1     Running   0          2d17h
kube-system   coredns-f9fd979d6-vm5rt                    1/1     Running   0          2d17h
kube-system   etcd-instance-1                            1/1     Running   0          2d17h
kube-system   kube-apiserver-instance-1                  1/1     Running   0          2d17h
kube-system   kube-controller-manager-instance-1         1/1     Running   0          2d17h
kube-system   kube-proxy-6cqjp                           1/1     Running   0          2d17h
kube-system   kube-proxy-7kv7v                           1/1     Running   0          2d17h
kube-system   kube-scheduler-instance-1                  1/1     Running   0          2d17h
kube-system   traefik-ingress-controller-w6sdr           1/1     Running   0          18h
multitenant   mainapp-64f7bb4cc6-lghj7                   1/1     Running   0          18h
When doing a describe on the individual nodes, I see one peculiar event.
Events:
  Type     Reason     Age                   From                 Message
  ----     ------     ----                  ----                 -------
  Warning  Unhealthy  31s (x6713 over 18h)  kubelet, instance-1  (combined from similar events): Readiness probe failed: 2021-01-22 02:28:18.334 [INFO] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.2.0.9
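Some generic checks for this kind of BGP failure (the label and port below are standard Calico conventions, assumptions about this setup):

```shell
# Where are the calico-node pods, and which are not Ready?
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

# Inspect the readiness probe failure in detail.
kubectl -n kube-system describe pod calico-node-9b6zs

# Calico BGP peering uses TCP port 179 -- verify it is reachable
# between the node IPs (run from the other node).
nc -zv 10.2.0.9 179
```

A blocked port 179 between nodes (firewall rules again) is one of the most common causes of "BGP not established".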
OK, I have resolved the calico-node issues with the following help.
Now the direct curl command to the IP of the pod and svc works!
All issues are almost fixed, except for one weird behaviour I have noticed.
From [instance-1] (master node), ip a would yield the following output for ens4 (the only ens):
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:02:00:08 brd ff:ff:ff:ff:ff:ff
    inet 10.2.0.8/32 scope global dynamic ens4
       valid_lft 1930sec preferred_lft 1930sec
    inet6 fe80::4001:aff:fe02:8/64 scope link
       valid_lft forever preferred_lft forever
The output of ens4 clearly shows that I will need to use 10.2.0.8 for Step 8 of Lab 7.2:
curl -H "Host: www.example.com" http://10.2.0.8/
This command fails with 404.
From the calico-node issue I encountered earlier in the week, we saw that the BGP connection fails at
I then tried
curl -H "Host: www.example.com" http://10.2.0.9/
This command works perfectly, returning the html response.
Correct. As the network configuration is not proper, as happens if IP ranges overlap, the traffic may be sent across the tunnel interface to the other node, which has the 10.2.0.9 IP address. If you curl from the worker node, it would be interesting to see whether the traffic to .8 works instead.
From [instance-2] (worker node), ip a would yield the following output for ens4 (the only ens).
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:02:00:09 brd ff:ff:ff:ff:ff:ff
    inet 10.2.0.9/32 scope global dynamic ens4
       valid_lft 2560sec preferred_lft 2560sec
    inet6 fe80::4001:aff:fe02:9/64 scope link
       valid_lft forever preferred_lft forever
The ens4 is different for worker. It is 10.2.0.9 instead.
curl -H "Host: www.example.com" http://10.2.0.9/ works fine.
curl -H "Host: www.example.com" http://10.2.0.8/ doesn't work here, as before.
Is there any way to change the ens4 in my master to point to 10.2.0.9, the same as the worker?
It is also obvious, but I would like to mention here, that

[email protected]:~$ curl -H "Host: www.example.com" http://[worker ip]

works perfectly fine, since the curl to .9 (its ens4 IP) works in the example above.
This would imply that both nodes share the same private IP address, introducing even more conflicts into the cluster.