
LAB 10.1 Ingress not accessible on port 80

My environment:

Comments

  • serewicz
    serewicz Posts: 1,000

    Hello,

    I am unsure of what your particular issue is here. You mention Ingress, which is a particular piece of software. From the output of the commands it looks like you are talking about a NodePort service called secondapp. You mention that you are not able to access port 80.

    To troubleshoot: first, is the pod running? Can you go to the pod IP on port 80 and see the web server? Then check that the labels of the service match the labels of the pod; they are case sensitive.
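
    For example, something along these lines (assuming the deployment and service are both called secondapp, as in the lab):

    kubectl get pods -o wide                          # is the pod Running, and what is its IP?
    curl http://<pod-ip>:80                           # can you see the web server directly at the pod IP?
    kubectl get pods --show-labels                    # compare the pod labels...
    kubectl describe svc secondapp | grep Selector    # ...to the service selector (case sensitive)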

    In the future, it is helpful if you include the particular problem or error.

    Regards,

  • crumdev
    crumdev Posts: 10

    Doesn't work as expected using the master node IP on port 80.

    Works with Cluster IP using port 30024:

    and works with the endpoint (EP) IP.

  • crumdev
    crumdev Posts: 10
    edited April 2020

    My apologies @serewicz. I had accidentally posted the question without getting the additional information.

    All of this is after creating the secondapp deployment, exposing it via NodePort as in lab 10.1, creating ingress.rbac.yaml, then traefik-ds.yaml, then the ingress.rule.yaml provided. The pieces all seem to be there. Could it be that I don't have a public IP listed when I use kubectl get nodes on these local VMs? Do I need to do something else to expose it over the node's IP of 192.168.1.210 on the master node?
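
    Roughly the sequence I followed, for reference (commands approximate, file names as given in the lab):

    kubectl create deployment secondapp --image=nginx
    kubectl expose deployment secondapp --type=NodePort --port=80
    kubectl create -f ingress.rbac.yaml
    kubectl create -f traefik-ds.yaml
    kubectl create -f ingress.rule.yaml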

  • serewicz
    serewicz Posts: 1,000

    Hello,

    It looks like the service is working, which means the pod is working. Next, is the ingress pod running? Do your ingress rules point to the proper service?
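
    For example (assuming the ingress object is named ingress-test as in the lab; substitute your own names):

    kubectl -n kube-system get pods -o wide | grep traefik    # are the traefik DaemonSet pods Running?
    kubectl -n kube-system logs <traefik-pod-name>            # any errors from the ingress controller?
    kubectl describe ingress ingress-test                     # does the rule's backend point at secondapp:80?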

    Regards,

  • crumdev
    crumdev Posts: 10

    I'm able to see these running

    All three pods look fine in logs

    It looks like ingress-test is pointing to the endpoint created when exposing the secondapp deployment in the lab.

    Thank you for your help. I really appreciate it.

  • serewicz
    serewicz Posts: 1,000

    Are you using GCE or some other type of environment for your lab? Is there a firewall which would prevent this traffic?

    Regards,

  • crumdev
    crumdev Posts: 10

    I am running these on a local Ubuntu 18.04 server that is hosting the virtual nodes through KVM. All the commands are being run on the k8s-master node.

  • crumdev
    crumdev Posts: 10

    Running ufw status shows that the firewall is inactive on the nodes.

  • serewicz
    serewicz Posts: 1,000

    Does it work when you access the KVM node from the host? This probably has to do with the nature of networking in KVM. If the traffic is coming from the VM, it may not be routed such that you can see it. When you write that the firewall is inactive on the nodes, I take this to mean the VMs running Kubernetes don't have a firewall. How about the host? Is it blocking the traffic? Perhaps a Wireshark capture can tell you where the request is going when you use the k8smaster name instead of the internal IP.

    This would appear to be a networking issue particular to how you deployed the lab.
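
    For example, from the KVM host (the filter below is just a placeholder to adapt):

    sudo ufw status                                             # or: sudo iptables -L -n  -- is the host itself filtering anything?
    sudo tcpdump -ni any host 192.168.1.210 and tcp port 80    # watch where the port 80 requests actually go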

  • crumdev
    crumdev Posts: 10

    Correct, I meant the firewall is disabled on the VM nodes that make up the Kubernetes cluster, and it is the same on the physical server hosting the VMs. The networking is configured so that the VMs are on my local network as well, not a NAT'd virtual network. Just to clarify how I can reach the NGINX page thus far:

    From the physical server or any machine on my network, I can curl the k8s-master node (192.168.1.210) using the NodePort 30728 created by exposing the deployment in the lab. I can also access this in a browser from machines on my network.
    This also works from the k8s-master VM:

    curl -H "Host: example.com" http://192.168.1.210:30728

    From the k8s-master VM I can also use the secondapp service ClusterIP to reach it on port 80:

    curl -H "Host: example.com" http://10.101.141.180

    or the endpoint created by the service can be used to reach it on port 80:

    curl -H "Host: www.example.com" http://192.168.107.198/

    I just cannot access it on port 80 at my VM's IP of 192.168.1.210, which is its IP on my local network.
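
    In other words, this is the request that does not succeed, whether from the VM or from other machines on the network (same host header as above):

    curl -H "Host: www.example.com" http://192.168.1.210:80/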

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi @crumdev ,

    I have seen similar issues reported in the forum. You may be experiencing them because your Pod subnet and Node subnet overlap. Calico by default uses the 192.168.0.0/16 subnet for Pod networking. From your detailed outputs, it seems that your Node IPs fall within that same range, which can confuse the traffic routing rules in your iptables.

    What I remember working in such situations was ensuring that the Pod and Node IP ranges do not overlap.
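
    A quick way to compare the two ranges (assuming calico.yaml is the manifest you applied):

    kubectl get nodes -o wide                       # InternalIP of each node
    grep -A1 CALICO_IPV4POOL_CIDR calico.yaml       # Pod CIDR handed to Calico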

    Regards,
    -Chris

  • crumdev
    crumdev Posts: 10

    Thank you @chrispokorni. I will try to reconfigure with a different, non-overlapping subnet range. Since this is just a lab, would there be any harm in using a smaller range like /24?

  • chrispokorni
    chrispokorni Posts: 2,155

    A smaller range should work just fine. As long as the two ranges do not overlap, and the configuration in calico.yaml is consistent with the property set in the kubeadm-config.yaml file when you initialize the master node, you should be good to go.
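
    For example, with a hypothetical 10.200.0.0/24 Pod range, the two files would need to agree along these lines (the apiVersion may differ depending on your kubeadm version):

    # kubeadm-config.yaml, passed to: kubeadm init --config=kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    networking:
      podSubnet: 10.200.0.0/24

    # calico.yaml, the matching environment variable on the calico-node container
    - name: CALICO_IPV4POOL_CIDR
      value: "10.200.0.0/24"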

    Regards,
    -Chris

  • suser
    suser Posts: 67
    edited April 2020

    I have a similar problem during exercise 7.2, except my node IP ranges do not overlap with the Calico ranges. I can see the Kubernetes API page on port 6443, but I cannot access the secondapp server on port 80 (I can see it at the Calico IP).
    I have some DNS forwarding implemented on my end, but I don't think the artificial header has anything to do with it.
    What should I look for?
    Thank you in advance!

  • suser
    suser Posts: 67

    My YAML files look correct and I have no firewall problem, but I noticed the ingress pod is not running:

    kubectl describe --namespace=kube-system pod traefik-ingress-controller-ltgn9
    Name:           traefik-ingress-controller-ltgn9
    Namespace:      kube-system
    Priority:       0
    Node:           kw1/10.1.10.31
    Start Time:     Wed, 29 Apr 2020 21:48:49 +0000
    Labels:         controller-revision-hash=5cd9d9799d
                    k8s-app=traefik-ingress-lb
                    name=traefik-ingress-lb
                    pod-template-generation=1
    Annotations:
    Status:         Running
    IP:             10.1.10.31
    IPs:
      IP:           10.1.10.31
    Controlled By:  DaemonSet/traefik-ingress-controller
    Containers:
      traefik-ingress-lb:
        Container ID:   docker://c4340c03cefd1d8a8d3f754cd747e7b555c78d698bfcbe813577a1dec6b5cb23
        Image:          traefik
        Image ID:       docker-pullable://traefik@sha256:ad4442a6f88cf35266542588f13ae9984aa058a55a518a87876e48c160d19ee0
        Ports:          80/TCP, 8080/TCP
        Host Ports:     80/TCP, 8080/TCP
        Args:
          --api
          --kubernetes
          --logLevel=INFO
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Thu, 30 Apr 2020 00:19:16 +0000
          Finished:     Thu, 30 Apr 2020 00:19:16 +0000
        Ready:          False
        Restart Count:  34
        Environment:
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from traefik-ingress-controller-token-rqwrs (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      traefik-ingress-controller-token-rqwrs:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  traefik-ingress-controller-token-rqwrs
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:
    Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                     node.kubernetes.io/memory-pressure:NoSchedule
                     node.kubernetes.io/network-unavailable:NoSchedule
                     node.kubernetes.io/not-ready:NoExecute
                     node.kubernetes.io/pid-pressure:NoSchedule
                     node.kubernetes.io/unreachable:NoExecute
                     node.kubernetes.io/unschedulable:NoSchedule
    Events:
      Type     Reason   Age                     From          Message
      ----     ------   ----                    ----          -------
      Normal   Pulling  13m (x33 over 153m)     kubelet, kw1  Pulling image "traefik"
      Warning  BackOff  3m36s (x698 over 153m)  kubelet, kw1  Back-off restarting failed container

  • chrispokorni
    chrispokorni Posts: 2,155

    Hi Stefan,

    Please read the exercise carefully. It seems you may have missed some of the key details needed to specify the traefik image.

    Regards,
    -Chris

  • suser
    suser Posts: 67
    edited April 2020

    OK, now I changed the container image value to traefik:v1.7 in traefik-ds.yaml and the pods started. I can access the server on my node IP on port 80, and I get the nginx welcome page from curl using the required header.
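
    The relevant change in traefik-ds.yaml ended up being just the image tag (surrounding fields abbreviated):

        containers:
        - image: traefik:v1.7
          name: traefik-ingress-lb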

    Stefan

  • chrispokorni
    chrispokorni Posts: 2,155
    edited April 2020

    Hi Stefan,

    I am glad that you were able to find all the needed configuration options in the lab exercise, and that the resources worked as expected once you followed the instructions as presented in the lab.

    Regards,
    -Chris
