
Lab 2.1.10: ClusterIPs unreachable from Master to Minion

Would there be any reason that my ClusterIPs are not reachable from the Master node when the pods are running on the Minion? I made sure my AWS Security Group has TCP port 80 open:

ubuntu@ip-xxx-xx-xx-xxx:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
basicservice   ClusterIP   10.103.61.173   <none>        80/TCP    31m
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP   56m

ubuntu@ip-xxx-xx-xx-xxx:~$ curl http://10.103.61.173
^C

I am able to reach the API server on the Master from the minion node.

ubuntu@ip-xxx-xx-xx-xxx:~$ curl --insecure https://10.96.0.1/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "xxx.xx.xx.xxx:6443"
    }
  ]
}

and I am able to reach the service's ClusterIP from the minion:

ubuntu@ip-xxx-xx-xx-xxx:~$ curl http://10.103.61.173
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

I have tried re-creating the cluster several times, starting from scratch, and I am stuck at this point.

Thanks.

-Jonathan

Comments

  • theaj
    theaj Posts: 1

    I am having the same issue

  • @jtronson
    If you curl any of the endpoints (instead of the ClusterIP) from the master node, or from the minion to the endpoints, what responses do you get?
    I am assuming both master and minion are in the same SG?
    Regardless of where your pod is running (master or minion), curling the ClusterIP should produce the same result, since the service should be equally accessible from both nodes inside your cluster.
    Have you tried opening the SG to all traffic, rather than individual ports?
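    For example, to list the service's endpoints and then curl one of them directly, something like this should work (substitute whatever pod IP and port kubectl reports):

        kubectl get endpoints basicservice
        curl http://<pod-ip-from-endpoints>:80
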
    Regards,
    -Chris

  • jtronson
    jtronson Posts: 5
    edited December 2018

    OK. Opening up the SG to "All Traffic" fixed the issue. So now I am just wondering which ports I was missing in the rule. Currently I have the following TCP ports open:
    22: ssh
    80: http
    443, 8443: https
    5000: Registry
    6443: K8s API Server
    6666: Etcd
    30000 - 32767: nodePorts
    53 (UDP): kube dns
    All ICMP
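
    One way to compare this list with what the nodes are actually listening on is something like the following (run on both master and minion):

        # show listening TCP and UDP sockets and the owning process
        sudo ss -tulnp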

    Thank you for the suggestion. At least now I can continue with the lesson!

    -Jonathan

  • Hi @TITYKOUKI ,
    If your master cannot reach the minion, you may be experiencing node-to-node networking issues. Try opening your SG to all ports and protocols, from all sources, so that it does not restrict any traffic. Another option would be to look at your VPC setup, if there is one.
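    With the AWS CLI, opening the group to everything could look something like this (sg-xxxxxxxx is a placeholder for your security group ID):

        # allow all protocols and ports from any source on the cluster's security group
        aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
            --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
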
    Regards,
    -Chris

  • serewicz
    serewicz Posts: 1,000

    Hello,

    If you would like to find a list of the ports in use, you may consider adding a log rule to iptables on both nodes. As with other parts of Kubernetes there is much change going on, and the documentation on kubernetes.io tends to lag behind and/or not be accurate. You can put a log action on both the OUTPUT and the INPUT chains, and from that get an understanding of which ports that particular version of Kubernetes wants to use.

    Regards,

  • crixo
    crixo Posts: 31

    @serewicz said:
    Hello,

    If you would like to find a list of the ports in use, you may consider adding a log rule to iptables on both nodes. As with other parts of Kubernetes there is much change going on, and the documentation on kubernetes.io tends to lag behind and/or not be accurate. You can put a log action on both the OUTPUT and the INPUT chains, and from that get an understanding of which ports that particular version of Kubernetes wants to use.

    Regards,

    Hi @serewicz, could you please provide a bash snippet for adding a log rule to iptables on both nodes, and explain how to use the log for troubleshooting?
    I read some iptables documentation, but it does not seem so easy for a network newbie.
    Thanks a lot

  • serewicz
    serewicz Posts: 1,000

    To make the log statement the first rule applied to inbound traffic you can use this command:

        iptables -I INPUT -j LOG --log-prefix "K8s test " --log-level 1

    or, if trying to log outbound traffic, you could add the rule last, to see what has made it that far, with this command:

        iptables -A OUTPUT -j LOG --log-prefix "k8s-outbound " --log-level 1
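
    The matching packets show up in the kernel log with those prefixes; depending on the distribution, you can follow them with something like:

        # follow kernel log messages tagged by the LOG rules (log location varies by distro)
        sudo journalctl -k -f | grep "K8s test"
        # or, on systems that write kernel messages to a syslog file:
        sudo grep "K8s test" /var/log/syslog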
