Exercise 5.2 - Confusion regarding NFS and DNS resolution for nodes

According to steps 4 and 5 and PVol.yaml, the NFS export can be mounted using either the first node's hostname or its IP address.
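
For reference, the relevant part of the lab's PVol.yaml looks roughly like this (a sketch; the metadata name and capacity are my assumptions, and the server field is the one in question):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvvol-1              # name assumed; your PVol.yaml may differ
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: ckad1            # this only works for me as 172.30.0.122
    path: /opt/sfw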

But when I try to use the node hostname I get this:

worker@ckad2[05:47:48]:~$ showmount -e ckad1
clnt_create: RPC: Unknown host
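
A quick way to confirm this is a name-resolution failure rather than an NFS problem: getent hosts goes through the same glibc resolver that showmount uses.

getent hosts ckad1    # prints nothing when the hostname cannot be resolved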

But, if I use the internal IP of the node:

worker@ckad2[06:06:44]:~$ showmount -e 172.30.0.122
Export list for 172.30.0.122:
/opt/sfw *

worker@ckad2[06:21:51]:~$ sudo mount 172.30.0.122:/opt/sfw /mnt
worker@ckad2[06:35:55]:~$ ls -l /mnt
total 4
-rw-r--r-- 1 root root 9 Dec 29 05:46 hello.txt

I also need to use the node's internal IP in the PersistentVolume manifest, or the deployment fails. So my questions are:

  1. Should the kube-dns service be able to resolve the nodes' hostnames?
  2. What steps are required to make the cluster work like the example, where the NFS export can be mounted and the PV can point to the node's name instead of its internal IP?

Comments

  • Hi @aaronireland,

    1. Should the kube-dns service be able to resolve the nodes' hostnames?

    CoreDNS is the Kubernetes Service discovery plugin; it resolves Service names inside the cluster, not node hostnames. Hostname resolution is handled by the hosts themselves, based on cloud-infrastructure or hypervisor configuration injected during host provisioning.

    2. What steps are required to make the cluster work like the example, where the NFS export can be mounted and the PV can point to the node's name instead of its internal IP?

    When the cluster runs on GCE, no additional steps are required. Other cloud infrastructures may apply their own custom DNS configuration when provisioning hosts; in that case, explicit entries for ckad1 and ckad2 in the /etc/hosts file of each node may help with hostname resolution across your nodes.
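
    Something like the following on every node should work (the ckad1 address comes from your output above; replace the placeholder with your ckad2 node's actual internal IP):

    # /etc/hosts on every node; adjust the addresses to your cluster
    172.30.0.122          ckad1
    <ckad2-internal-ip>   ckad2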

    Regards,
    -Chris

  • Thanks for the reply!

    Other cloud infrastructures may apply their own custom DNS configuration when provisioning hosts

    I followed the AWS cluster setup guide for the course exactly, so AWS must handle host DNS provisioning differently than GCE does.

    in that case, explicit entries for ckad1 and ckad2 in the /etc/hosts file of each node may help with hostname resolution across your nodes

    OK, I have now updated /etc/hosts on both of my nodes and confirmed that the PersistentVolume can be mounted by the deployment's pods using the node name.
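
    For anyone else who hits this, the only change needed was the server field in the PV's NFS stanza (a sketch; the path is the lab's export, and the rest of the manifest is unchanged):

    nfs:
      server: ckad1    # previously this had to be 172.30.0.122
      path: /opt/sfw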

    As to why that's needed: it seems to delve into the administrative nuts and bolts of provisioning and configuring a cluster, so it's not a big deal for the CKAD material. In any case, I've moved on from that lab, since I was able to complete it just fine using the IP address. At the least, though, it might be worth calling this out in the lab notes or adding a blurb about it to the AWS cluster setup tutorial.
