LFS211 - Lab 16.1 Container without Internet access

Hello,

I'm looking for some help getting my LXC containers internet access.

My physical host runs Fedora 32.
I run a CentOS 8 VM under qemu-kvm, and that VM is where I've installed the LXC packages.

The VM has two virtio NICs:

  • ens3 - default, i.e. the NAT'd forwarding network (192.168.122.0/24)
  • ens10 - host-only, no forwarding (192.168.56.0/24); this shouldn't influence the exercise (the libvirt networks behind both NICs are sketched below)
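
For completeness, the libvirt side of those two networks can be checked on the physical host roughly like this ('default' is the stock NAT network; the host-only network's name depends on how it was defined):

virsh net-list --all        # should list 'default' (NAT) plus the host-only network
virsh net-dumpxml default   # shows <forward mode='nat'/> and the virbr0 bridge behind ens3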

I've stopped firewalld.
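
For reference, I stopped and checked it roughly like this (just a sketch of the commands):

[root@training ~]# systemctl stop firewalld
[root@training ~]# systemctl is-active firewalld
inactive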

The VM itself has no noticeable issues: internet access and DNS resolution work just fine. (Plus, with virtio it's fast, which is the reason I kicked out VirtualBox.)

Routes on the VM:

[root@training ~]# ip route show
default via 192.168.122.1 dev ens3 proto dhcp metric 101 
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 
192.168.56.0/24 dev ens10 proto kernel scope link src 192.168.56.127 metric 100 
192.168.122.0/24 dev ens3 proto kernel scope link src 192.168.122.32 metric 101

The lxcbr0 bridge has been assigned an address:

[root@training ~]# ip a s lxcbr0
4: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever

When I start one container (-n bucket), a veth pair is created. Here's the end on the VM side:

[root@training ~]# bridge link show lxcbr0
12: vethXZPS7P@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master lxcbr0 state forwarding priority 32 cost 2

The container itself received an address via DHCP:

[root@bucket ~]# ip a s eth0
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:55:d8:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.89/24 brd 10.0.3.255 scope global dynamic noprefixroute eth0
       valid_lft 2410sec preferred_lft 2410sec
    inet6 fe80::216:3eff:fe55:d85e/64 scope link 
       valid_lft forever preferred_lft forever

The container's routing table looks reasonable to me in that everything goes out via the bridge, lxcbr0:

[root@bucket ~]# ip route show
default via 10.0.3.1 dev eth0 proto dhcp metric 100 
10.0.3.0/24 dev eth0 proto kernel scope link src 10.0.3.89 metric 100

I can ping the VM's interfaces ens3, ens10 and lxcbr0 from the container with no problem. However, from the container I cannot ping the bridges serving the VM itself (i.e. the virbr0 and virbr1 devices defined on my physical host), and I can't ping external IPs like 8.8.8.8 either. DNS, of course, doesn't resolve.

Both the VM and the physical host have net.ipv4.ip_forward = 1.
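
Checked on the VM like this (the same check on the physical host):

[root@training ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1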

Any ideas as to what could be amiss or additional things I could try to get more information?

Many thanks,
/Henrik

Comments

  • serewicz

    Hello Henrik,

    There are some fun things when dealing with QEMU/KVM and nested virtualization. That the container is getting a DHCP address is a good sign overall.

    If it were me, I'd start with tracepath to see where the packets are going.
    Then run wireshark/tcpdump on various interfaces to track where they stop.
    Perhaps try nmap or nc from the host and again from the container, to see whether it's specifically ICMP messages that are not being forwarded.
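
    A rough sketch of those first checks, assuming the container is named bucket and the bridge is lxcbr0 (adjust names, interfaces and target IPs to your setup):

    [root@bucket ~]# tracepath -n 8.8.8.8          # where does the path stop?
    [root@training ~]# tcpdump -ni lxcbr0 icmp     # do the container's pings reach the bridge?
    [root@training ~]# tcpdump -ni ens3 icmp       # do they ever leave via the NAT'd NIC?
    [root@bucket ~]# nc -vz 192.168.122.1 53       # a TCP test instead of ICMP (port is just an example)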

    Some other things to check:
    Can the bucket container ping itself?
    If you deploy another container, can the two ping each other?
    Perhaps by running a web server in the container you could test port 80 and narrow the issue down to ICMP/UDP/TCP.
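
    For example, after deploying a second container (here called bucket2) and assuming python3 is available inside bucket:

    [root@bucket ~]# ping -c3 10.0.3.89            # container pinging its own address
    [root@bucket2 ~]# ping -c3 10.0.3.89           # container-to-container over lxcbr0
    [root@bucket ~]# python3 -m http.server 80 &   # quick web server on port 80
    [root@training ~]# curl -s http://10.0.3.89/   # TCP/80 reachability from the VM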

    Others may have more exposure to this, but that's what I would look at first.

    Regards,

  • Thanks @serewicz, I appreciate it. Things do get a bit trickier without internet access in the containers, since I can't install dependencies in-place with yum/dnf. A good place to start following your steps is probably on the VM itself, though, and hopefully I'll see some signs of life on the lxcbr0 bridge.

    Yes, the containers can talk to each other as well as to the VM through said bridge. It's as if I needed a route on the VM from the bridge to the gateway. But if that were the case, the issue wouldn't be unique to my setup and more or less everyone would experience the same problem, which I suspect is not the case...
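
    One thing I still want to rule out (an assumption on my part, not something I've confirmed) is whether the MASQUERADE rule that lxc-net normally sets up for 10.0.3.0/24 is actually present, given that I stopped firewalld. Roughly:

    [root@training ~]# iptables -t nat -S POSTROUTING    # look for a MASQUERADE rule covering 10.0.3.0/24
    [root@training ~]# iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE    # add it by hand if it's missing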

    Anyway, thanks again for the input. I'll see what I can figure out from the suggestions you made.

    Regards,
    /Henrik

  • serewicz

    I'm a big fan of OVS for these situations; it makes in-memory networking much easier to handle: Open vSwitch with KVM how to

    Then the containers and the host are all on the same switch, and I can watch the traffic and/or view the OpenFlow rules to see what's happening to it. But this may add more complexity to your configuration.
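
    A minimal sketch, assuming Open vSwitch is installed (the bridge and port names below are just examples):

    [root@training ~]# ovs-vsctl add-br ovsbr0             # create the OVS bridge
    [root@training ~]# ovs-vsctl add-port ovsbr0 ens10     # attach an uplink port (example)
    [root@training ~]# ovs-vsctl show                      # verify the bridge and its ports
    [root@training ~]# ovs-ofctl dump-flows ovsbr0         # inspect the OpenFlow rules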

    Regards,

  • I was hoping to do it with only Linux-native tools, but OVS might indeed be the thing I need to get this working and to understand why it currently is not. Thanks @serewicz
