A subtle networking question - a bond/team over QSFP+ virtual functions - can it be done?

Hello,
I have been wrestling for more than a day with a network configuration that may not make sense at all - but I am not quite sure.

The situation: I want to use a VM as a routing gateway on a network, so I passed the 4 interfaces of a QSFP+ card (Intel X520-Q1) to the VM as virtual function links. The card in the host reports itself as "Intel Corporation Ethernet Converged Network Adapter X520-Q1". I want to make a bond over the 4 QSFP+ interfaces in the VM and use it as a gateway on the network. I currently use a similar configuration, but with an Intel i350 card at 1Gb/s, with one interface rather than 4.
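For context, the VFs were created on the host with something along these lines (the physical interface name here is just a placeholder, not my exact one):

    # create 4 VFs on the X520's physical port (placeholder name)
    echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs
    # confirm the VFs showed up
    lspci -nn | grep -i "virtual function"
    ip link show enp65s0f0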

The bonding driver in the VM simply refuses to make a working bond on the 4 VFs coming from the host. There is nothing in the logs: I am able to enslave the 4 interfaces, but the bond never comes up, and no error is reported when I run 'ip link set bond0 up' - the link just stays down.
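In case it helps someone reproduce this, the bond state in the guest can be inspected with commands like these:

    # LACP/slave details as seen by the bonding driver
    cat /proc/net/bonding/bond0
    # detailed link-level view of the master
    ip -d link show bond0
    # kernel messages from the bonding driver, if any
    dmesg | grep -i bond0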

I am able to make a working bond on the host (but not in the guest) with the 4 PHYSICAL interfaces presented by the card, just not with the VFs. It may actually make no sense at all to build an 802.3ad bond on virtual functions - what would happen if I passed another virtual function from the same physical port to another VM as a separate interface?
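For reference, the working 802.3ad bond over the physical ports on the host was built roughly like this (interface names are placeholders):

    # create the bond and enslave the four physical ports
    ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
    for nic in eth0 eth1 eth2 eth3; do
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done
    ip link set bond0 up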

Pretty much the same result with the teaming driver: an 802.3ad team is formed but never comes up, and there is nothing in the logs.
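The team was set up along these lines (a sketch; my exact runner options may differ):

    # start an LACP runner, then enslave the VFs and bring the team up
    teamd -d -t team0 -c '{"runner": {"name": "lacp", "active": true, "fast_rate": true}}'
    for nic in eth10 eth11 eth12 eth13; do
        ip link set "$nic" down
        teamdctl team0 port add "$nic"
    done
    ip link set team0 up
    # runner and per-port LACP state
    teamdctl team0 state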

I think I need to pass the card to the guest as a PCI device - but might there be a way to use the virtual functions presented by the card for bonding in the guest? I would rather not read the bonding/teaming driver code to figure this out.
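If whole-card passthrough turns out to be the answer, the rough libvirt flow I have in mind is something like this (PCI address, VM name and XML file name are placeholders):

    # detach the whole card from the host drivers
    virsh nodedev-detach pci_0000_65_00_0
    # attach it to the guest; x520.xml would hold a <hostdev mode='subsystem' type='pci'> element
    virsh attach-device gateway-vm x520.xml --persistent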

Any (qualified) suggestion is welcome.
Thanks, George.

Comments

  • Hello again,
    Let me post some more information. Here is the interface state for eth10, eth11, eth12 and eth13, which are the 4 VFs passed in from the host:

    2: eth10: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 5a:04:63:17:e2:f0 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 1504
    bond_slave state ACTIVE mii_status UP link_failure_count 0 perm_hwaddr 5a:04:63:17:e2:f0 queue_id 0 ad_aggregator_id 1 ad_actor_oper_port_state 79 ad_partner_oper_port_state 1 addrgenmode eui64 numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535

    eth11, eth12 and eth13 show exactly the same. The important part is "master bond0".

    13: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:04:63:17:e2:f0 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    bond mode 802.3ad miimon 100 updelay 400 downdelay 200 use_carrier 1 arp_interval 0 arp_validate none arp_all_targets any primary_reselect always fail_over_mac none xmit_hash_policy layer2+3 resend_igmp 1 num_grat_arp 1 all_slaves_active 0 min_links 1 lp_interval 1 packets_per_slave 1 lacp_rate fast ad_select stable ad_aggregator 1 ad_num_ports 1 ad_actor_key 15 ad_partner_key 1 ad_partner_mac 00:00:00:00:00:00 ad_actor_sys_prio 65535 ad_user_port_key 0 ad_actor_system 00:00:00:00:00:00 tlb_dynamic_lb 1 addrgenmode eui64 numtxqueues 16 numrxqueues 16 gso_max_size 65536 gso_max_segs 65535

  • Hello, me again.
    A bond can be made over the ixgbe driver's virtual functions, but no MAC address should be explicitly assigned to the VFs. So, on the host, leave the VFs without an assigned MAC and, ideally, do not load the ixgbevf driver at all - if the driver is never loaded, you do not need to detach the VFs from the host.
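    Concretely, the host side ends up looking something like this (the interface and file names are placeholders):

    # keep ixgbevf from claiming the VFs on the host
    echo "blacklist ixgbevf" > /etc/modprobe.d/ixgbevf-blacklist.conf
    # create the VFs, but do NOT assign MACs, i.e. avoid anything like:
    #   ip link set enp65s0f0 vf 0 mac 52:54:00:xx:xx:xx
    echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs
    # the VF lines here should still show MAC 00:00:00:00:00:00
    ip link show enp65s0f0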

    Then, in the guest, make a balance-rr (mode 0) bond, and it works. On the switch (a Cisco Nexus 3064) the interfaces appear as individual ports, but the bond works and packets flow via all bonded interfaces. In this case I bonded 3 of the 4 ports and kept the 4th for other purposes - it works pretty well. I was unable to make an 802.3ad bond work - for some reason the bond never detects the LACP multicast. But balance-rr works fine, for now at least, at 30Gb/s (the guest-side commands are sketched at the end of this comment):
    Settings for bond0:
    Speed: 30000Mb/s
    Duplex: Full
    Auto-negotiation: off

    The QSFP+ module used (an Intel one) does not support auto-negotiation; it is fixed at 10Gb/s.
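    As promised, a minimal sketch of the guest-side balance-rr setup (interface names are placeholders for the three VFs I bonded):

    # create the balance-rr bond and enslave three of the four VF interfaces
    ip link add bond0 type bond mode balance-rr miimon 100
    for nic in eth10 eth11 eth12; do
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done
    ip link set bond0 up
    # the aggregated speed (3 x 10G) shows up here
    ethtool bond0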
