Ex. 6.5 Testing the policy

Hello

I cannot get the selective ingress from step #6 onward to work.

~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
230: eth0@if231: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue qlen 1000
    link/ether 72:1b:d7:c8:cb:ac brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.209/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::701b:d7ff:fec8:cbac/64 scope link 
       valid_lft forever preferred_lft forever

The IP is 10.0.1.209/32, so I'm using 10.0.1.0/32 in allclosed.yaml (I also tested other variants such as 10.0.0.0/32 and 10.0.0.0/16, which did not work).

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.1.0/32
#  - Egress

curl and ping both fail for this IP address; the checks I'm running are sketched below. Does anyone have an idea why that could be?
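
A minimal sketch of the test, assuming the lab's nginx Pod is listening on port 80 at the IP shown above:

# from the node, try to reach the Pod directly by its IP
curl http://10.0.1.209:80
ping -c 3 10.0.1.209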

Thanks.

Comments

  • chrispokorni

    Hi @ghilknov,

    The Cilium network plugin manages the 10.0.0.0/8 network by default. You can confirm this from the cilium-config ConfigMap:

    kubectl -n kube-system get cm cilium-config -oyaml | grep cluster-pool-ipv4-cidr:
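
    On a default installation this typically returns the 10.0.0.0/8 range mentioned above, for example:

      cluster-pool-ipv4-cidr: 10.0.0.0/8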
    

    The network policy can either whitelist the entire Pod network CIDR:

      ingress:
      - from:
        - ipBlock:
            cidr: 10.0.0.0/8
    

    or it can whitelist only the source IP of your curl command. That should be the cilium_host interface IP of the node where curl is run, most likely the CP node if you are closely following the lab guide (run ip a on your CP node to locate the cilium_host interface IP, most likely a 10.0.0.x/32 address):

      ingress:
      - from:
        - ipBlock:
            cidr: <cilium_host IP>/32
    

    Regards,
    -Chris

  • ghilknov

    Hi Chris

    Thanks for your quick answer.

    Unfortunately, neither 10.0.0.0/8 nor 10.0.0.0/32 works; the curl still does not get through to 10.0.1.88. If I delete the NetworkPolicy, it works, so it is not a general problem.

    Not sure what to do.

    I also just tried allowing only the ClusterIP, but that does not work either.

  • chrispokorni

    Hi @ghilknov,

    I was able to reproduce this issue and observed the same behavior: the policy does not allow ingress traffic based on the defined rules. It does allow all ingress traffic with cidr: 0.0.0.0/0 (shown below); however, that is not the solution we are trying to implement. Removing the policy also enables all ingress traffic.
    This was tried with both custom and default installation methods of the Cilium CNI plugin.
    I will research further for a solution.
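
    That fully open variant, for reference:

      ingress:
      - from:
        - ipBlock:
            cidr: 0.0.0.0/0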

    Regards,
    -Chris

  • ghilknov

    Hi Chris

    Good to know it is not only me :smiley: Also thanks for looking into it.

    Regards, ghilknov

  • sergiotarxz
    edited October 2023

    Hi, it seems I have the same issue with the 192.168.0.0/16 network. My output for the command provided above is:

    kube@cp ~/app2 $ kubectl -n kube-system get cm cilium-config -o yaml | grep cidr 
      cluster-pool-ipv4-cidr: 192.168.0.0/16
      vtep-cidr: ""
    

    My allclosed.yaml:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - ipBlock:
            cidr: 192.168.0.0/16
    

    The output from ip a in my container:

    ~ $ ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    139: eth0@if140: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
        link/ether be:f9:02:c6:34:f9 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.251/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::bcf9:2ff:fec6:34f9/64 scope link 
           valid_lft forever preferred_lft forever
    ~ $ 
    

    This is the actual output of trying to connect with curl:

    kube@cp ~/app2 $ curl 192.168.1.251:80
    curl: (28) Failed to connect to 192.168.1.251 port 80 after 130104 ms: Couldn't connect to server
    

    It won't even work with 0.0.0.0/0 in allclosed.yaml.

  • chrispokorni

    Hi @sergiotarxz,

    With the introduction of the Cilium CNI plugin, this exercise no longer works as it did with Calico in earlier releases of the lab guide. Calico enforced network policy ipBlock rules against Pod network IPs, but Cilium does not.

    A workaround would be to use a podSelector or namespaceSelector instead of ipBlock to test the policy, as sketched below. This also implies that a client Pod be created to match the policy rule.
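
    A minimal sketch of that workaround (the run: client label and the client Pod are assumptions for illustration, not from the lab guide):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: deny-default
      spec:
        podSelector: {}
        policyTypes:
        - Ingress
        ingress:
        - from:
          - podSelector:
              matchLabels:
                run: client   # only Pods in this namespace labeled run=client may connect

    A client Pod carrying that label (for example, one started with kubectl run client --image=busybox --labels=run=client) should then be able to reach the nginx Pod, while traffic from elsewhere stays blocked.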

    Regards,
    -Chris

  • marksmit

    I'm experiencing the same problems with Cilium. Although the manual was updated in February, exercise 6.5 still does not succeed. I even tried to set up a Cilium-specific network policy, as described in https://docs.cilium.io/en/latest/security/policy/language/, but I keep failing to get curl to work while ping stays blocked.
    Now I'm wondering what happens on the exam: will I be able to show that I understand how network policies work?
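
    For reference, a Cilium-specific policy of the kind described in those docs looks roughly like this (a sketch; the app: web and app: client labels are placeholders, not from the lab guide):

      apiVersion: cilium.io/v2
      kind: CiliumNetworkPolicy
      metadata:
        name: allow-client-to-web
      spec:
        endpointSelector:
          matchLabels:
            app: web          # the Pod being protected
        ingress:
        - fromEndpoints:
          - matchLabels:
              app: client     # only endpoints with this label may connect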

  • chrispokorni

    Hi @marksmit,

    Have you attempted curl and ping from both nodes, as sketched below? Do you notice any differences in behavior?
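
    For example, with <pod-IP> being the nginx Pod's IP:

      # repeat from both the CP node and the worker node
      curl http://<pod-IP>:80
      ping -c 3 <pod-IP>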

    Regards,
    -Chris
