Ex. 6.5 Testing the policy

Hello

I cannot get the selective ingress from step #6 onward to work.

~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
230: eth0@if231: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue qlen 1000
    link/ether 72:1b:d7:c8:cb:ac brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.209/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::701b:d7ff:fec8:cbac/64 scope link 
       valid_lft forever preferred_lft forever

The IP is 10.0.1.209/32, so I'm using 10.0.1.0/32 in allclosed.yaml (I also tested other variants like 10.0.0.0/32 and 10.0.0.0/16, which did not work).

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.1.0/32
#  - Egress

Neither curl nor ping works for this IP address. Does anyone have any idea why that could be?

Thanks.

Comments

  • chrispokorni

    Hi @ghilknov,

    The Cilium network plugin manages the 10.0.0.0/8 network by default. You can extract this from the cilium-config ConfigMap:

    kubectl -n kube-system get cm cilium-config -oyaml | grep cluster-pool-ipv4-cidr:
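
    On a default installation the output should look something like this (assumed sample output; the exact CIDR may differ on your cluster):

      cluster-pool-ipv4-cidr: 10.0.0.0/8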
    

    The network policy can either whitelist the entire pod network cidr:

      ingress:
      - from:
        - ipBlock:
            cidr: 10.0.0.0/8
    

    or it can whitelist only the source IP of your curl command, which should be the cilium_host interface IP of the node where curl is being run, most likely the CP node if you are closely following the lab guide (run ip a on your CP node to locate the cilium_host interface IP, most likely a 10.0.0.x/32 address):

      ingress:
      - from:
        - ipBlock:
            cidr: <cilium_host IP>/32
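
    A quicker way to pull just that address, assuming the interface is indeed named cilium_host (Cilium's default), is:

    ip -4 addr show cilium_host | grep inet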
    

    Regards,
    -Chris

  • ghilknov

    Hi Chris

    Thanks for your quick answer.

    Unfortunately, neither 10.0.0.0/8 nor 10.0.0.0/32 works. The curl still does not get through to 10.0.1.88. If I delete the NetworkPolicy, it works, so it is not a general problem.

    Not sure what to do.

    I just tried allowing only the ClusterIP, but that does not work either.

  • chrispokorni

    Hi @ghilknov,

    I was able to reproduce this issue and observed the same behavior: the policy does not allow ingress traffic based on the defined rules. It does allow all ingress traffic with cidr: 0.0.0.0/0; however, that is not the solution we are trying to implement. Removing the policy also enables all ingress traffic.
    This was tried with both custom and default installation methods of the Cilium CNI plugin.
    I will research further for a solution.
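
    In the meantime, one thing worth verifying is whether the Cilium agent has applied ingress policy enforcement to the target Pod's endpoint at all. A quick check, assuming the default cilium DaemonSet in kube-system (note that the endpoint only shows up in the agent running on the node that hosts the Pod, so you may need to exec into that specific agent Pod instead of ds/cilium):

    kubectl -n kube-system exec ds/cilium -- cilium endpoint list

    The row for the endpoint backing your Pod should show ingress enforcement as Enabled while the NetworkPolicy exists.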

    Regards,
    -Chris

  • ghilknov

    Hi Chris

    Good to know it is not only me :smiley: Also thanks for looking into it.

    Regards, ghilknov

  • sergiotarxz

    Hi, it seems I have the same issue with the 192.168.0.0/16 network. My output for the command provided is:

    kube@cp ~/app2 $ kubectl -n kube-system get cm cilium-config -o yaml | grep cidr 
      cluster-pool-ipv4-cidr: 192.168.0.0/16
      vtep-cidr: ""
    

    My allclosed.yaml:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - ipBlock:
            cidr: 192.168.0.0/16
    

    The output from ip a in my container:

    ~ $ ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    139: eth0@if140: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
        link/ether be:f9:02:c6:34:f9 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.251/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::bcf9:2ff:fec6:34f9/64 scope link 
           valid_lft forever preferred_lft forever
    ~ $ 
    

    This is the actual output of trying to connect with curl:

    kube@cp ~/app2 $ curl 192.168.1.251:80
    curl: (28) Failed to connect to 192.168.1.251 port 80 after 130104 ms: Couldn't connect to server
    

    It won't even work with 0.0.0.0/0 in allclosed.yaml.

  • chrispokorni

    Hi @sergiotarxz,

    With the introduction of the Cilium CNI plugin, this exercise no longer works as it did with Calico in earlier releases of the lab guide. Calico would enforce network policy rules on the Pod network ipBlock, but Cilium does not.

    A workaround would be to use a podSelector or namespaceSelector instead of ipBlock to test the policy. This also implies that a client Pod is created to match the policy rule.
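
    A minimal sketch of what that could look like, assuming a hypothetical client Pod labeled role: client in the same namespace as the target Pod:

      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: client

    A curl from a Pod carrying the role: client label should then be allowed, while other Pods remain blocked.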

    Regards,
    -Chris

  • marksmit

    I'm experiencing the same problems with Cilium. Although the manual was updated in February, exercise 6.5 still does not succeed. I even tried to set up a Cilium-specific network policy as described in https://docs.cilium.io/en/latest/security/policy/language/, but I keep failing to get curl to work while ping is blocked.
    Now I'm wondering what happens on the exam: will I be able to show that I understand how network policies work?

  • chrispokorni

    Hi @marksmit,

    Have you attempted curl and ping from both nodes? Notice any differences in behavior?

    Regards,
    -Chris

  • marksmit

    It's getting more and more confusing.

    • On the control plane, everything is blocked with the policy.
    • On the worker node, ping and curl succeed when I test with the pod's IP address. Nothing gets returned when I try curl with the control plane's IP address and the high port.

    Bottom line: I have no clear idea how to use network policies, as I cannot get them to work on my system.

  • marksmit

    I repeated 6.5 completely, starting from the state described in 6.4.
    When I get to 6.5, point 9, the ping succeeds as described, but curl fails with every address I try, while I expect curl to succeed. Am I mistaken?

  • chrispokorni

    Hi @marksmit,

    From what you are describing, the policy works as expected. It blocks traffic from other nodes and pods while it allows traffic from the node hosting the pod itself.

    You can find out more about the network policy from the Kubernetes official documentation.

    Regards,
    -Chris

  • marksmit

    Thank you. I see, that makes sense.
    The next problem is point 10: curl keeps failing. When I remove the lines

      - from:
        - podSelector: {}

    curl starts to work. How can that be explained?

  • chrispokorni

    Hi @marksmit,

    Can you attach the allclosed.yaml file, in both forms: with ingress:, - from:, and - podSelector: {}, and without them?

    Regards,
    -Chris

  • marksmit

    (attached allclosed1.yaml and allclosed2.yaml)

  • chrispokorni

    Hi @marksmit,

    Thank you for the two attachments.

    The difference in behavior can be explained as follows:

    In the case of allclosed1.yaml, where the spec is defined as:

    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    #  - Egress
      ingress:
      - from:
        - podSelector: {}
        ports:
        - port: 80
          protocol: TCP
    

    The rule allows inbound TCP traffic to port 80 (that is, curl or HTTP requests) only if it originates from Pods. The - podSelector: {} under the - from: key means that only Pods (in the policy's own namespace) are allowed as sources, and the empty curly brackets {} indicate that all such Pods are allowed to be traffic sources (no Pods are restricted).
    If you describe the network policy resource defined by the spec of allclosed1.yaml, you see the following:

    <redacted>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: 80/TCP
        From:
          PodSelector: <none>
      Not affecting egress traffic
      Policy Types: Ingress
    

    In the case of allclosed2.yaml, where the spec is defined as:

    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    #  - Egress
      ingress:
      - ports:
        - port: 80
          protocol: TCP
    

    The rule allows inbound TCP traffic to port 80 (that is, curl or HTTP requests) originating from any source (no restriction on the source). If you describe the network policy resource defined by the spec of allclosed2.yaml, you see the following:

    <redacted>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: 80/TCP
        From: <any> (traffic not restricted by source)
      Not affecting egress traffic
      Policy Types: Ingress
    

    While initiating curl requests from various hosts (your workstation, cp node, and worker node) can be used as test cases against the policy, the most meaningful test is performed by initiating the curl and ping commands from another Pod, such as the test Pod introduced towards the end of lab exercise 6.5.
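
    If the lab's test Pod is not handy, a quick ad-hoc check can also be run from a throwaway Pod in the same namespace; a sketch (the Pod name and image here are just examples, and substitute your target Pod's IP):

    kubectl run policy-test -it --rm --restart=Never --image=busybox -- wget -qO- -T 5 http://<target Pod IP>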

    I hope this helps clarify the policy's behavior.

    Regards,
    -Chris

  • marksmit

    Thank you so much for your comprehensive answer. I finally understand what is going on, and my system behaves as expected.
    I did not realize that 'describe' could be used to see what a policy is doing. It is very convenient for this.
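
    For anyone else reading along, the command is simply (using the policy name from the lab):

    kubectl describe networkpolicy deny-default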
