
Lab 6.5 - connect to nginx container with final policy in place?

strelko2 Posts: 3
edited February 6 in LFD259 Class Forum

Hi Chris, all,

I would like to ask about the final state of Lab 6.5, where we have the NetworkPolicy in place.
First, some context:

[Decided to omit the NetworkPolicy itself, as the relevant part of the config is practically the whole object - I'm referring to its final state as per Lab 6.5, point #10]

As I understand it, the NetworkPolicy applies to all pods in the default namespace, so our nginx container (part of the secondapp pod) can receive connections on port 80 from any other pod.
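For readers, since I omitted the manifest: a policy of roughly this shape would behave as described (a sketch, not the exact lab file; the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow          # illustrative name, not necessarily the lab's
spec:
  podSelector: {}          # empty selector: the policy applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # any pod in this (default) namespace...
    ports:
    - port: 80             # ...but only on port 80/TCP
      protocol: TCP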

We also previously set up a NodePort Service that should expose nginx's port 80 on the external (node) port 32000:

...
kind: Service
spec:
  clusterIP: 10.100.51.7
  ports:
  - name: "80"
    nodePort: 32000
    port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
...

In my case, the pod's IP is 10.0.1.8:

NAME                        READY   STATUS    RESTARTS        AGE   IP           NODE      
secondapp                   2/2     Running   1 (43m ago)     60m   10.0.1.8     worker-01

With this setup, I would like to access the nginx default page from any point that isn't within a pod.

  1. me@controlplane:$ curl http://10.0.1.8:80 ##Pod IP - doesn't work - why?
    • IIRC, pod IP should be visible within the K8s cluster (so from CP towards pod on Worker node), and I should be hitting the correct ingress port
  2. me@controlplane:$ curl http://10.100.51.7:80 ##Service ClusterIP - doesn't work - why?
    • My understanding is, the service should be accessing the pod in the correct local port 80, which should be allowed as the correct ingress port for the pod
  3. me@laptop:$ curl http://<worker-01-IP>:32000 ##Node IP + NodePort - doesn't work, most likely for the same reason as attempt #2

Given that the goal is for some generic user to access the website (as in attempt #3), why doesn't the existing policy work, despite explicitly allowing Ingress via Port 80?
What change would have to happen in the NetworkPolicy (or other objects), so that attempt #3 would work?

Thank you,
Martin

Comments

  • strelko2 Posts: 3
    edited February 6

    Reading some more on the subject - https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource

    It dawned on me that an empty (but present) ingress: list actually denies all inbound connections.
    I am now assuming that the presence of only a podSelector option under ingress implies that the only allowed inbound connections are from other pods in the same (default) namespace.
    This means that, in order to allow external connections, I would have to add

      ingress:
      - from:
        - ipBlock:                  ### these two
            cidr: <IP-range>        ### lines
    

    to the NetworkPolicy.
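    Put together (keeping the existing pod rule and the port restriction), the whole ingress section I have in mind looks roughly like this, with a placeholder CIDR:

      ingress:
      - from:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder; would need to cover the external client IPs
        - podSelector: {}          # keep allowing other pods in the namespace
        ports:
        - port: 80
          protocol: TCP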

    However, this didn't work when I configured it, nor did it allow my control plane node to access the service or pod, even when I set the range to include its IP address.

    Still, perhaps this is a step in the right direction?

    Best regards,
    Martin

  • chrispokorni Posts: 2,434

    Hi @strelko2,

    Network policies are an interesting territory in Kubernetes. While their definition is rather uncomplicated, the tricky part is how they are interpreted and implemented/enforced by the cluster's CNI plugin. This means that, under different CNI plugins, the same Network policy manifest may have its rules interpreted differently, or simply ignored by the plugin. In order to find out the "whats" and "hows" of the CNI plugin, one has to read not only the Kubernetes Network policy documentation, but also the CNI plugin's own documentation. And, to make things even more interesting, CNI plugins come with their own custom Network policy objects, designed to extend beyond the Kubernetes Network policy capabilities.

    The Network policy manifest in the lab guide explicitly declares .ingress.from.podSelector to allow ingress only from other Pods (matching the port and protocol of the rule). Your observation is correct: due to the lab environment setup, the user has access to the Pod network, Service network, and Node network (not recommended for production settings). This is possible thanks to routes defined to allow traffic to travel across the three distinct network layers: nodes, services, and pods. However, a curl to a Pod IP, Service ClusterIP, or Node IP is not treated as access from another Pod. The curl request is captured and processed differently, as coming from a source not defined in the policy, and is eventually dropped. This is why the validation step includes a test Pod, running an alpine container, to access the target application protected by the Network policy; that should lead to a successful curl attempt.
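    That validation step can be sketched as follows (the test Pod name is illustrative; the Pod IP is the one from this thread, so this only runs against a live lab cluster):

      # Start a throwaway Pod in the same (default) namespace
      kubectl run testpod -it --rm --image=alpine -- sh
      # Inside the container, the request now originates from a Pod,
      # so the policy's podSelector rule matches
      apk add curl
      curl http://10.0.1.8:80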

    For your use case, what may help is installing an Ingress proxy to capture user traffic and route it internally to the Service's Endpoints, while making the policy more descriptive in terms of the targeted/protected Pods in the .spec.podSelector field and the permitted sources in .from.podSelector or .from.namespaceSelector.
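    For example, a more descriptive policy could look roughly like this (labels and names are illustrative, not from the lab):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-web-clients
      spec:
        podSelector:
          matchLabels:
            example: second         # protect only the matching Pods
        policyTypes:
        - Ingress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                purpose: frontend   # permit traffic from Pods in a labeled namespace
          ports:
          - port: 80
            protocol: TCP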

    Regards,
    -Chris
