Ex. 6.5 Testing the policy
Hello
I cannot get the selective ingress from #6 onwards to work.
~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
230: eth0@if231: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue qlen 1000
    link/ether 72:1b:d7:c8:cb:ac brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.209/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::701b:d7ff:fec8:cbac/64 scope link
       valid_lft forever preferred_lft forever
The IP is 10.0.1.209/32, so I'm using 10.0.1.0/32 in allclosed.yaml (I also tested other variants like 10.0.0.0/32 and 10.0.0.0/16, which did not work).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.1.0/32
#  - Egress
Curl and ping are both not working for this IP address. Does anyone have an idea why that could be?
Thanks.
Comments
-
Hi @ghilknov,
The Cilium network plugin manages the 10.0.0.0/8 network by default. You can extract this from the cilium-config ConfigMap:
kubectl -n kube-system get cm cilium-config -oyaml | grep cluster-pool-ipv4-cidr:
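On a default installation following the lab guide, that should print something close to the following (the exact value can differ on your cluster, so check your own output):

  cluster-pool-ipv4-cidr: 10.0.0.0/8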
The network policy can either whitelist the entire pod network cidr:
ingress:
- from:
  - ipBlock:
      cidr: 10.0.0.0/8
or it can whitelist only the source IP of your curl command, which should be the cilium_host interface IP of the node where curl is being run, most likely the CP node if closely following the lab guide (run ip a on your CP node to locate the cilium_host interface IP, most likely a 10.0.0.x/32 IP):

ingress:
- from:
  - ipBlock:
      cidr: <cilium_host IP>/32
Regards,
-Chris
-
Hi Chris
Thanks for your quick answer.
Unfortunately, using either 10.0.0.0/8 or 10.0.0.0/32 does not work. The curl still does not get through to 10.0.1.88. If I delete the NetworkPolicy then it works, so it is not a general problem. Not sure what to do.
I just tried to allow only the ClusterIP, but that does not work either.
-
Hi @ghilknov,
I was able to reproduce this issue. I observed the same behavior, where the policy does not allow ingress traffic based on the defined rules. It allows all ingress traffic from cidr: 0.0.0.0/0; however, this is not the solution we are trying to implement. Removing the policy also enables all ingress traffic.
This was tried with both custom and default installation methods of the Cilium CNI plugin.
I will research further for a solution.
Regards,
-Chris
-
Hi Chris
Good to know it is not only me. Also, thanks for looking into it.
Regards, ghilknov
-
Hi, it seems I have the same issue with the 192.168.0.0/16 network. My output for the command provided is:
kube@cp ~/app2 $ kubectl -n kube-system get cm cilium-config -o yaml | grep cidr
  cluster-pool-ipv4-cidr: 192.168.0.0/16
  vtep-cidr: ""
My allclosed.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.0.0/16
The output from ip a in my container:
~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
139: eth0@if140: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether be:f9:02:c6:34:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.251/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bcf9:2ff:fec6:34f9/64 scope link
       valid_lft forever preferred_lft forever
~ $
This is the actual output of trying to connect with curl:
kube@cp ~/app2 $ curl 192.168.1.251:80
curl: (28) Failed to connect to 192.168.1.251 port 80 after 130104 ms: Couldn't connect to server
It won't even work with 0.0.0.0/0 in allclosed.yaml
-
Hi @sergiotarxz,
With the introduction of the Cilium CNI plugin, this exercise no longer works as it did with Calico in earlier releases of the lab guide. Calico would enforce network policy rules on the Pod network ipBlock, but Cilium does not.
A workaround would be to use a podSelector or namespaceSelector instead of ipBlock to test the policy. This also implies that a client Pod be created to match the policy rule; a sketch of this follows below.
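A minimal sketch of what that could look like (the role: client label and the test-client Pod name are only placeholders for this example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: client
---
# A client Pod matching the rule, to run the test traffic from:
apiVersion: v1
kind: Pod
metadata:
  name: test-client
  labels:
    role: client
spec:
  containers:
  - name: client
    image: busybox
    command: ["sleep", "3600"]

With both applied, kubectl exec test-client -- wget -qO- <pod IP> should be allowed, while the same request from a Pod without the role: client label stays blocked.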
Regards,
-Chris
-
I'm experiencing the same problems with Cilium. Although the manual was updated in February, exercise 6.5 still does not succeed. I even tried to set up a Cilium-specific network policy as described in https://docs.cilium.io/en/latest/security/policy/language/, but I keep failing to reach the point where curl works and ping does not.
Now I'm wondering what happens on the exam: will I be able to understand how network policies work?
-
Hi @marksmit,
Have you attempted curl and ping from both nodes? Notice any differences in behavior?
Regards,
-Chris
-
It's getting more and more confusing.
- On the control plane everything is blocked with the policy.
- On the worker node, ping and curl succeed when I test with the pod's IP address. Nothing gets returned when I try curl with the control plane's IP address and the high port.
Bottom line: I have no clear idea how to use network policies as I cannot get them to work in my system.
-
I repeated 6.5 completely and started with the state described in 6.4.
When I get to 6.5, point 9, the ping succeeds as described, but curl fails with every address I try, whereas I expect curl to succeed. Am I mistaken?
-
Hi @marksmit,
From what you are describing, the policy works as expected. It blocks traffic from other nodes and pods while it allows traffic from the node hosting the pod itself.
You can find out more about the network policy from the Kubernetes official documentation.
Regards,
-Chris
-
Thank you. I see, that makes sense.
The next problem is point 10: curl keeps failing. When I remove the lines
- from:
- podSelector: {}
curl starts to work. How can that be explained?
-
Hi @marksmit,
Can you attach the allclosed.yaml file? In both forms: with ingress:, - from:, and - podSelector: {}, and without.
Regards,
-Chris
-
- (attachment-only comment from @marksmit with allclosed1.yaml and allclosed2.yaml; files not shown)
Hi @marksmit,
Thank you for the two attachments.
The difference in behavior can be explained as such:
In the case of allclosed1.yaml, where the spec is defined as:

spec:
  podSelector: {}
  policyTypes:
  - Ingress
#  - Egress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 80
      protocol: TCP
The rule allows inbound TCP traffic to port 80 (that is, curl or HTTP requests) only if it originates from Pods. The - podSelector: {} under the - from: heading means that only Pods are allowed as sources, and the empty curly brackets {} indicate that all Pods are allowed to be traffic sources (no Pods are restricted).
If you describe the network policy resource defined by the spec of allclosed1.yaml, you see the following:

<redacted>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 80/TCP
    From:
      PodSelector: <none>
  Not affecting egress traffic
  Policy Types: Ingress
In the case of allclosed2.yaml, where the spec is defined as:

spec:
  podSelector: {}
  policyTypes:
  - Ingress
#  - Egress
  ingress:
  - ports:
    - port: 80
      protocol: TCP
The rule allows inbound TCP traffic to port 80 (that is, curl or HTTP requests), originating from any resource (no restrictions on source type). If you describe the network policy resource defined by the spec of allclosed2.yaml, you see the following:
<redacted>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 80/TCP
    From: <any> (traffic not restricted by source)
  Not affecting egress traffic
  Policy Types: Ingress
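For reference, the two describe outputs above come from inspecting the policy object directly; assuming the policy keeps the name deny-default from the earlier examples, that would be:

kubectl describe networkpolicy deny-default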
While initiating curl requests from various hosts (your workstation, cp node, and worker node) can be used as test cases against the policy, the most meaningful test is performed by initiating the curl and ping commands from another Pod, such as the test Pod introduced towards the end of lab exercise 6.5 (a sketch of such a test follows below).

I am hoping this helps to clarify the policy's behavior.
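Assuming the test Pod's image includes curl and ping, and using 10.0.1.88 from earlier in this thread purely as a placeholder target IP, the check could look like:

kubectl exec -it test -- curl --connect-timeout 5 http://10.0.1.88:80
kubectl exec -it test -- ping -c 3 10.0.1.88

With allclosed1.yaml or allclosed2.yaml in place, the curl should succeed while the ping should time out, since only TCP port 80 is whitelisted.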
Regards,
-Chris
-
Thank you so much for your comprehensive answer. I finally understand what is going on and my system reacts as expected.
I did not realize that 'describe' could be used to see what the policy is doing. It is convenient to use it for this.