1- Working on LFS258 Lab 8.2: it says that in the previous exercise we deployed a LoadBalancer. However, reviewing my deployment from Lab 8.1, it shows nginx-one as ClusterIP; I used the example yaml file from the beginning of Lab 8.1. Please help me find out what I missed in this lab setup :-)
2- Regarding the same Lab 8.1: curl to the service IP or endpoint IP works on port 80 from the worker node where the service is deployed, but it's not reachable from the master node. With this yaml deployment it should be accessible from both master and worker nodes. Any tips on how to troubleshoot this would be greatly appreciated.
Could you please paste the commands you have run? As you review them, are there any places where your edits may not match the book? Without an error message, or commands with their output, it is difficult to troubleshoot.
As for access from the other node: are you sure you have removed all firewalls between the nodes? Are you running the nodes on GCE? Do you allow all traffic on all ports in the VPC?
Hello, and thank you for the quick reply.
1- I completely understand that it's impossible to troubleshoot like that :-) . I haven't hit any error with the deployment. I ran the deployment following the steps in Lab_8.1.pdf; these are the major commands:
kubectl create -f nginx-one.yaml
kubectl create ns accounting
kubectl label nodes lfs258-worker system=secondOne
kubectl -n accounting expose deployment nginx-one
My confusion is that I expected this to be a LoadBalancer service. It seems I have to explicitly specify type=LoadBalancer, either in the yaml or on the expose command; otherwise the default is ClusterIP.
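For reference, a minimal sketch of a Service manifest with the type declared explicitly (the selector label here is an assumption; it has to match the pod template labels of the nginx-one deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
  namespace: accounting
spec:
  type: LoadBalancer      # without this line the default, ClusterIP, is used
  selector:
    app: nginx-one        # assumption: must match the deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
```

On the expose command, the equivalent would be adding --type=LoadBalancer.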
2- I'm deploying in AKS. I made sure there are no firewall restrictions; everything should be wide open. Apparently it's not, so I'm going to double-check that. The error I'm getting from the master while connecting to the service that runs on the worker is a timeout:
curl: (7) Failed to connect to 10.110.243.210 port 80: Connection timed out
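For context, these are roughly the commands I would use to narrow it down (object names follow Lab 8.1; the endpoint IP is a placeholder to fill in):

```shell
# Confirm the service type and its ClusterIP
kubectl -n accounting get svc nginx-one -o wide

# Confirm the service actually has endpoints backing it
kubectl -n accounting get ep nginx-one

# Try a pod endpoint directly, bypassing the service virtual IP
# (replace <endpoint-ip> with an address from the previous command)
curl --connect-timeout 5 http://<endpoint-ip>:80
```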
Yes, without declaring the type of service to expose, the default is ClusterIP.
Are you referring to Azure Kubernetes Service? The Azure products are known to have firewall and other networking issues with Kubernetes. You may also run into issues because Microsoft runs the network: AKS is a fully managed service, not just compute resources like GCE or AWS, so you may not be able to actually control it or the other resources yourself.
My suggestion would be to use a different service, not Azure.
Thank you for the quick reply.
To be more accurate, I'm using just VMs on Azure; it's not AKS, my bad. I installed Kubernetes on regular Ubuntu VMs following the lab assignment from this training.
Declaring the resource type makes sense; I'm going to test that. In that case, I guess Lab 8.1 should be updated to explicitly point out that the service is exposed as a LoadBalancer, or maybe the example yaml should include this type.
Thanks again, I'll continue looking at the VM networking side of things then.
Even without AKS, just Azure VMs, I have heard of ongoing problems with Kubernetes and networking. I have not worked with it myself, but I have had lots of requests asking me to figure out why the same steps that work with GCE, AWS, VirtualBox, VMware, and bare metal don't work with Azure.
To avoid lengthy and possibly fruitless troubleshooting, you may want to consider using some other provider.
Thank you for the quick reply. The network portion is clear.
Regarding the LoadBalancer type in Lab 8.1: is the step specifying the exact type missing from the lab PDF, or do I have to revisit my setup?
Yes, in a previous lab, Lab 3, we deployed a LoadBalancer. In 8.1 we deploy a ClusterIP, and in 8.2 we deploy a NodePort. You seem to be seeing the expected output as the lab describes.
Thanks again. I'll go back and review the previous labs' configurations one more time.
Could you please provide more details on how to correctly enable running kubectl commands on the worker node? Should we just copy .kube/config over from the master to the worker node?
The .kube/config file is for the control plane only, so in this case only for the master node. Kubectl should be issued from the master node, and it has effect over the entire cluster: master and worker(s).
If you are using the Calico network plugin for pod networking, issues have been reported with running a Kubernetes cluster on Azure VMs with the Calico plugin.
You can find more details on the Calico documentation page.
Hi @chrispokorni, @serewicz,
I'm facing this issue:
in Lab 8.1, I can't curl to the local IP (192.xxx) from the master, but it is accessible from the worker.
Kindly give some advice.
Another thing: I cannot access nginx using the node's public IP, though I can access it using the worker's public IP.
Is anything wrong?
Similar issues have been discussed and resolved in earlier posts.
It seems that the networking between your nodes is not properly setup, and traffic to some ports may be blocked.
There are a few reasons why this happens. There may be a firewall running on your Ubuntu instances (ufw, apparmor, ...), or there may be a firewall between the nodes at the infrastructure level. The fixes are to disable any local firewall on the Ubuntu instances, and to create a custom VPC network with a custom firewall rule open to all traffic (all ports, all protocols, all sources/destinations) and add your VM instances to that VPC.
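On GCP, for example, those two fixes could be sketched roughly as follows (the names lab-vpc and lab-allow-all are made up for illustration):

```shell
# 1) Disable the local firewall on each Ubuntu node
sudo ufw disable

# 2) Create a custom VPC and an allow-all ingress firewall rule
#    ("lab-vpc" and "lab-allow-all" are hypothetical names)
gcloud compute networks create lab-vpc --subnet-mode=auto
gcloud compute firewall-rules create lab-allow-all \
  --network=lab-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=all \
  --source-ranges=0.0.0.0/0
```

The VM instances then need to be created in (or attached to) that VPC so the rule applies to them.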
Thanks for the response.
For this I'm using GCP.
I will try your suggestion to open all traffic (all ports, all protocols, all sources/destinations) and add my VM instances to that VPC. I'll get back with the result.
I implemented firewall rules to allow all traffic on all ports and disabled the firewall on the local Ubuntu instances, but I still cannot curl from the master.
Do you have any suggestion?
I made a mistake.
All is working fine now after allowing all IP ranges with all ports and all protocols.
I'm not sure I fully understand your reply to my question regarding running kubectl commands on worker nodes. My question is basically: what needs to be done in order to be able to run kubectl commands from a worker node, or for instance from a regular PC, interacting with the cluster?
Hi @sashko, in order to run kubectl from a remote workstation to manage your cluster, install kubectl on the remote workstation and copy the .kube/config file to it. If kubectl does not work after these two steps, you may need to edit the remote .kube/config file to include the master node's private or public IP, and you may also need to re-set the certs in your remote .kube/config file; these last two steps may be needed depending on your current setup.
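A rough sketch of those steps, assuming an Ubuntu workstation and that "master" is an SSH alias for the control plane node (both are assumptions for illustration):

```shell
# Install kubectl on the remote workstation (one of several documented methods)
sudo snap install kubectl --classic

# Copy the kubeconfig from the master node
mkdir -p ~/.kube
scp master:~/.kube/config ~/.kube/config

# Inspect the server entry; if it points at an IP this workstation
# cannot reach, change it to the master's reachable (public) IP
kubectl config view --minify

# Verify the connection
kubectl get nodes
```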
Thank you Chris! Really appreciate your help with this.