Problem with k8sMaster.sh script
Hi all,
I'm trying to set up an Ubuntu 16.04 VM (in VirtualBox) using the k8sMaster.sh script, as described in the LAB 2.1 document.
I always get: unable to recognize "calico.yaml": Get https://10.0.2.15:6443/api?timeout=32s: dial tcp 10.0.2.15:6443: connect: connection refused
The firewall is disabled, and when I check port 6443 I don't see anything listening (I would expect the Kubernetes API server there).
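In case it helps others hitting the same wall, "connection refused" on 6443 usually means kubeadm never brought the API server up. A few diagnostics I tried (these are the standard service and container names, not anything specific to the lab script):

```shell
# Is anything listening on the API server port?
sudo ss -tlnp | grep 6443

# If kubeadm init failed or the kubelet is crash-looping, its log usually says why:
sudo journalctl -u kubelet --no-pager | tail -n 20

# Did the control-plane containers start at all?
sudo docker ps | grep kube-apiserver
```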
Can someone help me?
Thanks
Fabrizio
Comments
Hello,
What is the command you tried in order to receive this error? Are you able to connect from the VirtualBox instance to the outside world otherwise using both IP and hostname? I notice the request is going to a 10.0.2.15 IP, which would be internal and not have the calico.yaml file available.
Regards,
Hello,
To get this error I installed Ubuntu 16.04 on VirtualBox. Then I connected to the machine and launched k8sMaster.sh. I took a look at the script; what it does is quite simple: install Docker and then kubeadm, kubectl, and kubelet. It installed everything, but then I got the error described above.
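Sketched from memory rather than copied from k8sMaster.sh itself, my understanding of the script's flow is roughly this (repository setup details omitted; the CIDR is the one used later in this thread):

```shell
# Rough outline of the master setup flow (not the literal script contents):
sudo apt-get update
sudo apt-get install -y docker.io                    # container runtime
# ...add the Kubernetes apt repository and key here...
sudo apt-get install -y kubeadm kubelet kubectl      # cluster tooling
sudo kubeadm init --pod-network-cidr 192.168.0.0/16  # bootstrap the control plane
kubectl apply -f calico.yaml                         # pod networking
```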
Thanks
Fabrizio
Getting the vbox network settings right, so the nodes can talk to each other and the internet, may be a bit tricky. Also make sure each vbox VM instance is open to all traffic: from all sources, to all ports, all protocols. You already mentioned that your Ubuntu firewall is disabled, so you should be good there.
Regards,
-Chris
Yes, I just figured out that it could be a time-consuming task. Since I'm not interested in troubleshooting VirtualBox, I've signed up for GCE with the $300 free credit. What I'm experiencing now are some problems because the scripts are made for a self-deployed environment. For example, I think you can't deploy the calico.yaml as-is on GCE ...
Calico works fine in GCE. We are not using Google's container/Kubernetes engines, so we can deploy any networking schema we want on the GCE VM instances. Same rules from before though, Ubuntu firewall disabled/inactive, Google project level or VPC level firewall open to all traffic (all ports, protocols, sources).
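As a concrete sketch of an "open to all traffic" rule like the one described above (the rule name is a placeholder, and "default" assumes the default VPC network; adjust both to your project):

```shell
# Hypothetical rule name; opens all ports/protocols from all sources.
gcloud compute firewall-rules create allow-all-k8s \
  --network default \
  --allow tcp,udp,icmp \
  --source-ranges 0.0.0.0/0
```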
Regards,
-Chris
Thanks. I tried to deploy Calico on GCE; it works, but what I don't understand is how it works in detail.
I'm trying to set up LAB 2 of the course; in my cluster I now have this condition:

NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
default       basicpod                                                1/1     Running   0          8m
kube-system   calico-kube-controllers-764d76f647-5fnj7                1/1     Running   0          9m
kube-system   calico-node-vertical-autoscaler-8b959b949-bbgdv         1/1     Running   0          1h
kube-system   calico-typha-5c9fbf65f8-v8mll                           1/1     Running   0          1h
kube-system   calico-typha-horizontal-autoscaler-5545fbd5d6-b9x7l     1/1     Running   0          1h
kube-system   calico-typha-vertical-autoscaler-54d8f88b84-77dsg       1/1     Running   0          1h
kube-system   event-exporter-v0.2.3-54f94754f4-2zgrw                  2/2     Running   0          2d
kube-system   fluentd-gcp-scaler-6d7bbc67c5-z8nh4                     1/1     Running   0          2d
kube-system   fluentd-gcp-v3.1.0-g57md                                2/2     Running   0          1d
kube-system   fluentd-gcp-v3.1.0-l2gcm                                2/2     Running   0          2d
kube-system   heapster-v1.5.3-5f9cfd5669-7mnfc                        3/3     Running   0          1d
kube-system   kube-dns-788979dc8f-9m9pt                               4/4     Running   0          2d
kube-system   kube-dns-788979dc8f-j9htn                               4/4     Running   0          1d
kube-system   kube-dns-autoscaler-79b4b844b9-8czlt                    1/1     Running   0          2d
kube-system   kube-proxy-gke-kube-training-power-pool-cd2516ab-8h75   1/1     Running   0          2d
kube-system   kube-proxy-gke-kube-training-power-pool-cd2516ab-fvt0   1/1     Running   0          1d
kube-system   kubernetes-dashboard-598d75cb96-jjzfv                   1/1     Running   0          2d
kube-system   l7-default-backend-5d5b9874d5-n7sbr                     1/1     Running   0          2d
kube-system   metrics-server-v0.2.1-7486f5bd67-cqjpj                  2/2     Running   0          2d
Now, given that I deployed Calico, I would have expected to see the basicpod exposed on an IP address reachable with a curl from the cluster master shell. If I run kubectl get pod -o wide I get this:

NAME       READY   STATUS    RESTARTS   AGE   IP           NODE
basicpod   1/1     Running   0          14m   10.40.1.13   gke-kube-training-power-pool-cd2516ab-8h75

And with a curl http://10.40.1.13 I'm not able to connect to the pod, and it seems quite reasonable to me. I'm missing something, I'm sure, but I can't figure out what.

Thanks
Fabrizio
Hi Fabrizio,
Are you using GKE nodes (Google Kubernetes Engine)? I am assuming this based on your outputs. Using Google's Kubernetes nodes may produce different results/outputs from the ones presented in the Labs. The Labs have been completed on GCE (Google Compute Engine) VM instances running Ubuntu 16.04 LTS, with Docker, Kubernetes, Calico, etc. installed from scratch via the k8sMaster.sh script.
Regards,
-Chris
Hi Chris,
thank you for your answer. Yes, I'm using GKE; my goal in this course is to gain knowledge of Kubernetes from the Developer standpoint, so I'm not that interested in spending time setting up a cluster myself (hence the use of GKE). Do you think it's better to start from scratch for this course?

Thanks
Fabrizio
Hi Fabrizio,
Installation and cluster setup steps have been scripted for that very reason: this course is intended for Developers, and they should not be wasting time setting up a cluster. The course focuses on vendor-neutral Kubernetes, which is why we use the community-backed installation process; it can be performed on any cloud provider (GCP, AWS, Azure) or locally (VirtualBox, VMware). Clearly each environment requires some tweaking, but generally the vendor-neutral installation process will be similar.

Kubernetes changes rapidly and, unfortunately, what works today may not necessarily work tomorrow. With that in mind, the labs as they are presented, with commands and outputs, have been tested in a vendor-neutral configuration on Kubernetes 1.12.1. Any major change from this setup, such as a vendor's flavor of Kubernetes, causes other changes down the line: commands need to be tweaked to match the vendor's environment, and outputs will be different. There is no specific reason why GCE VM instances were used; we could have used AWS EC2 instances or VirtualBox VMs, or any environment that provides clean and simple VM instances on which to install everything from scratch: Ubuntu OS, Docker, Kubernetes, etc. Once properly set up with Ubuntu 16.04 LTS and all firewalls open, the install scripts "should" work without any issues. "Should" because, in the case of Kubernetes 1.12.1, once 1.13.0 was released some of its code affected 1.12.1 as well.
After the master script and second script have completed, the "kubeadm join" step on the worker/second node causes a permission error, and the worker node will not join the cluster. While it may be easy to just install 1.13 instead, the labs have not been tested with 1.13.
I posted a solution on how to fix the permission issue during "kubeadm join" on 1.12.1. It requires some tasks to be completed manually - not Developer-friendly, but most definitely doable.
Here is the solution to fix the permission issue:
https://github.com/chris-pok/k8s-1.12.1.git

Good luck!
-Chris
The kubeadm init command now checks for and uses a new version of the software, even though we asked for and installed an older version. Once 1.13 was available, the master node was using it for the control plane, but none of the installed software matched. This causes the join to fail. The fix, which I will put into the course material soon, is to declare which version of the software init should use:
kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
The use of --kubernetes-version 1.12.1 wasn't necessary when the most recent release and the software in use were the same.
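If it helps, the version can be pinned at both levels; a sketch, assuming the standard Debian package naming for that release:

```shell
# Install matching package versions (the -00 suffix follows the Kubernetes deb naming):
sudo apt-get install -y kubeadm=1.12.1-00 kubelet=1.12.1-00 kubectl=1.12.1-00
# Then pin the control plane to the same release:
sudo kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr 192.168.0.0/16
```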
Regards,
Thank you both for the answers. What still isn't clear to me is whether using GKE fits this course, or whether it's better to start by deploying the cluster with the provided scripts (even inside Google Cloud, but using VMs).
Thanks
I ended up creating two VMs on Google Cloud (Ubuntu 16.04) and setting up the cluster as described in LAB 2.1.
All works just fine, but following the lab steps I'm not able to establish a connection with the basic pod from the master. In the lab, at some point the basicpod service is deployed with a containerPort set to 80, which exposes the nginx webserver. To test it I did a kubectl get pod -o wide to get the right IP address, and then curl http://{ip_address} to read the exposed data. It doesn't work, but if I do the same connecting from the minion it works. This suggests to me that I have some problem with the cluster network configuration but, as far as I understood, this is managed by the Calico project, which is correctly deployed.
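For reference, this is roughly how I ran the test ({ip_address} is a placeholder for whatever kubectl reports in the IP column):

```shell
# On the master: find the pod IP, then try to reach nginx on port 80.
kubectl get pod -o wide                        # note the IP column
curl --connect-timeout 5 http://{ip_address}   # from the master: fails
# The same curl, run while logged into the minion, returns the nginx page.
```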
Hi, if your pod runs on the minion and you cannot curl to it from the master, there may be a networking issue between your nodes.
@chrispokorni said:
"Hi, if your pod runs on the minion and you cannot curl to it from the master, there may be a networking issue between your nodes."

Yes, that's what I wrote. I stopped the firewall on both master and node, and Calico is installed, so I'm asking for suggestions on what to check next.

Thanks
Fabrizio
@guglielmino
Calico only helps with pod networking, not with node networking. Node networking is handled by Google Cloud.
Before creating your GCE nodes, did you set up a VPC? Do you have a firewall rule created to allow all traffic: to all ports, all protocols, from all sources? Is this firewall rule associated with the VPC? Are your nodes on the VPC network?
By default, firewall rules in Google Cloud do not allow all traffic, blocking some which may be critical for Kubernetes' functionality.
Regards,
-Chris
@chrispokorni said:
@guglielmino
Calico only helps with pod networking, not with node networking. Node networking is handled by Google Cloud.
Before creating your GCE nodes, did you set up a VPC? Do you have a firewall rule created to allow all traffic: to all ports, all protocols, from all sources? Is this firewall rule associated with the VPC? Are your nodes on the VPC network?
By default, firewall rules in Google Cloud do not allow all traffic, blocking some which may be critical for Kubernetes' functionality.
Regards,
-Chris

Problem solved! The problem was that I had disabled the firewall from inside the VM, but I also needed to take care of the Google Cloud firewall; it was blocking some ports, hence my problems.
Thank you
Fabrizio