Lab 15.1 Step 13 | Helm Install
After running `helm --debug install stable/mariad --set master.persistence.enabled=false --set slave.persistence.enabled=false` I receive the error "failed install prepare step: no available release name found".
From the logs:
kubectl -n kube-system logs tiller-deploy-58c4d6d4f7-4dwrv
[main] 2018/10/02 12:38:16 Starting Tiller v2.7.0 (tls=false)
[main] 2018/10/02 12:38:16 GRPC listening on :44134
[main] 2018/10/02 12:38:16 Probes listening on :44135
[main] 2018/10/02 12:38:16 Storage driver is ConfigMap
[main] 2018/10/02 12:38:16 Max history per release is 0
[tiller] 2018/10/02 12:43:57 preparing install for
[storage] 2018/10/02 12:43:57 getting release "olfactory-whippet.v1"
[storage/driver] 2018/10/02 12:44:27 get: failed to get "olfactory-whippet.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/olfactory-whippet.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:44:27 info: generated name olfactory-whippet is taken. Searching again.
[storage] 2018/10/02 12:44:27 getting release "sad-possum.v1"
[storage/driver] 2018/10/02 12:44:57 get: failed to get "sad-possum.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/sad-possum.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:44:57 info: generated name sad-possum is taken. Searching again.
[storage] 2018/10/02 12:44:57 getting release "agile-meerkat.v1"
[storage/driver] 2018/10/02 12:45:27 get: failed to get "agile-meerkat.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/agile-meerkat.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:45:27 info: generated name agile-meerkat is taken. Searching again.
[storage] 2018/10/02 12:45:27 getting release "ornery-toad.v1"
[storage/driver] 2018/10/02 12:45:57 get: failed to get "ornery-toad.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/ornery-toad.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:45:57 info: generated name ornery-toad is taken. Searching again.
[storage] 2018/10/02 12:45:57 getting release "punk-clownfish.v1"
[storage/driver] 2018/10/02 12:46:27 get: failed to get "punk-clownfish.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/punk-clownfish.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:46:27 info: generated name punk-clownfish is taken. Searching again.
[tiller] 2018/10/02 12:46:27 warning: No available release names found after 5 tries
[tiller] 2018/10/02 12:46:27 failed install prepare step: no available release name found
[tiller] 2018/10/02 12:49:20 preparing install for
[storage] 2018/10/02 12:49:20 getting release "kind-newt.v1"
[storage/driver] 2018/10/02 12:49:50 get: failed to get "kind-newt.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/kind-newt.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:49:50 info: generated name kind-newt is taken. Searching again.
[storage] 2018/10/02 12:49:50 getting release "reeling-penguin.v1"
[storage/driver] 2018/10/02 12:50:20 get: failed to get "reeling-penguin.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/reeling-penguin.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:50:20 info: generated name reeling-penguin is taken. Searching again.
[storage] 2018/10/02 12:50:20 getting release "guilded-grizzly.v1"
[storage/driver] 2018/10/02 12:50:50 get: failed to get "guilded-grizzly.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/guilded-grizzly.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:50:50 info: generated name guilded-grizzly is taken. Searching again.
[storage] 2018/10/02 12:50:50 getting release "solitary-whippet.v1"
[storage/driver] 2018/10/02 12:51:20 get: failed to get "solitary-whippet.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/solitary-whippet.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:51:20 info: generated name solitary-whippet is taken. Searching again.
[storage] 2018/10/02 12:51:20 getting release "bumptious-marmot.v1"
[storage/driver] 2018/10/02 12:51:50 get: failed to get "bumptious-marmot.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/bumptious-marmot.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:51:50 info: generated name bumptious-marmot is taken. Searching again.
[tiller] 2018/10/02 12:51:50 warning: No available release names found after 5 tries
[tiller] 2018/10/02 12:51:50 failed install prepare step: no available release name found
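Every failed request in the log above is a timeout dialing 10.96.0.1:443, which is worth decoding before blaming Helm itself. The sketch below reuses one URL copied from the log; the interpretation in the comments is my own reading, not something stated in this thread:

```shell
# One of the failing requests from the Tiller log above:
url='https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/olfactory-whippet.v1'

# Strip the scheme and the path to isolate host:port.
hostport=${url#https://}
hostport=${hostport%%/*}
echo "Tiller is dialing: $hostport"   # 10.96.0.1:443

# 10.96.0.1 is the ClusterIP of the built-in 'kubernetes' Service (the
# first address of the default 10.96.0.0/12 service CIDR), and port 443
# on that Service is forwarded by kube-proxy to the API server's real
# port (6443 on a kubeadm cluster). An i/o timeout here therefore points
# at in-cluster networking (CNI/kube-proxy) rather than at Helm or the
# chart being installed.
```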
Comments
I also get an error when attempting to delete the Helm chart:
helm delete tiller my-release stable/mariadb
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME=tiller,OWNER=TILLER: dial tcp 10.96.0.1:443: i/o timeout
Hi,
In your install step above, I think there is a typo: helm --debug install stable/mariad
it should be:
helm --debug install stable/mariadb
Regards,
-Chris
Thanks Chris. Same issue when corrected.
helm --debug install stable/mariadb --set master.persistence.enabled=false --set slave.persistence.enabled=false
[debug] Created tunnel using local port: '34234'
[debug] SERVER: "localhost:34234"
[debug] Original chart version: ""
[debug] Fetched stable/mariadb to /root/.helm/cache/archive/mariadb-5.0.7.tgz
[debug] CHART PATH: /root/.helm/cache/archive/mariadb-5.0.7.tgz
Error: no available release name found
Are you installing the chart as root? That may be an issue, if helm was installed and setup as another user.
root@k8s-lfs258-01:~# helm home
/root/.helm
I have not installed and used kubectl and helm as root. I'd have to try that to see if it reproduces the issue you are seeing.
Hello,
This looks like an RBAC issue. When you ran the patch command after helm init, did you get any output or errors? Ensure all the curly braces {} and single quotes are typed exactly; after pasting, make sure the single quotes were not converted to back-quotes. Here is the command to help:
kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Regards,
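For reference, the patch above assumes a tiller ServiceAccount bound to cluster-admin already exists (the describe output later in this thread shows a binding named tiller-cluster-rule). In case that step was skipped, this is roughly the setup the lab relies on, sketched from the standard Helm v2 RBAC example rather than copied from this thread:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

With those objects applied, `helm init --service-account tiller` sets the Deployment's service account at install time, which avoids needing the kubectl patch at all.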
Still no luck; this is the only task thus far that hasn't worked as prescribed.
Do I need to add the tiller service account to the helm init command?
root@k8s-lfs258-01:~# kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
root@k8s-lfs258-01:~# helm init --upgrade
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
root@k8s-lfs258-01:~# helm --debug install stable/mariadb --set master.persistence.enabled=false --set slave.persistence.enabled=false
[debug] Created tunnel using local port: '34084'
[debug] SERVER: "localhost:34084"
[debug] Original chart version: ""
[debug] Fetched stable/mariadb to /root/.helm/cache/archive/mariadb-5.0.7.tgz
[debug] CHART PATH: /root/.helm/cache/archive/mariadb-5.0.7.tgz
Error: no available release name found
root@k8s-lfs258-01:~# kubectl get clusterrolebindings | grep tiller
tiller-cluster-rule 6h
root@k8s-lfs258-01:~# kubectl describe clusterrolebindings tiller
Name: tiller-cluster-rule
Labels:
Annotations:
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount tiller kube-system
It looks like a connection issue from Helm to the kube-apiserver. If it is not RBAC, then I would suspect a problem with other authentication, or a networking misconfiguration. I have just run the steps as a regular student user, the same user I used for the rest of the labs, so I know the .kube/config file works that way. The lab works as expected, so the software hasn't broken (which can happen with dynamic projects).
This is most likely tied to a network issue, if it is not RBAC or a missing config file. For example, I see your requests going to port 443, not 6443, which is where the API server listens.
Did all the previous commands work?
Have you been able to exec into a pod and run commands?
Are you running Calico? Did you edit anything there?
What I would try next:
1) Run the lab as a non-root user, as I did for the rest of the labs.
2) Check that there are no firewalls on any of the nodes, or between the nodes, that would block traffic.
3) Were there any extra steps you did, or steps you skipped, to get here? Same OS, version, and software? Any errors before this issue?
Regards,
I found a solution on the web:
https://github.com/helm/helm/issues/3347
https://serverfault.com/questions/931061/helm-i-o-timeout-kubernetes
This problem happens when running Kubernetes on VirtualBox; when I installed Kubernetes directly on a PC, the problem did not occur.
The fix for a VirtualBox-based cluster is to replace 192.168.0.0 in calico.yaml with 172.16.0.0/16 before initializing, then run:
kubeadm init --pod-network-cidr 172.16.0.0/16
You also need to clear all the configuration in the home directory (which clears the dependencies as well), copy the changed calico.yaml to the home directory before running kubectl apply -f calico.yaml, and then join the worker machines.
That is all; it works for me!
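The CIDR edit described above can be scripted. A minimal sketch follows; note the calico.yaml fragment here is fabricated just to show the substitution end to end, while the real manifest is much larger:

```shell
# Fabricated fragment standing in for the relevant part of calico.yaml:
cat > calico.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF

# Swap Calico's default pod CIDR for one that cannot collide with the
# VirtualBox 192.168.x.x host-side networks:
sed -i 's|192.168.0.0/16|172.16.0.0/16|g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
```

On a real node this would be followed by `kubeadm init --pod-network-cidr 172.16.0.0/16` and `kubectl apply -f calico.yaml`, as described above.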
Hello,
If using VirtualBox please ensure that all the various network interfaces are set to allow all traffic. By default VirtualBox limits the traffic, which may be why you are not seeing the 192.168.0.0 traffic.
Regards,