Lab 15.1 Step 13 | Helm Install

After running "helm --debug install stable/mariad --set master.persistence.enabled=false --set slave.persistence.enabled=false" I receive the error "failed install prepare step: no available release name found".

From the logs:

kubectl -n kube-system logs tiller-deploy-58c4d6d4f7-4dwrv
[main] 2018/10/02 12:38:16 Starting Tiller v2.7.0 (tls=false)
[main] 2018/10/02 12:38:16 GRPC listening on :44134
[main] 2018/10/02 12:38:16 Probes listening on :44135
[main] 2018/10/02 12:38:16 Storage driver is ConfigMap
[main] 2018/10/02 12:38:16 Max history per release is 0
[tiller] 2018/10/02 12:43:57 preparing install for
[storage] 2018/10/02 12:43:57 getting release "olfactory-whippet.v1"
[storage/driver] 2018/10/02 12:44:27 get: failed to get "olfactory-whippet.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/olfactory-whippet.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:44:27 info: generated name olfactory-whippet is taken. Searching again.
[storage] 2018/10/02 12:44:27 getting release "sad-possum.v1"
[storage/driver] 2018/10/02 12:44:57 get: failed to get "sad-possum.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/sad-possum.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:44:57 info: generated name sad-possum is taken. Searching again.
[storage] 2018/10/02 12:44:57 getting release "agile-meerkat.v1"
[storage/driver] 2018/10/02 12:45:27 get: failed to get "agile-meerkat.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/agile-meerkat.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:45:27 info: generated name agile-meerkat is taken. Searching again.
[storage] 2018/10/02 12:45:27 getting release "ornery-toad.v1"
[storage/driver] 2018/10/02 12:45:57 get: failed to get "ornery-toad.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/ornery-toad.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:45:57 info: generated name ornery-toad is taken. Searching again.
[storage] 2018/10/02 12:45:57 getting release "punk-clownfish.v1"
[storage/driver] 2018/10/02 12:46:27 get: failed to get "punk-clownfish.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/punk-clownfish.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:46:27 info: generated name punk-clownfish is taken. Searching again.
[tiller] 2018/10/02 12:46:27 warning: No available release names found after 5 tries
[tiller] 2018/10/02 12:46:27 failed install prepare step: no available release name found
[tiller] 2018/10/02 12:49:20 preparing install for
[storage] 2018/10/02 12:49:20 getting release "kind-newt.v1"
[storage/driver] 2018/10/02 12:49:50 get: failed to get "kind-newt.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/kind-newt.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:49:50 info: generated name kind-newt is taken. Searching again.
[storage] 2018/10/02 12:49:50 getting release "reeling-penguin.v1"
[storage/driver] 2018/10/02 12:50:20 get: failed to get "reeling-penguin.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/reeling-penguin.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:50:20 info: generated name reeling-penguin is taken. Searching again.
[storage] 2018/10/02 12:50:20 getting release "guilded-grizzly.v1"
[storage/driver] 2018/10/02 12:50:50 get: failed to get "guilded-grizzly.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/guilded-grizzly.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:50:50 info: generated name guilded-grizzly is taken. Searching again.
[storage] 2018/10/02 12:50:50 getting release "solitary-whippet.v1"
[storage/driver] 2018/10/02 12:51:20 get: failed to get "solitary-whippet.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/solitary-whippet.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:51:20 info: generated name solitary-whippet is taken. Searching again.
[storage] 2018/10/02 12:51:20 getting release "bumptious-marmot.v1"
[storage/driver] 2018/10/02 12:51:50 get: failed to get "bumptious-marmot.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/bumptious-marmot.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2018/10/02 12:51:50 info: generated name bumptious-marmot is taken. Searching again.
[tiller] 2018/10/02 12:51:50 warning: No available release names found after 5 tries
[tiller] 2018/10/02 12:51:50 failed install prepare step: no available release name found

Comments

  • tlevin
    tlevin Posts: 5

    Also, I get an error when attempting to delete the Helm chart:

    helm delete tiller my-release stable/mariadb
    Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME=tiller,OWNER=TILLER: dial tcp 10.96.0.1:443: i/o timeout

  • chrispokorni
    chrispokorni Posts: 2,349
    edited October 2018

    Hi,
    In your install step above, I think there is a typo:

    helm --debug install stable/mariad

    it should be:

    helm --debug install stable/mariadb

    Regards,
    -Chris

  • tlevin
    tlevin Posts: 5

    Thanks Chris. Same issue when corrected.

    helm --debug install stable/mariadb --set master.persistence.enabled=false --set slave.persistence.enabled=false
    [debug] Created tunnel using local port: '34234'

    [debug] SERVER: "localhost:34234"

    [debug] Original chart version: ""
    [debug] Fetched stable/mariadb to /root/.helm/cache/archive/mariadb-5.0.7.tgz

    [debug] CHART PATH: /root/.helm/cache/archive/mariadb-5.0.7.tgz

    Error: no available release name found

  • chrispokorni
    chrispokorni Posts: 2,349

    Are you installing the chart as root? That may be an issue if Helm was installed and set up as another user.

  • tlevin
    tlevin Posts: 5

    root@k8s-lfs258-01:~# helm home
    /root/.helm

  • chrispokorni
    chrispokorni Posts: 2,349
    edited October 2018

    I have not installed and used kubectl and helm as root. I'd have to try that to see if it reproduces the issue you are seeing.

  • serewicz
    serewicz Posts: 1,000

    Hello,

    This looks like an RBAC issue. When you ran the patch command after helm init, did you get any output or errors? Ensure all the curly braces {} and single quotes are typed correctly. Here is the command to help; after pasting, make sure the single quotes have not been changed to back-quotes:

    kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
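
    A quick way to confirm the pieces are in place (a sketch, assuming the service account and binding names used in the lab):

    # Verify the tiller service account exists, is bound to cluster-admin,
    # and that the patched deployment actually references it.
    kubectl -n kube-system get serviceaccount tiller
    kubectl describe clusterrolebinding tiller-cluster-rule
    kubectl -n kube-system get deployment tiller-deploy \
      -o jsonpath='{.spec.template.spec.serviceAccountName}'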

    Regards,

  • tlevin
    tlevin Posts: 5

    Still no luck. This is the only task so far that hasn't worked as prescribed.

    Do I need to add the tiller service account to the helm init command?

    root@k8s-lfs258-01:~# kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    deployment.extensions/tiller-deploy patched

    root@k8s-lfs258-01:~# helm init --upgrade
    $HELM_HOME has been configured at /root/.helm.

    Tiller (the Helm server-side component) has been upgraded to the current version.
    Happy Helming!
    root@k8s-lfs258-01:~# helm --debug install stable/mariadb --set master.persistence.enabled=false --set slave.persistence.enabled=false
    [debug] Created tunnel using local port: '34084'

    [debug] SERVER: "localhost:34084"

    [debug] Original chart version: ""
    [debug] Fetched stable/mariadb to /root/.helm/cache/archive/mariadb-5.0.7.tgz

    [debug] CHART PATH: /root/.helm/cache/archive/mariadb-5.0.7.tgz

    Error: no available release name found

    root@k8s-lfs258-01:~# kubectl get clusterrolebindings | grep tiller
    tiller-cluster-rule 6h

    root@k8s-lfs258-01:~# kubectl describe clusterrolebindings tiller
    Name:         tiller-cluster-rule
    Labels:
    Annotations:
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind            Name    Namespace
      ----            ----    ---------
      ServiceAccount  tiller  kube-system

  • serewicz
    serewicz Posts: 1,000

    It looks like a connection issue from helm to the kube-apiserver. If it is not RBAC, then I would suspect another authentication issue or a networking misconfiguration. I have just run the steps as a regular student user, the same user I used for the rest of the steps, so I know the .kube/config file works that way. The lab works as expected, so the software hasn't broken (which can happen with dynamic projects).

    If it is not RBAC or a missing/incorrect config file, this is most likely tied to a network issue. For example, I see requests in your errors going to port 443, not 6443, which is where the API server listens.

    Did all the previous commands work?
    Have you been able to exec into a pod and run commands?
    Are you running Calico, and did you edit anything there?

    What I would try next:
    1) Run the lab as the regular, non-root user used for the rest of the labs.

    2) Check that there are no firewalls on any of the nodes, or between the nodes, that would block traffic (a quick in-cluster connectivity check is sketched after this list).

    3) Were there any previous steps you did extra, or skipped, to get here? Same OS, version, and software? Any errors before this issue?
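
    For point 2, a quick in-cluster connectivity check (a sketch; the pod name is arbitrary, and it assumes the curlimages/curl image can be pulled and the default 10.96.0.1 service address):

    # Run a throwaway pod and hit the API server's service IP from inside the cluster.
    # Any HTTP response (even 401/403) means the pod network path works; an i/o
    # timeout reproduces the symptom seen in the tiller logs above.
    kubectl run nettest --rm -it --restart=Never --image=curlimages/curl \
      --command -- curl -k -m 10 https://10.96.0.1:443/version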

    Regards,

  • I found a solution on the web:

    https://github.com/helm/helm/issues/3347
    https://serverfault.com/questions/931061/helm-i-o-timeout-kubernetes

    This problem happens when working with a VirtualBox-based Kubernetes cluster. When I installed Kubernetes directly on a PC, the problem did not occur.
    The fix for a VirtualBox-based cluster is to replace 192.168.0.0/16 in calico.yaml with 172.16.0.0/16 before initializing, then run:
    kubeadm init --pod-network-cidr 172.16.0.0/16

    You also need to clear the old configuration in the home directory (which clears its dependencies as well), and copy the edited calico.yaml to the home directory before running kubectl apply -f calico.yaml (the full sequence is sketched below).

    Then join the worker nodes.
    That is all; it works for me!
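
    Roughly, the sequence described above looks like this (a sketch, assuming calico.yaml sits in the home directory and the cluster is rebuilt from scratch):

    # Tear down the old cluster state and the old client configuration.
    sudo kubeadm reset
    rm -rf $HOME/.kube

    # Change Calico's pool CIDR from 192.168.0.0/16 to 172.16.0.0/16.
    sed -i 's|192.168.0.0/16|172.16.0.0/16|g' calico.yaml

    # Re-initialize the control plane with the matching pod network CIDR.
    sudo kubeadm init --pod-network-cidr 172.16.0.0/16

    # Restore kubectl access for the current user, then apply the edited manifest.
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f calico.yaml

    # Finally, re-join the worker nodes using the token printed by kubeadm init.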

  • serewicz
    serewicz Posts: 1,000
    edited December 2018

    Hello,

    If you are using VirtualBox, please ensure that all of the VM network interfaces are set to allow all traffic. By default VirtualBox restricts promiscuous-mode traffic, which may be why you are not seeing the 192.168.0.0 traffic.
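
    In the VirtualBox GUI this is the adapter's "Promiscuous Mode: Allow All" setting; from the command line it can be set roughly like this (a sketch; the VM name and adapter number are illustrative):

    # The VM must be powered off for modifyvm; on a running VM use
    # "VBoxManage controlvm <vm> nicpromisc2 allow-all" instead.
    VBoxManage modifyvm "k8s-lfs258-01" --nicpromisc2 allow-all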

    Regards,
