Welcome to the Linux Foundation Forum!

Lab 3.1 Error Applying calico.yaml - Help!


This is the second time I have rebuilt according to the lab instructions.
The kubeadm init output looks fine and says it was successful, but there is no service listening on port 6443.

export KUBECONFIG=/etc/kubernetes/admin.conf
root@cp:~# kubectl get nodes
The connection to the server k8scp:6443 was refused - did you specify the right host or port?
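
To narrow this down, a few checks on the cp node should show whether the API server container is actually running and what kubelet is logging (this assumes containerd and crictl are set up as in the lab):

sudo ss -tlnp | grep 6443
sudo systemctl status kubelet
sudo crictl ps -a | grep kube-apiserver
sudo journalctl -u kubelet --no-pager | tail -50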

$ kubectl apply -f calico.yaml
error when retrieving current configuration of:
Resource: "policy/v1, Resource=poddisruptionbudgets", GroupVersionKind: "policy/v1, Kind=PodDisruptionBudget"
Name: "calico-kube-controllers", Namespace: "kube-system"
from server for: "calico.yaml": Get "https://k8scp:6443/apis/policy/v1/namespaces/kube-system/poddisruptionbudgets/calico-kube-controllers": stream error: stream ID 67; INTERNAL_ERROR; received from peer
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "calico-kube-controllers", Namespace: ""
from server for: "calico.yaml": Get "https://k8scp:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/calico-kube-controllers": dial tcp 10.2.0.8:6443: connect: connection refused
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "calico-node", Namespace: ""
from server for: "calico.yaml": Get "https://k8scp:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/calico-node": dial tcp 10.2.0.8:6443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "calico-node", Namespace: "kube-system"
from server for: "calico.yaml": Get "https://k8scp:6443/apis/apps/v1/namespaces/kube-system/daemonsets/calico-node": dial tcp 10.2.0.8:6443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "calico-kube-controllers", Namespace: "kube-system"
from server for: "calico.yaml": Get "https://k8scp:6443/apis/apps/v1/namespaces/kube-system/deployments/calico-kube-controllers": dial tcp 10.2.0.8:6443: connect: connection refused
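
The errors alternate between a stream reset and connection refused, which suggests the API server is starting and then dying rather than the hostname being wrong. A quick sanity check (assuming k8scp is the /etc/hosts alias from the lab, pointing at the cp node's private IP 10.2.0.8) would be:

grep k8scp /etc/hosts
curl -k https://k8scp:6443/version

When the API server is up, the curl returns a small JSON document with the server version; connection refused means nothing is listening on 6443.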

Comments

  • chrispokorni

    Hi @dicalleson,

    Please pay close attention to the lab guide. root should not be allowed to run kubectl commands. The .kube/config file should be stored in the regular user's home directory.
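
    As a reminder, the steps printed at the end of kubeadm init set this up for the regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config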

    The connection refused errors are typically caused by firewalls, either at the infrastructure level or at the OS level. Please ensure all firewalls are disabled and all traffic is allowed to and from your VMs: from all sources, to all destinations, on all ports, and over all protocols.

    Regards,
    -Chris

  • dicalleson

    I followed the video closely when creating the VPC, the firewall rules, and the VMs.
    The VMs are able to get packages, I am able to ssh to both of them, and they can ping each other.

    Here is the firewall rule. Please tell me if this is not correct.
    Name: lfs258fwrule
    Network: lfs258class
    Priority: 1000
    Direction: Ingress
    Action on match: Allow
    Source filters: IP ranges 0.0.0.0/0
    Protocols and ports: all
    Logs: Off
    Enforcement: Enabled
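
    For reference, I believe the equivalent gcloud command for a rule with these settings (using the names shown above) would be something like:

    gcloud compute firewall-rules create lfs258fwrule \
        --network lfs258class --direction INGRESS \
        --allow all --source-ranges 0.0.0.0/0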

  • dicalleson

    Also, it is strange, but the processes on the cp node appear to be starting and restarting a lot. I was able to run kubectl get nodes and got a response, then a little later it did not work. I also tried running kubectl apply -f calico.yaml and it completed successfully... but then a little later I was not able to run kubectl and got connection refused. journalctl -f shows:

    Feb 27 22:44:40 cp kubelet[4111]: E0227 22:44:40.516701 4111 kuberuntime_manager.go:905] init container &Container{Name:install-cni,Image:docker.io/calico/cni:v3.25.0,Command:[/opt/cni/bin/install],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CNI_CONF_NAME,Value:10-calico.conflist,ValueFrom:nil,},EnvVar{Name:CNI_NETWORK_CONFIG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:cni_network_config,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CNI_MTU,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:calico-config,},Key:veth_mtu,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:SLEEP,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-bin-dir,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-phdll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod calico-node-kz2bs_kube-system(012de78d-46d1-4e8f-8b70-f376d3c156be): CreateContainerConfigError: failed to sync configmap cache: timed out waiting for the condition
    Feb 27 22:44:40 cp kubelet[4111]: E0227 22:44:40.517132 4111 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with CreateContainerConfigError: \"failed to sync configmap cache: timed out waiting for the condition\"" pod="kube-system/calico-node-kz2bs" podUID=012de78d-46d1-4e8f-8b70-f376d3c156be
    Feb 27 22:44:40 cp kubelet[4111]: I0227 22:44:40.549223 4111 scope.go:110] "RemoveContainer" containerID="407fedd9a93cb69b4873193590974a68346f579f0a49fa12612c37f388d2083d"
    Feb 27 22:44:40 cp kubelet[4111]: E0227 22:44:40.549616 4111 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-nswgk_kube-system(f036b8ec-fc8c-49ec-87e2-7e95f407e6c8)\"" pod="kube-system/kube-proxy-nswgk" podUID=f036b8ec-fc8c-49ec-87e2-7e95f407e6c8
    Feb 27 22:44:41 cp kubelet[4111]: I0227 22:44:41.660869 4111 scope.go:110] "RemoveContainer" containerID="b10dcc6a99ad7189ce3b1fc04b8c634326d10e1b87f6c07f8859377f93f3f339"
    Feb 27 22:44:41 cp kubelet[4111]: E0227 22:44:41.661675 4111 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-cp_kube-system(5ac93d50ac28d4985e47d6aa5d78e9dd)\"" pod="kube-system/kube-controller-manager-cp" podUID=5ac93d50ac28d4985e47d6aa5d78e9dd
    Feb 27 22:44:42 cp kubelet[4111]: I0227 22:44:42.583871 4111 scope.go:110] "RemoveContainer" containerID="b10dcc6a99ad7189ce3b1fc04b8c634326d10e1b87f6c07f8859377f93f3f339"
    Feb 27 22:44:42 cp kubelet[4111]: E0227 22:44:42.584673 4111 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-cp_kube-system(5ac93d50ac28d4985e47d6aa5d78e9dd)\"" pod="kube-system/kube-controller-manager-cp" podUID=5ac93d50ac28d4985e47d6aa5d78e9dd
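
    To see why kube-controller-manager and kube-proxy keep restarting, the container logs themselves are probably more useful than the kubelet log. Something like this should work (assuming containerd and crictl as in the lab; <container-id> is a placeholder for an ID taken from the first command):

    sudo crictl ps -a | grep -E 'kube-controller-manager|kube-proxy'
    sudo crictl logs <container-id>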

  • dicalleson

    Could the problem be my subnet? I created the subnet as 10.2.0.0/16.
    Is that a problem?
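
    One thing worth checking (my assumption, not something from the lab guide): the pod network CIDR used by Calico must not overlap the VPC subnet. I believe the default pool in this calico.yaml is 192.168.0.0/16, so a 10.2.0.0/16 VPC subnet should not conflict, but it can be confirmed with:

    grep -A 1 CALICO_IPV4POOL_CIDR calico.yaml
    kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i subnet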

  • dicalleson

    I rebuilt again and used the ubuntu-2004-focal image instead of the jammy image.
    I am fairly certain that this was the issue.
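
    For anyone who wants to stay on the jammy image: a common cause of control-plane containers restarting like this on Ubuntu 22.04 (which uses cgroup v2) is containerd not using the systemd cgroup driver while kubelet does. I have not confirmed that was the cause here, but the usual fix is to enable SystemdCgroup in containerd's config and restart it:

    sudo containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd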
