
haproxy not accessible

Hi,
I made changes to the haproxy config and restarted it.
I also changed /etc/hosts on the master so that k8smaster points to the haproxy IP.
Stats are displayed correctly on http://k8s-ha-proxy:9999/stats.
[screenshot: haproxy /stats]

Running kubectl get nodes hangs for some time and then returns: Unable to connect to the server: net/http: TLS handshake timeout.
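A quick way to test the handshake directly, assuming openssl is installed and k8smaster currently resolves to the haproxy IP:

    # attempt a TLS handshake against the API port through the proxy
    openssl s_client -connect k8smaster:6443 </dev/null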

I checked the troubleshooting ideas proposed in https://forum.linuxfoundation.org/discussion/857393/lfs258-lab-16-2-6.

When I switch the k8smaster entry in /etc/hosts on the master back to its previous value, the problem is not reproduced.

In the installation lab, did you build the cluster using the k8smaster alias as the lab suggests, or using an IP address?

I think I used the k8smaster alias. How can I verify this?
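One way to verify, assuming kubectl reads the default kubeconfig at ~/.kube/config, is to check which API server endpoint it is configured to use; if the cluster was built with the alias, this shows k8smaster rather than an IP:

    grep server ~/.kube/config
    # expected output if the alias was used (hypothetical): server: https://k8smaster:6443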

I have attached the outputs of kubectl cluster-info and kubectl get nodes -v9.

Also haproxy config:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend proxynode ## Added
    bind *:80
    bind *:6443
    stats uri /proxystats
    default_backend k8sServers

backend k8sServers
    balance roundrobin
    server k8smaster 52.47.206.118:6553
    #server lfs458-SecondMaster 15.237.159.246:6553
    #server lfs458-ThirdMaster 15.237.159.246:6553

listen stats
    bind :9999
    mode http
    stats enable
    stats hide-version
    stats uri /stats

Thank you,
Igor

Comments

  • Hi @iracic,

    It seems that your haproxy.cfg is slightly incomplete. I would recommend revisiting the file and comparing it against the file of the same name from the SOLUTIONS tarball to ensure that the syntax of all properties and options is correct.

    Are you using "k8smaster" as the hostname of your first/original master node AND as an alias of the proxy node? If that is the case, this configuration introduces confusion into the node name resolution process in your cluster. The lab exercise uses "k8smaster" only as an alias, not as a hostname. The hostname of the first/original master node is "master", but you can use something more meaningful, such as "master1" or "firstmaster", to avoid confusion...
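    To tell the two apart on a node (a quick check, assuming standard Linux tooling): hostnamectl prints the node's actual hostname, while getent shows what a given alias resolves to.

        hostnamectl --static     # the node's real hostname
        getent hosts k8smaster   # what the k8smaster alias resolves to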

    Regards,
    -Chris

  • I just noticed that the title is wrong and I cannot edit it. The issue is that kubectl get nodes "freezes" after changing the master's /etc/hosts to point to haproxy.

  • Right...

    To further clarify the issue:

    If the HAProxy server is not properly configured, you lose connectivity between your master control plane and the worker nodes, and kubectl requests will no longer reach the API server of your active master node.
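    A simple probe of whether requests still reach the API server, assuming curl is available and k8smaster resolves to the proxy (the /version endpoint is served by kube-apiserver):

        curl -k https://k8smaster:6443/version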

    Regards,
    -Chris

  • I compared the solutions config file and made the corresponding changes to my haproxy config. After restarting, the haproxy log shows:

    Dec 23 15:46:12 ip-172-31-42-50 haproxy[1013]: [WARNING] 357/154612 (1013) : Server k8sServers/master1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 a>
    Dec 23 15:46:12 ip-172-31-42-50 haproxy[1013]: [ALERT] 357/154612 (1013) : backend 'k8sServers' has no server available!
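    A quick check from the proxy node of whether the master's API port accepts connections at all, assuming netcat is installed (<master-private-IP> is a placeholder):

        nc -zv <master-private-IP> 6443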

    On the master, kubectl get nodes now returns:

    Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

    And the haproxy /stats page now shows a problem with master access. [screenshot: haproxy /stats]

    The relevant line from the solutions haproxy config:

    server master1 10.128.0.24:6443 check #<-- Edit with your IP addresses.

    Is master1 here just an alias, and should the IP be the master's external AWS IP address?

    The changed haproxy config file is attached.

    Thank you,
    Igor

  • Hi @iracic,

    The haproxy.cfg file needs to include the backend servers with the following syntax:

    server <hostname> <private-IP>:<secure-API-port> check

    If my three master servers have the following hostnames: k8s-master1, k8s-master2, and k8s-master3, with their respective private IP addresses 10.128.0.10, 10.128.0.11, and 10.128.0.12, while assuming that the API server is exposed on its default secure port 6443, then one of the backend entries would look like this:

    server k8s-master1 10.128.0.10:6443 check
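    For completeness, a sketch of the full backend stanza under those same assumptions (the hostnames and IPs above are hypothetical):

        backend k8sServers
            balance roundrobin
            server k8s-master1 10.128.0.10:6443 check
            server k8s-master2 10.128.0.11:6443 check
            server k8s-master3 10.128.0.12:6443 check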

    At this point, however, in order for traffic to be routed through the haproxy server, all the /etc/hosts files (on all masters and workers) need to include the k8smaster alias for the haproxy server's private IP address.
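    For example, assuming a hypothetical haproxy private IP of 10.128.0.5, each node's /etc/hosts would contain:

        10.128.0.5    k8smaster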

    Regards,
    -Chris
