RHEL 6.2 cluster

Hello Experts,

Please help me with the exact steps for configuring a two-node cluster on RHEL 6.2.

I failed to configure even the simplest cluster with the steps below:

1- install RHEL 6.2 64-bit on both nodes

2- add entries to the hosts file of each server (one IP on the public network and one on the private network):

x.x.x.x node1-pub

z.z.z.z node2-pub

y.y.y.y node1-pvt

t.t.t.t node2-pvt

3- yum install ricci (on both nodes -- see the note after step 7)

4- yum install luci (on one node)

5- yum groupinstall "High Availability" (on both nodes)

6- from a browser, open https://node1-pub:8084 (log in and create a new cluster);

give the cluster a name; the node names are node1-pvt and node2-pvt

7- the cluster comes up with two nodes; so far so good.
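
Note for anyone retracing step 6: luci can only add a node if ricci on that node is running and has a password set, so something like this is needed on both nodes first (commands from memory; adjust to your setup):

passwd ricci          # set the password luci prompts for when adding the node
service ricci start   # luci talks to ricci over the network
chkconfig ricci on    # keep ricci running across reboots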

========================================================

Now:

8- configure a failover domain and select both nodes.

9- configure an IP resource with an address in the public network's range.

10- configure a service group and assign the failover domain and the IP resource to it.

11- the IP resource doesn't start.
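
For reference, the service state can be watched and poked by hand while this happens (standard rgmanager tools; vip is the service name used below):

clustat             # shows node membership and service states
clusvcadm -e vip    # try to enable the service by hand
clusvcadm -d vip    # disable it again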

==========

Nov 24 02:59:37 rgmanager start on ip "10.10.4.223/255.255.255.0" returned 1 (generic error)

Nov 24 02:59:37 rgmanager #68: Failed to start service:vip; return value: 1

Nov 24 02:59:37 rgmanager Stopping service service:vip

Nov 24 02:59:37 rgmanager [ip] 10.10.4.223/255.255.255.0 is not configured

Nov 24 02:59:37 rgmanager Service service:vip is recovering

Nov 24 02:59:38 rgmanager #71: Relocating failed service service:vip

==========

From luci I get this error:

Starting cluster "cluname" service "vip" from node "node1-pvt" failed: vip is in unknown state 118
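
For anyone hitting the same error: the resource can also be started outside rgmanager with rg_test (shipped in the rgmanager package), which usually prints the real reason the start fails:

rg_test test /etc/cluster/cluster.conf start service vip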

What did I miss?

Please help.

Thanks

Comments

  • Any clue, gents?
  • mfillpot
    Honestly, I have not configured a cluster in nearly a decade, so the current steps are lost on me, but my network-engineering background tells me to check communication whenever services claim to be running correctly. Verify that neither the firewalls nor the hosts files are blocking traffic, then watch the packets on both hosts with tcpdump to confirm everything is actually being received.
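
    A minimal sketch of that kind of check on RHEL 6 (the interface name eth1 is an assumption; corosync heartbeats normally use UDP ports 5404-5405, and ricci listens on TCP 11111):

    iptables -L -n                                      # look for rules dropping cluster traffic
    tcpdump -i eth1 -n udp port 5404 or udp port 5405   # watch heartbeats on the private NIC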

  • I'm having the same problem -- the cluster is configured exactly as you describe, but I can't get the shared IP resource to start: Unknown State 118.

    Did you ever figure this one out?


  • Figured this one out... clustat showed that one of my cluster nodes was offline; the second node was attempting to recover the shared IP service, which returned the Status Unknown 118 code because I had never gotten the service working up to that point.

    The reason my shared IP service wasn't starting / recovering was that I had set the "Netmask Bits" to the full netmask: 255.255.255.0

    Really, it was looking for the bits of the netmask -- in this case, 24.

    After getting my first node back online and making this change to the netmask, my shared IP is working nicely.
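
    For reference, the relevant part of /etc/cluster/cluster.conf ends up looking something like this once the address carries the CIDR bits (the address and service name are this thread's example values; the failover-domain name "prefer_both" is made up):

    <rm>
        <failoverdomains>
            <failoverdomain name="prefer_both" ordered="0" restricted="0">
                <failoverdomainnode name="node1-pvt"/>
                <failoverdomainnode name="node2-pvt"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="10.10.4.223/24" monitor_link="on"/>
        </resources>
        <service autostart="1" domain="prefer_both" name="vip" recovery="relocate">
            <ip ref="10.10.4.223/24"/>
        </service>
    </rm>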
  • nitiratna
    Make sure these packages are installed on all the nodes:
    openssh-server.x86_64
    openssh-clients.x86_64

    NOTE: Do the following steps before adding nodes to the cluster.

    1) Use a single naming convention for passwordless login and throughout the cluster configuration,
    e.g. address the nodes as node1.com and node2.com (FQDNs).

    2) Set up bidirectional passwordless ssh login between all the nodes (using the FQDN of the opposite node), as follows:

    [root@node1 ~]# ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    7c:72:ec:36:0f:15:46:b6:63:fd:78:97:fc:18:ee:05 root@node1.com
    The key's randomart image is:
    (randomart image omitted)
    [root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub node2.com

    The authenticity of host 'node2.com (node2.com)' can't be established.
    RSA key fingerprint is 88:ec:50:1a:a5:a1:63:08:be:7f:c8:73:cc:41:f5:2b.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node2.com' (RSA) to the list of known hosts.

    root@node2.com's password:
    Now try logging into the machine, with "ssh 'node2.com'", and check in:

    .ssh/authorized_keys

    to make sure we haven't added extra keys that you weren't expecting.

    [root@node1 ~]#

    3) Perform the same steps from node2.com for passwordless login on node1.com (a quick check is sketched after step 5).

    4) Make local /etc/hosts entries on both nodes:

    X.X.X.X node1.com node1
    Y.Y.Y.Y node2.com node2


    5) Add the nodes to the cluster as node1.com, not as node1; likewise for node2.com.
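
    A quick sanity check of steps 2-3 before touching luci (hostnames as above); each command must return without a password prompt:

    [root@node1 ~]# ssh node2.com hostname   # should print node2.com
    [root@node2 ~]# ssh node1.com hostname   # should print node1.com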



    regards,
    nitiratna nikalje
    MUMBAI, INDIA
