
Lab11.1 - Deploy a two OSD nodes for the cluster

arbcat
arbcat Posts: 1

First of all, the title of this section isn't quite right ;)

Step 6 returns:


[ceph@rdo-cc ceph-cluster]$ ceph-deploy osd prepare \
> storage1:/var/local/osd0
usage: ceph-deploy osd [-h] {list,create} ...
ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare' (choose from 'list', 'create')
[ceph@rdo-cc ceph-cluster]$

Which is strange, because it's clearly documented (the docs below are for the Jewel version, but that's what I'm running):

http://docs.ceph.com/docs/jewel/man/8/ceph-deploy/


[ceph@rdo-cc yum.repos.d]$ rpm -qa | grep ceph
python-cephfs-10.2.10-0.el7.x86_64
centos-release-ceph-jewel-1.0-1.el7.centos.noarch
ceph-osd-10.2.10-0.el7.x86_64
libcephfs1-10.2.10-0.el7.x86_64
ceph-mds-10.2.10-0.el7.x86_64
ceph-mon-10.2.10-0.el7.x86_64
ceph-base-10.2.10-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-selinux-10.2.10-0.el7.x86_64
ceph-radosgw-10.2.10-0.el7.x86_64
ceph-common-10.2.10-0.el7.x86_64
ceph-10.2.10-0.el7.x86_64

[ceph@rdo-cc yum.repos.d]$ ceph-deploy --version
2.0.0
[ceph@rdo-cc yum.repos.d]$

 

Has anybody found a workaround? Possibly upgrading to Luminous?

Comments

  • serewicz
    serewicz Posts: 1,000

    Hello,

    In case you get two similar messages: I responded once but haven't seen it come through, so I'm answering again.

    The error you're seeing is due to using a newer version of Ceph than Kraken, which is what the book covers. The commands to deploy OSDs have changed in the newer versions. The steps for those versions can be found here: http://docs.ceph.com/docs/master/install/manual-deployment/
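
    If you do end up on the newer ceph-deploy (2.0.x), the prepare/activate pair is gone and an OSD is created in a single step from a raw block device via ceph-volume, which as far as I can tell also expects a Luminous-or-newer target. A minimal sketch, where the device name /dev/xvdb and the hostname storage1 are assumptions to adapt to your own nodes:

    ceph-deploy osd create --data /dev/xvdb storage1   # one-step OSD creation in ceph-deploy 2.0.x
    ceph osd tree                                       # the new OSD should show up and report "up"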

    Using Kraken, the version the lab covers, continues to work. I was able to get a single OSD up and working with the following steps on the rdo-cc node. First, here is the output of ceph -s, showing that the MON is working and one OSD is all the way into the cluster. I just wanted to check the overall steps.

    [ceph@ip-172-31-16-231 ceph-cluster]$ ceph -s
        cluster 65d3daef-3662-42fb-b946-c15dff99a12d
         health HEALTH_WARN
                64 pgs degraded
                64 pgs undersized
         monmap e1: 1 mons at {ip-172-31-16-231=172.31.16.231:6789/0}
                election epoch 3, quorum 0 ip-172-31-16-231
         osdmap e6: 2 osds: 1 up, 1 in
                flags sortbitwise,require_jewel_osds
          pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
                5152 MB used, 5076 MB / 10229 MB avail
                      64 active+undersized+degraded
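
    If you want more detail on why the cluster reports HEALTH_WARN at this stage (only one of the two OSDs is up and in, so the placement groups cannot be fully replicated yet), the standard commands below will show it; output omitted here:

    ceph health detail   # lists the degraded/undersized placement groups
    ceph osd tree        # shows which OSDs are up/in and which are down/out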

    Commands I ran on rdo-cc

    As root:

       1  yum update -y   #This failed due to a python issue caused by OpenStack. I moved on.
       2  yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
       3  vim /etc/yum.repos.d/start-ceph.repo   #Vim was not installed
       4  yum install vim
       5  vim /etc/yum.repos.d/start-ceph.repo

    [root@ip-172-31-16-231 ~]# cat /etc/yum.repos.d/start-ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
    enabled=1
    gpgcheck=0
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

       6  yum update -y
       7  yum install -y ceph-deploy   ### This worked when I did it on a non-openstack system, but failed due to python with openstack.
       7a pip install ceph-deploy      #### Only if python is an issue
       8  useradd -d /home/ceph -m ceph
       9  id ceph
      10  echo ceph | passwd --stdin ceph
      11  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
      12  chmod 0400 /etc/sudoers.d/ceph
      13  su - ceph
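
    As an optional sanity check, not part of the lab steps, you can confirm that the ceph user created above really has passwordless sudo before continuing:

    su - ceph
    sudo -l        # should list "(root) NOPASSWD: ALL"
    sudo whoami    # should print "root" without prompting for a password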

    On rdo-cc, as the ceph user:

        9  ssh-copy-id ceph@ip-172-31-16-231
       10  for ip in ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244 ip-172-31-16-231 ; do ssh ceph@$ip 'uname -a' ; done
       11  sudo setenforce 0; sudo yum -y install yum-plugin-priorities
       12  mkdir ceph-cluster
       13  cd ceph-cluster/
       14  ceph-deploy new ip-172-31-16-231
       15  vim ceph.conf
       16  ceph-deploy install ip-172-31-16-231 ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244
       17  ceph-deploy mon create-initial
       18  ceph-deploy osd prepare ip-172-31-26-167:/var/local/osd0
       19  ceph -s
       20  ceph-deploy osd activate ip-172-31-26-167:/var/local/osd0
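
    One note on step 18: the :/var/local/osd0 path has to exist on the OSD node before osd prepare will succeed, with a filesystem mounted there and owned by the ceph user. The storage1 history later in this thread shows the full sequence; a condensed sketch, run as root on the OSD node, where the device name /dev/xvdb1 is taken from that later history and may differ on your node:

    mkfs.xfs /dev/xvdb1                                                        # after partitioning the device
    mkdir -p /var/local/osd0
    echo "/dev/xvdb1 /var/local/osd0 xfs noatime,nobarrier 0 0" >> /etc/fstab
    mount /var/local/osd0
    chown ceph:ceph /var/local/osd0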

     

    Hopefully this helps. Please let the forum know if Kraken continues to have this issue.  We will be updating to a new version of OpenStack and of Ceph in the near future. 

    Regards,

  • jcabrera
    jcabrera Posts: 3
    edited April 2018

    I tried the above steps and still ended up with the latest ceph-deploy, version 2.0. This version only allows list and create as subcommands, so running ceph-deploy osd prepare ... gives the error.

    "Downloading packages:

    python-pip-8.1.2-1.el7.noarch.rpm                                   | 1.7 MB  00:00:00

    Running transaction check

    Running transaction test

    Transaction test succeeded

    Running transaction

      Installing : python-pip-8.1.2-1.el7.noarch                                           1/1

      Verifying  : python-pip-8.1.2-1.el7.noarch                                           1/1

    Installed:

      python-pip.noarch 0:8.1.2-1.el7

    Complete!

    Collecting ceph-deploy

      Downloading ceph-deploy-2.0.0.tar.gz (113kB)

        100% |████████████████████████████████| 122kB 4.9MB/s

    Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/

    site-packages (from ceph-deploy)

    Installing collected packages: ceph-deploy

      Running setup.py install for ceph-deploy ... done

    Successfully installed ceph-deploy-2.0.0

    You are using pip version 8.1.2, however version 9.0.3 is available.

    You should consider upgrading via the 'pip install --upgrade pip' command.

    [root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo

    [ceph-noarch]

    name=Ceph noarch packages

    baseurl=https://download.ceph.com/rpm-kraken/el7/noarch

    enabled=1

    gpgcheck=0

    type=rpm-md

    gpgkey=https://download.ceph.com/keys/release.asc

    [root@rdo-cc ~]#"
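
    If you are unsure which ceph-deploy you ended up with, these two commands show the installed version and the osd subcommands it accepts:

    ceph-deploy --version    # reports 2.0.0 here, which only offers "list" and "create" for osd
    ceph-deploy osd --help   # prints the available osd subcommands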

  • serewicz
    serewicz Posts: 1,000
    edited April 2018

    Hello,

    I will try the steps I mentioned again, and get back to you. I think we may be doing different steps, which is leading to the difference in outcome. 

    I'm trying it again now.

    Regards,


  • serewicz
    serewicz Posts: 1,000

    It would seem that to avoid the ongoing python issues the ceph-deploy binary was updated. As OpenStack requires an older python version due to a different bug, the two were unable to work together. After attempting almost every permutation, the fix is to use the Luminous version of Ceph and force an older version of ceph-deploy using pip install ceph-deploy==1.5.39.
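
    In other words, if pip already pulled in 2.0.0, pin it back to the 1.5 series. A minimal sketch, assuming the 2.0.0 copy was installed with pip as in the earlier posts:

    sudo pip uninstall -y ceph-deploy      # remove the 2.0.0 copy (assumes it was installed via pip)
    sudo pip install ceph-deploy==1.5.39   # the version used in the steps below
    ceph-deploy --version                  # should now report 1.5.39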

    Following is the full history output from each node, as well as some of the config files, and then the HEALTH_OK status of the Ceph cluster with the starting two-node setup.

    The rdo-cc root user already has many commands in place before the node becomes available. These are all the commands run, plus the repo file:


         yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    171  vim /etc/yum.repos.d/start-ceph.repo
    172  cat /etc/yum.repos.d/start-ceph.repo
    173  yum install python-pip
    174  pip install ceph-deploy==1.5.39
    175  useradd -d /home/ceph -m ceph
    176  echo ceph | passwd --stdin ceph
    177  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
    178  chmod 0400 /etc/sudoers.d/ceph
    179  su - ceph
    180  ssh storage1
    181  ssh storage2
    182  su - ceph
    183  history

    [root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    [root@rdo-cc ~]#
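
    To confirm that the Luminous repo defined above is the one actually in use, a quick check (output omitted):

    yum repolist enabled | grep -i ceph    # should show the ceph-noarch repo pointing at rpm-luminous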

    Here is the history of the ceph user on rdo-cc:


    [ceph@rdo-cc ceph-cluster]$ history
     1  ls /root
     2  sudo ls /root
     3  exit
     4  ssh-keygen
     5  ssh-copy-id storage1
     6  ssh-copy-id storage2
     7  ssh-copy-id rdo-cc
     8  getenforce
     9  for node in storage1 storage2 rdo-cc; do ssh $node 'yum -y install yum-plugin-priorities'; done
    10  for node in storage1 storage2 rdo-cc; do ssh $node 'sudo yum -y install yum-plugin-priorities'; done
    11  mkdir ceph-cluster
    12  cd ceph-cluster
    13  ceph-deploy new rdo-cc
    14  vim ceph.conf
    15  ceph-deploy install storage1 storage2 rdo-cc
    16  ceph-deploy mon create-initial
    17  ssh storage1
    18  ssh storage2
    19  ceph-deploy osd prepare storage1:/var/local/osd0
    20  ceph-deploy osd prepare storage2:/var/local/osd1
    21  ceph-deploy osd activate storage1:/var/local/osd0 storage2:/var/local/osd1
    22  ceph -s
    23  history

     

    Here is storage1:


    [root@storage1 ~]# history
     1  useradd -d /home/ceph -m ceph
     2  echo ceph | passwd --stdin ceph
     3  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
     4  chmod 0400 /etc/sudoers.d/ceph
     5  exit
     6  parted /dev/xvdb -- mklabel gpt
     7  parted /dev/xvdb -- mkpart part1 2048s 50%
     8  parted /dev/xvdb -- mkpart part2 51% 100%
     9  mkfs.xfs /dev/xvdb1
    10  mkdir -p /var/local/osd0
    11  echo "/dev/xvdb1 /var/local/osd0 xfs noatime,nobarrier 0 0" >> /etc/fstab
    12  mount /var/local/osd0/
    13  chown ceph.ceph /var/local/osd0
    14  df -h /var/local/osd0
    15  exit
    16  history

    And on storage2:


    [root@storage2 ~]# history
     1  useradd -d /home/ceph -m ceph
     2  echo ceph | passwd --stdin ceph
     3  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
     4  chmod 0400 /etc/sudoers.d/ceph
     5  exit
     6  parted /dev/xvdb -- mklabel gpt
     7  parted /dev/xvdb -- mkpart part1 2048s 50%
     8  parted /dev/xvdb -- mkpart part2 51% 100%
     9  mkfs.xfs /dev/xvdb1
    10  mkdir -p /var/local/osd1
    11  echo "/dev/xvdb1 /var/local/osd1 xfs noatime,nobarrier 0 0" >> /etc/fstab
    12  mount /var/local/osd1
    13  chown ceph.ceph /var/local/osd1
    14  df -h /var/local/osd1
    15  exit
    16  history

    Finally the end result:


    [ceph@rdo-cc ceph-cluster]$ ceph -s
        cluster 8d2db565-40db-4f65-b223-8754294ea0a1
         health HEALTH_OK
         monmap e1: 1 mons at {rdo-cc=192.168.98.1:6789/0}
                election epoch 3, quorum 0 rdo-cc
         osdmap e10: 2 osds: 2 up, 2 in
                flags sortbitwise,require_jewel_osds
          pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
                10305 MB used, 66454 MB / 76760 MB avail
                      64 active+clean
    [ceph@rdo-cc ceph-cluster]$
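
    As an optional smoke test of the now-healthy cluster, you can write and read back a single object. The pool name rbd is an assumption based on the single default pool shown in the pgmap line:

    echo "hello ceph" > /tmp/test.txt
    rados -p rbd put test-object /tmp/test.txt   # store one object in the pool
    rados -p rbd ls                              # list objects; test-object should appear
    rados -p rbd rm test-object                  # clean up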

    Please let us know if you continue to have issues with the lab.

    Regards,

  • jcabrera
    jcabrera Posts: 3
    edited April 2018

    Thank you very much. It works!

     

    Cheers...
