
Lab11.1 - Deploy a two OSD nodes for the cluster

Posts: 1

First of all, the title of this section isn't quite right ;)

Step 6 returns:


    [ceph@rdo-cc ceph-cluster]$ ceph-deploy osd prepare \
    > storage1:/var/local/osd0
    usage: ceph-deploy osd [-h] {list,create} ...
    ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare' (choose from 'list', 'create')
    [ceph@rdo-cc ceph-cluster]$

Which is strange, because the prepare subcommand is clearly documented (the docs below are for the Jewel version, but that is what I'm running):

http://docs.ceph.com/docs/jewel/man/8/ceph-deploy/


    [ceph@rdo-cc yum.repos.d]$ rpm -qa | grep ceph
    python-cephfs-10.2.10-0.el7.x86_64
    centos-release-ceph-jewel-1.0-1.el7.centos.noarch
    ceph-osd-10.2.10-0.el7.x86_64
    libcephfs1-10.2.10-0.el7.x86_64
    ceph-mds-10.2.10-0.el7.x86_64
    ceph-mon-10.2.10-0.el7.x86_64
    ceph-base-10.2.10-0.el7.x86_64
    ceph-release-1-1.el7.noarch
    ceph-selinux-10.2.10-0.el7.x86_64
    ceph-radosgw-10.2.10-0.el7.x86_64
    ceph-common-10.2.10-0.el7.x86_64
    ceph-10.2.10-0.el7.x86_64

    [ceph@rdo-cc yum.repos.d]$ ceph-deploy --version
    2.0.0
    [ceph@rdo-cc yum.repos.d]$

 

Has anybody found a workaround? Perhaps upgrading to Luminous?


Comments

  • Posts: 1,000

    Hello,

    In case you get two similar messages: I responded once but have not seen it come through, so I'm answering again.

    The error you're seeing is due to using a newer version of the Ceph deployment tooling than the Kraken release the book is written for. The commands to deploy OSDs have changed in the newer versions of ceph-deploy. The steps to use in that case can be found here: http://docs.ceph.com/docs/master/install/manual-deployment/
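
    For reference, if you do stay on ceph-deploy 2.0.0, the separate prepare and activate steps were folded into a single create subcommand that works on a whole block device (through ceph-volume) rather than on a directory. A minimal sketch of the newer invocation, where the device name /dev/xvdb is an assumption and not part of the lab text:

        ceph-deploy osd create --data /dev/xvdb storage1

    The directory form storage1:/var/local/osd0 used in the lab is simply not accepted by the 2.0.0 tool, which is why it reports the invalid choice error.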

    Using Kraken, the version the lab covers, continues to work. I was able to get a single OSD up and working with the following steps on the rdo-cc node. First, the output of ceph -s, showing that the MON is working and one OSD is all the way into the cluster; I just wanted to verify the overall steps.

    [ceph@ip-172-31-16-231 ceph-cluster]$ ceph -s
        cluster 65d3daef-3662-42fb-b946-c15dff99a12d
         health HEALTH_WARN
                64 pgs degraded
                64 pgs undersized
         monmap e1: 1 mons at {ip-172-31-16-231=172.31.16.231:6789/0}
                election epoch 3, quorum 0 ip-172-31-16-231
         osdmap e6: 2 osds: 1 up, 1 in
                flags sortbitwise,require_jewel_osds
          pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
                5152 MB used, 5076 MB / 10229 MB avail
                      64 active+undersized+degraded

    Commands I ran on rdo-cc

    As root:

       1  yum update -y                            # This failed due to a python issue caused by OpenStack. I moved on.
       2  yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
       3  vim /etc/yum.repos.d/start-ceph.repo     # vim was not installed
       4  yum install vim
       5  vim /etc/yum.repos.d/start-ceph.repo

    [root@ip-172-31-16-231 ~]# cat /etc/yum.repos.d/start-ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
    enabled=1
    gpgcheck=0
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

       6  yum update -y
       7  yum install -y ceph-deploy               ### This worked when I did it on a non-OpenStack system, but failed due to python with OpenStack.
       7a pip install ceph-deploy                  #### Only if python is an issue
       8  useradd -d /home/ceph -m ceph
       9  id ceph
      10  echo ceph | passwd --stdin ceph
      11  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
      12  chmod 0400 /etc/sudoers.d/ceph
      13  su - ceph
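
    A quick sanity check at this point (my own addition, not a lab step): confirm the new ceph user really has passwordless sudo before handing it to ceph-deploy.

        # run as root on rdo-cc; should print "root" without prompting for a password
        su - ceph -c 'sudo -n whoami'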

    On rdo-cc, as the ceph user:

       9  ssh-copy-id ceph@ip-172-31-16-231
      10  for ip in ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244 ip-172-31-16-231 ; do ssh ceph@$ip 'uname -a' ; done
      11  sudo setenforce 0; sudo yum -y install yum-plugin-priorities
      12  mkdir ceph-cluster
      13  cd ceph-cluster/
      14  ceph-deploy new ip-172-31-16-231
      15  vim ceph.conf
      16  ceph-deploy install ip-172-31-16-231 ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244
      17  ceph-deploy mon create-initial
      18  ceph-deploy osd prepare ip-172-31-26-167:/var/local/osd0
      19  ceph -s
      20  ceph-deploy osd activate ip-172-31-26-167:/var/local/osd0
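
    Step 15 above edits ceph.conf, but its contents are not shown in the lab output. The following is a minimal sketch of what I would expect in it: the first few lines are what ceph-deploy new generates, and the settings under the comment are my assumptions for a small lab cluster (a replica count of 2 is consistent with the cluster reaching HEALTH_OK later in this thread with only two OSDs).

        [global]
        fsid = <left as generated by ceph-deploy new>
        mon_initial_members = ip-172-31-16-231
        mon_host = 172.31.16.231
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx
        # assumed additions for a two-OSD lab cluster:
        osd pool default size = 2
        public network = 172.31.16.0/20

    With only one OSD in, the 64 PGs stay active+undersized+degraded as shown above; they should go active+clean once a second OSD joins.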

     

    Hopefully this helps. Please let the forum know if Kraken continues to have this issue. We will be updating to newer versions of OpenStack and Ceph in the near future.

    Regards,

  • Posts: 3
    edited April 2018

    I tried the above steps and still ended up with the latest ceph-deploy, version 2.0.0. This version only allows list and create as subcommands, so running ceph-deploy osd prepare ... still gives the error.

    "Downloading packages:

    python-pip-8.1.2-1.el7.noarch.rpm                                   | 1.7 MB  00:00:00

    Running transaction check

    Running transaction test

    Transaction test succeeded

    Running transaction

      Installing : python-pip-8.1.2-1.el7.noarch                                           1/1

      Verifying  : python-pip-8.1.2-1.el7.noarch                                           1/1

    Installed:

      python-pip.noarch 0:8.1.2-1.el7

    Complete!

    Collecting ceph-deploy

      Downloading ceph-deploy-2.0.0.tar.gz (113kB)

        100% |████████████████████████████████| 122kB 4.9MB/s

    Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/

    site-packages (from ceph-deploy)

    Installing collected packages: ceph-deploy

      Running setup.py install for ceph-deploy ... done

    Successfully installed ceph-deploy-2.0.0

    You are using pip version 8.1.2, however version 9.0.3 is available.

    You should consider upgrading via the 'pip install --upgrade pip' command.

    [root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo

    [ceph-noarch]

    name=Ceph noarch packages

    baseurl=https://download.ceph.com/rpm-kraken/el7/noarch

    enabled=1

    gpgcheck=0

    type=rpm-md

    gpgkey=https://download.ceph.com/keys/release.asc

    [root@rdo-cc ~]#"

  • Posts: 1,000
    edited April 2018

    Hello,

    I will try the steps I mentioned again, and get back to you. I think we may be doing different steps, which is leading to the difference in outcome. 

    I'm trying it again now.

    Regards,

  • Posts: 1,000

    It would seem that the ceph-deploy binary was updated to avoid the ongoing python issues. Since OpenStack requires an older python version due to a different bug, the two were unable to work together. After attempting almost every permutation, the fix is to use the Luminous version of Ceph and force an older version of ceph-deploy using pip install ceph-deploy==1.5.39.
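
    If it is not obvious which ceph-deploy ends up being used after mixing the yum and pip installs, a quick check (standard commands, nothing lab-specific) is:

        which ceph-deploy
        ceph-deploy --version    # should report 1.5.39 after the pip pin above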

    Following is the full history output from each node, as well as some of the config files, and then the HEALTH_OK status of the Ceph cluster with the starting two-node setup.

    The rdo-cc root user already has many commands in its history from before the node was made available. These are all the commands I ran, plus the repo file:


         yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    171  vim /etc/yum.repos.d/start-ceph.repo
    172  cat /etc/yum.repos.d/start-ceph.repo
    173  yum install python-pip
    174  pip install ceph-deploy==1.5.39
    175  useradd -d /home/ceph -m ceph
    176  echo ceph | passwd --stdin ceph
    177  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
    178  chmod 0400 /etc/sudoers.d/ceph
    179  su - ceph
    180  ssh storage1
    181  ssh storage2
    182  su - ceph
    183  history

    [root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    [root@rdo-cc ~]#

    Here is the history of the ceph user on rdo-cc:


    [ceph@rdo-cc ceph-cluster]$ history
        1  ls /root
        2  sudo ls /root
        3  exit
        4  ssh-keygen
        5  ssh-copy-id storage1
        6  ssh-copy-id storage2
        7  ssh-copy-id rdo-cc
        8  getenforce
        9  for node in storage1 storage2 rdo-cc; do ssh $node 'yum -y install yum-plugin-priorities'; done
       10  for node in storage1 storage2 rdo-cc; do ssh $node 'sudo yum -y install yum-plugin-priorities'; done
       11  mkdir ceph-cluster
       12  cd ceph-cluster
       13  ceph-deploy new rdo-cc
       14  vim ceph.conf
       15  ceph-deploy install storage1 storage2 rdo-cc
       16  ceph-deploy mon create-initial
       17  ssh storage1
       18  ssh storage2
       19  ceph-deploy osd prepare storage1:/var/local/osd0
       20  ceph-deploy osd prepare storage2:/var/local/osd1
       21  ceph-deploy osd activate storage1:/var/local/osd0 storage2:/var/local/osd1
       22  ceph -s
       23  history
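
    Once step 21 has activated both OSDs, an optional extra check beyond ceph -s (my own addition, not in the lab) is to ask the cluster for the OSD layout directly:

        ceph osd tree    # both OSDs should show as "up" under their storage hosts
        ceph osd stat    # one-line summary, e.g. "2 osds: 2 up, 2 in"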

     

    Here is storage1:


    [root@storage1 ~]# history
        1  useradd -d /home/ceph -m ceph
        2  echo ceph | passwd --stdin ceph
        3  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
        4  chmod 0400 /etc/sudoers.d/ceph
        5  exit
        6  parted /dev/xvdb -- mklabel gpt
        7  parted /dev/xvdb -- mkpart part1 2048s 50%
        8  parted /dev/xvdb -- mkpart part2 51% 100%
        9  mkfs.xfs /dev/xvdb1
       10  mkdir -p /var/local/osd0
       11  echo "/dev/xvdb1 /var/local/osd0 xfs noatime,nobarrier 0 0" >> /etc/fstab
       12  mount /var/local/osd0/
       13  chown ceph.ceph /var/local/osd0
       14  df -h /var/local/osd0
       15  exit
       16  history

    And on storage2:


    [root@storage2 ~]# history
        1  useradd -d /home/ceph -m ceph
        2  echo ceph | passwd --stdin ceph
        3  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
        4  chmod 0400 /etc/sudoers.d/ceph
        5  exit
        6  parted /dev/xvdb -- mklabel gpt
        7  parted /dev/xvdb -- mkpart part1 2048s 50%
        8  parted /dev/xvdb -- mkpart part2 51% 100%
        9  mkfs.xfs /dev/xvdb1
       10  mkdir -p /var/local/osd1
       11  echo "/dev/xvdb1 /var/local/osd1 xfs noatime,nobarrier 0 0" >> /etc/fstab
       12  mount /var/local/osd1
       13  chown ceph.ceph /var/local/osd1
       14  df -h /var/local/osd1
       15  exit
       16  history

    Finally, the end result:


    [ceph@rdo-cc ceph-cluster]$ ceph -s
        cluster 8d2db565-40db-4f65-b223-8754294ea0a1
         health HEALTH_OK
         monmap e1: 1 mons at {rdo-cc=192.168.98.1:6789/0}
                election epoch 3, quorum 0 rdo-cc
         osdmap e10: 2 osds: 2 up, 2 in
                flags sortbitwise,require_jewel_osds
          pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
                10305 MB used, 66454 MB / 76760 MB avail
                      64 active+clean
    [ceph@rdo-cc ceph-cluster]$

    Please let us know if you continue to have issues with the lab.

    Regards,

  • Posts: 3
    edited April 2018

    Thank you very much. It works!

     

    Cheers...
