Lab11.1 - Deploy a two OSD nodes for the cluster
First of all, the title of this section isn't quite right. That aside, step 6 of the lab returns:
[[email protected] ceph-cluster]$ ceph-deploy osd prepare \
> storage1:/var/local/osd0
usage: ceph-deploy osd [-h] {list,create} ...
ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare' (choose from 'list', 'create')
[[email protected] ceph-cluster]$
Which is strange, because it's clearly documented (the docs below are for Jewel, but that's what I'm running):
http://docs.ceph.com/docs/jewel/man/8/ceph-deploy/
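The Jewel man page describes the two-step prepare/activate flow I'm using, while the 2.0.0 help above only offers create against a whole device, something like this (the /dev/vdb device name is just my guess, not something from the lab):

# Jewel-era ceph-deploy (1.5.x), as documented and as the lab expects:
ceph-deploy osd prepare storage1:/var/local/osd0
ceph-deploy osd activate storage1:/var/local/osd0

# ceph-deploy 2.0.0: prepare/activate no longer exist, only a single create:
ceph-deploy osd create --data /dev/vdb storage1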
[c[email protected] yum.repos.d]$ rpm -qa | grep ceph
python-cephfs-10.2.10-0.el7.x86_64
centos-release-ceph-jewel-1.0-1.el7.centos.noarch
ceph-osd-10.2.10-0.el7.x86_64
libcephfs1-10.2.10-0.el7.x86_64
ceph-mds-10.2.10-0.el7.x86_64
ceph-mon-10.2.10-0.el7.x86_64
ceph-base-10.2.10-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-selinux-10.2.10-0.el7.x86_64
ceph-radosgw-10.2.10-0.el7.x86_64
ceph-common-10.2.10-0.el7.x86_64
ceph-10.2.10-0.el7.x86_64
[[email protected] yum.repos.d]$ ceph-deploy --version
2.0.0
[[email protected] yum.repos.d]$
Has anybody found a workaround? Potentially upgrading to Luminous?
Comments
Hello,
In case you get two similar messages: I responded once but haven't seen it come through, so I'm answering it again.
The error you're seeing is due to using a newer version of Ceph than Kraken, which is the version the book specifies. The commands to deploy OSDs have changed in the newer versions. The steps to use in that case can be found here: http://docs.ceph.com/docs/master/install/manual-deployment/
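As a quick sanity check (not part of the lab steps), you can confirm which combination you actually have before deploying OSDs:

ceph --version          # 10.2.x = Jewel, 11.2.x = Kraken, 12.2.x = Luminous
ceph-deploy --version   # 1.5.x still has osd prepare/activate; 2.0.0 only has list and create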
Using Kraken, the version the lab covers, continues to work. I was able to get a single OSD up and working with the following steps on the rdo-cc node. First, the output of ceph -s, to show that the MON is working and one OSD is all the way into the cluster; I just wanted to verify the overall steps.
[[email protected] ceph-cluster]$ ceph -s
    cluster 65d3daef-3662-42fb-b946-c15dff99a12d
     health HEALTH_WARN
            64 pgs degraded
            64 pgs undersized
     monmap e1: 1 mons at {ip-172-31-16-231=172.31.16.231:6789/0}
            election epoch 3, quorum 0 ip-172-31-16-231
     osdmap e6: 2 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
            5152 MB used, 5076 MB / 10229 MB avail
                  64 active+undersized+degraded
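As an aside, the HEALTH_WARN with 64 undersized/degraded PGs is expected at this stage: only one OSD is up and in, while the default pool replica count is 3. If you only plan on the two lab OSDs, one common tweak (my own habit, not a lab requirement) is to add something like this under [global] in ceph.conf before mon create-initial:

osd pool default size = 2       # two replicas instead of the default three
osd pool default min size = 1   # allow I/O while only one replica is available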
Commands I did on RDO-CC
As root:
1 yum update -y #This failed due to a python issue caused by OpenStack. I moved on.
2 yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
3 vim /etc/yum.repos.d/start-ceph.repo #Vim was not installed
4 yum install vim
5 vim /etc/yum.repos.d/start-ceph.repo
[[email protected] ~]# cat /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
6 yum update -y
7 yum install -y ceph-deploy ### This worked when I did it on a non-openstack system, but failed due to python with openstack.
7a pip install ceph-deploy #### Only if python is an issue
8 useradd -d /home/ceph -m ceph
9 id ceph
10 echo ceph | passwd --stdin ceph
11 echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
12 chmod 0400 /etc/sudoers.d/ceph
13 su - ceph
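Before switching to the ceph user it's worth a quick check that the sudoers drop-in took effect (just a sanity check, not a lab step):

sudo -l -U ceph            # should report: (root) NOPASSWD: ALL
ls -l /etc/sudoers.d/ceph  # should be mode 0400, matching the chmod above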
On rdo-cc, as the ceph user:
9 ssh-copy-id [email protected]
10 for ip in ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244 ip-172-31-16-231 ; do ssh [email protected]$ip 'uname -a' ; done
11 sudo setenforce 0; sudo yum -y install yum-plugin-priorities
12 mkdir ceph-cluster
13 cd ceph-cluster/
14 ceph-deploy new ip-172-31-16-231
15 vim ceph.conf
16 ceph-deploy install ip-172-31-16-231 ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244
17 ceph-deploy mon create-initial
18 ceph-deploy osd prepare ip-172-31-26-167:/var/local/osd0
19 ceph -s
20 ceph-deploy osd activate ip-172-31-26-167:/var/local/osd0
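One thing to watch with directory-backed OSDs like /var/local/osd0 (my assumption from similar setups, not something called out above): the directory has to exist on the storage node, and on Jewel/Kraken it should be owned by the ceph user, or osd prepare/activate will fail with permission errors. Roughly:

# on the storage node (ip-172-31-26-167 in this run), before step 18:
sudo mkdir -p /var/local/osd0
sudo chown ceph:ceph /var/local/osd0   # OSD daemons run as the ceph user on Jewel and later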
Hopefully this helps. Please let the forum know if Kraken continues to have this issue. We will be updating to a new version of OpenStack and of Ceph in the near future.
Regards,
Tried the above steps, and it still upgrades to the latest ceph-deploy, version 2.0. This version only allows list and create as subcommands, so running ceph-deploy osd prepare ... still gives the error.
"Downloading packages:
python-pip-8.1.2-1.el7.noarch.rpm | 1.7 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-pip-8.1.2-1.el7.noarch 1/1
Verifying : python-pip-8.1.2-1.el7.noarch 1/1
Installed:
python-pip.noarch 0:8.1.2-1.el7
Complete!
Collecting ceph-deploy
Downloading ceph-deploy-2.0.0.tar.gz (113kB)
100% |████████████████████████████████| 122kB 4.9MB/s
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/site-packages (from ceph-deploy)
Installing collected packages: ceph-deploy
Running setup.py install for ceph-deploy ... done
Successfully installed ceph-deploy-2.0.0
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[[email protected] ~]# cat /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[[email protected] ~]#"
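A quick way to see what the installed ceph-deploy actually supports before running the lab commands (just a check, not a fix):

ceph-deploy --version    # reports 2.0.0 here
ceph-deploy osd --help   # on 2.0.0 only the list and create subcommands are offered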
Hello,
I will try the steps I mentioned again, and get back to you. I think we may be doing different steps, which is leading to the difference in outcome.
I'm trying it again now.
Regards,
It would seem that, to avoid the ongoing Python issues, the ceph-deploy binary was updated. Since OpenStack requires an older Python version due to a different bug, the two were unable to work together. After attempting almost every permutation, the fix is to use the Luminous version of Ceph and force an older version of ceph-deploy using pip install ceph-deploy==1.5.39.
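In practice that amounts to something like the following on rdo-cc (the repo file name matches the one used earlier in this thread; the rest is the fix described above):

# /etc/yum.repos.d/start-ceph.repo, now pointing at Luminous instead of Kraken
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

# then pin the older deploy tool so osd prepare/activate still exist
sudo pip install ceph-deploy==1.5.39
ceph-deploy --version   # should now report 1.5.39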
Following is the full history output on each node, as well as some of the config files, and then the HEALTH_OK status of the Ceph cluster with the starting two-node setup.
The rdo-cc root user's history already contains many commands from before the node was made available. These are all the commands run, plus the repo file:
Here is the history of ceph user on rdo-cc:
Here is storage1:
And on storage2:
Finally the end result:
Please let us know if you continue to have issues with the lab.
Regards,
Thank you very much. It works!
Cheers...