Lab11.1 - Deploy a two OSD nodes for the cluster
First of all, the title of this section isn't quite right.
Step 6 returns:
[ceph@rdo-cc ceph-cluster]$ ceph-deploy osd prepare \
> storage1:/var/local/osd0
usage: ceph-deploy osd [-h] {list,create} ...
ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare' (choose from 'list', 'create')
[ceph@rdo-cc ceph-cluster]$
Which is strange, because the prepare subcommand is clearly documented (these are the Jewel docs, but that is the release I'm running):
http://docs.ceph.com/docs/jewel/man/8/ceph-deploy/
[ceph@rdo-cc yum.repos.d]$ rpm -qa | grep ceph
python-cephfs-10.2.10-0.el7.x86_64
centos-release-ceph-jewel-1.0-1.el7.centos.noarch
ceph-osd-10.2.10-0.el7.x86_64
libcephfs1-10.2.10-0.el7.x86_64
ceph-mds-10.2.10-0.el7.x86_64
ceph-mon-10.2.10-0.el7.x86_64
ceph-base-10.2.10-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-selinux-10.2.10-0.el7.x86_64
ceph-radosgw-10.2.10-0.el7.x86_64
ceph-common-10.2.10-0.el7.x86_64
ceph-10.2.10-0.el7.x86_64
[ceph@rdo-cc yum.repos.d]$ ceph-deploy --version
2.0.0
[ceph@rdo-cc yum.repos.d]$
Has anybody found a workaround? Perhaps upgrading to Luminous?
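For reference, ceph-deploy 2.0.0 drops the prepare and activate subcommands entirely; the remaining create subcommand takes a whole device via --data rather than a host:directory pair. A minimal sketch of the newer form, assuming a spare /dev/xvdb device on storage1 (the device name is an assumption, not from the lab):

ceph-deploy osd create --data /dev/xvdb storage1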
Comments
-
Hello,
In case you see two similar messages: I responded once but have not seen it come through, so I'm answering again.
The error you're seeing is due to using a newer version of Ceph than Kraken, which is the version the book declares. The commands to deploy OSDs have changed in the newer versions. The steps to use in that case can be found here: http://docs.ceph.com/docs/master/install/manual-deployment/
Using Kraken, the version the lab covers, continues to work. I was able to get a single OSD up and working with the following steps on the rdo-cc node; I just wanted to check the overall steps. First, the output of ceph -s, showing the MON is working and one OSD is all the way into the cluster:
[ceph@ip-172-31-16-231 ceph-cluster]$ ceph -s
cluster 65d3daef-3662-42fb-b946-c15dff99a12d
health HEALTH_WARN
64 pgs degraded
64 pgs undersized
monmap e1: 1 mons at {ip-172-31-16-231=172.31.16.231:6789/0}
election epoch 3, quorum 0 ip-172-31-16-231
osdmap e6: 2 osds: 1 up, 1 in
flags sortbitwise,require_jewel_osds
pgmap v9: 64 pgs, 1 pools, 0 bytes data, 0 objects
5152 MB used, 5076 MB / 10229 MB avail
64 active+undersized+degraded

Commands I did on RDO-CC
As root:
1 yum update -y #This failed due to a python issue caused by OpenStack. I moved on.
2 yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
3 vim /etc/yum.repos.d/start-ceph.repo #Vim was not installed
4 yum install vim
5 vim /etc/yum.repos.d/start-ceph.repo

[root@ip-172-31-16-231 ~]# cat /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

6 yum update -y
7 yum install -y ceph-deploy ### This worked when I did it on a non-openstack system, but failed due to python with openstack.
7a pip install ceph-deploy #### Only if python is an issue
8 useradd -d /home/ceph -m ceph
9 id ceph
10 echo ceph | passwd --stdin ceph
11 echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
12 chmod 0400 /etc/sudoers.d/ceph
13 su - ceph

On rdo-cc, as the ceph user:
9 ssh-copy-id ceph@ip-172-31-16-231
10 for ip in ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244 ip-172-31-16-231 ; do ssh ceph@$ip 'uname -a' ; done
11 sudo setenforce 0; sudo yum -y install yum-plugin-priorities
12 mkdir ceph-cluster
13 cd ceph-cluster/
14 ceph-deploy new ip-172-31-16-231
15 vim ceph.conf
16 ceph-deploy install ip-172-31-16-231 ip-172-31-26-167 ip-172-31-16-216 ip-172-31-20-244
17 ceph-deploy mon create-initial
18 ceph-deploy osd prepare ip-172-31-26-167:/var/local/osd0
19 ceph -s
20 ceph-deploy osd activate ip-172-31-26-167:/var/local/osd0

Hopefully this helps. Please let the forum know if Kraken continues to have this issue. We will be updating to a new version of OpenStack and of Ceph in the near future.
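One note on step 18: osd prepare expects the target directory on the OSD node to already exist, be backed by a mounted filesystem, and be owned by the ceph user, much like the storage1/storage2 preparation shown further down this thread. A minimal sketch, run as root on the OSD node, assuming a spare /dev/xvdb disk already partitioned (the device name is an assumption):

mkfs.xfs /dev/xvdb1                 # format the partition that will back the OSD
mkdir -p /var/local/osd0            # directory that ceph-deploy osd prepare points at
mount /dev/xvdb1 /var/local/osd0
chown ceph:ceph /var/local/osd0     # ceph-deploy connects as the ceph user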
Regards,
-
I tried the above steps and still ended up with the latest ceph-deploy, version 2.0. This version only allows list and create as subcommands, so running ceph-deploy osd prepare ... gives the error.
"Downloading packages:
python-pip-8.1.2-1.el7.noarch.rpm | 1.7 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-pip-8.1.2-1.el7.noarch 1/1
Verifying : python-pip-8.1.2-1.el7.noarch 1/1

Installed:
python-pip.noarch 0:8.1.2-1.el7

Complete!
Collecting ceph-deploy
Downloading ceph-deploy-2.0.0.tar.gz (113kB)
100% |████████████████████████████████| 122kB 4.9MB/s
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/
site-packages (from ceph-deploy)
Installing collected packages: ceph-deploy
Running setup.py install for ceph-deploy ... done
Successfully installed ceph-deploy-2.0.0
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

[root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[root@rdo-cc ~]#"0 -
Hello,
I will try the steps I mentioned again, and get back to you. I think we may be doing different steps, which is leading to the difference in outcome.
I'm trying it again now.
Regards,
-
It would seem that to avoid the ongoing Python issues the ceph-deploy tool was updated. As OpenStack requires an older Python version due to a different bug, the two were unable to work together. After attempting almost every permutation, the fix is to use the Luminous version of Ceph and force an older version of ceph-deploy using pip install ceph-deploy==1.5.39.
Following is the full history output from each node, as well as some of the config files, followed by the HEALTH_OK status of the Ceph cluster with the starting two-node setup.
The rdo-cc root user's history already contains many commands from before the node was made available. These are all the commands run, plus the repo file:
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
171 vim /etc/yum.repos.d/start-ceph.repo
172 cat /etc/yum.repos.d/start-ceph.repo
173 yum install python-pip
174 pip install ceph-deploy==1.5.39
175 useradd -d /home/ceph -m ceph
176 echo ceph | passwd --stdin ceph
177 echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
178 chmod 0400 /etc/sudoers.d/ceph
179 su - ceph
180 ssh storage1
181 ssh storage2
182 su - ceph
183 history

[root@rdo-cc ~]# cat /etc/yum.repos.d/start-ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[root@rdo-cc ~]#
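After step 174, it's worth confirming that pip actually installed the pinned release rather than 2.0.0; a quick check, assuming nothing else shadows the binary on the PATH:

ceph-deploy --version    # should now report 1.5.39 rather than 2.0.0
pip show ceph-deploy     # confirms the installed version and location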
Here is the history of ceph user on rdo-cc:
[ceph@rdo-cc ceph-cluster]$ history
1 ls /root
2 sudo ls /root
3 exit
4 ssh-keygen
5 ssh-copy-id storage1
6 ssh-copy-id storage2
7 ssh-copy-id rdo-cc
8 getenforce
9 for node in storage1 storage2 rdo-cc; do ssh $node 'yum -y install yum-plugin-priorities'; done
10 for node in storage1 storage2 rdo-cc; do ssh $node 'sudo yum -y install yum-plugin-priorities'; done
11 mkdir ceph-cluster
12 cd ceph-cluster
13 ceph-deploy new rdo-cc
14 vim ceph.conf
15 ceph-deploy install storage1 storage2 rdo-cc
16 ceph-deploy mon create-initial
17 ssh storage1
18 ssh storage2
19 ceph-deploy osd prepare storage1:/var/local/osd0
20 ceph-deploy osd prepare storage2:/var/local/osd1
21 ceph-deploy osd activate storage1:/var/local/osd0 storage2:/var/local/osd1
22 ceph -s
23 history
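Step 14 (vim ceph.conf) is presumably where the pool defaults get adjusted for a two-OSD cluster; without it, the default replica count of 3 leaves placement groups undersized, as in the earlier HEALTH_WARN output. A minimal sketch of the additions under the existing [global] section (the network value is an assumption based on the rdo-cc address shown below; adjust to your own setup):

osd pool default size = 2
public network = 192.168.98.0/24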
Here is storage1:
[root@storage1 ~]# history
1 useradd -d /home/ceph -m ceph
2 echo ceph | passwd --stdin ceph
3 echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
4 chmod 0400 /etc/sudoers.d/ceph
5 exit
6 parted /dev/xvdb -- mklabel gpt
7 parted /dev/xvdb -- mkpart part1 2048s 50%
8 parted /dev/xvdb -- mkpart part2 51% 100%
9 mkfs.xfs /dev/xvdb1
10 mkdir -p /var/local/osd0
11 echo "/dev/xvdb1 /var/local/osd0 xfs noatime,nobarrier 0 0" >> /etc/fstab
12 mount /var/local/osd0/
13 chown ceph.ceph /var/local/osd0
14 df -h /var/local/osd0
15 exit
16 history
And on storage2:
[root@storage2 ~]# history
1 useradd -d /home/ceph -m ceph
2 echo ceph | passwd --stdin ceph
3 echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
4 chmod 0400 /etc/sudoers.d/ceph
5 exit
6 parted /dev/xvdb -- mklabel gpt
7 parted /dev/xvdb -- mkpart part1 2048s 50%
8 parted /dev/xvdb -- mkpart part2 51% 100%
9 mkfs.xfs /dev/xvdb1
10 mkdir -p /var/local/osd1
11 echo "/dev/xvdb1 /var/local/osd1 xfs noatime,nobarrier 0 0" >> /etc/fstab
12 mount /var/local/osd1
13 chown ceph.ceph /var/local/osd1
14 df -h /var/local/osd1
15 exit
16 history
Finally the end result:
[ceph@rdo-cc ceph-cluster]$ ceph -s
cluster 8d2db565-40db-4f65-b223-8754294ea0a1
health HEALTH_OK
monmap e1: 1 mons at {rdo-cc=192.168.98.1:6789/0}
election epoch 3, quorum 0 rdo-cc
osdmap e10: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 66454 MB / 76760 MB avail
64 active+clean
[ceph@rdo-cc ceph-cluster]$
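Beyond ceph -s, a couple of quick checks if you want to confirm both OSDs really joined (standard Ceph commands, not part of the lab steps):

ceph osd tree       # both OSDs should appear under their hosts and show as up
ceph health detail  # should print HEALTH_OK with no warnings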
Please let us know if you continue to have issues with the lab.
Regards,
-
Thank you very much. It works!
Cheers...