
Lab 11.1 -- Ceph: cannot deploy OSD

In the step "ceph-deploy osd create --data /dev/xvdb storage1" I got this error:
[storage1][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/xvdb
[storage1][WARNIN] usage: ceph-volume lvm create [-h] --data DATA [--filestore]
[storage1][WARNIN] [--journal JOURNAL] [--bluestore]
[storage1][WARNIN] [--block.db BLOCK_DB] [--block.wal BLOCK_WAL]
[storage1][WARNIN] [--osd-id OSD_ID] [--osd-fsid OSD_FSID]
[storage1][WARNIN] [--crush-device-class CRUSH_DEVICE_CLASS]
[storage1][WARNIN] [--dmcrypt] [--no-systemd]
[storage1][WARNIN] ceph-volume lvm create: error: Argument (device) does not exist: /dev/xvdb
[storage1][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/xvdb
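The "Argument (device) does not exist" error means there is no block device at that path on storage1; device names depend on the hypervisor (virtio disks appear as /dev/vd*, Xen disks as /dev/xvd*). A quick way to check what is actually present:

# On storage1: list the block devices and look for the empty spare disk
lsblk
# Or, from the admin node, have ceph-deploy report what it sees
ceph-deploy disk list storage1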


I did not find /dev/xvdb on any of the nodes, so I tried /dev/vdb, which is available. In this case I got these errors:
[storage1][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[storage1][WARNIN] No data was received after 300 seconds, disconnecting...
[storage1][INFO ] checking OSD status...

No OSD was created.
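A "No data was received after 300 seconds" timeout usually means the remote ceph-volume process hung rather than failed outright, often because the storage node cannot reach the monitor. A quick check (the monitor host name below is a placeholder):

# From storage1: test whether the monitor port is reachable
nc -zv <monitor-host> 6789
# Inspect firewall rules that may be silently dropping Ceph traffic
sudo iptables -L -n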

I ran the ceph-volume command separately to test it and saw this output:
sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 21312e4b-6123-4a24-b290-7238fff7a819
stderr: 2018-10-19 19:10:25.849490 7f74c7383700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
stderr: 2018-10-19 19:10:25.849515 7f74c7383700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
stderr: 2018-10-19 19:10:25.849517 7f74c7383700 0 librados: client.bootstrap-osd initialization error (2) No such file or directory
stderr: [errno 2] error connecting to the cluster
--> RuntimeError: Unable to create a new OSD id
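This keyring error is a separate issue: when ceph-volume is run by hand, the bootstrap-osd keyring must already be present on the storage node, something ceph-deploy normally handles itself. A sketch of copying it over manually (run from the cluster directory on the admin node; the monitor host name is a placeholder):

# Fetch the bootstrap keyrings from a monitor
ceph-deploy gatherkeys <monitor-host>
# Stage the bootstrap-osd keyring on storage1 and move it into place
scp ceph.bootstrap-osd.keyring storage1:/tmp/
ssh storage1 'sudo mv /tmp/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring'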


Comments

  • Hello,

    Which version of the Ceph software were you using?

    Regards,

  • I did not note that down, but it is certainly the Luminous release. I was trying the steps yesterday morning (10/19).
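    For reference, a quick way to confirm the installed release on any node (Luminous reports itself as 12.2.x):

    # Print the Ceph version string; the release codename appears at the end
    ceph --version
    # e.g. "ceph version 12.2.8 (...) luminous (stable)"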

  • Hi all,

    I am also trying to deploy the first OSD on storage1, without success. Here is the output:

    [ceph@rdo-cc ceph-cluster]$ ceph-deploy osd create --data /dev/vdb storage1
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy osd create --data /dev/vdb storage1
    [ceph_deploy.cli][INFO ] ceph-deploy options:
    [ceph_deploy.cli][INFO ] verbose : False
    [ceph_deploy.cli][INFO ] bluestore : None
    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f51be49c128>
    [ceph_deploy.cli][INFO ] cluster : ceph
    [ceph_deploy.cli][INFO ] fs_type : xfs
    [ceph_deploy.cli][INFO ] block_wal : None
    [ceph_deploy.cli][INFO ] default_release : False
    [ceph_deploy.cli][INFO ] username : None
    [ceph_deploy.cli][INFO ] journal : None
    [ceph_deploy.cli][INFO ] subcommand : create
    [ceph_deploy.cli][INFO ] host : storage1
    [ceph_deploy.cli][INFO ] filestore : None
    [ceph_deploy.cli][INFO ] func :
    [ceph_deploy.cli][INFO ] ceph_conf : None
    [ceph_deploy.cli][INFO ] zap_disk : False
    [ceph_deploy.cli][INFO ] data : /dev/vdb
    [ceph_deploy.cli][INFO ] block_db : None
    [ceph_deploy.cli][INFO ] dmcrypt : False
    [ceph_deploy.cli][INFO ] overwrite_conf : False
    [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
    [ceph_deploy.cli][INFO ] quiet : False
    [ceph_deploy.cli][INFO ] debug : False
    [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
    [storage1][DEBUG ] connection detected need for sudo
    [storage1][DEBUG ] connected to host: storage1
    [storage1][DEBUG ] detect platform information from remote host
    [storage1][DEBUG ] detect machine type
    [storage1][DEBUG ] find the location of an executable
    [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
    [ceph_deploy.osd][DEBUG ] Deploying osd to storage1
    [storage1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [storage1][DEBUG ] find the location of an executable
    [storage1][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
    [storage1][WARNIN] No data was received after 300 seconds, disconnecting...
    [storage1][INFO ] checking OSD status...
    [storage1][DEBUG ] find the location of an executable
    [storage1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
    [storage1][WARNIN] No data was received after 300 seconds, disconnecting...
    [ceph_deploy.osd][DEBUG ] Host storage1 is now ready for osd use.

    "ceph -s" shows, that no OSD is available:

    [ceph@rdo-cc ceph-cluster]$ ceph -s
      cluster:
        id:     8277a806-cb9e-467d-9187-12513849feea
        health: HEALTH_OK

      services:
        mon: 1 daemons, quorum rdo-cc
        mgr: rdo-cc(active)
        osd: 0 osds: 0 up, 0 in

      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0B
        usage:   0B used, 0B / 0B avail
        pgs:

    [ceph@rdo-cc ceph-cluster]$

    Cheers!

  • I just found the fix/workaround "sudo iptables -F" in another thread, which helped...

    Cheers! Elvis
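
    Note that "iptables -F" only flushes the rules until the next reboot. A more durable sketch (assuming the stock CentOS 7 rule set and the iptables-services package) is to allow the Ceph ports explicitly on every node:

    # Monitor port (on the monitor node)
    sudo iptables -I INPUT -p tcp --dport 6789 -j ACCEPT
    # OSD/MGR port range (on each storage node)
    sudo iptables -I INPUT -p tcp --dport 6800:7300 -j ACCEPT
    # Persist the rules across reboots
    sudo service iptables save

    Using -I rather than -A matters here: the default rule set ends with a REJECT rule that would otherwise shadow rules appended after it.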

