2017-12-15 version Lab 3.2 problem
I have retried the labs since the new version based on Pike came out. I have got through to Lab 3.2 step 5, with no errors seen in the ./stack.sh output (or in the stack.sh.log file):
=========================
DevStack Component Timing
=========================
Total runtime 781
run_process 6
apt-get-update 9
pip_install 330
osc 2
wait_for_service 8
git_timed 132
apt-get 176
=========================
This is your host IP address: 192.168.97.2
This is your host IPv6 address: ::1
WARNING:
Using lib/neutron-legacy is deprecated, and it will be removed in the future
Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html
DevStack Version: pike
Change: 4a85d5d6e040b08e085f9f5752de90a2346df6d1 Move remove_uwsgi_config to cleanup_placement 2017-12-06 11:31:12 +0000
OS Version: Ubuntu 16.04 xenial
However, because I only see one hypervisor listed, when I try to run the nova-manage command I get the following output:
[email protected]:~/devstack$ source openrc admin
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
[email protected]:~/devstack$ openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+--------------+-------+
| 1 | devstack-cc | QEMU | 192.168.97.1 | up |
+----+---------------------+-----------------+--------------+-------+
[email protected]:~/devstack$ nova-manage cell_v2 discover_hosts
An error has occurred:
Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1797, in main
ret = fn(*fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/manage.py", line 1583, in discover_hosts
hosts = host_mapping_obj.discover_hosts(ctxt, cell_uuid, status_fn)
File "/opt/stack/nova/nova/objects/host_mapping.py", line 206, in discover_hosts
cell_mappings = objects.CellMappingList.get_all(ctxt)
File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 1
84, in wrapper
result = fn(cls, context, *args, **kwargs)
File "/opt/stack/nova/nova/objects/cell_mapping.py", line 137, in get_all
db_mappings = cls._get_all_from_db(context)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 978, in wrapper
with self._transaction_scope(context):
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 1028, in _transaction_scope
context=context) as resource:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 633, in _session
bind=self.connection, mode=self.mode)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 398, in _create_session
self._start()
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 484, in _start
engine_args, maker_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", l
ine 506, in _setup_for_connection
"No sql_connection parameter is established")
CantStartEngineError: No sql_connection parameter is established
I've checked the local.conf and done several unstack/clean cycles, but I am stuck here. The investigation I've managed to do suggests the compute-node configuration isn't being added to a cell(?), but this looks like a bug.
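For anyone investigating: the traceback fails before any SQL actually runs, so one minimal check on devstack-cc (assuming the default /etc/nova/nova.conf path that DevStack writes) would be to confirm the connection URLs are set at all:
grep -n '^connection' /etc/nova/nova.conf
If nothing comes back under the [database] and [api_database] sections, nova-manage has no sql_connection to build an engine from, which matches the CantStartEngineError above.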
Does ANYONE have any ideas on this?
Comments
-
Hello,
I just ran the labs, and it seems a script is missing an important variable. This is the local.conf file on the compute-node:
[[local|localrc]]
HOST_IP=192.168.97.2        # IP for compute-node
SERVICE_HOST=192.168.97.1   # devstack-cc IP, first node you used
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret
DATABASE_TYPE=mysql
MYSQL_HOST=
RABBIT_HOST=
GLANCE_HOSTPORT=:9292
ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://:6080/vnc_auto.html"
VNCSERVER_LISTEN=
VNCSERVER_PROXYCLIENT_ADDRESS=
You will note that MYSQL_HOST and a few other variables are empty. When the nova-manage command runs, it tries to connect to the database to update the hypervisor entry. The local.conf file should have several entries using the variable $SERVICE_HOST. If you reference Lab Exercise 3.2, Install Software on the New Compute Node, step 2, you will see what the local.conf file should look like.
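For reference, this is roughly what the $SERVICE_HOST-based entries look like in a typical DevStack multi-node compute local.conf (a sketch following the standard DevStack multi-node pattern; the authoritative values for the lab are the ones in Exercise 3.2, step 2):
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN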
I will report this to the infrastructure folks so they can update the script. Running ./unstack.sh, then ./clean.sh, editing the local.conf file, and running ./stack.sh again (see the sequence below) should fix this issue. I am about to try it right now; I just wanted to get you an answer ASAP.
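In other words, on the compute-node (the command sequence paraphrased from the above, run from the devstack directory):
cd ~/devstack
./unstack.sh
./clean.sh
# edit local.conf so MYSQL_HOST, RABBIT_HOST, GLANCE_HOSTPORT, etc. reference $SERVICE_HOST
./stack.sh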
Regards,
-
Hello again,
In addition to the script issue, the fix for which is rolling out now, there has been a change from the DevStack folks in how to add another node to the system. The nova-manage cell_v2 discover_hosts process was a work-around for a previous bug. The new five-step process, which I am putting into the course book now, is as follows:
When the *compute-node* finishes the ./stack.sh, then on the **devstack-cc** run this process:

1) Source the config file as admin
[email protected]:~$ cd devstack/
[email protected]:~/devstack$ source openrc admin
WARNING: setting legacy OS_TENANT_NAME to support cli tools.

2) Verify the compute-host was added and is up.
[email protected]:~/devstack$ nova service-list --binary nova-compute
+--------------------------------------+--------------+--------------+------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary       | Host         | Zone | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+--------------+--------------+------+---------+-------+----------------------------+-----------------+-------------+
| 32fa0ccd-45a8-45f5-b2e7-2d84e7377eb3 | nova-compute | devstack-cc  | nova | enabled | up    | 2017-12-19T19:49:35.000000 | -               | False       |
| 9fc15015-d150-478f-ba4f-764fe4ed03c9 | nova-compute | compute-node | nova | enabled | up    | 2017-12-19T19:49:35.000000 | -               | False       |
+--------------------------------------+--------------+--------------+------+---------+-------+----------------------------+-----------------+-------------+

3) Verify the hypervisor has not yet been added.
[email protected]:~/devstack$ openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP      | State |
+----+---------------------+-----------------+--------------+-------+
|  1 | devstack-cc         | QEMU            | 192.168.97.1 | up    |
+----+---------------------+-----------------+--------------+-------+

4) Use a script to join the hypervisor to the cloud.
[email protected]:~/devstack$ ./tools/discover_hosts.sh
/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning: (1287, u"'@@tx_isolation' is deprecated and will be removed in a future release. Please use '@@transaction_isolation' instead")
  result = self._query(query)
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 79bd3053-a007-469d-ba72-d7b106d08568
Found 1 unmapped computes in cell: 79bd3053-a007-469d-ba72-d7b106d08568
Checking host mapping for compute host 'compute-node': b3caa6f3-fe33-49af-839a-3758138af2b1
Creating host mapping for compute host 'compute-node': b3caa6f3-fe33-49af-839a-3758138af2b1

5) Verify the compute-host has been added.
[email protected]:~/devstack$ openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP      | State |
+----+---------------------+-----------------+--------------+-------+
|  1 | devstack-cc         | QEMU            | 192.168.97.1 | up    |
|  2 | compute-node        | QEMU            | 192.168.97.2 | up    |
+----+---------------------+-----------------+--------------+-------+
[email protected]:~/devstack$
I have just tested this process, and it worked on a fresh load of Lab 3.2.
Regards,
-
OK, I have followed these instructions and confirm that they work. Thanks for the quick response.
I have just looked at the full Labs and Solutions doc (from 1.8) and the specific Lab and Solution doc (from 3.18.b), and as of now they do not have the updated multi-node instructions.
Also, re: the local.conf, I was copying and pasting the example from the full Labs and Solutions doc, which I downloaded yesterday (19 Dec), so it DID in fact have the $SERVICE_HOST details included.
-
Hi:
It takes some time to update the course, as it cannot be done in real time (even a small change requires an entire upload). The forums help, and I'm sure there will be a new errata file posted shortly. It should not be long before there is a new update to the course itself, but we have to let things accumulate for a while.
-
Yes, the local.conf file in the book was accurate. Due to an issue with the script used to install the lab node, the local.conf on the node was inaccurate. While troubleshooting, I found that the book and the node did not match. Had you restacked without editing the local.conf file on the node, it would have continued to fail. If you had edited local.conf with the file from the book, it would have installed correctly, but you would then have encountered the five-step update.
Regards,