Welcome to the Linux Foundation Forum!

Stuck on Lab3.2, adding a compute node


Hi, I have followed the steps in LAB_3.2.pdf, with a CC already set up as in Lab 2.1. However, after I run stack.sh on the compute node, no issue is reported; I just can't find the compute node in the CC node's hypervisor list, and there is only one record in the DB:


select * from compute_nodes;


Is there anything I should check? How does the compute node register itself with the CC? Any clue?
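For reference, a minimal way to check registration from the CC side (a sketch assuming a standard DevStack install with the nova client on the PATH; adjust if your environment differs):

```shell
# Run on the CC node with admin credentials sourced.
source openrc admin admin

# Each compute node should appear here once its nova-compute
# service has checked in with the CC.
nova hypervisor-list

# Also confirm nova-compute on the new node reports State "up".
nova service-list
```

If the node never shows up, the nova-compute log on the compute node is usually the first place to look.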


One complaint about the lab environment: it is very difficult to work with. I have to restart all the work every time I pick up the course, due to connection issues or something else.


Can you provide a working VM? Currently I'm using Vagrant / VirtualBox to build my own.




  • TangJianyu
    TangJianyu Posts: 5

    I'm chasing several suspected issues. For example, after I ran stack.sh on the compute node, there was an error about generate-subunit, which I hadn't noticed before:



    DevStack Component Timing


    Total runtime    112

    run_process       15

    apt-get-update    28

    pip_install       31

    wait_for_service   0

    apt-get            2



    This is your host IP address:

    This is your host IPv6 address: ::1

    ./stack.sh: line 501: generate-subunit: command not found


    So I checked the install log for os-testr, which is supposed to provide this command, but surprisingly found that the script uninstalled os-testr-0.7.0 and installed os-testr-0.2.0. What's wrong?


    Thanks a lot.


    2016-05-31 08:22:30.088 |   Found existing installation: os-testr 0.7.0

    2016-05-31 08:22:30.092 |     Uninstalling os-testr-0.7.0:

    2016-05-31 08:22:30.095 |       Successfully uninstalled os-testr-0.7.0

    2016-05-31 08:22:30.107 |   Found existing installation: six 1.10.0

    2016-05-31 08:22:30.109 |     Uninstalling six-1.10.0:

    2016-05-31 08:22:30.110 |       Successfully uninstalled six-1.10.0

    2016-05-31 08:22:30.117 |   Found existing installation: argparse 1.2.1

    2016-05-31 08:22:30.119 |     Not uninstalling argparse at /usr/lib/python2.7, as it is in the standard library.

    2016-05-31 08:22:30.521 | Successfully installed Babel-1.3 argparse-1.2.1 extras-0.0.3 fixtures-1.2.0 os-testr-0.2.0 pbr-1.2.0 python-mimeparse-0.1.4 python-subunit-1.1.0 pytz-2015.4 six-1.9.0 testtools-1.8.0
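    If anyone hits the same downgrade, one possible workaround (a sketch only, assuming pip manages os-testr on the node; the 0.7.0 version number is taken from the log above) is to pull the newer release back and re-check that the command resolves:

```shell
# See which os-testr release pip currently has installed.
pip show os-testr | grep '^Version'

# If it was downgraded to 0.2.0 (as in the log above), reinstall
# a newer release; generate-subunit ships with recent os-testr.
sudo pip install --upgrade 'os-testr>=0.7.0'

# generate-subunit should now resolve on the PATH.
command -v generate-subunit
```

    Note that DevStack's requirements sync may pin versions itself, so a clean re-stack with an updated DevStack checkout is the more robust fix.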



  • serewicz
    serewicz Posts: 1,000

    Hello. If I understand your post, you are using your own system (VirtualBox and Vagrant) and not the lab instances included with the course. It is not possible to troubleshoot or support issues outside of the provided lab environment.

  • serewicz
    serewicz Posts: 1,000
    edited May 2016

    If this problem happens again, please attach the local.conf file you used on the compute host as well as the stack log file. Without seeing the output, my guess would be that there was an error in the local.conf file, or that the same local.conf file was used on the compute host. If it is uninstalling software, is this the second time running ./stack.sh on that node? Remember to run ./unstack.sh and ./clean.sh between attempts at installing DevStack to minimize conflicts.
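    The reset sequence between attempts looks roughly like this (run from your DevStack checkout; the path shown is the usual default and is an assumption, adjust to your setup):

```shell
cd ~/devstack      # wherever DevStack was cloned
./unstack.sh       # stop everything stack.sh started
./clean.sh         # remove installed packages/state from the last run
git pull           # optional: pick up upstream fixes before retrying
./stack.sh         # fresh install attempt
```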

  • TangJianyu

    Thank you serewicz, I have resolved this problem by executing a git pull / clean / stack again; after getting the updated version of DevStack, the compute node is now listed.


    One change is that in the local.conf on the compute node, I enabled n-api-meta instead of n-api.
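    For illustration, that switch might look like this in the compute node's local.conf (a sketch only; the surrounding settings and the exact service list here are assumptions, not the lab's actual file):

```ini
[[local|localrc]]
# Compute node: run only the metadata flavor of the Nova API;
# the full n-api stays on the CC. Other services elided.
ENABLED_SERVICES=n-cpu,n-net,n-api-meta
```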


    Another difference is that I executed the following on the CC node:

    for i in `seq 2 10`; do nova-manage fixed reserve 10.10.128.$i; done

    Not sure if it's related.
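    For anyone reading along: that loop reserves the fixed IPs 10.10.128.2 through 10.10.128.10 so Nova's fixed-IP allocator won't hand them out to instances (useful when those addresses are used by the hosts themselves). A dry-run version shows exactly what it touches:

```shell
# Echo first to confirm the address range before reserving for real.
for i in $(seq 2 10); do
  echo "would reserve 10.10.128.$i"
  # nova-manage fixed reserve 10.10.128.$i   # run this form on the CC node
done
```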

  • TangJianyu

    And one more difference:

    I have to execute 

    source openrc admin admin

    on the compute node; otherwise I get errors.
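    Sourcing openrc exports the OS_* environment variables the OpenStack clients read; a quick way to confirm it took effect (variable names are the standard OpenStack ones):

```shell
source openrc admin admin

# The clients read these; if unset, commands fail with auth errors.
env | grep '^OS_'    # expect OS_USERNAME, OS_TENANT_NAME, OS_AUTH_URL, ...
```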

  • serewicz
    serewicz Posts: 1,000

    Thank you for the feedback, TangJianyu. I will look into issues with which services need to be started; these things change without much forewarning in DevStack. Sourcing the openrc file has been added to the course material and should be rolled out in the next version. Thank you again.

