
Problems running yardstick tests in Lab 4


I'm running into problems with the test cases in Lab 4; all of them fail except the very first one (ping.yaml). Apparently there are SSH timeout errors.

I've repeated Lab 4 twice and I'm hitting the exact same errors. The first time, I exited the Bash shell of the Docker container right after the ping.yaml test case and thought the problems could be due to this, so I repeated the lab, but I'm still finding the same errors.

Also, Grafana shows empty graphs.

Could you please advise? I don't know if I'm doing something wrong, or if the test cases are failing because of existing errors in the current code.

Thanks a lot! Best regards,


  • Hi,

    It does seem like the SSH sessions are timing out, since the tests are taking longer than usual in your case.

    Can you please mention the configuration of your VM instance that you are using to run this (vcpu, memory)?

    You can also try increasing the timeout in the Python code (ssh.py in the yardstick directory), though this may only be a temporary hack. Try changing interval to 5 and/or timeout to 240; this makes the SSH sessions wait much longer before giving up.

    def wait(self, timeout=120, interval=1):  # <====== increase timeout and/or interval here
        """Wait until the host becomes available via ssh."""
        start_time = time.time()
        while True:
            try:
                return self.execute("uname")
            except (socket.error, SSHError) as e:
                self.log.debug("Ssh is still unavailable: %r", e)
            if time.time() > (start_time + timeout):
                raise SSHTimeout("Timeout waiting for '%s'", self.host)
            time.sleep(interval)
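    To make the retry logic easier to reason about in isolation, here is a minimal, self-contained sketch of the same wait-with-timeout pattern. The helper name `wait_for` and the use of a plain `TimeoutError` are my own choices for illustration, not part of the yardstick code:

    ```python
    import time

    def wait_for(check, timeout=240, interval=5):
        """Retry `check()` until it succeeds or `timeout` seconds elapse.

        `check` is any callable that raises on failure (here it stands in
        for the SSH `uname` probe). A larger `timeout` gives slow VMs more
        time to come up; a larger `interval` probes less aggressively.
        """
        start = time.time()
        last_err = None
        while True:
            try:
                return check()
            except Exception as e:   # in ssh.py this is (socket.error, SSHError)
                last_err = e
            if time.time() > (start + timeout):
                raise TimeoutError("gave up after %ss: %r" % (timeout, last_err))
            time.sleep(interval)
    ```

    With timeout=240 and interval=5, the loop probes roughly every 5 seconds for up to 4 minutes before raising, which is the behavior the suggested edit to ssh.py aims for.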

  • Hi Sriram,

    Thanks for your help.

    I'm using the configuration listed in the PDF for the lab activities: 8 cores and 52 GB RAM. I've followed the PDF's instructions, except for updating the yum repo as described in the thread "lab1-issue-configure-undercloud-yml-51-fails" of this LFS264 class forum. So far I haven't needed to follow the recommendations given in the thread "LFS264: lab1, step 5, APEX setup command fails".

    Yesterday, Feb 26, at around 20:00 (CET), I repeated the opnfv_yardstick_tc001.yaml test case of Lab 4 in two VMs in parallel. VM-1 had 8 cores and 52 GB (as per the Lab 4 PDF). VM-2 had 24 cores and 52 GB. Due to the GCP limit of 24 cores per datacenter, I placed VM-2 in "us-east1-d". (By the way: is there any reason for choosing "us-central1-f" for these labs?) Interestingly, the errors didn't appear in either of the two VMs. I'm attaching the results.

    So today, Feb 27, I wanted to repeat the rest of the test cases in Lab 4 (fio.yaml, etc.), and I got the same errors again as weeks ago. Then I tried opnfv_yardstick_tc001.yaml again and it also failed. I applied the ssh.py correction, but no luck.

    For my learning purposes, I'm satisfied with the content learned and the practice I've gained through the labs.

