Hi, I have a problem installing the LAVA server and dispatcher using the Docker images that Linaro offers.
I installed both images (server and dispatcher) on my local PC.
When I submit a job, it is listed on the LAVA server,
but its status stays at 'Submitted' and never changes.
When I visit {local ip address:port number}/scheduler/device/qemu01 on the server,
I see the message below.
Does this mean a health-check job has to be registered before
submitting a test job? If so, how do I do that?
I have looked for a way to solve this problem, but couldn't find one.
Although I disabled the health check on this device and
forced its Health to 'Good',
the health status soon changes from Good to Bad (Invalid device configuration).
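If it matters: my reading of the 2018.x documentation is that each device type needs a health-check job definition, stored on the master at /etc/lava-server/dispatcher-config/health-checks/<device-type>.yaml (that path is my understanding of the docs, not something I have verified), and I have not created one for qemu. My guess is that it would be an ordinary job definition, roughly like:

```yaml
# My guess at a minimal qemu health check (untested), adapted from the
# standard first-job example in the LAVA docs.
device_type: qemu
job_name: qemu health check
timeouts:
  job:
    minutes: 10
  action:
    minutes: 5
priority: medium
visibility: public
context:
  arch: amd64
```

Is something like this required before the scheduler will run any job on the device?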
Below is what I did to install the LAVA server and dispatcher.
- LAVA Server
1) Pull the Docker image and run it.
$
2) Create a superuser
Created the id as admin, password as admin.
3) Create a token
Created a token for the admin account in the server web UI.
4) Add device type and device
$ lava-server manage device-types add qemu
5) Add device dictionary
$ lava-server manage devices add \
    --device-type qemu --worker new_lava_slave qemu01
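One thing I am not sure about: the command above only creates the device entry, and I never imported a device dictionary for qemu01. From the LAVA docs, my understanding is that a device without a valid dictionary is exactly what shows up as 'Invalid device configuration'. A minimal dictionary for a qemu device would, I believe, be a single extends line (the file name qemu01.jinja2 is just my guess, and I am not certain of the exact import command):

```
$ cat qemu01.jinja2
{% extends 'qemu.jinja2' %}
$ lava-server manage device-dictionary --hostname qemu01 --import qemu01.jinja2
```

Is this step mandatory, and did skipping it cause the health status I am seeing?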
- LAVA dispatcher
1) Pull the Docker image and run it.
$ docker run \
    -v /boot:/boot \
    -v /lib/modules:/lib/modules \
    -v /home/lava-slave/LAVA-TEST:/opt/share \
    -v /dev/bus/usb:/dev/bus/usb \
    -v ~/.ssh/id_rsa_lava.pub:/home/lava/.ssh/authorized_keys:ro \
    -v /sys/fs/cgroup:/sys/fs/cgroup \
    --device=/dev/ttyUSB0 \
    -p 2022:22 -p 5555:5555 -p 5556:5556 \
    -h new_lava_slave \
    --privileged \
    -e LAVA_SERVER_IP="192.168.1.44" \
    -e "LOGGER_URL=tcp://192.168.1.44:5557" \
    -e "MASTER_URL=tcp://192.168.1.44:5558" \
    -e "DISPATCHER_HOSTNAME=--hostname=new_lava_slave" \
    lavasoftware/lava-dispatcher:2018.11
2) Submit job file
$ ./submityaml.py -p -k apikey.txt qemu01.yaml
Below is the submityaml.py Python code.
The apikey.txt file contains the token created on the server.
#!/usr/bin/python
import argparse
import os.path
import sys
import time
import xmlrpclib

SLEEP = 5
__version__ = 0.5
LAVA_SERVER_IP = "192.168.1.44"


def is_valid_file(parser, arg, flag):
    if not os.path.exists(arg):
        parser.error("The file %s does not exist!" % arg)
    else:
        return open(arg, flag)  # return an open file handle


def setup_args_parser():
    """Setup the argument parsing.

    :return The parsed arguments.
    """
    description = "Submit job file"
    parser = argparse.ArgumentParser(version=__version__,
                                     description=description)
    parser.add_argument("yamlfile", help="specify target job file",
                        metavar="FILE",
                        type=lambda x: is_valid_file(parser, x, 'r'))
    parser.add_argument("-d", "--debug", action="store_true",
                        help="Display verbose debug details")
    parser.add_argument("-p", "--poll", action="store_true",
                        help="poll job status until job completes")
    parser.add_argument("-k", "--apikey", default="apikey.txt",
                        help="File containing the LAVA api key")
    parser.add_argument("--port", default="9099",
                        help="LAVA/Apache default port number")
    return parser.parse_args()


def loadConfiguration():
    global args
    args = setup_args_parser()


def loadJob(server_str):
    """loadJob - read the job file and fix it up for future submission"""
    return args.yamlfile.read()


def submitJob(yamlfile, server):
    """submitJob - XMLRPC call to submit a job file

    returns jobid of the submitted job
    """
    # When making the call to submit_job, you have to send a string
    jobid = server.scheduler.submit_job(yamlfile)
    return jobid


def monitorJob(jobid, server, server_str):
    """monitorJob - added to poll for a job to complete"""
    if args.poll:
        sys.stdout.write("Job polling enabled\n")
        # wcount = number of times we loop while the job is running
        wcount = 0
        # count = number of times we loop waiting for the job to start
        count = 0
        f = open("job_status.txt", "w+")
        while True:
            status = server.scheduler.job_status(jobid)
            if status['job_status'] == 'Complete':
                f.write("Complete\n")
                break
            elif status['job_status'] == 'Canceled':
                f.write("Canceled\n")
                print '\nJob Canceled'
                exit(0)
            elif status['job_status'] == 'Submitted':
                sys.stdout.write("Job waiting to run for % 2d seconds\n"
                                 % (wcount * SLEEP))
                sys.stdout.flush()
                wcount += 1
            elif status['job_status'] == 'Running':
                sys.stdout.write("Job Running for % 2d seconds\n"
                                 % (count * SLEEP))
                sys.stdout.flush()
                count += 1
            else:
                f.write("unknown status\n")
                print "unknown status"
                exit(0)
            time.sleep(SLEEP)
        print '\n\nJob Completed: ' + str(count * SLEEP) + ' s (' \
              + str(wcount * SLEEP) + ' s in queue)'


def process():
    print "Submitting test job to LAVA server"
    loadConfiguration()
    user = "admin"
    with open(args.apikey) as f:
        line = f.readline()
        apikey = line.rstrip('\n')
    server_str = 'http://' + LAVA_SERVER_IP + ":" + args.port
    xmlrpc_str = 'http://' + user + ":" + apikey + "@" \
                 + LAVA_SERVER_IP + ":" + args.port + '/RPC2/'
    print server_str
    print xmlrpc_str
    server = xmlrpclib.ServerProxy(xmlrpc_str)
    server.system.listMethods()
    yamlfile = loadJob(server_str)
    jobid = submitJob(yamlfile, server)
    monitorJob(jobid, server, server_str)


if __name__ == '__main__':
    process()
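In case it helps, I also checked that the same URL construction works on Python 3, where xmlrpclib became xmlrpc.client (this is just a sketch, not my actual script; the token is a placeholder, and only the proxy is created, so no RPC is made):

```python
# Sketch of the same endpoint construction on Python 3. The server IP,
# port, user and token mirror the placeholders in the script above.
import xmlrpc.client

LAVA_SERVER_IP = "192.168.1.44"
PORT = "9099"
USER = "admin"
APIKEY = "replace-with-your-token"

# Credentials are embedded in the endpoint URL, as in the Python 2 script.
xmlrpc_url = "http://%s:%s@%s:%s/RPC2/" % (USER, APIKEY, LAVA_SERVER_IP, PORT)

# No network traffic happens until an RPC method is actually called,
# so constructing the proxy does not require a reachable server.
server = xmlrpc.client.ServerProxy(xmlrpc_url)

# Submission would then be:
#   jobid = server.scheduler.submit_job(open("qemu01.yaml").read())
print(xmlrpc_url)
```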
The job file named qemu01.yaml is below.
# Your first LAVA JOB definition for an x86_64 QEMU
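The file is based on the standard first-job example for x86_64 QEMU in the LAVA documentation. Roughly, it looks like the sketch below (reproduced from memory rather than pasted, and with the rootfs URL replaced by a placeholder, so details may differ from my actual file):

```yaml
device_type: qemu
job_name: qemu pipeline, first job
timeouts:
  job:
    minutes: 15
  action:
    minutes: 5
priority: medium
visibility: public
context:
  arch: amd64
actions:
- deploy:
    timeout:
      minutes: 5
    to: tmpfs
    images:
      rootfs:
        image_arg: -drive format=raw,file={rootfs}
        url: https://example.com/path/to/rootfs.img.gz   # placeholder
        compression: gz
- boot:
    timeout:
      minutes: 2
    method: qemu
    media: tmpfs
    prompts: ["root@debian:"]
```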