On Thu, 20 Dec 2018 at 09:09, tomato <seoji@nexell.co.kr> wrote:

Hi, I have a problem installing the LAVA server and dispatcher using the Docker images that Linaro offers.

I installed both images (server and dispatcher) on my local PC.

When I submit a job, it is listed on the LAVA server.

But its status remains 'Submitted' and never changes.


Test jobs that fail to start are usually a sign of a device configuration problem.

On the master, execute:

sudo lava-server manage check --deploy


Check the lava-master logs and the lava-slave logs.
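With this Docker setup the logs live inside the containers. Assuming the images keep the standard Debian log locations (worth verifying on your install), something like this should show why the scheduler is not picking the job up (container names taken from your docker run commands):

# master logs, inside the server container
$ docker exec -it new_lava_server tail -f /var/log/lava-server/lava-master.log
# slave logs, inside the dispatcher container
$ docker exec -it new_lava_slave tail -f /var/log/lava-dispatcher/lava-slave.log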

 

When I visit {local ip address:port number}/scheduler/device/qemu01 on the server,

I can see the message below.

Does this mean that a health-check job has to be registered before submitting a test job?


No, but it is recommended. 
 

If so, how do I do that?
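A health check is just a normal job definition stored per device type on the master. As a sketch (the path assumes the standard 2018.11 layout inside the server container; check your image), a known-good QEMU job can be reused:

$ docker cp qemu01.yaml new_lava_server:/etc/lava-server/dispatcher-config/health-checks/qemu.yaml

The scheduler then runs that job automatically whenever the device health needs to be re-established.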


 

I have looked for a way to figure out this problem, but I couldn't find one.


There is a clear hint there, in red. The device Health is Bad. The same page also shows the reason:

Invalid device configuration.

 

Although I tried disabling the health check on this device and forcing the Health to 'Good',

the Health status soon changes back from Good → Bad (Invalid device configuration).




 

Below is what I did to install the LAVA server and dispatcher.


- LAVA Server

1) Pull docker image and run.

$ docker pull lavasoftware/lava-server:2018.11
$ docker run -itd --name new_lava_server --cap-add=NET_ADMIN \
    -p 9099:80 -p 5557:5555 -p 5558:5556 -h new_lava_server \
    lavasoftware/lava-server:2018.11

2) Create superuser

Create the ID as admin and the password as admin.

$ lava-server manage createsuperuser

3) Create token

Create a token for the admin account in the server web UI.

4) Add device type and device

$ lava-server manage device-types add qemu

5) Add device dictionary

$ lava-server manage devices add --device-type qemu --worker new_lava_slave qemu01



That is not adding a device dictionary. That is simply creating the Device database object.
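To actually add one, the device needs a jinja2 dictionary which extends the qemu device-type template, imported for the qemu01 hostname. A minimal sketch (the management command and paths here are assumptions based on the packaged helpers; confirm with lava-server manage device-dictionary --help):

# minimal dictionary: just inherit the qemu device-type template
$ cat > qemu01.jinja2 <<'EOF'
{% extends 'qemu.jinja2' %}
EOF
$ docker cp qemu01.jinja2 new_lava_server:/root/qemu01.jinja2
$ docker exec new_lava_server lava-server manage device-dictionary --hostname qemu01 --import /root/qemu01.jinja2

Once a valid dictionary is in place, the 'Invalid device configuration' state should clear and the device can come out of Bad health.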

 

- LAVA dispatcher

1) Pull docker image and run.

$ docker pull lavasoftware/lava-dispatcher:2018.11

$ docker run -it --name new_lava_slave \
       -v /boot:/boot -v /lib/modules:/lib/modules -v /home/lava-slave/LAVA-TEST:/opt/share \
       -v /dev/bus/usb:/dev/bus/usb -v ~/.ssh/id_rsa_lava.pub:/home/lava/.ssh/authorized_keys:ro -v /sys/fs/cgroup:/sys/fs/cgroup \
       --device=/dev/ttyUSB0 \
       -p 2022:22 -p 5555:5555 -p 5556:5556 \
       -h new_lava_slave \
       --privileged \
       -e LAVA_SERVER_IP="192.168.1.44" \
       -e "LOGGER_URL=tcp://192.168.1.44:5557" \
       -e "MASTER_URL=tcp://192.168.1.44:5558" \
       -e "DISPATCHER_HOSTNAME=--hostname=new_lava_slave" \
       lavasoftware/lava-dispatcher:2018.11

2) Submit job file

$ ./submityaml.py -p -k apikey.txt qemu01.yaml

Below is the submityaml.py Python code.


You should look at using lavacli, and you should also use Python 3 for your scripts. There is an example on the Available Methods page of the instance.
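As a sketch of the lavacli flow (the exact flags are from memory, so check lavacli --help; <job-id> is whatever jobs submit prints):

$ pip3 install lavacli
# register the server once, using the token you created in the web UI
$ lavacli identities add --uri http://192.168.1.44:9099/RPC2 --username admin --token "$(cat apikey.txt)" default
$ lavacli jobs submit qemu01.yaml
$ lavacli jobs wait <job-id>
$ lavacli jobs logs <job-id>

jobs wait blocks until the job finishes, which removes the need for the polling loop below.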


 

The apikey.txt file contains the token created on the server.

#!/usr/bin/python
import argparse
import os.path
import sys
import time
import xmlrpclib

SLEEP = 5
__version__ = 0.5

LAVA_SERVER_IP = "192.168.1.44"

def is_valid_file(parser, arg, flag):
    if not os.path.exists(arg):
        parser.error("The file %s does not exist!" % arg)
    else:
        return open(arg, flag)  # return an open file handle


def setup_args_parser():
    """Setup the argument parsing.

    :return The parsed arguments.
    """
    description = "Submit job file"
    parser = argparse.ArgumentParser(version=__version__, description=description)
    parser.add_argument("yamlfile", help="specify target job file", metavar="FILE",
                   type=lambda x: is_valid_file(parser, x, 'r'))
    parser.add_argument("-d", "--debug", action="store_true", help="Display verbose debug details")
    parser.add_argument("-p", "--poll", action="store_true", help="poll job status until job completes")
    parser.add_argument("-k", "--apikey", default="apikey.txt", help="File containing the LAVA api key")
    parser.add_argument("--port", default="9099", help="LAVA/Apache default port number")

    return parser.parse_args()


def loadConfiguration():
    global args
    args = setup_args_parser()


def loadJob(server_str):
    """loadJob - read the JSON job file and fix it up for future submission
    """
    return args.yamlfile.read()


def submitJob(yamlfile, server):
    """submitJob - XMLRPC call to submit a JSON file

       returns jobid of the submitted job
    """
    # When making the call to submit_job, you have to send a string
    jobid = server.scheduler.submit_job(yamlfile)
    return jobid


def monitorJob(jobid, server, server_str):
    """monitorJob - added to poll for a job to complete

    """
    if args.poll:
        sys.stdout.write("Job polling enabled\n")
        # wcount = number of times we loop waiting for the job to start
        wcount = 0
        # count = number of times we loop while the job is running
        count = 0
       
        f = open("job_status.txt", "w+")

        while True:
            status = server.scheduler.job_status(jobid)


**AVOID** polling the XMLRPC in this way - it is bad practice as it adds a constant load to the server - this will not scale. lavacli has the support you need without reinventing the submission script.

 
            if status['job_status'] == 'Complete':
                f.write("Complete\n")
                break
            elif status['job_status'] == 'Canceled':
                f.write("Canceled\n")
                print '\nJob Canceled'
                exit(0)
            elif status['job_status'] == 'Submitted':
                sys.stdout.write("Job waiting to run for % 2d seconds\n" % (wcount * SLEEP))
                sys.stdout.flush()
                wcount += 1
            elif status['job_status'] == 'Running':
                sys.stdout.write("Job Running for % 2d seconds\n" % (count * SLEEP))
                sys.stdout.flush()
                count += 1
            else:
                f.write("unknown status\n")
                print "unknown status"
                exit(0)
            time.sleep(SLEEP)
        print '\n\nJob Completed: ' + str(count * SLEEP) + ' s (' + str(wcount * SLEEP) + ' s in queue)'


def process():
    print "Submitting test job to LAVA server"
    loadConfiguration()
    user = "admin"
    with open(args.apikey) as f:
        line = f.readline()
        apikey = line.rstrip('\n')

    server_str = 'http://' + LAVA_SERVER_IP + ":" + args.port
    xmlrpc_str = 'http://' + user + ":" + apikey + "@" + LAVA_SERVER_IP + ":" + args.port + '/RPC2/'
    print server_str
    print xmlrpc_str
    server = xmlrpclib.ServerProxy(xmlrpc_str)
    server.system.listMethods()

    yamlfile = loadJob(server_str)

    jobid = submitJob(yamlfile, server)

    monitorJob(jobid, server, server_str)


if __name__ == '__main__':
    process()

The job file named qemu01.yaml is below.

# Your first LAVA JOB definition for an x86_64 QEMU
device_type: qemu
job_name: QEMU pipeline, first job

timeouts:
  job:
    minutes: 15
  action:
    minutes: 5
  connection:
    minutes: 2
priority: medium
visibility: public

# context allows specific values to be overridden or included
context:
  # tell the qemu template which architecture is being tested
  # the template uses that to ensure that qemu-system-x86_64 is executed.
  arch: amd64

metadata:
  # please change these fields when modifying this job for your own tests.
  docs-source: first-job
  docs-filename: qemu-pipeline-first-job.yaml

# ACTION_BLOCK
actions:
- deploy:
    timeout:
      minutes: 5
    to: tmpfs
    images:
      rootfs:
        image_arg: -drive format=raw,file={rootfs}
        url: https://images.validation.linaro.org/kvm/standard/stretch-2.img.gz
        compression: gz

# BOOT_BLOCK
- boot:
    timeout:
      minutes: 2
    method: qemu
    media: tmpfs
    prompts: ["root@debian:"]
    auto_login:
      login_prompt: "login:"
      username: root

- test:
    timeout:
      minutes: 5
    definitions:
    - repository: http://git.linaro.org/lava-team/lava-functional-tests.git
      from: git
      path: lava-test-shell/smoke-tests-basic.yaml
      name: smoke-tests


_______________________________________________
Lava-users mailing list
Lava-users@lists.lavasoftware.org
https://lists.lavasoftware.org/mailman/listinfo/lava-users

