Hi,
There are an MCU and an SoC on my board, with one serial port for each (ttyUSB0: MCU, ttyUSB1: SoC). The host PC (which will be the LAVA worker) is connected to both serial ports, and the command to reboot the SoC is: echo "soc boot" > /dev/ttyUSB0. We can then see the boot log on ttyUSB1.
We want to use LAVA to test the SoC system. It is arm64 Linux, without U-Boot, and we hope to add a build action for this device.
So the deploy and boot steps should be:
1. Run build.sh on the host PC (the LAVA worker) and check the result (pass/fail).
2. Run echo "soc burn" > /dev/ttyUSB0 on the host PC to switch the SoC into burn mode, and check the result (pass/fail).
3. Run burn.sh on the host PC to burn the image to the SoC, and check the result (pass/fail).
4. Run echo "soc boot" > /dev/ttyUSB0 to reboot the SoC, and check the result (pass/fail).
5. Connect to /dev/ttyUSB1 to capture the SoC boot log, and check the result (pass/fail).
6. SSH into the Linux system on the SoC. (A rough sketch of how these steps might map onto a job follows below.)
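For what it's worth, here is a rough sketch of how I imagine these steps mapping onto a LAVA job, assuming a flasher-style deploy and the minimal boot method; the device type name, image URL and prompt are placeholders:
```
# Rough sketch only -- device_type, URL and prompts are placeholders.
device_type: my-soc
job_name: soc build, burn and boot
actions:
- command:                # step 1: run build.sh on the worker
    name: run_build       # would map to build.sh in the device dictionary
- deploy:                 # steps 2-3: enter burn mode, then burn
    to: flasher           # flasher would wrap the "soc burn" echo and burn.sh
    images:
      system:
        url: http://example.com/path/to/image.bin
- boot:                   # steps 4-5: "soc boot", then read ttyUSB1
    method: minimal
    prompts:
    - 'login:'
# step 6 (ssh into the SoC) is not sketched here
```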
What I want to know is:
1. Is the above design feasible in LAVA?
2. What do I need to do to implement it? Are there any device-type templates I can refer to?
The following is my LAVA setup; I can already run test jobs with a QEMU device.
~/work/src/lava $ dpkg -l |grep lava
ii lava 2022.11.1+10+buster all Linaro Automated Validation Architecture metapackage
ii lava-common 2022.11.1+10+buster all Linaro Automated Validation Architecture common
ii lava-coordinator 2022.11.1+10+buster all LAVA coordinator daemon
ii lava-dev 2022.11.1+10+buster all Linaro Automated Validation Architecture developer support
ii lava-dispatcher 2022.11.1+10+buster all Linaro Automated Validation Architecture dispatcher
ii lava-dispatcher-host 2022.11.1+10+buster all LAVA dispatcher host tools
ii lava-server 2022.11.1+10+buster all Linaro Automated Validation Architecture server
ii lava-server-doc 2022.11.1+10+buster all Linaro Automated Validation Architecture documentation
ii lava-tool 0.25-2 all deprecated command line utility for LAVA
ii lavacli 0.9.7+buster all LAVA XML-RPC command line interface
~/work/src/lava $
Hi guys,
I currently have 2 jobs:
Job1:

```
actions:
- boot:
    failure_retry: 4
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
- test:
    timeout:
      minutes: 4
    interactive:
    - name: check-uboot
      prompts: ["=> ", "/ # "]
      script:
      - command: "printenv"
        name: printenv
        successes:
        - message: "soc_type=imx93"
```
Job2:

```
actions:
- deploy:
    to: uuu
    images:
      boot:
        url: /path/to/bootloader
- boot:
    method: uuu
    commands:
    - bcu: reset usb
    - uuu: -b sd {boot}
    - bcu: deinit
    timeout:
      minutes: 2
- boot:
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
- test:
    interactive:
    - name: check-uboot
      prompts: ["=> ", "/ # "]
      script:
      - command: "printenv"
        name: printenv
        successes:
        - message: "soc_type=imx93"
    timeout:
      minutes: 2
```
The first job just boots the board and checks that the bootloader is OK; the second job flashes a new bootloader to the board every time before checking it.
I wonder if it is possible to combine the two and define the following logic in one job:
1. Boot the board and check U-Boot.
2. If U-Boot is OK, the job finishes.
3. If U-Boot is not OK, or the boot action times out, run the flash action to put a new bootloader on the device, then finish the job.
The idea is that step 3 is optional: we only flash a new bootloader when the preceding boot action fails (see the sketch below).
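To make the idea concrete, the combined job I have in mind would look something like this; note that the on-failure key is invented purely to illustrate the branching I want, I am not aware of a real LAVA keyword for it:
```
# Hypothetical combined job -- "on-failure" is NOT a real LAVA key;
# it only illustrates the branching I would like to express.
actions:
- boot:                      # step 1: try the existing bootloader
    method: bootloader
    bootloader: u-boot
    commands: []
    prompts: ['=>']
    timeout:
      minutes: 2
    on-failure:              # step 3: runs only if the boot above failed
    - deploy:
        to: uuu
        images:
          boot:
            url: /path/to/bootloader
    - boot:
        method: uuu
        commands:
        - bcu: reset usb
        - uuu: -b sd {boot}
        - bcu: deinit
```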
Regards,
Larry
Hi all.
I am trying to set up my own embedded device using the MultiNode API.
The scenario is simple: the target needs to wait until the host role is done.
The test action that waits for the host role looks like this:
```
- test:
    interactive:
    - name: send_target_ready
      prompts:
      - 'Generate Erased Block'
      script:
      - command: null
        name: result
      - lava-send: booted
      - lava-wait: done
    role:
    - target
```
The multinode job runs very well, but the test action I mentioned does not show live logs from the device's UART connection.
Is it possible to show live logs from the device while it is waiting in 'lava-wait'?
Thanks.
Hello Team,
I'm using the notify action in the job definition to notify users about the
status of the job. I'm already a registered user in LAVA, and after
completion of my job, the administration site shows the status as "not
sent".
Please let me know what the reason might be and how I can fix it.
[image: lava-notify.PNG]
Below is the job definition I'm using:
```
device_type: ADT-UNIT1
job_name: sample test to notify user
timeouts:
  job:
    minutes: 15
  action:
    minutes: 10
  connection:
    minutes: 5
visibility: public
actions:
- command:
    name: relay_pwr_on
    timeout:
      minutes: 1
- deploy:
    to: flasher
    images:
      package:
        url: https://artifactory.softwaretools.com/artifactory/gop-generic-stable-local/…
- boot:
    method: u-boot
    commands:
    - setenv factorymode 1
    - boot
    auto_login:
      login_prompt: 'login:'
      username: root
      password_prompt: 'Password:'
      password: root
      login_commands:
      - touch /home/root/test_file
      - ifconfig
    prompts:
    - 'root@hon-grip'
    - 'root@GRIP'
notify:
  recipients:
  - to:
      method: email
      user: pavan
  criteria:
    status: finished
  verbosity: verbose
```
Thanks & Regards,
Pavan
Hi all,
I am struggling with a situation where one of the jobs in a multinode set
gets stuck in 'scheduling' status forever.
[image: image.png]
It occurs about 1 in 10 times.
Sometimes one of the multinode jobs gets stuck in 'scheduling' status and
the other job times out waiting for it.
- lava-scheduler log
[image: image.png]
When this issue occurs, the lava-scheduler log indicates that only one of
the multinode jobs was scheduled, and the other was not.
- lava-dispatcher log
[image: image.png]
And when the issue occurs, the lava-dispatcher log shows that only one job
was triggered.
I am using lava-server and lava-dispatcher in Docker instances (version
2023.08). The issue occurred with 2023.06 too.
It seems to be related to lava-scheduler. What should I check to resolve
this issue?
Please advise.
Thank you
Hi.
I am using lava-server and lava-dispatcher via the Docker images, version 2023.08.
I have an issue when submitting a multinode job: sometimes lava-server runs the jobs with abnormal job IDs such as 230, 230.1, and sometimes with normal job IDs such as 230.0, 230.1.
When I submit the definition, lava-server sometimes shows the message 'Invalid job definition: expected a dictionary' on the web page.
Is this because of a syntax error in my definition?
When I submit via the web UI or lavacli, there is no warning or error that indicates a syntax error in the definition.
The job definition YAML looks like this:
```
job_name: multinode test job
timeouts:
  job:
    minutes: 60
  action:
    minutes: 60
  connection:
    minutes: 60
priority: medium
visibility: public
protocols:
  lava-multinode:
    roles:
      target:
        count: 1
        device_type: customdevice
      host:
        count: 1
        device_type: docker
actions:
- deploy:
    role:
    - target
    to: flasher
    images:
      fw:
        url: http://example.com/repository/customdevice/test/test.bin
- boot:
    role:
    - target
    method: minimal
    prompts:
    - 'root:'
- test:
    interactive:
    - name: send_target_ready
      prompts:
      - 'root:'
      script:
      - command: ls
      - lava-send: booted
      - lava-wait: done
    role:
    - target
- deploy:
    role:
    - host
    to: docker
    os: debian
    image: testimage:2023.08
- boot:
    role:
    - host
    method: docker
    command: /bin/bash -c 'service ssh start; bash'
    prompts:
    - 'root@lava:'
- test:
    interactive:
    - name: wait_target_ready
      prompts:
      - 'root@lava:'
      script:
      - command: ls
      - lava-wait: booted
    role:
    - host
- test:
    role:
    - host
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: run-command
          description: "Run command"
          os:
          - ubuntu
          scope:
          - functional
        run:
          steps:
          - lava-test-case pwd-command --shell 'pwd'
          - lava-send done
      from: inline
      name: run-command
      path: inline/run-command.yaml
```
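Since the message complains that the top level is not a dictionary, one quick sanity check is to look at what the YAML parser actually sees; a sketch, assuming python3 with PyYAML on the submitting host (job.yaml stands in for the actual definition file):
```
# Sketch: verify the definition is a single YAML document whose top
# level is a mapping (anything else could match "expected a dictionary")
python3 - job.yaml <<'EOF'
import sys, yaml
with open(sys.argv[1]) as f:
    docs = list(yaml.safe_load_all(f))
print(len(docs), "document(s)")
for d in docs:
    print(type(d).__name__)  # expect exactly one "dict"
EOF
```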
Hello,
I tried upgrading from lava-server 2023.03 to 2023.06 on Debian 11 and got the following error:
Setting up lava-server (2023.06+11+bullseye) ...
/var/run/postgresql:5432 - accepting connections
Updating configuration:
* generate SECRET_KEY [SKIP]
* generate DATABASES [SKIP]
Run fixups:
* fix permissions:
- /var/lib/lava-server/home/
- /var/lib/lava-server/default/
- /var/lib/lava-server/default/media/
- /var/lib/lava-server/default/media/job-output/
- /etc/lava-server/dispatcher-config/
- /etc/lava-server/dispatcher.d/
- /var/lib/lava-server/default/media/job-output/2017/
- /etc/lava-server/dispatcher-config/devices/
- /etc/lava-server/dispatcher-config/devices/*
- /etc/lava-server/dispatcher-config/device-types/
- /etc/lava-server/dispatcher-config/device-types/*
- /etc/lava-server/dispatcher-config/health-checks/
- /etc/lava-server/dispatcher-config/health-checks/*
* drop duplicated templates:
* fix permissions:
- /etc/lava-server/settings.conf
- /etc/lava-server/instance.conf
- /var/log/lava-server/
- /var/log/lava-server/*
- /etc/lava-server/secret_key.conf
Create database:
psql -q
NOTICE: not creating role lavaserver -- it already exists
NOTICE: not creating role devel -- it already exists
lava-server manage migrate --noinput --fake-initial
Operations to perform:
  Apply all migrations: admin, auth, authtoken, contenttypes, lava_results_app, lava_scheduler_app, linaro_django_xmlrpc, sessions, sites
Running migrations:
  Applying lava_results_app.0019_auto_20230307_1545...
Traceback (most recent call last):
  File "/usr/bin/lava-server", line 55, in <module>
    main()
  File "/usr/bin/lava-server", line 51, in main
    execute_from_command_line([sys.argv[0]] + options.command)
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 83, in wrapped
    res = handle_func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/django/core/management/commands/migrate.py", line 232, in handle
    post_migrate_state = executor.migrate(
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 117, in migrate
    state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
    state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
  File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 245, in apply_migration
    state = migration.apply(state, schema_editor)
  File "/usr/lib/python3/dist-packages/django/db/migrations/migration.py", line 114, in apply
    operation.state_forwards(self.app_label, project_state)
  File "/usr/lib/python3/dist-packages/django/db/migrations/operations/models.py", line 256, in state_forwards
    state.remove_model(app_label, self.name_lower)
  File "/usr/lib/python3/dist-packages/django/db/migrations/state.py", line 100, in remove_model
    del self.models[app_label, model_name]
KeyError: ('lava_results_app', 'actiondata')
dpkg: error processing package lava-server (--configure):
installed lava-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
lava-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Does anybody know what I can do in this case?
Uninstalling lava-server and installing it again does not resolve the issue.
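In case it helps with diagnosis, the recorded migration state can be inspected with the standard Django showmigrations command (a sketch; the psql line assumes the default database name from instance.conf):
```
# showmigrations is a stock Django management command;
# lava-server manage forwards to Django's manage.py
lava-server manage showmigrations lava_results_app

# Compare the recorded history with the actual tables
# (assumes the default database name "lavaserver" from instance.conf)
sudo -u postgres psql lavaserver -c '\dt lava_results_app*'
```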
Kind regards,
Tim
--
Tim Jaacks
SOFTWARE DEVELOPER
SECO Northern Europe GmbH
Schlachthofstrasse 20
21079 Hamburg
Germany
T: +49 40 791899-183
E: tim.jaacks(a)seco.com
Register: Amtsgericht Hamburg, HRB 148893 Represented by: Dirk Finstel, Marc-Michael Braun, Massimo Mauri
Hi,
Is it possible to show the total count of test cases listed in a test
definition on the Results GUI page?
Right now it shows only the executed test cases (either PASS or FAIL) but
does not show the count of cases yet to be executed.
Regards,
Koti