Hi folks,
Due to an authorization issue in the REST API in the 2021.03 release, we're
putting out a 2021.03.post1 hotfix.
The 2021.03.post1 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==============
# REST API
The issue introduced in 2021.03 where non-admin users could not submit jobs
to unrestricted devices is now fixed.
# FVP
## Add namespace to the feedback channel.
Feedback logs will now have an additional field indicating the namespace.
## Allow specifying the docker network.
As with the docker device type, the docker network can now be specified.
The use case is network isolation on a host running concurrent FVP devices.
Cheers,
--
Stevan Radaković | LAVA Senior Engineer
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs
Hi folks,
The 2021.03 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==================
# Upgrading
## Security issues
A security issue has been discovered and fixed in LAVA:
* removing/updating jobs via API without authentication
Every version of LAVA (since 2018) is affected.
We strongly advise upgrading your instances to LAVA 2021.03.
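For Debian-based installations the upgrade is a normal package update; a
minimal sketch, assuming the apt.lavasoftware.org repository is already
configured:
```shell
sudo apt-get update
sudo apt-get install lava-server   # and lava-dispatcher on workers
```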
# Device-types
## New device-types
New supported devices:
* imx6sl
* imx6ull
* imx8dxl-phantom-mek
* stm32mp157c-lxa-mc1
* meson-axg-s400
* meson-gxl-s905x-p212
## docker
The Docker container can now run on a remote machine with the following option:
```yaml
methods:
  docker:
    options:
      remote: {{ docker_remote|default('') }}
```
## depthcharge
Depthcharge support for the x86 architecture has been added as part of the
base-depthcharge.jinja2 template.
## qemu
It is now possible to change the machine type for m68k qemu jobs.
## meson
meson templates now support using booti via the use_mainline_uboot flag.
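A minimal device dictionary sketch (setting the flag via a Jinja variable is
an assumption on our part; only the flag name comes from this release note):
```jinja
{% set use_mainline_uboot = true %}
```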
# Device management
## sync command
Additional options have been added to the ``sync`` management command:
* physical_owner
* physical_group
* group_device_permissions
Example device dictionary snippet:
```jinja
{% set sync_to_lava = {
"device_type": "qemu",
"worker": "worker-1",
"physical_owner": "user1",
"physical_group": "group1",
"group_device_permissions": [
["change_device", "group1"],
["view_device", "group1"],
["submit_to_device", "group1"],
]
}
%}
```
# Authentication
## GitLab integration
The GitLab auth provider from allauth has been added as a new authentication
backend. For this, the django-allauth package needs to be installed manually.
The following line needs to be added to the settings file:
```yaml
AUTH_GITLAB_URL: "https://gitlab.example.com"
```
For more information please refer to
https://docs.lavasoftware.org/lava/authentication.html
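A minimal installation sketch (the python3-django-allauth Debian package
name is an assumption; pip may be used instead):
```shell
sudo apt-get install python3-django-allauth
sudo service lava-server-gunicorn restart
```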
# LAVA dispatcher
## Ping interval
The time between two pings to the server is now configurable via the
``--ping-interval`` argument.
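A minimal sketch of starting the worker by hand with the new argument (the
server URL is a placeholder; in packaged installs the argument would be
added to the worker service configuration instead):
```shell
lava-worker --url https://lava.example.com/ --ping-interval 30
```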
# Docker
## options
The LOGLEVEL setting can now be overridden by setting an environment
variable on the container.
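A minimal sketch (the image name/tag is an example):
```shell
docker run -e LOGLEVEL=DEBUG lavasoftware/lava-dispatcher:2021.03
```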
# Bug fixes
* udev.py: Fix wait_device_board_id test
* rcar-gen3-common: fix kernel/dtb address
* dispatcher: fix JLink boot method to support LPC55S
* downloads: Add export of dynamic_data on postprocess
* job: do use sub_id with urls in templates
* apply_overlay: do not crash when the overlay is mentioned but missing
* schema: Fix docker image regexp
* map_scanned_results: try to keep most of the data when results is too
large
* docker: upgrade sentry_sdk to 1.0.0
* qemu: Do not search qemu binary when docker is in use
* rest api: Updating device dict should not check for global user
permission.
* lava-server manage wait: make the command more resilient
Thanks,
--
Stevan Radaković | LAVA Senior Engineer
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs
Hi folks,
The 2021.01 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==================
# Device-types
## New device-types
New supported devices:
* Musca S
* imx8dxl-ddr3l-evk
* hp-11A-G6-EE-grunt x86_64 Chromebook
## FVP
Multiple consoles.
LAVA can now read from multiple FVP consoles. All output from additional
consoles is directed to the LAVA feedback log.
The FVP boot method now allows defining additional regexes to match more
than one console.
This is done with the ``feedbacks`` keyword:
```yaml
console_string: 'terminal_0: Listening for serial connection on port (?P<PORT>\d+)'
feedbacks:
  - '(?P<NAME>terminal_1): Listening for serial connection on port (?P<PORT>\d+)'
  - '(?P<NAME>terminal_2): Listening for serial connection on port (?P<PORT>\d+)'
  - '(?P<NAME>terminal_3): Listening for serial connection on port (?P<PORT>\d+)'
```
Feedback consoles will be read twice during the boot process (before
matching the login prompt) and periodically during the test shell.
## docker
Mappings in static_info are now correctly added to the container as well.
## cmsis_dap
As part of the job log, LAVA now logs the contents of the DAPLink virtual
USB disk and of DETAILS.TXT (if any).
A new action parameter, "post_umount_delay", tells LAVA to wait the given
number of seconds after the umount command.
# Docker actions
## downloads namespace
The docker test shell action can now specify a custom downloads namespace
(see the sketch below):
* if the user specifies downloads-namespace, it is used
* otherwise, LAVA falls back to `namespace` to mount the downloads directory.
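A minimal sketch of such a test action (the key name comes from this release
note, but its exact placement in the test action is an assumption):
```yaml
- test:
    docker:
      image: my-test-image
    downloads-namespace: common   # placement assumed; adjust to the documented schema
```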
# LAVA dispatcher
## Support the uniqify parameter in `deploy to: downloads`
This causes the downloaded files to be made available under ${image_key}/,
instead of at the root of the downloads directory. It allows jobs to
download more than one image with the same filename.
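A minimal sketch, assuming the flag sits at the deploy level (the URLs are
placeholders):
```yaml
- deploy:
    to: downloads
    uniqify: true
    images:
      kernel:
        url: https://example.com/build-a/Image
      reference:
        url: https://example.com/build-b/Image   # same filename, kept under reference/
```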
# Bug fixes
* REST API: __in filtering now works correctly for `choice` fields.
* REST API: fix field names crash in RelatedFilters due to django filters
upgrade
* lxc-mocker: avoid `apt upgrade` on lxc-create
* lava_dispatcher: add wait_device_board_id in device_info for fastboot
Thanks,
--
Stevan Radaković | LAVA Engineer
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs
Hi folks,
The 2020.10 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==================
# Upgrading
No database migrations are included in this release.
# Device-types
## New device-types
New supported devices:
* imx8dx-mek
* imx8mq-zii-ultra-zest
# Secret management for deploy action URLs
We now support custom headers in the GET requests for deploy action URLs
(images, kernel, ramdisk etc.) in the job definition [schema](
https://docs.lavasoftware.org/lava/pipeline-schema.html#job-submission-sche…
).
This is mainly useful for passing authentication tokens to services which
require one (e.g. GitLab).
LAVA now also supports per-user token management (found in both the admin
app and the Profile page). Every entry has a `token name` and the token
itself.
If a `headers` entry in the job definition matches a token name in the
submitter's list of tokens, the system will replace it with the token value
in the header when downloading artifacts in the deploy section.
The token values are never visible in the job definition, and a user
submitting a job can only ever use their own tokens.
Example:
```yaml
images:
  kernel:
    image_arg: '-kernel {kernel} -append "console=ttyAMA0,115200 root=/dev/ram0 debug verbose"'
    url: https://git.lavasoftware.org/api/v4/projects/61/jobs/112514/artifacts
    headers:
      "PRIVATE_TOKEN": my_private_gitlab_token
    sha256sum: c45f07a9c0ebd0c06a9dcb9676d7aada63967e0733d072f9e529b697a8a7c6ca
```
# XMLRPC and REST APIs
## Job and worker token
Superusers can now get the Job and Worker tokens from the REST API
endpoints. The field is called `token`.
```json
{
"hostname": "staging01.lavalab",
"health": "Active",
"state": "Online",
...
"token": "<worker_token>"
}
```
For XMLRPC, only the worker token is available.
# Notifications and celery
Sending notifications for large jobs can take a large amount of CPU and
time. For instances that use a lot of notifications, admins can delegate
this work to a celery worker.
In order to use this new feature, admins should:
* pick and install a celery broker (rabbitmq, redis, ...)
```shell
sudo apt-get install redis-server python3-redis
```
* update the settings
```yaml
CELERY_TASK_ALWAYS_EAGER: false
CELERY_BROKER_URL: "redis://localhost:6379/0"
```
* restart `lava-server-gunicorn` and activate `lava-celery-worker`
```shell
sudo service lava-server-gunicorn restart
sudo systemctl enable lava-celery-worker
sudo service lava-celery-worker start
```
The use of `celery` and `lava-celery-worker` is optional. By default, LAVA
will continue to send notifications as in previous releases.
## DUT environment
Environment variables can be set on the DUT by updating the configuration
file. The configuration could be stored in:
* `/etc/lava-server/dispatcher.d/{name}/env-dut.yaml`
* `/etc/lava-server/env.dut.yaml`
The filenames were not consistent. Starting from this release, the
configuration files are:
* `/etc/lava-server/dispatcher.d/{name}/env-dut.yaml`
* `/etc/lava-server/env-dut.yaml`
* `/etc/lava-server/env.dut.yaml`
`/etc/lava-server/env.dut.yaml` is **deprecated** and will be dropped in a
future version.
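A minimal sketch of such a file, assuming the usual LAVA env file layout
with `overrides` and `purge` keys:
```yaml
# /etc/lava-server/dispatcher.d/{name}/env-dut.yaml
purge: false          # keep the existing DUT environment
overrides:
  LANG: C
  http_proxy: http://proxy.example.com:3128
```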
# Crashes
Some crashes were fixed in this release:
* `docker test action` crash when using test definition dependencies
* `lava-dispatcher-host` crash in race condition
* `lava_scheduler_app` crash with some really long log lines
* `lava-server manage linaro_ldapuserlist` crash after python3-ldap upgrade
* `lava-worker` crash with version mismatch
* `lava-worker` crash when `result.yaml` is empty
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
The 2020.09 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
# Upgrading
This release brings a big architectural change, replacing ZMQ with HTTP(S)
for server-worker communication.
Admins will have to update the configuration of each worker.
## From ZMQ to HTTP(S)
The protocol that LAVA uses to communicate between the server and the
workers has been changed from ZMQ to HTTP(S).
This will improve performance and reliability, but admins will have to
update their configuration after the upgrade.
## Database migrations
This release includes two migrations:
* lava_results_app.0017_testdata_onetoone_field: drop the bug link
* lava_scheduler_app.0053_testjob_and_worker_token
# Device-types
## New device-types
New supported devices:
* imx8mn-evk
## Juno-r2 and tee
Fix a bug in LAVA 2020.08 that was preventing the use of juno-r2 boards.
This is a regression from 2020.07 introduced by the support for `tee` in
u-boot jobs.
## SoCA9
Update the dtb address from `0x00000100` to `0x00001000` to prevent some
issues with u-boot 2020.07.
# From ZMQ to HTTP(S)
In prior versions, LAVA daemons were using ZMQ to communicate and send
logs. In this release, LAVA uses plain HTTP(S) to control the remote
worker and send job logs.
## Reasons
### Load-balancing and fault-tolerance
The previous architecture was not able to cope with a large number of jobs
running in parallel, mainly because it was impossible to load-balance the
traffic to multiple `lava-logs` and `lava-master` instances.
With HTTP(S), it is much easier to load-balance the traffic to multiple
instances of `lava-server-gunicorn`.
With load-balancing we can also increase fault-tolerance and move toward
zero-downtime upgrades.
### Master and scheduling
In the previous design, `lava-master` was both the master and the job
scheduler. This introduced latency when starting jobs with many jobs
running in parallel.
With the new design, `lava-scheduler` is running in the background,
scheduling jobs while `lava-server-gunicorn` is serving both clients and
workers.
### Proxies
Using HTTP(S) also eases the adoption of remote workers: connections to
non-standard ports are often impossible in corporate environments.
### Job termination
With the previous design, `lava-logs` and `lava-master` were both
responsible for terminating a job. This sometimes led to a deadlock where
the job would wait forever for its end.
This is no longer possible as `lava-server-gunicorn` is responsible for
both the logs and the job termination.
### Simplifying the architecture
By using HTTP(S) instead of ZMQ, we are able to decrease the number of
services running on the server. We are also planning to drop the need for
`lava-coordinator` in the future.
Using HTTP(S) also decreases the number of network ports that the server
has to listen on. This simplifies deployment and helps host many instances
on the same physical server.
## Services
The following services have been dropped:
* `lava-logs`: the logs are sent directly to `lava-server-gunicorn`
* `lava-master`: the workers are pulling jobs from `lava-server-gunicorn`
This release introduces a new service called `lava-scheduler` that is
solely responsible for scheduling jobs.
In this release, `lava-slave` has been rewritten from scratch and renamed
`lava-worker`.
## Version mismatch
In previous LAVA versions, `lava-master` was not checking the `lava-slave`
version. This sometimes led to strange behavior when the server was
upgraded but not the dispatcher.
`lava-server-gunicorn` is now able to check the `lava-worker` version every
time the service requests jobs to run.
In the event of a version mismatch, the server will put the worker offline,
refusing to start jobs on this worker.
When it's safe to stop the worker (the worker is done with the current set
of jobs), the server will return a specific error. If you use the new [LAVA
docker worker](#lava-docker-worker), `lava-worker` will be automatically
upgraded to the server version whenever needed.
## Upgrading
After the upgrade, every worker will be inactive as the `lava-worker`
services won't be able to connect to `lava-server-gunicorn`.
For each worker, admins will have to update the configuration (a sketch of
these steps follows the list).
* Update the `URL` variable in the worker configuration
(`/etc/lava-dispatcher/lava-worker`). This is the full URL to the server.
* Add the worker token in `/var/lib/lava/dispatcher/worker/token`. Admins
can find the token in the worker admin page at [
http://INSTANCE/admin/lava_scheduler_app/worker/WORKER_NAME/change/](http:/…
.
* restart `lava-worker`
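A minimal shell sketch of these steps (the server URL and token are
placeholders; the worker configuration file may contain other variables
that should be left untouched):
```shell
# point the worker at the server over HTTP(S)
sudo sed -i 's|^URL=.*|URL="https://lava.example.com/"|' /etc/lava-dispatcher/lava-worker
# install the worker token copied from the admin page
echo "<worker_token>" | sudo tee /var/lib/lava/dispatcher/worker/token
sudo service lava-worker restart
```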
# LAVA docker worker
This release introduces a program called `lava-docker-worker` that runs a
LAVA worker inside a Docker container. This script is provided by the
`lava-dispatcher-host` package, and has the following features:
* Takes the same parameters as the regular `lava-worker`.
* Detects the LAVA version of the server, and runs the worker from the same
LAVA version.
* Automatically upgrades the worker when the server upgrades.
* Docker containers started by it are its siblings and not its children,
i.e. they will run under the host system directly.
This worker in Docker should support most use cases that are supported by
the regular LAVA worker, except running LXC containers.
It's important to note that the container started by `lava-docker-worker`
runs in privileged mode and with host networking, which means that it is
less isolated from the host system than you would usually expect
application containers to be:
- it has access to **all** devices under `/dev`.
- it uses the same networking stack as the host system.
You should therefore consider `lava-docker-worker` a distribution
facilitator, not an isolation mechanism. You should not run
`lava-docker-worker` on a host where you wouldn't run the regular LAVA
worker.
# Bug link
The possibility to link a bug to a specific test job or result has been
dropped. This feature was generating a huge load on the database server
without a real benefit.
# Tests from tar
Starting from this release, LAVA can pull tests from a tar archive instead
of a git repository.
The job definition will look like:
```yaml
- test:
    name: basic-linux-smoke
    timeout:
      minutes: 10
    definitions:
      - repository: https://github.com/Linaro/test-definitions/archive/2019.03.tar.gz
        from: url
        path: automated/linux/smoke/smoke.yaml
        name: linux-smoke
        compression: gz
```
# LAVA job id
LAVA now exports the job id to the lava test shell environment. The
variable is called `LAVA_JOB_ID` and can be used with:
```shell
echo "$LAVA_JOB_ID"
```
We plan to export more LAVA data as environment variables in the
future.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
The 2020.08 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
Upgrading
=========
Database migrations
-------------------
No database migrations are included in this release.
Device-types
============
New device-types
----------------
New supported devices:
- ls1021a-tw
QEMU
----
Add support for RISC-V. The architecture value is `riscv64`.
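A minimal job context sketch (the `machine` value is an example and an
assumption; only the `riscv64` architecture value comes from this release
note):
```yaml
context:
  arch: riscv64
  machine: virt
```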
U-Boot
======
When booting with u-boot, LAVA can now load a TEE over TFTP. Users should
provide a resource called `tee` in the deploy action.
```yaml
- deploy:
    to: tftp
    tee:
      url: file:/local/lava-ref-binaries/fsl-imx6q-sabresd-linux/uTee-6qsdb
```
uuu
===
When booting with `uuu`, users can specify a docker container that will be
used to run the `uuu` binary. The job definition would look like:
```yaml
actions:
- boot:
    method: uuu
    commands:
      - uuu : -b sd {boot}
    docker:
      image: atline/uuu:1.3.191
```
Admins can specify a default docker image in the device dictionary:
```jinja
{% set uuu_docker_image = "atline/uuu:1.3.191" %}
```
If the DUT is not directly connected to the dispatcher, you can enable the
`remote uuu` support:
```jinja
{% set uuu_remote_options = "--tlsverify --tlscacert=/remote_ca.pem --tlscert=/remote_cert.pem --tlskey=/remote_key.pem -H 10.192.244.5:2376" %}
```
Docker test action
==================
Power commands
--------------
When using a docker test action, the power commands are now available as
environment variables:
* `LAVA_HARD_RESET_COMMAND`
* `LAVA_POWER_ON_COMMAND`
* `LAVA_POWER_OFF_COMMAND`
Note that each of these operations can actually require more than one
command, in which case the corresponding environment variable will have the
multiple commands with `&&` between them.
Because of this, the safest way to run the commands is passing the entire
contents of the variable as a single argument to `sh -c`, like this:
```bash
sh -c "${LAVA_HARD_RESET_COMMAND}"
```
Android serial
--------------
Add `LAVA_BOARD_ID` as an alias for `ANDROID_SERIAL`.
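A minimal usage sketch from within a docker test shell (adb already honours
`ANDROID_SERIAL`; the alias can also be used explicitly):
```shell
adb -s "$LAVA_BOARD_ID" shell uname -a
```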
Extra bind mounts
-----------------
Admins can specify extra files and directories that LAVA will bind mount
when executing the docker test action.
This is set per dispatcher in the dispatcher configuration file:
```yaml
# Directories to be bind mounted in test actions that run with docker.
# Each entry must be an array with exactly two or three items:
#   1st item: the source directory on the host (mandatory)
#   2nd item: the destination directory in the container (mandatory)
#   3rd item: bind mount mode (optional); read-only if omitted,
#             "rw" makes the directory in the container writable
test_docker_bind_mounts:
  - [<bind-mount1-host-path>, <bind-mount1-container-path>]
  - [<bind-mount2-host-path>, <bind-mount2-container-path>, "rw"]
```
Docker in actions
=================
Local images
------------
When using a docker image for actions that support it (`docker`,
`fastboot`, `fvp`, `qemu` and `uuu` for the moment), LAVA will run `docker
pull` then `docker run`.
When using a local image, the call to `docker pull` would fail. This
version adds the possibility to use local images:
```yaml
- boot:
    docker:
      name: "my-docker-image"
      local: true
```
Init
----
Always run with an init system. The docker command is now `docker run
--init ...`.
Gunicorn worker class
=====================
By default, gunicorn uses the "sync" worker, which is not suitable for long
requests like downloading large log files.
When using an async worker (like eventlet), the worker can process multiple
long-running requests at the same time while answering master pings. In
this case, requests are not aborted after the timeout, allowing large log
files to be downloaded.
Release 2020.07 introduced an option to change the gunicorn worker class
while keeping `sync` as the default.
After many tests on Linaro LAVA instances, this release changes the default
worker class from `sync` to `eventlet`. As a result, `lava-server` now
depends on `python3-eventlet`.
Test job log
============
When migrating to another log storage handler, like `mongodb` or
`elasticsearch`, admins can now migrate all job logs using the `copy-logs`
command:
```bash
sudo lava-server manage copy-logs LogsMongo
```
This command will go through every test job and migrate the logs from the
file system to the right log handler.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
Sorry for forgetting to send the release mail when LAVA 2020.07 was
released, on June the 9th.
The 2020.07 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
Upgrading
=========
The minimal Python version for the Debian package is now 3.6.
Database migrations
-------------------
No database migrations are included in this release.
Device-types
============
New device-types
----------------
New supported devices:
- de0-nano-soc
- imx6dl-sabreauto
- imx6dl-sabresd
- imx6qp-sabreauto
- imx6qp-sabresd
- imx6sll-evk
- imx6sx-sdb
- imx6ul-14x14-evk
- imx6ull-14x14-evk
- imx7ulp-evk
- r8a7742-iwg21d-q7
- r8a7743-iwg20d-q7
- r8a7744-iwg20d-q7
- r8a7745-iwg22d-sodimm
- r8a77470-iwg23s-sbc
- r8a774a1-hihope-rzg2m-ex
- r8a774b1-hihope-rzg2n-ex
- r8a774c0-ek874
- r8a774e1-hihope-rzg2h-ex
Updated device types:
- imx7d-sdb now takes its settings from imx6us7d-common
- The following device types have gained device_info entries, which allow
them to be tested from containers (e.g. docker):
- cc13x2-launchpad
- cc3220SF
- disco-l475-iot1
- frdm-k64f
- frdm-kw41z
- mimxrt1050_evk
- nucleo-l476rg
- stm32-carbon
Changes in docker test shell
============================
The docker test shell has received updates to make it more robust and
support more use cases. In particular:
- It's now possible to e.g. reset devices via adb/fastboot, and have them
be shared with the container again when they come up after the reboot.
- Making the test action not wait for the device to appear on USB is no
longer necessary, and support for it has been removed. Devices will now
be shared with the container as soon as they appear, even if they are not
active when the test container starts.
Gunicorn configuration update
=============================
By default, gunicorn uses the "sync" worker, which is not suitable for long
requests like downloading large log files.
When using an async worker (like eventlet), the worker can process multiple
long-running requests at the same time while answering master pings. In
this case, requests are not aborted after the timeout, allowing large log
files to be downloaded.
This release introduces a way to change the worker class and the timeout
while keeping the current default (sync worker and a timeout of 30 seconds).
In order to use eventlet, admins should install python3-eventlet, update the
[configuration](
https://docs.gunicorn.org/en/stable/settings.html#worker-class)
and restart lava-server-gunicorn.
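A minimal sketch of these steps (the `WORKER_CLASS` variable name in
`/etc/default/lava-server-gunicorn` is an assumption; check the
configuration shipped on your instance):
```shell
sudo apt-get install python3-eventlet
# variable name assumed; see the gunicorn settings documentation linked above
echo 'WORKER_CLASS="eventlet"' | sudo tee -a /etc/default/lava-server-gunicorn
sudo service lava-server-gunicorn restart
```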
Bug fixes
=========
Fixes a bug where filtering test jobs by the id field in the REST API
didn't work.
Fixes a crash when the test job state is set but the device state is not
set yet.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
The 2020.06 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
Upgrading
=========
Database migrations
-------------------
A new `is_synced` field has been added to the `Device` model. The field
will be used to know whether a given device is managed by the `lava-server
manage sync` command or managed manually.
Device-types
============
New device-types
----------------
New supported devices:
* d2500cc
* imx6qp-wandboard-revd1
Device management
=================
```shell
lava-server manage sync
```
LAVA can now synchronize database records with the device dictionary via
this management command. It currently supports these models:
* Device
* Device type
* Tag
* Alias
This can make administration less cumbersome and can help with Ansible and
similar setups. By using this feature, administrators are able to keep the
list of these records inside version control.
For device records, a flag named **is_synced** is used to recognize devices
which are synced to/from the device dictionary. This option can be updated
via the usual channels (web UI admin, APIs) and will be automatically set
to True for devices newly added via this method.
Example device dictionary snippet:
```jinja
{% set sync_to_lava = {
"device_type": "qemu",
"worker": "worker-01"
"tags": ["tag1", "tag2"],
"aliases": ["alias1", "alias2"],
}
%}
```
Scheduler
=========
Fix multinode scheduling. The scheduler was expecting the ids of jobs in a
multinode group to be consecutive.
This is not guaranteed and was leading to a deadlock when scheduling
interleaved multinode jobs.
`command` action
================
The [command](
https://lava.readthedocs.io/en/latest/technical-references/job-definition/a…)
action can now run recovery commands `recovery_mode` and `recovery_off`:
```yaml
actions:
- command:
    name: recovery_off
```
Auto login
==========
The auto-login action is now able to retry on a failed login. This can
happen when the kernel prints a stack trace while booting.
Connection closed
=================
LAVA is now able to detect when the connection is closed by the DUT. When
retrying to boot the DUT, LAVA will automatically reconnect to it.
The `minimal` boot action has been updated to detect such an event and
mark the job as `incomplete`.
Documentation
=============
The documentation work is progressing with some new pages added. The
work-in-progress documentation is visible on [read the doc](
https://lava.readthedocs.io/).
Test job log
============
Prior to this release, LAVA would always store test job logs on the
filesystem in
`/var/lib/lava-server/default/media/job-output/<year>/<month>/<day>/<id>`.
In release [2020.05](2020.05), support for mongodb was added.
Elasticsearch
-------------
Support for [Elasticsearch](https://www.elastic.co) was added in this
release.
To use Elasticsearch to store the logs, admins should update the
configuration:
```yaml
LAVA_LOG_BACKEND: "lava_scheduler_app.logutils.LogsElasticsearch"
ELASTICSEARCH_URI: "<URI|http://localhost:9200/>"
ELASTICSEARCH_INDEX: "<INDEX_NAME|lava-logs>"
ELASTICSEARCH_APIKEY: "<API_KEY>"
```
Firestore
---------
Support for [Firestore](https://firebase.google.com/docs/firestore/) was
added in this release.
Admins should update the configuration:
```yaml
LAVA_LOG_BACKEND: "lava_scheduler_app.logutils.LogsFirestore"
```
In the environment, `GOOGLE_APPLICATION_CREDENTIALS` should point to the
google cloud credentials.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
The 2020.05 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
Upgrading
=========
Database migrations
-------------------
A new `master_version_notified` field has been added to the `Worker` model.
The field will be used to know if admins were notified of the version
mismatch between master and dispatchers.
Device-type templates
---------------------
The post-installation script will look at the device-type templates in
`/etc/lava-server/dispatcher-config/device-types/` and remove the templates
that are already present in `/usr/share/lava-server/device-types/` (and
identical).
Device-types
============
New device-types
----------------
New supported devices:
* imx6q-sabreauto
* imx8dxl-evk
* imx8mp-evk
* imx8qm-mek
* imx8qxp-mek
* ls1012ardb
* ls1028ardb
* ls1046ardb
* ls1088ardb
* fsl-s32v234sbc (and s32v234-evb)
* mt8173-elm-hana
U-Boot
------
The `uboot_error_messages` variable allows device-type templates to set
their own list of u-boot error messages as required if the default set is
not appropriate. For example it may be required that one of the default
error messages is ignored.
```jinja
{% set uboot_error_messages = [
'Resetting CPU',
'Must RESET board to recover',
'TIMEOUT']
%}
```
The `uboot_extra_error_message` variable can still be used in conjunction
with `uboot_error_messages`.
Minnowboard
-----------
Replace `boot_message` with `kernel_start_message`, as the former has been
deprecated for a long time.
nfs boot commands
-----------------
Drop "intr" mount option that has been deprecated in 2.6.25. This is the
default value since 2.6.25.
OpenOCD
-------
The serial connection is now opened prior to invoking OpenOCD in the
OpenOCD boot method. This fixes an issue where, on some devices, verbose
serial output is truncated when the data size exceeds the buffering in the
firmware.
Download
========
When downloading artifacts fails due to network issues, the LAVA
dispatcher will retry up to 15 times over 10 minutes.
These retries will fix some intermittent failures.
API
===
New system endpoints
--------------------
We've added support for getting the system version and the current user.
New endpoints available in the REST API:
* `/api/v0.2/system/version/`
* `/api/v0.2/system/whoami/`
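A minimal sketch (hostname and token are placeholders; the
`Authorization: Token` header is the usual DRF token scheme and an
assumption here):
```shell
curl -H "Authorization: Token <api_token>" \
     https://lava.example.com/api/v0.2/system/whoami/
```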
New test endpoints
------------------
The new endpoint allows accessing the tests for a given job at:
* `/api/v0.2/jobs/<job_id>/tests/`
The results are also available at:
* `/api/v0.2/jobs/<job_id>/suites/<suite_id>/tests/`
docker
======
Site
-----
When starting `lava-server`, you can set the `Site` by setting the
`LAVA_SITE` environment variable.
Superuser
---------
When starting `lava-server`, the entrypoint can create a superuser for
you. Just set the `LAVA_ADMIN_USERNAME` and `LAVA_ADMIN_PASSWORD`
environment variables.
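A minimal sketch (image tag and values are placeholders):
```shell
docker run -e LAVA_SITE=lava.example.com \
           -e LAVA_ADMIN_USERNAME=admin \
           -e LAVA_ADMIN_PASSWORD=change-me \
           lavasoftware/lava-server:2020.05
```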
Interactive tests and multinode
===============================
Implement multinode synchronization and `delay` primitives.
The `delay` primitive waits a given number of seconds (incl. fractional),
while `lava-send`, `lava-wait`, `lava-wait-all` and `lava-sync` provide
multinode synchronization.
The syntax for multinode primitives follows the one used in
test-definitions, i.e. single-line based.
A job definition would look like:
```yaml
- test:
    role: [server]
    interactive:
      - name: boot
        prompts: ["/ #"]
        echo: discard
        script:
          - command: ifconfig
            name: result
            successes:
              - message: "inet addr:(?P<ip>\\d+\\.\\d+\\.\\d+\\.\\d+)"
          - lava-send: booted ipaddr={ip}
- test:
    role: [client]
    interactive:
      - name: boot
        prompts: ["/ #"]
        echo: discard
        script:
          - delay: 5
          - lava-send: booted
          - lava-wait-all: booted
          - command: 'echo "Other side has IP: {ipaddr}"'
```
In this example, LAVA will capture the IP in the success message on the
server and use the value in the client command.
LAVA settings
=============
In previous LAVA versions, the settings were stored in:
* `/etc/lava-server/instance.conf`: database settings
* `/etc/lava-server/settings.conf`: global settings (json)
* `/etc/lava-server/secret_key.conf`: secret key created on the fly
In order to make admin tasks easier, the settings are now stored in YAML
files under `/etc/lava-server/settings.d/`.
The legacy configuration files will be loaded first and then the files in
`/etc/lava-server/settings.d/` in alphabetical order.
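A minimal sketch of such a file (the file name and the settings shown are
illustrative; any supported setting can be placed here):
```yaml
# /etc/lava-server/settings.d/01-local.yaml
ALLOWED_HOSTS: ["lava.example.com"]
EVENT_NOTIFICATION: true
```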
Upgrade notification
====================
When `lava-master` restarts, it will check that the remote dispatchers are
running the same version. If that's not the case, `lava-master` will send a
mail to each worker admin.
This feature can be activated in the settings:
```yaml
MASTER_UPGRADE_NOTIFY: true
```
Test job log
============
Prior to this release, LAVA would always store test job logs on the
filesystem in
`/var/lib/lava-server/default/media/job-output/<year>/<month>/<day>/<id>`.
Starting from this release, admins can configure the logger backend to
either the filesystem (default) or [mongodb](https://www.mongodb.com/).
In order to use MongoDB, admins should install `python3-pymongo` and update
the settings:
```yaml
LAVA_LOG_BACKEND: "lava_scheduler_app.logutils.LogsMongo"
MONGO_DB_URI: "mongodb://<username>:<password>@localhost:27017/"
```
The mongodb support is currently in beta, mainly because performance can
still be improved.
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro
Hi folks,
The 2020.04 tag has been pushed to master on git.lavasoftware.org.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://hub.lavasoftware.org/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
=================
Upgrading
=========
Device-type templates
---------------------
Device-type templates have been moved from
`/etc/lava-server/dispatcher-config/device-types/` to
`/usr/share/lava-server/device-types/`.
Database migrations
-------------------
A new `version` field has been added to the `Worker` model. The field will
be used to track the remote dispatcher version and check for
incompatibilities between master and dispatcher.
Device-types
============
New device-types
----------------
New supported devices:
* Fixed Virtual Platforms
* fsl-ls1043a-rdb
* fsl-lx2160a-rdb
* fsl-s32v234sbc
* imx6q-var-dt6customboard
* imx6q-sabresd
* imx7d-sdb
* imx8mm-ddr4-evk
* imx8mm-evk
* r8a7795-h3ulcb-kf
* sun8i-h3-bananapi-m2-plus
Fixed Virtual Platforms
-----------------------
FVP jobs can now be run in LAVA. LAVA will execute FVP devices inside
Docker containers that should be provided by the job submitter.
A job definition would be:
```yaml
- deploy:
    to: fvp
    images:
      bl1:
        url: https://example.com/fvp/bl1.bin
      [...]
      ramdisk:
        url: https://example.com/fvp/ramdisk.img
- boot:
    method: fvp
    docker:
      name: "foundation:11.9"
      local: true
    image: /opt/model/Foundation_Platformpkg/models/Linux64_GCC-6.4/Foundation_Platform
    version_string: 'ARM V8 Foundation Platformr0p0 [^\\n]+'
    timeout:
      minutes: 7
    console_string: 'terminal_0: Listening for serial connection on port (?P<PORT>\d+)'
    arguments:
      - "--cores=4"
      - "--no-secure-memory"
      - "--visualization"
      - "--gicv3"
      - "--data={BL1}@0x0"
      - "--data={FIP}@0x8000000"
      - "--data={IMAGE}@0x80080000"
      - "--data={DTB}@0x82000000"
      - "--data={RAMDISK}@0x84000000"
      - "--block-device={ROOTFS}"
    prompts:
      - 'root@genericarmv8:~#'
```
To use this new device-type, the job definition should define a
[deploy](https://docs.lavasoftware.org/lava/actions-deploy.html#to-fvp)
and a
[boot](https://docs.lavasoftware.org/lava/actions-boot.html#fvp)
action.
More information in the [FVP documentation](
https://docs.lavasoftware.org/lava/fvp.html).
UBoot USB Mass Storage
----------------------
LAVA dispatcher will now use `bmaptool` instead of `dd` to flash images. The
layout is computed right before the flash.
Docker test action
==================
The docker test action introduced in LAVA
[2020.02](https://git.lavasoftware.org/lava/lava/-/wikis/releases/2020.02/)
has
been improved.
Serial connection to the device
-------------------------------
The connection commands are now exposed in the test environment as
`LAVA_CONNECTION_COMMAND` and `LAVA_CONNECTION_COMMAND_*`.
For example, given the connection commands:
```jinja
{% set connection_list = ['uart0', 'uart1'] %}
{% set connection_commands = {'uart0': 'telnet localhost 4002', 'uart1': 'telnet 192.168.1.200 8001'} %}
```
Will be exported as:
```shell
LAVA_CONNECTION_COMMAND='telnet 192.168.1.200 8001'
LAVA_CONNECTION_COMMAND_UART0='telnet localhost 4002'
LAVA_CONNECTION_COMMAND_UART1='telnet 192.168.1.200 8001'
```
Waiting for device on USB made optional
---------------------------------------
By default, the docker test action will wait for the device to be connected
and exposed to the host via its USB OTG port.
In some use cases, however, you need to e.g. interact with u-boot before a
USB OTG port is enabled, so waiting on the device coming up on USB would
not work. To avoid waiting for the USB connection, you can specify the
`docker.wait.device` parameter set to false:
```yaml
actions:
  # ....
  - test:
      docker:
        image: my-image
        wait:
          device: false
  # ....
```
Device type templates
=====================
The default device-type templates have been moved from
`/etc/lava-server/dispatcher-config/device-types` to
`/usr/share/lava-server/device-types/`.
When rendering device dictionaries, LAVA will use templates in
`/etc/lava-server/dispatcher-config/device-types` and then fall back to
`/usr/share/lava-server/device-types/`. This mechanism allows admins to
override device-type templates.
When using the APIs to override device-type templates, LAVA will write the
template into `/etc/lava-server/dispatcher-config/device-types`.
Compression formats
===================
[zstd](https://facebook.github.io/zstd/) has been added to the list of
supported compression formats.
In order to use it in a job definition, you should use:
```yaml
actions:
- deploy:
    rootfs:
      url: https://example.com/rootfs.ext4.zst
      compression: zstd
```
The `zstd` package should be installed on the dispatcher. This is already
the case for the lava-dispatcher docker image.
Postprocessing images with docker
=================================
This release adds support for postprocessing downloaded images using
user-provided docker containers. To make use of this feature, you need to
use the new ***downloads*** deploy target. Example:
```yaml
actions:
- deploy:
    to: downloads
    images:
      # ...
    postprocess:
      docker:
        image: my-image
        steps:
          - /path/to/my/postprocessing/script
```
The provided docker image will be run with the download directory as
current working directory, and the commands listed in `steps` will be
executed. Any changes that the commands make to the downloaded images are
persisted for later actions, and this includes not only modifying the
existing images, but also creating new files in there.
To make use of any files left in the downloads directory, you need a second
deploy action, which can refer to files in the downloads directory using
the ***downloads://*** pseudo URL scheme. Example:
```yaml
- deploy:
    to: fastboot
    docker:
      image: my-adb-fastboot-image
    images:
      ptable:
        url: downloads://ptable-linux-8g.img
        reboot: hard-reset
      boot:
        url: downloads://boot.img
        reboot: hard-reset
      system:
        url: downloads://rpb-console-image-lkft.rootfs.img
        apply-overlay: true
```
Interactive non-exiting commands
================================
Interactive tests received a new field: ```wait_for_prompt```. It defaults
to True but can be set to False in the job definition. The idea behind this
feature is to allow non-exiting commands to work in non-POSIX shells.
An example is invoking fastboot in the u-boot shell and flashing the board
from the subsequent test action:
```yaml
- test:
    timeout:
      minutes: 10
    interactive:
      - name: erase-emmc
        prompts: ['=> ']
        script:
          - command: mmc dev 1 0
            name: mmc_dev
            successes:
              - message: mmc1(part 0) is current device
          - command: mmc rescan
            name: mmc_rescan
          - command: mmc erase 0 0x400000
            name: mmc_erase
            successes:
              - message: "4194304 blocks erased: OK"
      - name: fastboot
        prompts: ['=> ', '/ # ']
        script:
          - command: env default -f -a
            name: env-default
            successes:
              - message: Resetting to default environment
          - command: setenv partitions $partitions_android
            name: setenv-partitions
          - command: fastboot 1
            successes:
              - message: "\n"
            wait_for_prompt: false
```
Boot QEMU from docker image
===========================
We now allow users to provide their own docker image that LAVA will use to
start QEMU.
This will allow users to use a recent QEMU version and to test QEMU itself.
Job definition schema example:
```yaml
- boot:
    method: qemu
    timeout:
      minutes: 2
    media: tmpfs
    docker:
      image: my-qemu-image
      binary: /usr/bin/qemu-system-x86_64
    prompts:
      - "root@debian:"
    auto_login:
      login_prompt: "login:"
      username: root
```
API
===
New certificate endpoints
-------------------------
We've added support for downloading the master certificate and for
uploading a worker certificate to the master. This will make setting up a
remote worker a bit easier for administrators.
New endpoints available for REST API:
* ```/api/v0.2/workers/${hostname}/certificate/```
* ```/api/v0.2/system/certificate/```
The first one accepts both GET and POST requests, so the user can also
download the certificate for a worker for double checking, while the second
one can be used only via a GET request and will download the master public
key.
New methods available for XMLRPC API:
* `scheduler.workers.get_certificate($hostname)`
* `scheduler.workers.set_certificate($hostname, $key)`
* `system.get_master_certificate()`
They mirror the REST API calls described above.
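A minimal sketch of fetching the master public key over REST (hostname and
token are placeholders; the authentication header is an assumption):
```shell
curl -H "Authorization: Token <api_token>" \
     https://lava.example.com/api/v0.2/system/certificate/
```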
Authorization
=============
New permissions backend for workers
-----------------------------------
The worker model is now protected by the "new" authorization system. You can
find settings in the admin section for managing individual workers.
Only the `change` permission is used for workers, in order to allow
administrators to control who can upload worker certificates, environment
and configuration files.
Note that the worker authorization settings will **NOT** affect the device
or device type authorization in any way. They only affect worker objects
and nothing else.
Common device commands now available to test jobs
=================================================
LAVA already supported configuring per-device custom commands that can be
used from test jobs, like this:
```yaml
actions:
  # ...
  - command:
      name: my-custom-command
```
With this release, the following built-in commands can also be used:
`pre_power_command`, `pre_os_command`, `power_on`, `power_off`, and
`hard_reset`. Although lab admins can configure these commands to be
anything, the following table describes their usual semantics:
| Command | Usual meaning |
| :------------------ | :------------------------------------ |
| `pre_power_command` | Turns USB OTG port ON |
| `pre_os_command` | Turns USB OTG port OFF |
| `power_on` | Turns power to the board ON |
| `power_off` | Turns power to the board OFF |
| `hard_reset` | Turns power to the board OFF, then ON |
No extra configuration is required: these commands are already required to
be defined for each device, and are now made available to the "command"
action without the need to explicitly add them to the custom commands
configuration.
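A minimal example using one of these built-in commands, following the same
syntax as the custom command above:
```yaml
actions:
  # ...
  - command:
      name: hard_reset
```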
The changelog is also available in the wiki:
https://git.lavasoftware.org/lava/lava/-/wikis/releases/2020.04
Thanks
--
Rémi Duraffort
LAVA Architect
Linaro