Hi folks,
The 2026.04 tag has been pushed to master on gitlab.com/lava/lava.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://registry.gitlab.com/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==================
# Security fixes
## Private job information leaked in device table
The device table was displaying the submitter and description of the
currently running job even when the user did not have permission to view
that job.
Now users without view permission will only see that the device is
"Running" a job, without any job details.
# Breaking changes
## LXC support dropped
LXC support has been removed. It has been unmaintained and deprecated for
several years.
Users of LXC jobs must migrate to [Docker-based alternatives](
https://lava.readthedocs.io/en/latest/user/advanced-tutorials/running-arbit…
).
## Database migrations squashed (upgrade path limited)
`lava_results_app` migrations have been squashed to remove non-portable
PostgreSQL-specific SQL and Python. The squashed migration is now the
new initial migration.
**Direct upgrades from LAVA 2020.09 or earlier are no longer supported.**
The oldest supported upgrade path is from LAVA 2020.12 (the version
shipped with Debian 11 Bullseye). Administrators running older instances
must upgrade to at least 2020.12 before upgrading to this release.
## Worker auto-registration disabled by default
Worker auto-registration is now disabled by default due to its security
implications.
If `WORKER_AUTO_REGISTER_NETMASK` is set and an HTTP proxy is in use, LAVA
sees the proxy's IP address (possibly localhost), which could allow
unauthorized workers to register and obtain tokens.
To re-enable auto-registration, explicitly configure
`WORKER_AUTO_REGISTER_NETMASK` in your settings.
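For example (the netmask value is illustrative, and the list form of the
setting is an assumption), a file under `/etc/lava-server/settings.d/`
could contain:

```yaml
# Allow auto-registration only from the dispatcher subnet (example value).
WORKER_AUTO_REGISTER_NETMASK:
  - "10.100.0.0/24"
```

Pick a netmask that covers only the hosts you trust to register as workers.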
## Debian 11 support
As of the 2026.04 release, Debian 11 (Bullseye) is no longer supported.
Administrators are requested to migrate to at least Debian 12 (Bookworm).
# LAVA Server
## REST API
New endpoints for managing groups and users are available in the v0.2 REST
API:
* `GET/POST /api/v0.2/groups/`
* `GET/PUT/PATCH/DELETE /api/v0.2/groups/<id>/`
* `GET/POST /api/v0.2/users/`
* `GET/PUT/PATCH/DELETE /api/v0.2/users/<id>/`
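As a quick illustration (the instance URL and token are placeholders, and
the `Token` authorization header is an assumption based on DRF-style token
authentication), a client could call the new group endpoints like this:

```python
import json
import urllib.request

# Placeholder values: replace with your instance URL and API token.
BASE_URL = "https://lava.example.com/api/v0.2"
TOKEN = "my-secret-token"

def build_request(path, payload=None, method="GET"):
    """Prepare an authenticated request for the LAVA v0.2 REST API."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    # DRF-style token authentication header (assumed).
    req.add_header("Authorization", "Token " + TOKEN)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

# Create a group, then list all groups (uncomment urlopen to actually send):
create = build_request("/groups/", {"name": "qa-team"}, method="POST")
listing = build_request("/groups/")
# urllib.request.urlopen(create)
# urllib.request.urlopen(listing)
```

The same pattern works for the `/users/` endpoints.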
## XMLRPC API
Fix device owner not being updated when calling `lavacli devices update
--user`.
The `physical_owner` field was not being updated by the API call.
# LAVA Worker
## Test services
LAVA now supports running test services on the worker. Test services are
Docker Compose-based containers started on the LAVA worker, enabling custom
services for test definitions. For example, LAVA can now start an OTA server
that the DUT can connect to.
Job definition example:
```yaml
- test:
    services:
    - name: my-service
      compose:
        version: "3"
        services:
          ota-server:
            image: my-ota-server:latest
            ports:
            - "8080:8080"
    definitions:
    - repository: https://example.com/tests.git
      from: git
      path: testdefs/ota.yaml
      name: ota-test
```
## Expected test results
When an expected test list is provided, any test case not in that list now
results in a **fail** (previously it was only logged). This makes it easier
for regression-detection tools to spot unexpected test cases that should be
investigated.
## GRUB
### Overriding `net_commands`
Device dictionaries can now override `net_commands` to set custom GRUB
network commands. This is useful for debugging network boot issues
(e.g. adding `net_ls_addr`):
```jinja2
{% set net_commands = ['net_ls_addr', 'dhcp'] %}
```
### Error detection
LAVA will now automatically catch a large set of errors raised by GRUB
while trying to boot a kernel.
The following error patterns are now matched:
* `missing (.*) symbol.`
* `couldn't send network packet`
* `destination unreachable`
* `couldn't autoconfigure`
* `timeout reading`
* `file '(.*)' not found`
* `no such device`
* `disk '(.*)' not found`
* `unknown filesystem`
## udev: kernel netlink fallback for containers
The udev monitoring code now falls back to kernel netlink if no udev
events are received. This allows containerized dispatchers to detect
USB devices without requiring `--network=host` or `CAP_NET_ADMIN`, since
containers don't propagate udev events across namespaces.
sysfs attributes are used as a fallback for `ID_SERIAL_SHORT` when udev
properties are unavailable.
## LAVA_JOB_ID environment variable
`LAVA_JOB_ID` is now available in the `lava-run` environment, accessible
to `user_commands` the same way as secrets declared in the job definition.
This is useful for running commands that are job-specific.
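As a hypothetical sketch (the command name, `do` structure, and log path are
invented for illustration), a device dictionary could reference the variable
like this:

```jinja2
{# Hypothetical user command; $LAVA_JOB_ID is expanded by the shell at run time #}
{% set user_commands = {'tag_job': {'do': 'echo "relay on for job $LAVA_JOB_ID" >> /tmp/relay.log'}} %}
```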
# Documentation
The documentation has again been vastly improved with changes to:
* Admin docs: hardware recommendations, typical deployment topologies,
LDAP configuration, first steps, recommended security settings
* Developer docs: contribution guide, code of conduct, guide for adding
new actions, new device type guide, developer first steps
* User docs: job debugging guide, job results, LAVA auth, user permissions,
common LAVA use cases
# CI
More pylint checks have been enabled and the code brought into compliance:
`W1510`, `W0611`, `W1115`, `E0307`, `E0611`, `E1136`.
The minimum checked Python version has been increased to **3.11**.
Rgds
--
Rémi Duraffort
Principal Tech Lead
LAVA Tech Lead
Automation Software Team
Linaro
Dear LAVA Community,
I am reaching out to report a regression (or significant change in
behavior) regarding how the LAVA dispatcher handles high-frequency terminal
output and escape sequences during flashing operations.
Background:
After upgrading from LAVA 2022.06 to a recent version (LAVA 2026.02), our
flashing jobs, which use mfgtoolcli (NXP) to write images to i.MX6/8
devices, have begun failing consistently with a deploy-flasher timeout.
The Issue:
The flashing tool outputs a high-frequency progress bar using carriage
returns (\r) and ANSI escape codes (e.g., \e[1F, \e[2K).
1. Log Bloat: In the older version, these were handled gracefully. In
the current version, every escape sequence is captured as a new log entry,
resulting in logs exceeding 5MB for a single flash.
2. Dispatcher Lag: The dispatcher appears to bottleneck while processing
this flood of data. This "processing lag" causes the internal action timer
to hit the default 500s limit, even if the physical flashing process
succeeds locally on the worker.
3. Timeout Overrides: We’ve observed that deploy-flasher often ignores
the timeout values specified in the Job YAML, defaulting to 500 seconds
unless explicitly overridden in the Device Dictionary.
Error Examples:
finish programming rootfs-a>> [1F
[2K 1 100%
[=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
... continues >]
Donedeploy-flasher timed out after 500 seconds>> uuu (Universal
Update Utility) for nxp imx chips -- libuuu_1.4.243-0-ged48c51>> >>
Wait for Known USB Device Appear...
[?25l [1F [1F [1F [1F [1F [1F1:241312 1/ 0 [
]
>> [1F [1F [1F [1F [1F [1F
[1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [1F [
Steps we have attempted without success:
- Setting TERM=dumb and piping to cat (the tool continues to output
sequences).
- Using tr -d '\r' (reduces lines but the data volume still triggers the
timeout).
- Increasing timeouts in the Job YAML (often ignored by the sub-action).
Question:
Has there been a change in the dispatcher's log-collection priority or
buffering logic that would cause this bottleneck? Are there recommended
"best practices" for handling tools that force interactive progress bars in
the newer LAVA architecture?
Best Regards
Pavan Kumar
Hi folks,
The 2026.02 tag has been pushed to master on gitlab.com/lava/lava.
.deb packages have been built in GitLab CI and are published at
https://apt.lavasoftware.org/release
Docker images for amd64 and arm64 have been built in GitLab CI and
are available from
https://registry.gitlab.com/
and
https://hub.docker.com/u/lavasoftware
Changes in this release
==================
# Security fixes
Some security issues have been fixed in this release and we advise
upgrading as soon as possible.
The details will be made available later.
# Device-types
## New device-types
* asus-CX3402CVA-brya
* lenovo-chrome-2in1-14iru10-brox
* mt8196-rauru-navi-sku1
* r9a07g043u11-smarc
* r9a07g044c2-smarc
* sc7180-trogdor-wormdingler-rev1-boe
# Documentation migration
Documentation v3 continues to be updated. All `deploy`, `boot` and `test`
methods are now documented and published at
https://lava.readthedocs.io/en/latest under **Technical references**.
# Debian support
Debian 11 (Bullseye) is being phased out; 2026.02 is the last LAVA release
to support it.
Starting with the following release, the oldest supported Debian release
will be 12 (Bookworm).
# Docker
* All LAVA base images are now based on Debian 13 (Trixie) `slim`
* Debian 11 (Bullseye) was dropped from all CI jobs
* The `wget` tool was added to the lava-dispatcher-base image
# Job logs
## Secret filtering
LAVA now filters job logs to remove sensitive data. This is still
a work in progress that will be improved in the following releases.
With this new feature, values of the variables defined in the `secrets`
dictionary are replaced in the job output log with the string `[MASKED]`.
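For example (the key and value below are invented, and the non-public
visibility requirement for jobs with secrets is an assumption), a job
defining a secret would have every occurrence of its value masked:

```yaml
job_name: secrets-masking-example
visibility: personal  # jobs carrying secrets should not be public
secrets:
  DEPLOY_TOKEN: "s3cr3t-value"
```

Any occurrence of `s3cr3t-value` in the job output then appears as
`[MASKED]`.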
# Dispatcher changes
## Downloading artifacts
Set `max_retries` to 1 by default when downloading artifacts.
The `max_retries` value can be set in the job definition using the
`failure_retry` key. Example:
```yaml
- deploy:
    failure_retry: 2
    images:
      image:
        url: https://example.com/foo
```
## Deploy character delay
Add support for `deploy_character_delay`, which adds a delay between each
character when interacting with the serial console during deployment.
To use it, add to the device dictionary (in milliseconds):
```jinja2
{% set deploy_character_delay = 30 %}
```
This defaults to a 30ms delay for vexpress devices.
## fastboot
Improve fastboot device detection by making parsing more generic. This
should now work with multiple fastboot versions.
## FVP
Use `find` to locate armlm binary instead of hardcoded path to provide
compatibility with more FVP models.
## multinode
Fix `lava-role list` to properly match the documentation. Also fix the
multinode collate function used to replace placeholders like `$ipaddr` in
synchronization messages.
## pyocd
Fix flashing with pyocd with the latest pyocd versions.
## Transfer overlay
Allow transferring the overlay using ZMODEM when the network is not available.
In the job definition, add:
```yaml
- boot:
    transfer_overlay:
      transfer_method: zmodem
      unpack_command: tar -C / -xzf
```
The worker and target device **must** have the `sz` and `rz` utilities
installed, respectively. These are typically provided by the `lrzsz`
package on Debian-based or Fedora-based systems.
## uuu
Allow use of the `uniquify` parameter for the uuu deployment action. This is
useful when using `downloads://` URLs.
# Server changes
## Restricting callback URLs
By default, notification callbacks can target any URL. Administrators can
restrict which hosts are permitted by setting `CALLBACK_ALLOWED_HOSTS` in
`/etc/lava-server/settings.yaml` or a file in
`/etc/lava-server/settings.d/`:
```yaml
CALLBACK_ALLOWED_HOSTS:
- "example.com"
- "*.example.com"
- "*.ci.example.com"
```
Rgds
--
Rémi Duraffort
Principal Tech Lead
LAVA Tech Lead
Automation Software Team
Linaro
Hello all,
I am preparing a pull request in LAVA and would like to know which
filesystems people are using. The PR aims to drop libguestfs and
eventually move to e2fsprogs-based tooling.
LAVA officially supports ext4, but unofficially, are people using
anything else? Could you let me know if any users are using XFS, BTRFS
(or other variants)?
Any feedback would be appreciated!
Regards,
Ben
Hi,
When LAVA introduced the REST framework, it allowed cross-model
filtering using the djangorestframework-filters package. The v0.2 LAVA
API still allows that. Unfortunately the package has been unmaintained
for the last 5 years:
https://pypi.org/project/djangorestframework-filters/#history
Without updates it will soon become incompatible with DRF and Django
itself. In fact, Debian has to carry patches to keep it working.
On top of that, there are performance issues with the queries made by
DRF-filters.
For these reasons it was decided that DRF-filters will be removed as a
LAVA dependency starting with version 0.3 of the API. v0.2 will remain
supported for now; there is no end-of-life date agreed yet. In
practice this means cross-model queries won't be supported any more in
v0.3. As a tradeoff, the performance of single-model queries should
improve.
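For illustration, a cross-model query uses double-underscore traversal
across related models; the field names below are hypothetical, but a
request of this shape is what v0.3 will no longer support:

```
GET /api/v0.2/testjobs/?actual_device__worker_host__hostname=worker01
```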
If there are any concerns regarding this change, please reply to this thread.
Best Regards,
Milosz
Hi,
Starting from 2026.02, worker auto-registration will be disabled by
default. Details can be found in this merge request:
https://gitlab.com/lava/lava/-/merge_requests/2081
This may break some setups, so please be aware when upgrading. A local
setting can still override the default if needed.
Best Regards,
Milosz
Hi,
As of now the default retry for the download action is set to 3 (three).
This is an outlier, as all other actions have retry set to 1. Some time
ago I complained about it on the list. There is also an MR posted to
address the problem:
https://gitlab.com/lava/lava/-/merge_requests/3012
I just rebased it on top of 2026.01 and I'm planning to merge it in the
2026.02 release. It will still be possible to raise the retry count for
the download action with the addition of failure_retry in a test job:
- deploy:
    failure_retry: 2
    images:
      image:
        url: 'https://example.com/foo.tgz'
    timeout:
      minutes: 20
    to: downloads
Please reply here if this change is going to fundamentally break your
setup. IMHO the worst that can happen is the download timeouts
increase 3x which in my experience should not cause problems. I don't
know all potential setups, so please reply if this is a breaking
change for you.
Best Regards,
Milosz
Hi,
2026.01 release was tagged yesterday. Here are the release notes:
https://gitlab.com/lava/lava/-/wikis/releases/2026.01
Please report any issues you find during the deployment or when using.
Best Regards,
Milosz
Hello Everyone,
I am seeking assistance with a performance regression observed on an i.MX6SOLO
rev 1.4 (996 MHz) board in our LAVA farm.
The Issue:
A test suite containing 90 test cases, which previously took 10 minutes to
complete, is now consistently taking 20 minutes. This issue is unique to
this one specific board; other boards of the same type are still completing
the run in 10 minutes using the same build/firmware.
Observations:
- I'm not using any test_character_delay externally in my YAML.
- Identical Logs: The "Fast" and "Slow" jobs produce identical serial
output.
- Recent Hangs: The test run has recently started to get stuck/hang
mid-run, even though CPU usage is not at 100%.
Debugging Performed:
- Verified that CPU thermal throttling is not active.
- Network latency and DNS resolution times appear normal.
- No hardware errors are reported in dmesg.
- Swapped power supplies and network cables with no improvement.
Questions:
1. What is the best way to debug "dead time" between lava-test-case
signals?
2. Could the LAVA worker be experiencing serial buffer delays on this
specific node that don't appear in the logs?
Thank you for your time.
Best Regards
Pavan Kumar
Hello LAVA users!
Debian 11 bullseye is nearing EOL (August 2026). We're
planning to retire LAVA support for it in the 2026.03 release. If that
breaks your setup and there is a good reason to keep the support for
longer, please respond to this message.
Dropping Debian 11 will allow removing a few workarounds that are
currently included in the code.
Best Regards,
Milosz