Hello LAVA team,
I have created some LAVA jobs that use our proprietary flasher, which is based on a DFU connection.
As our flasher is not a "standard" flasher, I have adapted the boot process to be able to use it; I use the "minimal" boot method to achieve this.
To call our flasher script, I use the script invoked by the "power_on" method, which is defined in the device configuration.
Below is an extract of the device dictionary:
.......................................................................................
..
..
{% set hard_reset_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command reboot --port 1' %}
{% set power_off_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command off --port 1' %}
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f /tmp/test' %}
{% set connection_command = 'telnet localhost 2001' %}
..
..
.......................................................................................
This works correctly for a "static" configuration: the flasher settings are defined outside LAVA by a script that configures the flashing parameters.
The "power_on" script reads these parameters and launches the flashing on the board.
My problem arises when I simultaneously launch jobs on several boards that require different versions of the flashing binaries:
I am unable to tell each board which binary version our flasher should use.
The best way would be to pass parameters in the job to indicate which binary version the flasher has to use.
This could be done in the "deploy" action and passed to the "power_on" command, but I don't know how to implement it, nor whether it is easily possible at all.
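One idea I am considering, assuming job "context" variables are available when the device dictionary is rendered (flash_version is a name I made up, not an existing LAVA variable), would be to make the binary path configurable in the device dictionary:
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f /tmp/' ~ (flash_version | default('test')) %}
and then select the version per job in the job definition:
context:
  flash_version: v2.3
I have not verified that the job context reaches the device dictionary this way, so please correct me if this is the wrong mechanism.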
Below is my job definition.
###### Job definition ##############
actions:
- deploy:
    timeout:
      minutes: 5
    to: ssh
    os: oe
    device:
- boot:
    method: minimal
    failure_retry: 2
    auto_login:
      login_prompt: 'login:'
      username: root
    prompts:
    - 'root@stm32mp1'
    timeout:
      minutes: 10
    transfer_overlay:
      download_command: sync && sleep 15 && wget
      unpack_command: tar -C / -xzf
- test: ...
#############################
Thank you for your support.
BR
Philippe
When using LAVA inside a Docker container, the LXC support adds a lot
of unnecessary overhead, since the Docker images are already built to
include the necessary tools, so having another container is pointless.
Even worse, LXC doesn't work inside Docker anyway.
The LXC support should be made optional for a given LAVA install.
Until LXC can be disabled, projects like lava-docker [1] simply cannot
support fastboot devices, which is a major problem.
Kevin
[1] https://github.com/kernelci/lava-docker/
Hi,
I have lava-master and lava-slave v2018.1 installed, and a QEMU device
added. Test jobs can be scheduled. I then followed
https://validation.linaro.org/static/docs/v2/pipeline-server.html#using-zmq…
to enable ZMQ authentication.
The certificates were generated correctly, and the public certificates
were copied to the master and slave respectively, with the following configs:
lava-master
```
MASTER_SOCKET="--master-socket tcp://*:5556"
LOGLEVEL="DEBUG"
ENCRYPT="--encrypt"
MASTER_CERT="--master-cert
/etc/lava-dispatcher/certificates.d/master.key_secret"
SLAVES_CERTS="--slaves-certs /etc/lava-dispatcher/certificates.d/"
```
lava-slave
```
MASTER_URL="tcp://192.168.11.214:5556"
LOGGER_URL="tcp://192.168.11.214:5555"
HOSTNAME="--hostname lava-slave1"
LOGLEVEL="DEBUG"
ENCRYPT="--encrypt"
MASTER_CERT="--master-cert /etc/lava-dispatcher/certificates.d/master.key"
SLAVE_CERT="--slave-cert /etc/lava-dispatcher/certificates.d/slave1.key_secret"
```
After restarting lava-master and lava-slave, I see the following logs.
It seems the connection was established, but lava-logs went offline.
lava-master
```
2018-01-30 11:05:50,260 DEBUG lava-slave1 => PING(20)
2018-01-30 11:05:52,086 DEBUG lava-master => PING(20)
2018-01-30 11:06:08,728 DEBUG lava-logs => PING(20)
2018-01-30 11:06:10,261 INFO scheduling health checks:
2018-01-30 11:06:10,270 DEBUG -> disabled on: lxc, qemu
2018-01-30 11:06:10,271 INFO scheduling jobs:
2018-01-30 11:06:10,272 DEBUG - lxc
2018-01-30 11:06:10,292 DEBUG - qemu
2018-01-30 11:06:10,332 DEBUG lava-slave1 => PING(20)
2018-01-30 11:06:12,115 DEBUG lava-master => PING(20)
2018-01-30 11:06:20,252 INFO [POLL] Received a signal, leaving
2018-01-30 11:06:20,254 INFO [CLOSE] Closing the controler socket
and dropping messages
2018-01-30 11:06:21,203 INFO [INIT] Dropping privileges
2018-01-30 11:06:21,204 DEBUG Switching to (lavaserver(114), lavaserver(119))
2018-01-30 11:06:21,204 INFO [INIT] Marking all workers as offline
2018-01-30 11:06:21,209 INFO [INIT] Starting encryption
2018-01-30 11:06:21,211 DEBUG [INIT] Opening master certificate:
/etc/lava-dispatcher/certificates.d/master.key_secret
2018-01-30 11:06:21,238 DEBUG [INIT] Using slaves certificates from:
/etc/lava-dispatcher/certificates.d/
2018-01-30 11:06:21,245 INFO [INIT] LAVA master has started.
2018-01-30 11:06:21,246 INFO [INIT] Using protocol version 2
2018-01-30 11:06:41,247 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:01,255 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:04,433 INFO lava-slave1 => HELLO
2018-01-30 11:07:04,433 WARNING New dispatcher <lava-slave1>
2018-01-30 11:07:09,450 DEBUG lava-slave1 => PING(20)
2018-01-30 11:07:21,260 WARNING lava-logs is offline: can't schedule jobs
2018-01-30 11:07:29,477 DEBUG lava-slave1 => PING(20)
2018-01-30 11:07:41,265 WARNING lava-logs is offline: can't schedule jobs
```
lava-slave
```
2018-01-30 11:06:10,283 DEBUG PING => master (last message 20s ago)
2018-01-30 11:06:10,335 DEBUG master => PONG(20)
2018-01-30 11:06:30,356 DEBUG PING => master (last message 20s ago)
2018-01-30 11:07:04,379 INFO [INIT] LAVA slave has started.
2018-01-30 11:07:04,380 INFO [INIT] Using protocol version 2
2018-01-30 11:07:04,390 INFO [INIT] Starting encryption
2018-01-30 11:07:04,390 DEBUG Opening slave certificate:
/etc/lava-dispatcher/certificates.d/slave1.key_secret
2018-01-30 11:07:04,413 DEBUG Opening master certificate:
/etc/lava-dispatcher/certificates.d/master.key
2018-01-30 11:07:04,414 INFO [INIT] Connecting to master as <lava-slave1>
2018-01-30 11:07:04,415 INFO [INIT] Greeting the master => 'HELLO'
2018-01-30 11:07:04,440 INFO [INIT] Connection with master established
2018-01-30 11:07:04,442 INFO Master is ONLINE
2018-01-30 11:07:04,443 INFO Waiting for instructions
2018-01-30 11:07:09,450 DEBUG PING => master (last message 5s ago)
2018-01-30 11:07:09,455 DEBUG master => PONG(20)
```
From the Django admin console, I see that lava-slave1 is still online, but
both the lava-master and lava-logs workers went offline, and scheduling
of test jobs stopped. Have you ever seen/hit this issue? Any advice
and suggestions would be appreciated.
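One thing I am wondering about: since lava-logs listens on its own ZMQ socket, does it also need encryption enabled? I would guess something like the following in its config file (I am assuming /etc/default/lava-logs and the same certificate paths as lava-master; please correct me if the mechanism is different):
```
ENCRYPT="--encrypt"
MASTER_CERT="--master-cert /etc/lava-dispatcher/certificates.d/master.key_secret"
SLAVES_CERTS="--slaves-certs /etc/lava-dispatcher/certificates.d/"
```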
Thanks,
Chase
Hi all,
We use the IOZone test to measure the performance of our DUT.
The same command, executed several times in a LAVA session, produces widely varying results:
iozone -az -i0 -i1 -I -e -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -f /mnt_emmc//tmp/iozone.tmp
Example 1:
                                                      random    random      bkwd    record    stride
    kB  reclen    write  rewrite     read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
102400       4     5053     4420    12471     12137
102400      16     5070     5218    22101     22207
102400     512    11733    12630    40799     40862
102400    1024    11446    11494    39982     39976
102400   16384    13839    14235    42093     42094
Example 2:
                                                      random    random      bkwd    record    stride
    kB  reclen    write  rewrite     read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
102400       4     5088     5006    13496    13095
102400      16     5395     5549    17199    17220
102400     512     9203    10038    27819    27586
102400    1024     9382     8482    32514    32430
102400   16384    13569    13986    41992    42081
When the same command is executed on the same DUT without LAVA, the results are much more homogeneous (similar to example 1) and allow us to define clear targets.
How can we reduce LAVA's interaction with the DUT while measuring its performance?
Is it possible to disable some checks/interactions with the DUT during the execution of each test (around 3-4 minutes)?
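As a workaround we may repeat each measurement and aggregate the results, e.g. (a rough sketch, not what we currently run):
for i in 1 2 3; do
    iozone -az -i0 -i1 -I -e -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -f /mnt_emmc//tmp/iozone.$i.tmp
done
but we would prefer to remove the source of the variation rather than average it away.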
Thanks in advance for your answer
Florence Rouger-Jung
Hi, our boards are powered via a PDU, and before flashing with pyocd they need to be powered up first. I defined power_on_command in the device dictionary, but it is ignored.
It looks like only a limited set of boot methods add the ResetDevice action to the pipeline, and pyocd is not among them. Conversely, the power_off command, if defined, always gets invoked, because it is added by FinalizeAction to every test.
I'm wondering if there is any way to issue the power-on command without modifying the dispatcher pipeline source code.
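The only workaround I can think of so far is to hide the power-on inside the flashing step itself, e.g. a wrapper script placed ahead of pyocd in the dispatcher's PATH (the pduclient arguments and the sleep are placeholders for our setup):
#!/bin/sh
# power the board on via the PDU, wait for it to enumerate, then flash
/usr/bin/pduclient --daemon localhost --hostname pdu01 --command on --port 3
sleep 5
exec pyocd "$@"
but that feels like it is working around the pipeline rather than with it.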
Thanks,
Andrei Narkevitch, Cypress Semiconductor
Hi everyone,
Can anyone share a sample YAML file and use case for
"prepare-scp-overlay" with MultiNode?
Some calls can only be made against specific actions. Specifically, the
prepare-scp-overlay action needs the IP address of the host device to be
able to copy the LAVA overlay (containing the test definitions) onto the
device before connecting over ssh to start the test. This is a complex
configuration to write.
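For reference, my current understanding from the secondary-connections documentation is a structure roughly like the following (the device type, counts and role names are placeholders, so please treat this as a sketch rather than a working job):
protocols:
  lava-multinode:
    roles:
      host:
        device_type: beaglebone-black
        count: 1
      guest:
        connection: ssh
        count: 2
        expect_role: host
        host_role: host
        request: lava-start
        timeout:
          minutes: 15
actions:
- deploy:
    role:
    - guest
    to: ssh
    os: debian
    protocols:
      lava-multinode:
      - action: prepare-scp-overlay
        request: lava-wait
        messageID: ipv4
        message:
          ipaddr: $ipaddr
        timeout:
          minutes: 5
It would be great if someone could confirm this or share a complete, known-good example.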
--
Thanks & Regards
Chetan Sharma
Hello everyone,
I am using lava-tool to monitor my jobs. Previously I used:
$ lava-tool submit-job --block
With lava-tool version 0.23, I now get this message:
--> This kind of polling is deprecated and will be removed in the next
release. Please use "wait-for-job" command.
But "wait-for-job" doesn't exist.
There is a "wait-job-events" option, though. I tried it, but it doesn't
return even once the job has finished. If I manually stop it and restart it
with the same job number, I get this output:
--> Job already finished with status Complete.
Command I'm using:
$ lava-tool wait-job-events --job-id 20 http://user@lava-server
Is there anything I'm doing incorrectly? Or are you aware of this bug?
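In the meantime I am polling in a shell loop, assuming lava-tool still has a job-status command that prints a "Job Status:" line (adjust the grep to the actual output):
#!/bin/sh
# poll until the job reaches a terminal state
while true; do
    status=$(lava-tool job-status http://user@lava-server 20 | grep 'Job Status')
    case "$status" in
        *Complete*|*Incomplete*|*Canceled*) break ;;
    esac
    sleep 30
done
echo "$status"
But a working wait-for-job / wait-job-events would obviously be nicer.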
Thanks!
--
Loys OLLIVIER
A change was sent a while ago to add support for the Coreboot /
Depthcharge bootloader which is used on Chromebook devices. This
is useful in particular to avoid having to install U-Boot on
Chromebook devices. See this Gerrit review here for previous
history:
https://review.linaro.org/#/c/15203/
I'm now reopening this case to try to get it resolved; there
seem to be several issues with the original patch that would
need to be clarified. Also, some things might have changed
since then in LAVA or Coreboot which could potentially lead to a
different approach - any feedback on this would be welcome.
To start with, I understand that running mkimage on the
dispatcher is not a valid thing to do, it should receive a
FIT (flattened image tree) kernel image ready to be booted. This
complicates things a bit for projects like kernelci.org where
only a plain kernel image is built and ramdisks are served
separately, but it's fair enough to say that LAVA is not meant to
be packaging kernel images on the fly.
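For reference, the build system could produce such a FIT image with U-Boot's mkimage and an image tree source; a minimal, purely illustrative sketch (file names, architecture and load/entry addresses are placeholders):
/dts-v1/;
/ {
    description = "Kernel-only FIT image";
    images {
        kernel-1 {
            data = /incbin/("Image");
            type = "kernel";
            arch = "arm64";
            os = "linux";
            compression = "none";
            load = <0x80080000>;
            entry = <0x80080000>;
        };
    };
    configurations {
        default = "conf-1";
        conf-1 {
            kernel = "kernel-1";
        };
    };
};
built with:
mkimage -f kernel.its vmlinuz.itb
This would happen outside LAVA, e.g. in the kernelci.org build step.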
Then I believe creating the command line file in LAVA should be
fine, although it probably makes more sense to have both the FIT
image and cmdline file generated by the same build system. In
any case, both files would need to be served from the dispatcher
TFTP server to the target device running Coreboot / Depthcharge.
So the idea was basically to have an option in Coreboot /
Depthcharge to interactively tell it where to find these files
for the current job to run, say:
<JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/vmlinuz
<JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/cmdline
It looks like the current patch in Gerrit relies on this location
to be hard-coded in the bootloader, which works fine for a
private development set-up but not for LAVA.
To recap, my understanding is that the "depthcharge" boot support
code in LAVA would need to:
* maybe create the cmdline file with basically the kernel
command line split up with one argument per line (see the
example after this list)
* or just download the cmdline file along with the vmlinuz FIT
* place both the cmdline and vmlinuz FIT files in the job's
TFTP directory on the dispatcher
* turn on the device and open the serial console...
* interactively pass at least the path to the job TFTP
directory on the serial console (and if possible the server
IP address as well, and maybe even the individual file names
rather than hard-coded vmlinuz and cmdline)
* look for a bootloader message to know when the kernel starts
to load and hand over to the next action (login...)
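For illustration, a cmdline file as described above might contain
(values made up):
console=ttyS2,115200n8
root=/dev/ram0
debug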
Please let me know if this sounds reasonable or if we should be
doing anything differently. I think it would be good to have
some agreement and a clear understanding of how this is going to
be implemented before starting to work on the code again.
Best wishes,
Guillaume
[bcc’d to everyone]
Hi all,
Next week, starting on Monday, we will be upgrading all the LAVA servers to Debian Stretch and then upgrading to LAVA 2018.01. This will involve minimal downtime for each instance. Devices will be off-lined ahead of each upgrade.
The order of upgrade will be:
Monday:
LNG: lng.validation.linaro.org
PMWG: pmwg.validation.linaro.org
Tuesday:
Production: validation.linaro.org
LKFT: lkft.validation.linaro.org
All being well, the downtime, i.e. the time you will not be able to submit jobs or see the web site, will be of the order of one hour per instance. If that is going to change I will send out a new notification.
Thanks for your patience,
Dave
----------------
Dave Pigott
LAVA Lab Lead
Linaro Ltd
t: (+44) (0) 1223 400063