Hello everyone,
I have got an error in a LAVA job during the overlay unpacking operation.
This happens with jobs that flash the DUT before executing the tests. Once the partitions have been flashed on the DUT, I reboot the board.
After the kernel has started, the boot prompt is detected and the commands to download the test overlay are launched (wget ...).
In my case, however, these commands are not executed immediately, because the DUT is still resizing the root filesystem to fit the available space
on the SD card.
This operation takes time (depending on the capacity and characteristics of the SD card) and causes overlay-unpack<https://citools.st.com/results/testcase/2039053> to fail with a timeout error,
because the wget command is not executed within the expected time (30 seconds is the default).
I have tried adding a timeout setting to the transfer_overlay part of my jobs, but it has no effect.
Is it possible to set a specific timeout for the "overlay-unpack" operation?
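What I was hoping for is something along the lines of the sketch below, assuming per-action timeouts can be keyed by the action name (I may have the keys wrong, which could be why my attempt had no effect):

timeouts:
  actions:
    overlay-unpack:
      minutes: 10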
My Lava version: 2019.01+stretch
Best regards
Philippe Begnic
STMicroelectronics
Hi,
I have a board with 4 uarts connected to a lava worker via ser2net.
LAVA is giving me some strange results when I try to use two uarts (uart0 and uart2) in the same testcase.
The results are not consistent. Sometimes it works, but most of the time it gives an error:
KeyError: 'uart2'
At the end of the job, LAVA prints:
LAVABug: This is probably a bug in LAVA, please report it.
error_type: Bug
error_msg: 'uart2'
case: job
result: fail
definition: lava
<http://lava-master.sw.nxp.com/results/testcase/16283386>
----
The connection parts of the device definition are:
{% set connection_list = ['uart0', 'uart1', 'uart2', 'uart3'] %}
{% set connection_tags = {'uart0': ['primary', 'telnet']} %}
{% set connection_commands = {'uart0': 'telnet localhost 7001', 'uart1': 'telnet localhost 7002', 'uart2': 'telnet localhost 7003', 'uart3': 'telnet localhost 7004'} %}
The job uses two boots: a primary non-POSIX boot to U-Boot, after which it switches to 'uart2' to access another processor.
The job seems to work fine if I define only two uarts:
{% set connection_list = ['uart0', 'uart1'] %}
{% set connection_tags = {'uart0': ['primary', 'telnet']} %}
{% set connection_commands = {'uart0': 'telnet localhost 7001', 'uart1': 'telnet localhost 7003'} %}
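For reference, the switch to the second UART in the failing job is done with a boot action roughly like this (a sketch from memory; the exact method and namespace names in my job may differ):

- boot:
    namespace: uart2
    method: new_connection
    connection: uart2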
Thank you,
Nick
Hi All,
I am installing an ODROID-N2 board into our LAVA setup. I see that there is support for ODROID-X2 and ODROID-XU3 in the LAVA codebase, but no mention of ODROID-N2.
Before I start, I wanted to know: has anyone successfully installed an ODROID-N2 in LAVA? Does it work with the existing support added for the other variants? Are there any specific bits of hardware I will need to use, and/or device dictionary settings I need to ensure are set, before I can get the board up and running?
Regards,
Malcolm
Hi,
We've recently had an issue with our LAVA instance (version 2019.05.post1),
where a long-running LAVA job with a large log file led to instabilities
when serving web content.
The large job seemingly caused the lava-server-gunicorn workers to use more
memory than was available, so workers crashed and then restarted. As a
result, the workers spent most of their time processing the large job,
while other requests were only served once a worker had restarted. This
meant that web pages were served extremely slowly and lavacli usage timed
out (unless a larger timeout was set).
We had "LOG_SIZE_LIMIT": 3 set in our /etc/lava-server/settings.conf, and
we did have the message on that job page for "*This log file is too large
to view"*, but it seems that some requests were still attempting to process
some aspect of the job causing these worker crashes. Is there any other
settings that might need to be set in order to cope with long running jobs
with large log files that might help with this situation?
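For reference, the relevant part of our /etc/lava-server/settings.conf is just the following (with the limit in megabytes, as far as I understand):

{
    "LOG_SIZE_LIMIT": 3
}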
Before we look into this any further, does anyone know if this is fixed
with a newer version of LAVA? Has anyone had any similar issues with their
instances?
Thanks,
Dean
Hi Lava users,
I have a problem with the lxc container that is created on the host machine: it does not get an IP address.
LAVA is installed in a VM running Debian stretch.
Has anyone experienced this problem before, or does anyone have an idea why the lxc container does not get an IP?
Here are the logs:
lava-dispatcher, installed at version: 2019.09+stretch
start: 0 validate
Start time: 2019-09-24 12:29:39.318789+00:00 (UTC)
lxc, installed at version: 1:2.0.7-2+deb9u2
Validating that file:///usr/mgu22/mgu22-19w32.5-1-2-bmw-image-mgu22-sa8155.rootfs.ext4 exists
validate duration: 0.07
definition: lava
result: pass
case: validate
start: 1 lxc-deploy (timeout 00:05:00) [tlxc]
start: 1.1 lxc-create-action (timeout 00:05:00) [tlxc]
nice lxc-create -q -t debian -n lxc-hikey-test-13 -- --release stretch --mirror http://mirror.bytemark.co.uk/debian --packages systemd,systemd-sysv
Container created successfully
end: 1.1 lxc-create-action (duration 00:01:22) [tlxc]
level: 1.1
case: lxc-create-action
definition: lava
result: pass
namespace: tlxc
duration: 82.21
extra: ...
start: 1.2 lxc-create-udev-rule-action (timeout 00:03:38) [tlxc]
device info file '/var/lib/lava/dispatcher/tmp/13/lxc-create-udev-rule-action-y0_d19aq/device-info.yaml' created with:
[{'board_id': '7d4452a4'}]
udev rules file '/var/lib/lava/dispatcher/tmp/13/lxc-create-udev-rule-action-nj_8pzv0/100-lava-lxc-hikey-test-13.rules' created
ACTION=="add", ATTR{serial}=="7d4452a4", RUN+="/usr/share/lava-dispatcher/lava_lxc_device_add.py --lxc-name lxc-hikey-test-13 --device-node $name --job-id 13 --logging-url tcp://localhost:5555"
'/etc/udev/rules.d/100-lava-lxc-hikey-test-13.rules' symlinked to '/var/lib/lava/dispatcher/tmp/13/lxc-create-udev-rule-action-nj_8pzv0/100-lava-lxc-hikey-test-13.rules'
nice udevadm control --reload-rules
udev rules reloaded.
end: 1.2 lxc-create-udev-rule-action (duration 00:00:00) [tlxc]
start: 1.3 boot-lxc (timeout 00:03:38) [tlxc]
nice lxc-start -n lxc-hikey-test-13 -d
output:
Wait until 'lxc-hikey-test-13' state becomes RUNNING
nice lxc-info -sH -n lxc-hikey-test-13
output: RUNNING
output:
'lxc-hikey-test-13' state is RUNNING
Wait until 'lxc-hikey-test-13' gets an IP address
nice lxc-info -iH -n lxc-hikey-test-13
output:
nice lxc-info -iH -n lxc-hikey-test-13
output:
nice lxc-info -iH -n lxc-hikey-test-13
output:
nice lxc-info -iH -n lxc-hikey-test-13
output:
nice lxc-info -iH -n lxc-hikey-test-13
output:
nice lxc-info -iH -n lxc-hikey-test-13
output:
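For context, the container is created through the standard lava-lxc protocol block in the job, which (reconstructed from the lxc-create command in the log above) looks roughly like this:

protocols:
  lava-lxc:
    name: lxc-hikey-test
    template: debian
    distribution: debian
    release: stretch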
Here is my Lava job:
https://pastebin.com/kiLbnjAX
Hi All,
I am new to the LAVA framework.
I am trying to submit my first job, but I get an error: "Invalid definition:
extra keys not allowed @ data['job_timeout']".
I am attaching a screenshot and the file.
Can someone please help me solve this issue?
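From the error message I suspect that the top-level job_timeout key in my job is not accepted and should be a timeouts block instead, something like the sketch below, but I am not sure:

timeouts:
  job:
    minutes: 30
  action:
    minutes: 5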
Thanks
Veera
Hello everyone,
I am trying to send some information containing whitespace via lava-send. Apparently this is not supported. Is this a bug or by design?
I tried the following command:
lava-send my-message my-variable="some string with whitespaces"
Which produces the following output:
<LAVA_SEND_DEBUG lava_multi_node_send preparing Tue Aug 27 15:02:41 CEST 2019>
<LAVA_SEND_DEBUG _lava_multi_node_send started Tue Aug 27 15:02:41 CEST 2019>
<LAVA_MULTI_NODE> <LAVA_SEND my-message my-variable=some>
<LAVA_SEND_DEBUG _lava_multi_node_send finished Tue Aug 27 15:02:41 CEST 2019>
<LAVA_SEND_DEBUG lava_multi_node_send finished Tue Aug 27 15:02:41 CEST 2019>
Received Multi_Node API <LAVA_SEND>
messageID: SEND-my-message
lava-multinode lava-send
1 key value pair(s) to be sent.
Handling signal <LAVA_SEND {"timeout": 360, "request": "lava_send", "messageID": "my-message", "message": {"my-variable": "some"}}>
So the string is evidently truncated at the first whitespace.
Is there any way to send a string containing whitespace to another node?
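A workaround I am considering (untested) is to encode the string so that it contains no whitespace before handing it to lava-send, and to decode it again on the receiving node, e.g. in the test definition:

run:
  steps:
    # hypothetical workaround: encode the value so it contains no whitespace
    # (assumes base64 is available on both nodes)
    - lava-send my-message my-variable="$(echo 'some string with whitespaces' | base64 | tr -d '\n')"
    # the receiving node would call lava-wait my-message and decode the value with 'base64 -d'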
Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
#
root@device:~# #
lava-test-shell: Wait for prompt ['root@device:~#'] (timeout 00:05:00)
#
Using /lava-133
export SHELL=/bin/bash
root@device:~# export SHELL=/bin/bash
export SHELL=/bin/bash
. /lava-133/environment
root@device:~# . /lava-133/environment
. /lava-133/environment
-sh: .: can't open '/lava-133/environment'
Will listen to feedbacks from 'tlxc' for 1 second
/lava-133/bin/lava-test-runner /lava-133/0
root@device:~# /lava-133/bin/lava-test-runner /lava-133/0
Test shell timeout: 10s (minimum of the action and connection timeout)
/lava-133/bin/lava-test-runner /lava-133/0
-sh: /lava-133/bin/lava-test-runner: not found
The device boots successfully and logs in, but I can't run any commands.
Why is LAVA running this command:
. /lava-133/environment
And how does /lava-133 get downloaded to the device?
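My understanding is that /lava-133 is the LAVA test overlay for job 133, which either has to be baked into the deployed image or pulled onto the DUT at boot time via a transfer_overlay block along these lines (a sketch; the boot method here is only an example), but I am not sure which applies in my case:

- boot:
    method: minimal
    transfer_overlay:
      download_command: wget -S --progress=dot:giga
      unpack_command: tar -C / -xzf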
Ilya