Hi all!
I am working on writing LAVA test definitions and using the callback method
to send test results to kernelCI.
I noticed what might be a bug when a test definition reports a lot
of elements. The job finishes and the callback gets triggered before the
log parsing ends. I think the callback method is not waiting for the
parser to finish before sending the event, so the callback output is
missing some test results.
I made a simple test script that reports 300 test elements and returns. I
can see in the LAVA log that they are all detected, but the callback
object only contains around 80 test results.
If I add a sleep (15 seconds) to hold the script before returning, the
callback has the 300 test results bundled in the json.
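For reference, here is a minimal sketch of the reproducer (my own reconstruction, not the exact script): it prints the signal pattern that lava-test-case emits, 300 times, then pauses so the log parser can drain before the job ends and the callback fires.

```python
# Hypothetical reproducer: emit the LAVA test-case signal pattern many
# times, then sleep so the parser can catch up before the job returns.
import time

def report_case(case_id, result="pass"):
    # Same line format that lava-test-case writes on the console
    print("<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=%s RESULT=%s>" % (case_id, result))

for i in range(300):
    report_case("dummy-case-%d" % i)

time.sleep(1)  # in the real job, a ~15 s sleep let all 300 results reach the callback
```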
Has anyone experienced this before?
Thanks !
--
Loys OLLIVIER
Baylibre
Hello Lava Team,
I have created some LAVA jobs that use our proprietary flasher, based on a DFU connection.
As our flasher is not a "standard" flasher, I have adapted the boot process to be able to use it, using the boot method "minimal".
To call our flasher script, I use the script invoked by the "power_on" method, which is defined in the device configuration.
Find below an extract of the device configuration:
.......................................................................................
..
..
{% set hard_reset_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command reboot --port 1' %}
{% set power_off_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command off --port 1' %}
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f /tmp/test' %}
{% set connection_command = 'telnet localhost 2001' %}
..
..
.......................................................................................
This works correctly for a "static" configuration: the flasher settings are defined outside LAVA by a script that configures the flashing parameters.
The "power_on" script reads these parameters and launches the flashing on the board.
My problem arises when I simultaneously launch jobs on several boards that require different flashing binary versions:
I am unable to tell each board which binary version our flasher should use.
The best way would be to pass parameters in the job to indicate which binary version the flasher has to use.
This could be done in the "deploy" action and passed to the "power_on" command, but I don't know how to implement it, or whether it can be done easily.
Find below my job definition.
###### Job definition ##############
actions:
- deploy:
    timeout:
      minutes: 5
    to: ssh
    os: oe
    device:
- boot:
    method: minimal
    failure_retry: 2
    auto_login:
      login_prompt: 'login:'
      username: root
    prompts:
    - 'root@stm32mp1'
    timeout:
      minutes: 10
    transfer_overlay:
      download_command: sync && sleep 15 && wget
      unpack_command: tar -C / -xzf
- test: ... #############################
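One possible direction, purely a sketch of my own and not an existing LAVA feature: each board could get a parameter file keyed by its PDU port, written when the job is prepared and read back by the power_on flasher script. All names (file layout, BINARY_VERSION key) are assumptions.

```python
# Hypothetical per-board parameter file, keyed by PDU port, so simultaneous
# jobs on different boards can each select their own flashing binary version.
import os

def write_flash_params(port, binary_version, base_dir="/tmp"):
    path = os.path.join(base_dir, "flash_params_port_%s.conf" % port)
    with open(path, "w") as f:
        f.write("BINARY_VERSION=%s\n" % binary_version)
    return path

def read_flash_params(path):
    # The power_on script would parse this file before flashing
    params = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            params[key] = value
    return params

params_file = write_flash_params(1, "v2.3")
print(read_flash_params(params_file))  # prints {'BINARY_VERSION': 'v2.3'}
```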
Thanks for your support.
BR
Philippe
Hello everyone,
I am using lava-tool to monitor my jobs. Previously I used:
$ lava-tool submit-job --block
Using version of lava-tool 0.23 I now have this message:
--> This kind of polling is deprecated and will be removed in the next
release. Please use "wait-for-job" command.
But "wait-for-job" doesn't exist.
There is a "wait-job-events" option though. I tried it, but it doesn't
return even once the job has finished. If I manually stop it and restart it
with the same job number, I get as output:
--> Job already finished with status Complete.
Command I'm using:
$ lava-tool wait-job-events --job-id 20 http://user@lava-server
Is there anything I'm doing incorrectly? Or are you aware of this bug?
Thanks !
--
Loys OLLIVIER
A change was sent a while ago to add support for the Coreboot /
Depthcharge bootloader which is used on Chromebook devices. This
is useful in particular to avoid having to install U-Boot on
Chromebook devices. See this Gerrit review here for previous
history:
https://review.linaro.org/#/c/15203/
I'm now opening this case again to try and get this resolved,
there seem to be several issues with the original patch that
would need to be clarified. Also, some things might have changed
since then in LAVA or Coreboot which could potentially lead to a
different approach - any feedback on this would be welcome.
To start with, I understand that running mkimage on the
dispatcher is not a valid thing to do, it should receive a
FIT (flattened image tree) kernel image ready to be booted. This
complicates things a bit for projects like kernelci.org where
only a plain kernel image is built and ramdisks are served
separately, but it's fair enough to say that LAVA is not meant to
be packaging kernel images on the fly.
Then I believe creating the command line file in LAVA should be
fine, although it probably makes more sense to have both the FIT
image and cmdline file generated by the same build system. In
any case, both files would need to be served from the dispatcher
TFTP server to the target device running Coreboot / Depthcharge.
So the idea was basically to have an option in Coreboot /
Depthcharge to interactively tell it where to find these files
for the current job to run, say:
<JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/vmlinuz
<JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/cmdline
It looks like the current patch in Gerrit relies on this location
to be hard-coded in the bootloader, which works fine for a
private development set-up but not for LAVA.
To recap, my understanding is that the "depthcharge" boot support
code in LAVA would need to:
* maybe create the cmdline file with basically the kernel
command line split up with one argument per line
* or just download the cmdline file along with the vmlinuz FIT
* place both the cmdline and vmlinuz FIT files in the job's
TFTP directory on the dispatcher
* turn on the device and open the serial console...
* interactively pass at least the path to the job TFTP
directory on the serial console (and if possible the server
IP address as well, and maybe even the individual file names
rather than hard-coded vmlinuz and cmdline)
* look for a bootloader message to know when the kernel starts
to load and hand over to the next action (login...)
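The cmdline-file step above can be sketched as follows (file name and location are assumptions; in LAVA this file would sit next to the FIT image in the job's TFTP directory):

```python
# Sketch of the cmdline file generation: split the kernel command line
# into one argument per line, as Depthcharge expects.
import os
import tempfile

def write_cmdline_file(kernel_cmdline, path):
    with open(path, "w") as f:
        for arg in kernel_cmdline.split():
            f.write(arg + "\n")

# Hypothetical demo values; real jobs would use the device's parameters
demo = os.path.join(tempfile.mkdtemp(), "cmdline")
write_cmdline_file("console=ttyS2,115200 root=/dev/nfs ip=dhcp", demo)
```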
Please let me know if this sounds reasonable or if we should be
doing anything differently. I think it would be good to have
some agreement and a clear understanding of how this is going to
be implemented before starting to work on the code again.
Best wishes,
Guillaume
Currently it is difficult to tell the difference between an
infrastructure problem in a device bootloader, or a kernel failure.
If a kernel silently fails to boot, LAVA throws a bootloader-commands
timeout because it hasn’t matched ‘Linux version’ to know the kernel
has started. However, this timeout could also be caused by a real
problem in the bootloader, such as a DHCP failure or a TFTP timeout.
KernelCI would like to catch actual infrastructure problems in the
bootloader, but can’t tell if the kernel just didn’t boot, or the
commands actually timed out in the bootloader.
To fix this, we're going to:
- change the bootloader-commands action to finish when it has sent the
last command
- have auto-login-action takeover monitoring the kernel boot process
- extend bootloader-commands to match more infrastructure problems
- update uboot commands to execute the commands in order (like the
other bootloader implementations), rather than building a script and
then calling that as the last command
This work is scoped for the January 2018.1 LAVA release.
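The split described above can be illustrated with a small sketch (my own illustration, not LAVA code): once the final bootloader command has been sent, seeing the 'Linux version' banner marks the hand-over to the kernel, so a timeout before that line points at the bootloader or infrastructure.

```python
import re

# Illustrative classifier: attribute a boot timeout to the kernel or to
# the bootloader/infrastructure, based on whether the kernel banner appeared.
KERNEL_START = re.compile(r"Linux version ")

def classify_timeout(log_lines):
    for line in log_lines:
        if KERNEL_START.search(line):
            return "kernel"          # kernel banner seen: a kernel boot problem
    return "infrastructure"          # no banner: DHCP/TFTP/bootloader problem
```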
Hi all!
I am working with some Raspberry Pi boards and, while defining some particularities for booting via NFS, I came across an issue regarding the way the rootfs file is unpacked.
To be more specific, after looking through the LAVA code, it seems that the untar_file() method runs on the conditional branch where tar members are specified. I do not fully understand how this implementation is designed with regard to tar archive manipulation, but the following scenario takes place:
* The rootfs archive we specify in the job definition is copied and renamed, the extension being changed from .tar.bz2 to a plain .tar (which seems a bit strange to me, on its own)
* Then, after unpacking to the LAVA temp job dir, only two directories are extracted from the whole rootfs archive. I did manage to "intervene" while the job was running and look in the temp dir to see exactly what gets extracted. Since the rest of the folders from the rootfs are not available, the job fails once the system starts booting
While investigating, I noticed that the untar_file() method is invoked from download.py (/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/) at line 344, with members being specified. I know that because when I manually changed how this call is made and explicitly passed "None" for the "members" positional argument, I got the following error at job runtime: https://paste.debian.net/1001252/ (the full job log can be found here: https://paste.debian.net/1001253/ )
The code changes I am talking about (and the code where I suspect something is ambiguous) are pointed out here: https://paste.debian.net/1001256/ .
A job definition we use for this board integration can be analyzed here: https://paste.debian.net/1001257/
The full log of the initial job, which ran without my code changes, can be found here: https://paste.debian.net/1001254/ . The "kernel panic" message occurs because, as stated in one of the errors, the "init" folder is missing, which is accurate, because only bin & dev are extracted from our rootfs tar.bz2 archive.
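To illustrate the behaviour under discussion (this is my own sketch with Python's tarfile, not LAVA's actual code): passing a "members" subset extracts only those entries, while members=None extracts the whole archive, so an incomplete members list would leave most of the rootfs missing, exactly as seen in the job.

```python
# Sketch: tarfile extraction with and without a "members" subset.
import tarfile

def untar_file(archive, dest, names=None):
    with tarfile.open(archive) as tar:
        members = None
        if names is not None:
            # Only these entries get extracted; everything else is skipped
            members = [m for m in tar.getmembers() if m.name in names]
        tar.extractall(path=dest, members=members)
```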
What could we do? Is this something that needs to be adjusted/fixed in LAVA?
Kind regards,
Dragoș
From: Aníbal Limón <anibal.limon(a)linaro.org>
Now the test writer has access to the images inside the LXC
to make changes before deploying/flashing onto the board; to
support mounting/modifying rootfs images, a loop device is needed.
Add a parameter to the lxc-boot action to map a free loop device
(losetup -f) into the LXC.
Change-Id: I7060ebac12b10e5390560da082fe6c49568c5ffc
Signed-off-by: Aníbal Limón <anibal.limon(a)linaro.org>
---
lava_dispatcher/actions/boot/lxc.py | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/lava_dispatcher/actions/boot/lxc.py b/lava_dispatcher/actions/boot/lxc.py
index d896d303..e3e3cb48 100644
--- a/lava_dispatcher/actions/boot/lxc.py
+++ b/lava_dispatcher/actions/boot/lxc.py
@@ -75,7 +75,11 @@ class BootLxcAction(BootAction):
def populate(self, parameters):
self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
self.internal_pipeline.add_action(LxcStartAction())
- self.internal_pipeline.add_action(LxcAddStaticDevices())
+
+ lxc_add_loop = False
+ if 'lxc_add_loop' in parameters:
+ lxc_add_loop = parameters.get('lxc_add_loop', False)
+ self.internal_pipeline.add_action(LxcAddStaticDevices(lxc_add_loop))
self.internal_pipeline.add_action(ConnectLxc())
# Skip AutoLoginAction unconditionally as this action tries to parse kernel message
# self.internal_pipeline.add_action(AutoLoginAction())
@@ -91,11 +95,12 @@ class LxcAddStaticDevices(Action):
worker.
"""
- def __init__(self):
+ def __init__(self, lxc_add_loop=False):
super(LxcAddStaticDevices, self).__init__()
self.name = 'lxc-add-static'
self.description = 'Add devices which are permanently powered by the worker to the LXC'
self.summary = 'Add static devices to the LXC'
+ self.lxc_add_loop = lxc_add_loop
def validate(self):
super(LxcAddStaticDevices, self).validate()
@@ -115,6 +120,15 @@ class LxcAddStaticDevices(Action):
def run(self, connection, max_end_time, args=None):
connection = super(LxcAddStaticDevices, self).run(connection, max_end_time, args)
lxc_name = self.get_namespace_data(action='lxc-create-action', label='lxc', key='name')
+
+ if self.lxc_add_loop:
+ lxc_get_loop_cmd = ['losetup', '-f']
+ loop_device = self.run_command(lxc_get_loop_cmd, allow_silent=True).strip()
+ lxc_loop_cmd = ['lxc-device', '-n', lxc_name, 'add', loop_device]
+ cmd_out = self.run_command(lxc_loop_cmd)
+ if cmd_out:
+ self.logger.debug(cmd_out)
+
# If there is no static_info then this action should be idempotent.
if 'static_info' not in self.job.device:
return connection
--
2.11.0
On 18 Dec 2017 3:45 p.m., "Guillaume Tucker" <guillaume.tucker(a)collabora.com>
wrote:
On 18/12/17 11:45, Neil Williams wrote:
> On 14 December 2017 at 09:47, Guillaume Tucker <
> guillaume.tucker(a)collabora.com> wrote:
>
> On 07/12/17 17:16, Neil Williams wrote:
>>
>> On 7 December 2017 at 16:20, Guillaume Tucker <
>>> guillaume.tucker(a)collabora.com> wrote:
>>>
>>> A change was sent a while ago to add support for the Coreboot /
>>>
>>>> Depthcharge bootloader which is used on Chromebook devices. This
>>>> is useful in particular to avoid having to install U-Boot on
>>>> Chromebook devices. See this Gerrit review here for previous
>>>> history:
>>>>
>>>> https://review.linaro.org/#/c/15203/
>>>>
>>>> I'm now opening this case again to try and get this resolved,
>>>> there seem to be several issues with the original patch that
>>>> would need to be clarified. Also, some things might have changed
>>>> since then in LAVA or Coreboot which could potentially lead to a
>>>> different approach - any feedback on this would be welcome.
>>>>
>>>>
>>>> Thanks for picking this up.
>>>
>>>
>> You're welcome. I've now uploaded a new version which generates
>> the command line file but not the FIT image, it expects the
>> kernel image to be already in this format. Still the same
>> Gerrit number:
>>
>> https://review.linaro.org/#/c/15203/
>>
>> I've also made a patch to add the rk3288-veyron-jaq as
>> a "depthcharge" device type:
>>
>> https://review.linaro.org/#/c/22992/
>>
>> So as a next step, it would be convenient to find a way to have
>> the FIT image generated as part of the LAVA job with a given
>> kernel image, dtb, maybe the .its file and optionally a ramdisk.
>>
>> For the reference:
>>
>> http://git.denx.de/?p=u-boot.git;a=blob;f=doc/uImage.FIT/howto.txt;hb=master
>>
>> To start with, I understand that running mkimage on the
>>
>>> dispatcher is not a valid thing to do, it should receive a
>>>> FIT (flattened image tree) kernel image ready to be booted. This
>>>> complicates things a bit for projects like kernelci.org where
>>>> only a plain kernel image is built and ramdisks are served
>>>> separately, but it's fair enough to say that LAVA is not meant to
>>>> be packaging kernel images on the fly.
>>>>
>>>>
>>>> We've come up with a method in the meantime, although it does mean using
>>> LXC but that makes it completely generic. It's principally designed for
>>> boards which need to munge a kernel and other files into an image to be
>>> transferred to the device using tools like fastboot. This is how KernelCI
>>> will be able to submit boot tests on devices like HiKey and db410c.
>>> Sadly,
>>> the example test job is suffering because the db410c devices have a
>>> different problem which is keeping them offline. Matt has been looking
>>> into
>>> this.
>>>
>>> https://staging.validation.linaro.org/scheduler/job/203317/definition
>>>
>>> https://staging.validation.linaro.org/static/docs/v2/actions-deploy.html#index-25
>>>
>>>
>> Thanks for the pointers, seems worth investigating.
>>
>> On the other hand, creating the FIT image is a similar process to
>> that of uImage, which is currently being done directly on the
>> dispatcher:
>>
>> https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/actions/deploy/prepare.py#n79
>>
>> So would it make sense to add some code there to support FIT?
>>
>
>
> What is an example command line to mkimage to do this?
>
mkimage -D "-I dts -O dtb -p 2048" -f rk3288-veyron-jaq.its arch/arm/boot/vmlinuz
Is the its file really needed? I added the ramdisk parameter precisely so
lava doesn't need to generate one.
Regards,
Tomeu
Are any external configuration files required?
>
Everything should be in the .its file, and it should also be
possible to generate it on the fly using a template and the LAVA
device properties (kernel load address etc...). If this proves
to not be flexible enough in practice, then I suppose the .its
file could be downloaded although I think we should avoid doing
this if we can.
Then I believe creating the command line file in LAVA should be
>>
>>> fine, although it probably makes more sense to have both the FIT
>>>> image and cmdline file generated by the same build system. In
>>>> any case, both files would need to be served from the dispatcher
>>>> TFTP server to the target device running Coreboot / Depthcharge.
>>>>
>>>>
>>>> That bit is fine, the problem is why this cannot use the existing
>>> temporary
>>> paths which all the other TFTP devices use. Is it just to do some
>>> mangling
>>> of the files?
>>>
>>>
>> This is resolved now with the version I sent yesterday.
>>
>
>
> That makes this review much better, thanks.
>
Great, thanks for confirming.
So the idea was basically to have an option in Coreboot /
>>
>>> Depthcharge to interactively tell it where to find these files
>>>> for the current job to run, say:
>>>>
>>>> <JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/vmlinuz
>>>> <JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/cmdline
>>>>
>>>> It looks like the current patch in Gerrit relies on this location
>>>> to be hard-coded in the bootloader, which works fine for a
>>>> private development set-up but not for LAVA.
>>>>
>>>>
>>>> That makes very little sense because the whole point of TFTP is that
>>> everything after the SERVER_IP is just a relative path from the TFTP base
>>> directory which is handled by the TFTP daemon itself.
>>>
>>>
>> Ditto.
>>
>> To recap, my understanding is that the "depthcharge" boot support
>>
>>> code in LAVA would need to:
>>>>
>>>> * maybe create the cmdline file with basically the kernel
>>>> command line split up with one argument per line
>>>>
>>>>
>>>> Alternatively, do whatever operations are required in a test shell in
>>> the
>>> LXC and then pass those files to the device - entirely within the test
>>> shell support.
>>>
>>>
>> That, or maybe run mkimage on the dispatcher like for uImage...
>>
>> The cmdline file is now generated on the dispatcher.
>>
>> * or just download the cmdline file along with the vmlinuz FIT
>>
>>>
>>>>
>>> The ready-made FIT kernel image is now downloaded with the
>> version I sent yesterday.
>>
>> * place both the cmdline and vmlinuz FIT files in the job's
>>
>>> TFTP directory on the dispatcher
>>>>
>>>> * turn on the device and open the serial console...
>>>>
>>>> * interactively pass at least the path to the job TFTP
>>>> directory on the serial console (and if possible the server
>>>> IP address as well, and maybe even the individual file names
>>>> rather than hard-coded vmlinuz and cmdline)
>>>>
>>>>
>>>> Isn't this equivalent to what U-Boot already does with TFTP?
>>>
>>>
>> Almost. This part is now all implemented in the last patch I
>> sent. One thing though is that the NFS rootfs parameters are
>> stored in the kernel cmdline file and not set interactively in
>> the bootloader shell.
>>
>
>
> How can these be extended by test writers? We do see requests to add
> arguments to the NFS parameters but adding options to the kernel command
> line itself is all but essential for most testing.
>
This can be done using the {{ extra_kernel_args }} template
variable, see the other change to add base-depthcharge.jinja2:
https://review.linaro.org/#/c/22992/1/lava_scheduler_app/tests/device-types/base-depthcharge.jinja2
If anything more special ever needs to be done with some
parameters such as inserting some IP address, it can be done in
DepthchargeCommandOverlay where the command line file is
generated.
The only command sent is to start the tftp
>> boot with the server IP and the relative paths to the kernel and
>> cmdline files.
>>
>
On this topic, the changes to add the tftpboot command in
Depthcharge are still under review:
https://chromium-review.googlesource.com/c/chromiumos/platform/depthcharge/+/451382
So I think it would actually be wiser to not merge
base-depthcharge.jinja2 until the review above has been merged in
case the command line syntax needs to be adjusted.
* look for a bootloader message to know when the kernel starts
>>
>>> to load and hand over to the next action (login...)
>>>>
>>>>
>>> Done as well, I've now got the veyron-jaq device booting fine
>> with NFS rootfs. There was an issue with adding a ramdisk to the
>> FIT image as it was too big to boot on the device, will
>> investigate this part to add "ramdisk" boot commands.
>>
>>
>> Please let me know if this sounds reasonable or if we should be
>>
>>> doing anything differently. I think it would be good to have
>>>> some agreement and a clear understanding of how this is going to
>>>> be implemented before starting to work on the code again.
>>>>
>>>
Best wishes,
Guillaume
_______________________________________________
Lava-users mailing list
Lava-users(a)lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lava-users
Hi ,
In our testing, we must cover both CentOS and Ubuntu, but the GRUB interrupt prompt differs between the two. So in my device-type template I use "menu_options: {{ grub_method }}", but "grub_method" must be defined in the device Jinja2 file. I don't want to keep changing the device Jinja2 file; can I define some options in the job file instead?
Please give me some help!
The attachments are my device-type and device files!
methods:
  grub:
    menu_options: {{ grub_method }}
    parameters:
      {% if grub_method == 'centos' %}
      interrupt_prompt: {{ grub_interrupt_prompt|default('Press \'e\' to edit the selected item, or \'c\' for a command prompt.') }}
      {% elif grub_method == 'ubuntu' %}
      interrupt_prompt: {{ grub_interrupt_prompt|default(' Press enter to boot the selected OS') }}
      {% elif grub_method == 'pxe' %}
      interrupt_prompt: {{ grub_interrupt_prompt|default('Press \'e\' to edit the selected item, or \'c\' for a command prompt.') }}
      {% endif %}
      bootloader_prompt: {{ grub_efi_bootloader_prompt|default('grub>') }}
      boot_message: {{ kernel_boot_message | default("Booting Linux Kernel...") }}
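A possible direction, although this is an untested assumption on my side: job-level "context" can override Jinja2 variables used when rendering the device dictionary in some LAVA versions, which might let a job select the prompt set without editing the device file:

```yaml
# Hedged sketch: whether grub_method is picked up from job context depends
# on how your device dictionary renders its template variables.
context:
  grub_method: centos
```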
Best Regards
XuHongyu
Hello Lava Team,
We have been facing errors when launching long tests (e.g. LTP tests, stress tests, ...) since we started using LAVA in Docker.
The following messages are returned by LAVA and the test stops:
Connection closed by foreign host.
err: lava_test_shell connection dropped
Marking unfinished test run as failed
These error messages appear during the test if no output is generated on the console for approximately six minutes.
We found a workaround that consists of periodically sending a message to the console, allowing the test to complete.
I would like to know if there is a way to inhibit this check, or to change its timeout settings?
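Our workaround looks roughly like the following sketch (interval and message wording are assumptions, tuned to stay under the observed ~6 minute idle cut-off):

```python
# Hedged sketch of the keep-alive workaround: print a heartbeat on the
# console at an interval shorter than the idle cut-off while the long
# test command runs.
import subprocess
import threading

def run_with_heartbeat(cmd, interval=240):
    done = threading.Event()

    def beat():
        # wait() returns False on timeout, True once done.set() is called
        while not done.wait(interval):
            print("[keep-alive] test still running")

    threading.Thread(target=beat, daemon=True).start()
    try:
        return subprocess.call(cmd)   # run the long test command
    finally:
        done.set()                    # stop the heartbeat when the test ends
```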
Our configuration: LAVA 2017.6, running in Docker.
Regards
Philippe Begnic