Smita,
Yes, I looked and can confirm that analysis; you haven't said what
the U-Boot prompt is.
I'm putting lava-users back on the cc list in case others wish to
comment, and making my comment below clearer.
Robert
"Gumansingh, Smita" <Smita_Gumansingh(a)mentor.com> writes:
> Hi Robert,
>
> Have you had a chance to look at my BBB health check log?
>
> The BBB health check fails at this point:
>
> ----------------------------
> send-reboot-commands timed out after 179 seconds
> end: 2.4.1.1 send-reboot-commands (duration 00:02:59)
> ----------------------------
>
> Please have a look at this log: https://pastebin.com/gPmMG84J
>
> Thanks & Regards,
> Smita Gumansingh
>
> ________________________________________
> From: Robert Marshall <robert.marshall(a)codethink.co.uk>
> Sent: Wednesday, January 10, 2018 6:21 PM
> To: Gumansingh, Smita
> Subject: Re: [Lava-users] No result coming when job submitted in lava
>
> Smita
>
> Though you only need to replace that line *if* the U-Boot prompt doesn't
> consist of "U-Boot⇒" - interrupt the BBB boot to see what it actually is -
> and if it is necessary, you also need to remove the other line;
> apologies for the instructions not being clear here.
>
> Thanks for the fuller output in the other email!
>
> Robert
>
> "Gumansingh, Smita" <Smita_Gumansingh(a)mentor.com> writes:
>
>> Thanks Robert for the quick response
>>
>> Currently the u-boot prompt set in /etc/lava-server/dispatcher-config/device-types/beaglebone-black.jinja2 is:
>>
>> {% set bootloader_prompt = bootloader_prompt|default('U-Boot') %}
>>
>> As you suggested I added the line: {% set bootloader_prompt = bootloader_prompt|default('⇒') %} and submitted a job
>>
>> No output from the job when submitted...
>>
>> Thanks & Regards,
>> Smita Gumansingh
>>
>> ________________________________________
>> From: Robert Marshall <robert.marshall(a)codethink.co.uk>
>> Sent: Wednesday, January 10, 2018 5:08 PM
>> To: Gumansingh, Smita
>> Cc: lava-users(a)lists.linaro.org
>> Subject: Re: [Lava-users] No result coming when job submitted in lava
>>
>> Hi, some comments below!
>>
>> "Gumansingh, Smita" <Smita_Gumansingh(a)mentor.com> writes:
>>
>>> Hi,
>>> I am new to LAVA and trying to submit a job on the LAVA scheduler; the job is submitted but no result is coming. I am
>>> trying to test the CIP kernel on the BeagleBone Black (the board is physically connected to my Linux machine). The
>>> health check is working, somehow.
>>
>> By 'somehow' do you mean the health check is completing correctly and the
>> device is shown as online?
>>
>>> I am following the steps from here
>>> https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingrefe…
>>>
>>> Prerequisites:
>>> 1. I have built the CIP kernel (cip_v4.4.92) with KernelCI following the steps mentioned in
>>> https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipsystembuild…
>>> 2. The target is booted, up, and flashed with a Debian 4.9 rootfs
>>>
>>
>> What is the U-Boot prompt with this version? If it is ⇒ rather than U-Boot⇒,
>> then on the vagrant machine you need to:
>>
>> sudo vi /etc/lava-server/dispatcher-config/device-types/beaglebone-black.jinja2
>>
>> and add the line: {% set bootloader_prompt = bootloader_prompt|default('⇒') %}
>>
>>
>>> The job definition is pasted here:
>>>
>>> https://pastebin.com/YwnPXidK
>>>
>>> Need help to go further!
>>
>> Do you get any output from the job?
>>
>>>
>>> Thanks & Regards,
>>> Smita Gumansingh
>>>
>>
>> Robert
Hi,
I am new to LAVA and trying to submit a job on the LAVA scheduler; the job is submitted but no result is coming. I am trying to test the CIP kernel on the BeagleBone Black (the board is physically connected to my Linux machine). The health check is working, somehow. I am following the steps from here: https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingrefe…
Prerequisites:
1. I have built the CIP kernel (cip_v4.4.92) with KernelCI following the steps mentioned in https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipsystembuild…
2. The target is booted, up, and flashed with a Debian 4.9 rootfs
The job definition is pasted here:
https://pastebin.com/YwnPXidK
Need help to go further!
Thanks & Regards,
Smita Gumansingh
Currently it is difficult to tell the difference between an
infrastructure problem in a device bootloader and a kernel failure.
If a kernel silently fails to boot, LAVA throws a bootloader-commands
timeout because it hasn’t matched ‘Linux version’ to know the kernel
has started. However, this timeout could also be caused by a real
problem in the bootloader, such as a DHCP failure or a TFTP timeout.
KernelCI would like to catch actual infrastructure problems in the
bootloader, but can’t tell if the kernel just didn’t boot, or the
commands actually timed out in the bootloader.
To fix this, we're going to:
- change the bootloader-commands action to finish when it has sent the
last command
- have auto-login-action takeover monitoring the kernel boot process
- extend bootloader-commands to match more infrastructure problems
- update uboot commands to execute the commands in order (like the
other bootloader implementations), rather than building a script and
then calling that as the last command
This work is scoped for the January 2018.1 LAVA release.
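Until that lands, a test writer who hits the kernel-boot case can often work around it by raising the relevant per-action timeout in the job definition. A minimal sketch, assuming the action name matches the one reported in the job log (the block below is illustrative, not taken from any job in this thread):

```yaml
# Hedged sketch: raise the per-action timeout so a slow kernel boot is not
# reported as a bootloader-commands failure. The action name must match
# the one shown in the job log.
timeouts:
  actions:
    bootloader-commands:
      minutes: 10
```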
Hello,
We have an installation where we use LAVA 2017.12. We are regularly
seeing jobs that remain stuck for several days.
For example, I have a job right now on the Armada XP GP that has been
stuck for 1 day and 11 hours. The log visible in the LAVA Web interface
looks like this:
http://code.bulix.org/7pvru8-255308?raw
This is job #855671 in our setup.
The logs on the lava-slave look like this:
http://code.bulix.org/c5tejy-255312?raw
So, from the lava-slave point of view, the job is finished.
However, the "Job END" message had to be resent several times to the
master. Interestingly, this sequence led to a very nice:
ERROR [855671] lava-run crashed
On the lava-master side (which runs on another machine), the logs
look like this:
http://code.bulix.org/b61keb-255316?raw
And this happens for lots of jobs. Pretty much every day or two, we
have ten boards stuck in this situation.
I have the lava-master logs with DEBUGs if this can be helpful.
However, most DEBUG logs don't have the job number in them, which makes
it difficult to associate the DEBUG messages with the problematic job
(since numerous other jobs are running).
Does anyone have an idea what could be causing this? Or how to debug
this further?
Best regards,
Thomas Petazzoni
--
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com
Hi,
I am having issues using an LXC device within the multinode LAVA protocol/API. The job yaml gets validated, but it fails to run with the following error:
Missing protocol 'lava-lxc' in ['lava-multinode']
Full yaml used can be found here: https://pastebin.com/BUsX0G0C
While searching for a fix I also found this email from a year ago, which wasn't answered (seems related):
http://linaro-validation.linaro.narkive.com/mXhxhHqy/issues-with-lava-multi…
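The error suggests the job declares only the lava-multinode protocol while one of its actions also requires lava-lxc. A minimal sketch of a protocols block carrying both; the role names and all LXC settings below are placeholders, not values from the pastebin job:

```yaml
protocols:
  lava-multinode:
    roles:
      host:
        device_type: lxc
        count: 1
      device:
        device_type: beaglebone-black
        count: 1
  lava-lxc:
    # In multinode jobs the lava-lxc protocol is keyed by role name;
    # every value here is a placeholder.
    host:
      name: multinode-lxc-host
      template: debian
      distribution: debian
      release: stretch
```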
Thanks,
Andrei
Hi, I'm trying to get started with LAVA by first attempting some simple testing over SSH, to run tests on a device that doesn't have a default template. I'm getting a few connection errors, such as output ['Permission denied (publickey,password).\r', 'lost connection', ''], probably because I've misconfigured the jinja files, as I have little experience with these LAVA jinja templates.
Attached are the job logs, jinja2 template files and the test yaml file. Could anyone point me in the right direction by either providing sample ssh jinja files and job files or pointing out the errors in my config. Thanks!
Jian Chern
Hi all!
I am working with some Raspberry Pi boards and, while defining some particularities for booting via NFS, I came across an issue regarding the way the rootfs file is unpacked.
To be more specific, after looking through the LAVA code, it seems that the untar_file() method runs on the conditional branch that states the tar members are specified. I do not understand how this implementation is designed in regards to tar archive manipulation, but the following scenario takes place:
* The rootfs archive we specify in the job definition is copied and renamed, the extension being changed from .tar.bz2 to a plain .tar (which seems a bit strange to me, on its own)
* Then, after unpacking to the LAVA temp job dir, only two directories are extracted from the whole rootfs archive. I did manage to "intervene" while the job was running and look in the temp dir to see exactly what gets extracted. Since the rest of the folders from the rootfs are not available, the job fails once the system starts booting.
While investigating, I noticed that the untar_file() method is invoked from download.py (/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/) at line 344, with members being specified. I know that because, when I manually changed how this call is made and explicitly specified "None" for the "member" positional argument, I got the following error at job runtime: https://paste.debian.net/1001252/ (the full job log can be found here: https://paste.debian.net/1001253/ )
The code changes I am talking about (and the code where I suspect something is ambiguous) are pointed out here: https://paste.debian.net/1001256/ .
A job definition we use for this board integration can be analyzed here: https://paste.debian.net/1001257/
The full log of the initial job, which ran without my code changes, can be found here: https://paste.debian.net/1001254/ . The "kernel panic" message occurs because, as stated in one of the errors, the "init" folder is missing, which is accurate, because only bin & dev are extracted from our rootfs tar.bz2 archive.
What could we do? Is this something that needs to be adjusted/fixed in LAVA?
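The symptom described above can be reproduced outside LAVA with a self-contained sketch. This is not the dispatcher's actual untar_file() implementation, just an illustration of how extracting only a subset of tar members yields a partial rootfs:

```python
import io
import os
import tarfile
import tempfile

def untar_file(archive_path, dest_dir, members=None):
    """Sketch of an untar helper: with members=None the whole archive is
    unpacked; with a list of entry names, only those entries are extracted."""
    with tarfile.open(archive_path) as tar:
        if members is None:
            tar.extractall(path=dest_dir)
        else:
            wanted = [m for m in tar.getmembers() if m.name in members]
            tar.extractall(path=dest_dir, members=wanted)

# Build a tiny rootfs-like .tar.bz2 with bin/, dev/ and an init file.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, 'rootfs.tar.bz2')
with tarfile.open(archive, 'w:bz2') as tar:
    for name in ('bin/busybox', 'dev/console', 'init'):
        info = tarfile.TarInfo(name)
        data = b'stub'
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Extracting only some members reproduces the reported symptom:
partial = os.path.join(workdir, 'partial')
untar_file(archive, partial, members=['bin/busybox', 'dev/console'])
print(sorted(os.listdir(partial)))  # 'init' is missing, so boot would panic

# Passing members=None unpacks the complete rootfs:
full = os.path.join(workdir, 'full')
untar_file(archive, full)
print(sorted(os.listdir(full)))
```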
Kind regards,
Dragoș
From: Aníbal Limón <anibal.limon(a)linaro.org>
Now the test writer has access to the images inside the LXC
to make changes before they are deployed/flashed onto the board;
in order to support mounting/modifying rootfs images, a loop
device is needed.
Add a parameter to the lxc-boot action to map a free loop device
(losetup -f) into the LXC.
Change-Id: I7060ebac12b10e5390560da082fe6c49568c5ffc
Signed-off-by: Aníbal Limón <anibal.limon(a)linaro.org>
---
lava_dispatcher/actions/boot/lxc.py | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/lava_dispatcher/actions/boot/lxc.py b/lava_dispatcher/actions/boot/lxc.py
index d896d303..e3e3cb48 100644
--- a/lava_dispatcher/actions/boot/lxc.py
+++ b/lava_dispatcher/actions/boot/lxc.py
@@ -75,7 +75,11 @@ class BootLxcAction(BootAction):
def populate(self, parameters):
self.internal_pipeline = Pipeline(parent=self, job=self.job, parameters=parameters)
self.internal_pipeline.add_action(LxcStartAction())
- self.internal_pipeline.add_action(LxcAddStaticDevices())
+
+ lxc_add_loop = False
+ if 'lxc_add_loop' in parameters:
+ lxc_add_loop = parameters.get('lxc_add_loop', False)
+ self.internal_pipeline.add_action(LxcAddStaticDevices(lxc_add_loop))
self.internal_pipeline.add_action(ConnectLxc())
# Skip AutoLoginAction unconditionally as this action tries to parse kernel message
# self.internal_pipeline.add_action(AutoLoginAction())
@@ -91,11 +95,12 @@ class LxcAddStaticDevices(Action):
worker.
"""
- def __init__(self):
+ def __init__(self, lxc_add_loop=False):
super(LxcAddStaticDevices, self).__init__()
self.name = 'lxc-add-static'
self.description = 'Add devices which are permanently powered by the worker to the LXC'
self.summary = 'Add static devices to the LXC'
+ self.lxc_add_loop = lxc_add_loop
def validate(self):
super(LxcAddStaticDevices, self).validate()
@@ -115,6 +120,15 @@ class LxcAddStaticDevices(Action):
def run(self, connection, max_end_time, args=None):
connection = super(LxcAddStaticDevices, self).run(connection, max_end_time, args)
lxc_name = self.get_namespace_data(action='lxc-create-action', label='lxc', key='name')
+
+ if self.lxc_add_loop:
+ lxc_get_loop_cmd = ['losetup', '-f']
+ loop_device = self.run_command(lxc_get_loop_cmd, allow_silent=True).strip()
+ lxc_loop_cmd = ['lxc-device', '-n', lxc_name, 'add', loop_device]
+ cmd_out = self.run_command(lxc_loop_cmd)
+ if cmd_out:
+ self.logger.debug(cmd_out)
+
# If there is no static_info then this action should be idempotent.
if 'static_info' not in self.job.device:
return connection
--
2.11.0
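With this patch applied, a job definition could opt in from the boot action's parameters. A hedged sketch: only the lxc_add_loop parameter name comes from the patch above; the prompts and timeout are placeholders:

```yaml
actions:
- boot:
    method: lxc
    lxc_add_loop: true
    prompts:
    - 'root@(.*):/#'
    timeout:
      minutes: 5
```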
On 18 Dec 2017 3:45 p.m., "Guillaume Tucker" <guillaume.tucker(a)collabora.com>
wrote:
On 18/12/17 11:45, Neil Williams wrote:
> On 14 December 2017 at 09:47, Guillaume Tucker <
> guillaume.tucker(a)collabora.com> wrote:
>
> On 07/12/17 17:16, Neil Williams wrote:
>>
>> On 7 December 2017 at 16:20, Guillaume Tucker <
>>> guillaume.tucker(a)collabora.com> wrote:
>>>
>>> A change was sent a while ago to add support for the Coreboot /
>>>
>>>> Depthcharge bootloader which is used on Chromebook devices. This
>>>> is useful in particular to avoid having to install U-Boot on
>>>> Chromebook devices. See this Gerrit review here for previous
>>>> history:
>>>>
>>>> https://review.linaro.org/#/c/15203/
>>>>
>>>> I'm now opening this case again to try and get this resolved,
>>>> there seem to be several issues with the original patch that
>>>> would need to be clarified. Also, some things might have changed
>>>> since then in LAVA or Coreboot which could potentially lead to a
>>>> different approach - any feedback on this would be welcome.
>>>>
>>>>
>>>> Thanks for picking this up.
>>>
>>>
>> You're welcome. I've now uploaded a new version which generates
>> the command line file but not the FIT image, it expects the
>> kernel image to be already in this format. Still the same
>> Gerrit number:
>>
>> https://review.linaro.org/#/c/15203/
>>
>> I've also made a patch to add the rk3288-veyron-jaq as
>> a "depthcharge" device type:
>>
>> https://review.linaro.org/#/c/22992/
>>
>> So as a next step, it would be convenient to find a way to have
>> the FIT image generated as part of the LAVA job with a given
>> kernel image, dtb, maybe the .its file and optionally a ramdisk.
>>
>> For the reference:
>>
>> http://git.denx.de/?p=u-boot.git;a=blob;f=doc/uImage.FIT/howto.txt;hb=master
>>
>> To start with, I understand that running mkimage on the
>>
>>> dispatcher is not a valid thing to do, it should receive a
>>>> FIT (flattened image tree) kernel image ready to be booted. This
>>>> complicates things a bit for projects like kernelci.org where
>>>> only a plain kernel image is built and ramdisks are served
>>>> separately, but it's fair enough to say that LAVA is not meant to
>>>> be packaging kernel images on the fly.
>>>>
>>>>
>>> We've come up with a method in the meantime, although it does mean using
>>> LXC, but that makes it completely generic. It's principally designed for
>>> boards which need to munge a kernel and other files into an image to be
>>> transferred to the device using tools like fastboot. This is how KernelCI
>>> will be able to submit boot tests on devices like HiKey and db410c. Sadly,
>>> the example test job is suffering because the db410c devices have a
>>> different problem which is keeping them offline. Matt has been looking into
>>> this.
>>>
>>> https://staging.validation.linaro.org/scheduler/job/203317/definition
>>>
>>> https://staging.validation.linaro.org/static/docs/v2/actions-deploy.html#index-25
>>>
>>>
>> Thanks for the pointers, seems worth investigating.
>>
>> On the other hand, creating the FIT image is a similar process to
>> that of uImage, which is currently being done directly on the
>> dispatcher:
>>
>> https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/actions/deploy/prepare.py#n79
>>
>> So would it make sense to add some code there to support FIT?
>>
>
>
> What is an example command line to mkimage to do this?
>
mkimage -D "-I dts -O dtb -p 2048" -f rk3288-veyron-jaq.its arch/arm/boot/vmlinuz
Is the its file really needed? I added the ramdisk parameter precisely so
lava doesn't need to generate one.
Regards,
Tomeu
Are any external configuration files required?
>
Everything should be in the .its file, and it should also be
possible to generate it on the fly using a template and the LAVA
device properties (kernel load address etc...). If this proves
to not be flexible enough in practice, then I suppose the .its
file could be downloaded although I think we should avoid doing
this if we can.
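For illustration, a minimal .its along the lines of U-Boot's doc/uImage.FIT/howto.txt; the file paths, architecture, and load/entry addresses below are placeholders that such a template would fill in from the LAVA device properties:

```dts
/dts-v1/;

/ {
    description = "Kernel FIT image (sketch)";
    #address-cells = <1>;

    images {
        kernel {
            description = "Linux kernel";
            data = /incbin/("vmlinuz");               /* placeholder path */
            type = "kernel_noload";
            arch = "arm";
            os = "linux";
            compression = "none";
            load = <0>;
            entry = <0>;
        };
        fdt-1 {
            description = "device tree blob";
            data = /incbin/("rk3288-veyron-jaq.dtb"); /* placeholder path */
            type = "flat_dt";
            arch = "arm";
            compression = "none";
        };
    };

    configurations {
        default = "conf-1";
        conf-1 {
            kernel = "kernel";
            fdt = "fdt-1";
        };
    };
};
```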
Then I believe creating the command line file in LAVA should be
>>
>>> fine, although it probably makes more sense to have both the FIT
>>>> image and cmdline file generated by the same build system. In
>>>> any case, both files would need to be served from the dispatcher
>>>> TFTP server to the target device running Coreboot / Depthcharge.
>>>>
>>>>
>>>> That bit is fine, the problem is why this cannot use the existing
>>> temporary
>>> paths which all the other TFTP devices use. Is it just to do some
>>> mangling
>>> of the files?
>>>
>>>
>> This is resolved now with the version I sent yesterday.
>>
>
>
> That makes this review much better, thanks.
>
Great, thanks for confirming.
So the idea was basically to have an option in Coreboot /
>>
>>> Depthcharge to interactively tell it where to find these files
>>>> for the current job to run, say:
>>>>
>>>> <JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/vmlinuz
>>>> <JOB_NUMBER>/tftp-deploy-<RANDOM>/kernel/cmdline
>>>>
>>>> It looks like the current patch in Gerrit relies on this location
>>>> to be hard-coded in the bootloader, which works fine for a
>>>> private development set-up but not for LAVA.
>>>>
>>>>
>>>> That makes very little sense because the whole point of TFTP is that
>>> everything after the SERVER_IP is just a relative path from the TFTP base
>>> directory which is handled by the TFTP daemon itself.
>>>
>>>
>> Ditto.
>>
>> To recap, my understanding is that the "depthcharge" boot support
>>
>>> code in LAVA would need to:
>>>>
>>>> * maybe create the cmdline file with basically the kernel
>>>> command line split up with one argument per line
>>>>
>>>>
>>>> Alternatively, do whatever operations are required in a test shell in
>>> the
>>> LXC and then pass those files to the device - entirely within the test
>>> shell support.
>>>
>>>
>> That, or maybe run mkimage on the dispatcher like for uImage...
>>
>> The cmdline file is now generated on the dispatcher.
>>
>> * or just download the cmdline file along with the vmlinuz FIT
>>
>>>
>>>>
>>> The ready-made FIT kernel image is now downloaded with the
>> version I sent yesterday.
>>
>> * place both the cmdline and vmlinuz FIT files in the job's
>>
>>> TFTP directory on the dispatcher
>>>>
>>>> * turn on the device and open the serial console...
>>>>
>>>> * interactively pass at least the path to the job TFTP
>>>> directory on the serial console (and if possible the server
>>>> IP address as well, and maybe even the individual file names
>>>> rather than hard-coded vmlinuz and cmdline)
>>>>
>>>>
>>>> Isn't this equivalent to what U-Boot already does with TFTP?
>>>
>>>
>> Almost. This part is now all implemented in the last patch I
>> sent. One thing though is that the NFS rootfs parameters are
>> stored in the kernel cmdline file and not set interactively in
>> the bootloader shell.
>>
>
>
> How can these be extended by test writers? We do see requests to add
> arguments to the NFS parameters but adding options to the kernel command
> line itself is all but essential for most testing.
>
This can be done using the {{ extra_kernel_args }} template
variable, see the other change to add base-depthcharge.jinja2:
https://review.linaro.org/#/c/22992/1/lava_scheduler_app/tests/device-types/base-depthcharge.jinja2
If anything more special ever needs to be done with some
parameters such as inserting some IP address, it can be done in
DepthchargeCommandOverlay where the command line file is
generated.
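For instance, a device dictionary could set that variable before the device-type template is rendered; only the variable name comes from the review above, the argument values here are placeholders:

```jinja
{% set extra_kernel_args = 'console=ttyS2,115200n8 loglevel=7' %}
```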
The only command sent is to start the tftp
>> boot with the server IP and the relative paths to the kernel and
>> cmdline files.
>>
>
On this topic, the changes to add the tftpboot command in
Depthcharge are still under review:
https://chromium-review.googlesource.com/c/chromiumos/platform/depthcharge/+/451382
So I think it would actually be wiser to not merge
base-depthcharge.jinja2 until the review above has been merged in
case the command line syntax needs to be adjusted.
>>> * look for a bootloader message to know when the kernel starts
>>> to load and hand over to the next action (login...)
>>>>
>> Done as well, I've now got the veyron-jaq device booting fine
>> with NFS rootfs. There was an issue with adding a ramdisk to the
>> FIT image as it was too big to boot on the device; I will
>> investigate this part to add "ramdisk" boot commands.
>>
>>
>> Please let me know if this sounds reasonable or if we should be
>>
>>> doing anything differently. I think it would be good to have
>>>> some agreement and a clear understanding of how this is going to
>>>> be implemented before starting to work on the code again.
>>>>
>>>
Best wishes,
Guillaume
_______________________________________________
Lava-users mailing list
Lava-users(a)lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lava-users