Hello,
I have run into another gap in my Lava knowledge, so I need a bit of a push
here. I executed a cyclic test (timer interrupt measurements, i.e. reaction
time), using hackbench as the application load, which introduces a tremendous
load on the core (at the peak, a load average of around 300 over 1m/5m/15m).
So, poking around, I found the following JSON script:
https://github.com/kernelci/lava-ci/blob/master/templates/cyclictest/generi…
Since I know (at least this is my understanding) that Lava V2 works as
a pipeline, the test jobs must be broken into blocks (deploy, boot
and test).
I would like to change this JSON script a bit to use hackbench as the load,
but much more important: how can I use this JSON script as a Lava test job?
I guess I need to convert this script into some YAML format, as
explained/described above... Am I correct?
How do I do this? Is there any description, example or advice?
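For orientation, here is a minimal sketch of what such a V2 pipeline job
could look like in YAML; the URLs, device type and test-definition repository
are placeholders, not a working configuration:

    device_type: beaglebone-black       # placeholder
    job_name: cyclictest under hackbench load
    visibility: public
    priority: medium
    timeouts:
      job:
        minutes: 30

    actions:
    - deploy:
        to: tftp
        kernel:
          url: http://localhost:8010/zImage                  # placeholder
          type: zimage
        ramdisk:
          url: http://localhost:8010/initramfs/initramfs.cpio.gz
          compression: gz
        dtb:
          url: http://localhost:8010/am335x-boneblack.dtb    # placeholder

    - boot:
        method: u-boot
        commands: ramdisk
        prompts:
        - 'root@'

    - test:
        timeout:
          minutes: 20
        definitions:
        - repository: https://github.com/example/test-definitions.git  # placeholder
          from: git
          path: rt-tests/cyclictest-hackbench.yaml                     # placeholder
          name: cyclictest-hackbench

The cyclictest invocation itself (and the hackbench load) would then live in
the test definition referenced by the test action.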
Thank you,
Zoran
_______
Hello Folks,
I have a really interesting problem with Lava, this time while executing tests.
Lava versions:
root@stretch:/usr/share# dpkg -l lava-server lava-dispatcher
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name             Version             Architecture  Description
+++-================-===================-=============-====================================================
ii  lava-dispatcher  2018.2.post2-1+str  amd64         Linaro Automated Validation Architecture dispatcher
ii  lava-server      2018.2-1+stretch    all           Linaro Automated Validation Architecture server
root@stretch:/usr/share#
_______
This problem appears while downloading the initramfs. The initramfs I am
serving from localhost:8010 has a fixed size of 46275637 bytes (decimal).
The downloaded ramdisk.cpio.gz.uboot should in theory be exactly 64 bytes
larger (the size of the U-Boot legacy image header), i.e. 46275701 bytes.
Instead, I am getting the following sizes for the same job, repeated three times in Lava:
Job #113: 46597208 (decimal)
Job #114: 46596349 (decimal)
Job #115: 46595788 (decimal)
In other words, each time I am downloading the same ingredients:
http://localhost:8010/initramfs/initramfs.cpio.gz
http://localhost:8010/cip-example/cip_v4.4.120-cyclic/v4.4.120-cip20-rt13/a…
http://localhost:8010/cip-example/cip_v4.4.120-cyclic/v4.4.120-cip20-rt13/a…
where all three times I get the exact same sizes for:
zImage - 4167704 (3f9818 hex)
am335x-boneblack.dtb - 31552 (7b40 hex)
I removed u-boot-tools, then installed it back, but this did not
help. service tftpd-hpa restart did not help either.
So, I will continue investigating this problem myself, but did anybody
notice the same?
Thank you,
Zoran
Good morning everyone,
I would like to know if someone here has already faced the following message
while opening a LAVA results page or the LAVA "all jobs" page:
I am looking through the LAVA documentation to check whether it is a known
issue, but if someone here already knows something about it, that could help me as well.
regards,
On 19 March 2018 at 16:26, Neil Williams <neil.williams at linaro.org> wrote:
>The problem of picking a number is that it depends a lot on the resources
>available to the server and the performance of the devices themselves.
I get the point. But that information is helpful as well, so it might be worth mentioning some examples of the dimensions in which LAVA can be (and actually is) used. I have read most of the LAVA documentation, but had no idea that there are actually test jobs with 50,000 test cases running on your servers.
>Templating.
>
>Check out how Jinja2 is used for the server-side device configuration
>templates - Jinja2 can output any text-based format, we chose YAML. The
>same principles are used by the Linaro QA team to produce the test job
>submissions. Templates live in version control, the commit hash of the
>template gets included into the metadata of the output.
That’s a very good hint, thank you. I haven’t thought of this, yet. Are the repositories containing your templates publicly available?
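For illustration, a job template along those lines might look like the sketch
below; the variable names and the metadata key are my own examples, not
Linaro's actual templates:

    {# submit-job.jinja2 -- rendered with device_type, build_id,
       kernel_url and template_commit supplied by the CI scripts #}
    device_type: {{ device_type }}
    job_name: smoke-test-{{ build_id }}

    metadata:
      template.commit: {{ template_commit }}   # hash of this template in SCM

    actions:
    - deploy:
        to: tftp
        kernel:
          url: {{ kernel_url }}
    - boot:
        method: u-boot
        commands: ramdisk
        prompts:
        - 'root@'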
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Registered office: D-21079 Hamburg
Register court: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz
>150 test cases does not necessarily mean large - we have test jobs which
>produce 50,000 test cases per test job, that is when a large test set can
>become a problem, depending on the available resources on the server. It
>was these test jobs which led to the sections in the documentation which
>you mention later. If your server doesn't struggle with the current test
>jobs, you might not have anything to do at this stage.
>
>However, if you have clear groups of test cases, you should investigate
>using test sets which preserve those boundaries within the reported results:
>https://staging.validation.linaro.org/static/docs/v2/results-intro.html#tes…
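(For reference, test sets are opened and closed from within the test shell
itself; a minimal test-definition sketch, with illustrative names:)

    metadata:
      format: Lava-Test Test Definition 1.0
      name: grouped-tests                  # illustrative
      description: "test cases grouped into test sets"

    run:
      steps:
      - lava-test-set start network        # open a test set
      - lava-test-case ping --shell ping -c 4 localhost
      - lava-test-set stop                 # close it again
      - lava-test-set start storage
      - lava-test-case mount --shell mount
      - lava-test-set stop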
Thanks for the information. Perhaps it would be a good idea to add these figures to the documentation, so that new users have an idea of what a “large” test set is.
I am just starting out with LAVA, so I haven't set up any productive tests yet. At the moment I am creating a concept of how we can include LAVA in our workflow. Part of this is the question of how we handle test jobs and how we store them in our SCM.
>MultiNode does offer a possible option but MultiNode itself is complex and
>not all test jobs would be able to use it - essentially you can use
>MultiNode with SSH to have multiple logins to the same device running the
>same software. Problems include issues with locking resources, load on the
>device from running tests in parallel, problems within the test being able
>to run in parallel in the first place amongst others.
>
>From your enquiry, it does not sound as if you need any of that. You might
>want to look at a wrapper script which consumes the output of the tests and
>filters out noise but apart from that, 150 test cases is really not large.
>We routinely run functional tests which produce more test cases than that:
>https://staging.validation.linaro.org/results/214157
>https://staging.validation.linaro.org/scheduler/job/214157
I actually have to use MultiNode for some of our test cases anyway. These test cases need a remote server or client connected to the DUT (e.g. for testing hardware interfaces like RS485, CAN, etc.).
And this is actually part of the question: when I declare all of my test cases in one test job, I have to declare the remote nodes for ALL of the tests in there as well. I think this makes the test job huge and confusing, though. How do you handle such cases? Do you ever test that kind of interface at all?
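For reference, this is roughly how the role declarations look in a MultiNode
job; the device types and counts are placeholders:

    protocols:
      lava-multinode:
        roles:
          dut:
            device_type: beaglebone-black  # placeholder
            count: 1
          remote:
            device_type: qemu              # placeholder
            count: 1
        timeout:
          minutes: 10

    actions:
    - deploy:
        role:
        - dut                              # this action only runs on the DUT
        to: tftp
        kernel:
          url: http://example.com/zImage   # placeholder

Every action needing the remote side has to carry such role lists, which is
exactly what makes a job with many such test cases grow quickly.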
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello,
I have a very interesting problem: I would like to do Lava testing of
BBB01, but I am not succeeding.
Simple and plain: my U-Boot scripts somehow get rejected; they are
not executed.
The download works correctly and I have all the correct ingredients in
place, but the U-Boot scripts are not activated at the U-Boot prompt.
Here is my output from the testing:
https://pastebin.com/hTQQSLU1
Instead, the board boots from the SD card (from /boot on the rootfs).
I have no idea why my scripts are not executed.
Help appreciated!
Thank you,
Zoran
Hi Neil,
thanks for your reply.
>>(*) As a sidenote I'd like to add that using "BootloaderCommandsAction" alone does not work. I had to add "BootloaderCommandOverlay" as well, because the "commands" are set at the end of the "run" function of this class:
>>
>> self.set_namespace_data(action='bootloader-overlay', label=self.method, key='commands', value=subs)
>>
>>Is this by design? It seems like a bug to me, since I did not find any documentation about this dependency.
>
>The majority of classes in lava-dispatcher have dependencies which are entirely determined by the way that the test devices need to operate. A new Strategy class would ensure that both are added for this use case.
>
>You also need to consider whether the test job itself contains a test action and therefore whether an overlay is needed at all. Simple boot testing doesn't have to include a test action at all, just deploy and boot.
That's the point: I don't need the overlay action in my strategy. But I had to include it, because the command action did not work without it. And to clarify things: The overlay you are talking about (adding LAVA files to the rootfs) is not the overlay I am talking about (replacing placeholders in the commands with actual values).
>>I assume that simply no one has ever used "BootloaderCommandsAction" without "BootloaderCommandOverlay", so no one ever noticed. In my opinion "BootloaderCommandsAction" should work on its own.
>
>No, it should not. The commands frequently need to be modified by other actions.
That's okay. But why are they set in "BootloaderCommandOverlay" in the first place? This is totally unexpected. I am new to the code, so I don't know anything about the internals. But from a user's perspective, it seems like bad design to have an action's essential variable set in a different action.
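For readers following along in the sources, the dependency is exactly this
set/get pairing on the namespace data (paraphrased, not a verbatim copy of
lava-dispatcher):

    # In BootloaderCommandOverlay.run(): store the substituted commands
    self.set_namespace_data(action='bootloader-overlay', label=self.method,
                            key='commands', value=subs)

    # In BootloaderCommandsAction.run(): read them back out again
    commands = self.get_namespace_data(action='bootloader-overlay',
                                       label=self.method, key='commands')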
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello everyone,
I have a device with a very basic proprietary bootloader and want to automate it with LAVA. I figured out that the "minimal" bootloader class covers basically everything I need, except that it cannot send commands to the bootloader. So for a quick test, I added "self.internal_pipeline.add_action(BootloaderCommandsAction())" to actions/boot/minimal.py. With this modification (*) I was able to create a device class which can run a smoke test successfully on my device.
In a former question on this mailing list concerning the integration of my bootloader, someone recommended that I implement a new boot strategy. Would you accept a code contribution which adds a new boot strategy differing from the "minimal" strategy only in this one addition? Or would it perhaps make sense to add the "BootloaderCommandsAction" directly to the "minimal" strategy?
(*) As a sidenote I'd like to add that using "BootloaderCommandsAction" alone does not work. I had to add "BootloaderCommandOverlay" as well, because the "commands" are set at the end of the "run" function of this class:
self.set_namespace_data(action='bootloader-overlay', label=self.method, key='commands', value=subs)
Is this by design? It seems like a bug to me, since I did not find any documentation about this dependency. I assume that simply no one has ever used "BootloaderCommandsAction" without "BootloaderCommandOverlay", so no one ever noticed. In my opinion "BootloaderCommandsAction" should work on its own.
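To make the footnote concrete, the resulting pipeline setup looks roughly
like this (a sketch of my local modification to actions/boot/minimal.py,
not upstream code):

    # sketch of the quick-test modification in actions/boot/minimal.py
    # BootloaderCommandOverlay substitutes the placeholders in the
    # device-type commands and stores the result via set_namespace_data()
    self.internal_pipeline.add_action(BootloaderCommandOverlay())
    # BootloaderCommandsAction reads those commands back from the
    # namespace and sends them to the bootloader prompt
    self.internal_pipeline.add_action(BootloaderCommandsAction())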
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello everyone,
we have a quite large test suite (> 150 test cases, and counting, divided into ~50 groups) which we want to run on our devices. This is required for each software release and would be nice for nightly builds as well.
Our deployment method flashes kernel and root filesystem onto the internal flash memory, which takes ~3 minutes. Booting the OS from RAM or using remote filesystems (NFS) is not an option for us. We need to run all tests on a device booted completely from its internal flash memory. Ideally our OS image should be deployed once and then all the tests should run on top of that deployment.
According to the LAVA documentation, best practice is not to put too many tests into one job file, as this would be hard to maintain and logs would become huge and difficult to read.
What is recommended in such a scenario? My initial tendency was to create a job for each test group (each containing ~3 test cases on average), which seems reasonable to me. However, this would result in 50 deployment cycles of 3 minutes each, i.e. 2.5 hours spent on basically unnecessary work. This, in turn, does not seem reasonable to me.
Is there a possibility to combine jobs to run on the same device subsequently, so that the images need to be deployed only once? Or can jobs be nested somehow, so that one job does the deployment and contains sub-jobs which perform the actual tests? Or are there any other recommendations for this?
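To illustrate the first option: as far as I understand, a single V2 job may
contain one deploy and boot followed by several test actions, along these
lines (URLs and paths are placeholders):

    actions:
    - deploy:                  # runs once, ~3 minutes
        to: tftp               # placeholder -- whatever method flashes our image
        kernel:
          url: http://example.com/zImage            # placeholder
    - boot:
        method: u-boot
        commands: ramdisk
        prompts:
        - 'root@'
    - test:                    # first test group
        definitions:
        - repository: http://example.com/tests.git  # placeholder
          from: git
          path: groups/group-01.yaml                # placeholder
          name: group-01
    - test:                    # further groups, same booted system
        definitions:
        - repository: http://example.com/tests.git
          from: git
          path: groups/group-02.yaml
          name: group-02

But this brings all groups back into one job file, which is exactly what the
documentation advises against -- hence the question.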
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello to everyone,
I need to add some scripts to
/etc/lava-server/dispatcher-config/device-types.
The documentation says to contact the Lava mailing list in such a case, to get some guidance.
It says here:
https://staging.validation.linaro.org/static/docs/v2/device-integration.html
"Please talk to us *before* starting on the integration of a new device
using the Mailing lists
<https://staging.validation.linaro.org/static/docs/v2/support.html#mailing-l…>."
The device I need to add is Renesas iwg20m:
https://mp.renesas.com/en-us/boards/iW-RainboW-G20D-RZG1M_RZG1N_QsevenDevKi…
On the device I have a working MLO and U-Boot, with a U-Boot environment
which boots from MMC. I also have ser2net working in the Lava VM; it works
seamlessly. So, to my best understanding, these are all the prerequisites
for adding the device type?
My Lava is upgraded:
||/ Name             Version            Architecture  Description
+++-================-==================-=============-====================================================
ii  lava-dispatcher  2018.2.post2-1+s   amd64         Linaro Automated Validation Architecture dispatcher
ii  lava-server      2018.2-1+stretch   all           Linaro Automated Validation Architecture server
I have the beaglebone-black
<http://localhost:8080/scheduler/device_type/beaglebone-black> device type
with a bbb01 device added to it, which I finally got working with Lava. I
updated the beaglebone-black.jinja2 script, created bbb01.jinja2 in
/etc/lava-server/dispatcher-config/devices, and added this device to the
beaglebone-black device type.
I am wondering what else I should do, besides writing an iwg20m.jinja2
script which inherits from base-uboot.jinja2.
The iwg20m is similar to the Beaglebone Black (it is, after all, based on
armv7 silicon), so I can model iwg20m.jinja2 closely on
beaglebone-black.jinja2.
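As an illustration, the device-type template could then start out as little
more than this; the variable names and values are placeholders, and the exact
names expected should be checked against base-uboot.jinja2:

    {% extends 'base-uboot.jinja2' %}

    {# iwg20m.jinja2 -- sketch; take the real values from the
       working U-Boot environment on the board #}
    {% set console_device = console_device | default('ttySC0') %}
    {% set baud_rate = baud_rate | default(115200) %}
    {% set bootloader_prompt = bootloader_prompt | default('=>') %}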
And, by the way, how do I add the device type? Using the GUI? The CLI? Is there any description?
Thank you,
Zoran