On Tue, 9 Oct 2018 at 15:19, Diego Russo <Diego.Russo@arm.com> wrote:
From: Neil Williams <neil.williams@linaro.org>
Date: Tuesday, 9 October 2018 at 08:31
To: Diego Russo <Diego.Russo@arm.com>
Cc: Lava Users Mailman list <lava-users@lists.linaro.org>
Subject: Re: [Lava-users] Getting tests on DUT: other solutions?

 

On Mon, 8 Oct 2018 at 17:15, Diego Russo <Diego.Russo@arm.com> wrote:

Hello,

Currently I'm testing a board type which deploys OE via u-boot-ums. Before flashing the image onto the device, the LAVA slave modifies the image, copying the test definitions onto it. Once the image contains all the tests, it is dd'ed onto the device, the device is booted and the tests run.
This is OK so far, but as soon as we enable signed images (the rootfs will be R/O, although we will have other R/W partitions) this won't work any more, because we are changing the image and it would need to be re-signed. Moreover, this approach is very board-specific.

I'm here to investigate alternative solutions with a more "generic" approach, which a developer (without LAVA) can also run. The only assumption is that *the DUT always has wired network connectivity with the SLAVE*.

 

Then you can use the existing transfer-overlay support. https://master.lavasoftware.org/static/docs/v2/actions-boot.html#transfer-overlay
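For reference, enabling it is a small addition to the boot action of the job definition. A sketch along these lines (the boot method and the exact download/unpack commands depend on your device and image; see the linked docs for the supported keys):

```yaml
- boot:
    method: minimal
    transfer_overlay:
      # command the DUT runs to fetch the overlay from the worker
      download_command: wget -S --progress=dot:giga
      # command the DUT runs to unpack it at the root of the filesystem
      unpack_command: tar -C / -xzf
```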

 

That’s a good suggestion, thanks.

 

The restriction you didn't mention but which must be implemented is: the DUT must have a static IPv4 address.

 

I guess you are right. It was implicit, but yes, the DUT and the SLAVE need to be able to talk at the IP level.

The workflow I have in mind is something like:
1) I have a signed image which I deploy onto the DUT
2) Boot the DUT
3) Instruct the device to get tests from somewhere (either from the SLAVE or internet)

 

That needs to be done by the automation or it will not scale. The DUT cannot know, in advance, where to find the specific "thing" for this test job, as opposed to the slightly different "thing" meant for the next or previous test job on another DUT of the same kind, right next to it on the same shelf in the same lab - not without building an enormous matrix of almost identical images, which does not scale.

 

“That needs to be done by the automation” - what do you mean by that? I might understand the words, but I'm not sure I understood your meaning. Are you able to give me an example?


Instructing the device to download the tests needs to be done by the automation or it will not scale. As well as coming from the worker (not a random internet location), the instruction to do the work needs to come from the automation as well, not baked into the image deployed to the DUT. The location needs to be unique for this test job and therefore temporary, for the life of this test job.
4) Run those tests

The step I'd like to solve is 3). I was thinking something like that:
* download/compile all I need on the SLAVE (it is not possible to do it on the DUT due to limited resources/libraries/tooling)
* set up some sort of HTTP server on the SLAVE in order to serve those files
* wget those files onto the DUT
* set up and execute the tests.

The above approach should work WITHOUT LAVA as well. Basically, replace SLAVE with "developer computer".
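The SLAVE side of those steps can be approximated on a developer machine with stock tools. A minimal sketch - the directory layout, the port and the run-all.sh entry point are all illustrative names, not anything LAVA defines:

```shell
#!/bin/sh
set -e
# Developer-machine side (stands in for the SLAVE): pack the tests
# and serve them over HTTP. All names here are illustrative.
mkdir -p tests
printf '#!/bin/sh\necho "tests passed"\n' > tests/run-all.sh
chmod +x tests/run-all.sh
tar -czf tests.tar.gz tests
# Serve the tarball from this directory, e.g.:
#   python3 -m http.server 8000

# DUT side (replace the address with the developer machine's):
#   wget http://192.168.0.10:8000/tests.tar.gz
mkdir -p dut && tar -xzf tests.tar.gz -C dut   # simulating the DUT-side unpack
dut/tests/run-all.sh
```

The same two halves map directly onto the SLAVE/DUT split when LAVA drives the process instead.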

 

It needs to be under the control of LAVA. Avoid thinking that the signed image on the DUT should go off and find this thing automatically - there are always temporary path names involved, or it simply will not scale. All LAVA does is initiate the call to something like wget on the DUT with a known path name which includes a unique location for this test job (so that another test job on a second DUT in the same lab can simultaneously get its own), unpacks it with tar *and* knows where to find the script to start the process.

 

This is exactly my intention, sort of. We do, though, need to enable developers who don’t have LAVA access to test the boards they are developing on. That’s the main reason for wanting the *same* behaviour without LAVA, maybe by scripting something out.


However, you will still be relying on something outside LAVA providing the temporary file locations and something to initiate the operation and neither of those can be baked into the image.

So you won't be able to have the same behaviour without automation - all you can do is provide a different automation solution to do the tasks that LAVA does - or make it entirely manual and expect developers to get the files to run their own tests.

When you think about it, this is exactly the same as any other test in LAVA. Developers can TFTP boot their beaglebone-black but, to be able to run the tests, the developer needs to log in to the DUT, download or install their own scripts and start the execution of those scripts.

Whatever gets scripted, the developer still has to get those scripts onto the DUT, and that either happens manually or by replicating some method of home-brew automation. This step cannot be baked into the image deployed onto the DUT; it simply does not scale. It's the my-one-board-my-one-machine mindset and it is incompatible with automation where there are lots of machines, lots of users and lots of devices.

transfer_overlay just runs a wget and a tar -xaf, then LAVA calls a pre-determined script in a known location (specific to that one test job). The developer has to do something similar, but the download differs - e.g. LAVA has support for metadata and parameters which change how the test definition is executed, and the lone developer would have to replicate that.
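To make that concrete, here is a local simulation of those three steps. The overlay contents, the worker URL and the start-script path are placeholders - in a real job LAVA generates per-job values for all of them:

```shell
#!/bin/sh
set -e
# Stand-in for the overlay tarball the worker would serve for one job:
mkdir -p overlay/lava-tests/bin
printf '#!/bin/sh\necho "overlay script ran"\n' > overlay/lava-tests/bin/start.sh
chmod +x overlay/lava-tests/bin/start.sh
tar -czf overlay.tar.gz -C overlay .

# On the DUT, LAVA runs the equivalent of:
#   wget http://<worker-address>/<unique-job-path>/overlay.tar.gz
mkdir -p rootfs
tar -C rootfs -xaf overlay.tar.gz       # the tar -xaf step
rootfs/lava-tests/bin/start.sh          # the pre-determined start script
```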

What you do need to consider is that the *content* of your test definitions also needs to take the developer-outside-LAVA into account. This is recommended best practice in the LAVA docs and by the Linaro QA team. Don't use install: deps: in the test shell definition - write a script which installs everything you need and put it in the test definition git repo, so that the developer has access to it. Make your scripts check for lava-test-case in $PATH and only call it if it exists (use print or echo or whatever instead), so that the lone developer can call the same scripts and get output without needing to fake up lava-test-case and other scripts from the LAVA overlay. All of this is up to the test writer to plan and implement in their own git repositories.
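That $PATH check can be wrapped in a small helper at the top of each test script. A sketch - report_result and the case name are illustrative, and the lava-test-case invocation assumes its documented `lava-test-case <name> --result <pass|fail>` form:

```shell
#!/bin/sh
# Helper implementing the advice above: only call lava-test-case when
# it is on $PATH, so the same script runs on a developer's bench.
report_result() {
    # $1 = test case name, $2 = pass or fail
    if command -v lava-test-case >/dev/null 2>&1; then
        lava-test-case "$1" --result "$2"
    else
        echo "$1: $2"       # plain output for the lone developer
    fi
}

# Illustrative usage:
report_result example-check pass
```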

 

The requirement/restriction is always the same: a working IPv4 network connection between the DUT and the computer.

We haven’t decided how to do it yet as we are still in the requirements phase. No implementation details have been decided.

 

LAVA provides an apache service on the worker to support the download.

 

Oh, that’s quite interesting.

 

Anyway in general it seems possible to do what I’d like to do and LAVA does provide tools for it. That’s a good start.

Are you able to point me to public LAVA pipelines which are similar to what I’d like to do?

Thanks

 


Is it something I can architect with LAVA? Does LAVA give me this flexibility?

Thanks for your help

Regards

--
Diego Russo
Staff Software Engineer - diego.russo@arm.com
Direct Tel. no: +44 1223 405920
Main   Tel. no: +44 1223 400400
ARM Ltd. CPC1, Capital Park, Cambridge Road, Fulbourn, CB21 5XE, United Kingdom
http://www.diegor.co.uk - http://twitter.com/diegor
http://www.linkedin.com/in/diegor

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
_______________________________________________
Lava-users mailing list
Lava-users@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lava-users