On 09/10/2018, 16:13, "Neil Williams" <neil.williams@linaro.org> wrote:
“That needs to be done by the automation”: what do you mean by that? I think I understand the general idea, but I’m not sure I follow your meaning. Are you able to give me an example?
Instructing the device to download the tests needs to be done by the automation, or it will not scale. The files need to come from the worker (not a random internet location),
and the instruction to do the work needs to come from the automation as well, not be baked into the image deployed to the DUT. The location needs to be unique to this test job and therefore temporary, lasting only for the life of this test job.
Ok, got it. We are well aligned on this. The image won’t contain anything about testing (that’s a requirement).
This is exactly my intention, sort of. We do need, though, to enable developers who don’t have LAVA access to test the personal boards they are developing on. That’s the main reason for wanting the *same* behaviour without LAVA, perhaps by scripting something up.
However, you will still be relying on something outside LAVA to provide the temporary file locations and something to initiate the operation, and neither of those can
be baked into the image.
Yes, what I had in mind was a simple “python -m SimpleHTTPServer 8000” on the developer computer and running wget on the DUT. But yes, I agree with you.
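A minimal sketch of that developer-side flow, purely for illustration (the archive name, entry-point script, and addresses here are made up, and both ends run on localhost only so the sketch is self-contained; on real hardware the DUT would wget the developer machine's address):

```shell
#!/bin/sh
# Build a throwaway test payload (contents are illustrative).
mkdir -p tests
printf '#!/bin/sh\necho "tests ran"\n' > tests/run.sh
chmod +x tests/run.sh
tar -czf tests.tar.gz tests

# "Developer machine": python3 -m http.server is the Python 3 successor
# to "python -m SimpleHTTPServer".
python3 -m http.server 8000 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# "DUT": download the payload, unpack it, run the entry point.
rm -rf tests
wget -q -O fetched.tar.gz http://127.0.0.1:8000/tests.tar.gz
tar -xzf fetched.tar.gz
./tests/run.sh

kill "$server_pid"
```

This is the home-brew equivalent of what the automation would otherwise do: nothing test-related lives in the image, everything is fetched at run time.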
So you won't be able to have the same behaviour without automation - all you can do is provide a different automation solution to do the tasks that LAVA does - or make
it entirely manual and expect developers to get the files to run their own tests.
Agreed.
When you think about it, this is exactly the same as any other test in LAVA. Developers can TFTP boot their beaglebone-black, but to be able to run the tests, the developer
needs to log in to the DUT, download or install their own scripts and start the execution of those scripts. Whatever gets scripted, the developer still has to get those scripts onto the DUT, and that either happens manually or it happens by replicating some
method of home-brew automation. This step cannot be baked into the image deployed onto the DUT; it simply does not scale. It's the my-one-board-my-one-machine mindset, and it is incompatible with automation where there are lots of machines, lots of users and
lots of devices.
Totally agreed. We don’t plan to bake *anything* into the image which is related to testing. The image is a generic one and we need tooling around it to get those tests onto it.
transfer_overlay just runs a wget and a tar -xaf, then LAVA calls a pre-determined script in a known location (specific to that one test job). The developer has to do
something similar, but the download differs: e.g. LAVA has support for metadata and parameters which change how the test definition is executed, and the lone developer would have to replicate that.
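For my own notes, the DUT side of that amounts to roughly the following sketch. The function name, the URL and the destination are placeholders of mine; in LAVA the real, temporary per-job location is generated by the automation:

```shell
#!/bin/sh
# Sketch of the transfer_overlay pattern: fetch an overlay tarball from a
# per-job temporary location and unpack it on the DUT. deploy_overlay is an
# illustrative helper name, not a LAVA command.
deploy_overlay() {
    url="$1"          # per-job overlay URL, supplied by the automation
    dest="${2:-/}"    # unpack destination; LAVA unpacks over /
    wget -q -O /tmp/overlay.tar.gz "$url"
    # -a lets tar pick the decompressor from the file name suffix
    tar -C "$dest" -xaf /tmp/overlay.tar.gz
}
```

After unpacking, the automation invokes the known entry-point script from inside the overlay; the lone developer would call their own equivalent by hand.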
Agreed. This is exactly what I was expecting.
What you do need to consider is that the *content* of your test definitions also needs to take the developer-outside-LAVA case into account. This is the best practice recommended
in the LAVA docs and by the Linaro QA team. Don't use install: deps: in the test shell definition - write a script which installs everything you need, and put it in the test definition git repo so that the developer has access to it. Make your scripts check for lava-test-case
in $PATH and only call it if it exists (use print or echo or whatever instead) so that the lone developer can call the same scripts and get output without needing to fake up lava-test-case and the other scripts from the LAVA overlay. All of this is up to the test
writer to plan and implement in their own git repositories.
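That $PATH check can be as small as this. The wrapper name report_case is mine, not part of LAVA; lava-test-case itself is the real helper that the LAVA overlay puts in $PATH:

```shell
#!/bin/sh
# Report a result via lava-test-case when running under LAVA, or fall back
# to plain echo for a developer running the same script by hand.
# report_case is an illustrative wrapper name, not a LAVA API.
report_case() {
    name="$1"
    result="$2"   # "pass" or "fail"
    if command -v lava-test-case >/dev/null 2>&1; then
        lava-test-case "$name" --result "$result"
    else
        echo "$name: $result"
    fi
}

# Example usage:
report_case "kernel-boot" "pass"
```

The same script then works unchanged in both environments, which is exactly the point of keeping this logic in the test definition git repo.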
Agreed with the script that acts as “entry point” for encapsulating the logic for setting up the environment and running tests.
I think overall we are on the same page, but of course I wanted confirmation from more experienced people of what I had in mind. With this email you’ve just given it
☺
Are you able to point me to public LAVA pipelines which are similar to what I’d like to do?