Hi,
I have some questions about the new adb/fastboot support in LAVA 2020.02 described
in the recent Tech Days LAVA presentation [1]. The main initial use case is Android boot testing,
before moving on to running Android tests. Android here is AOSP 9/10.
[1] https://connect.linaro.org/resources/ltd20/ltd20-304/
Taking the boot test use case, I was looking at Antonio's Example 1 from his presentation
for fastboot deploy, boot and test. I found documentation for test in the 2020.02 release notes, but
nothing I could see for deploy or boot.
In the LXC approach a typical job definition would have had target and host namespaces,
with deploy and boot actions for the host namespace to create and boot the LXC container. Looking at
the Example 1 deploy from the presentation, it looks like that is largely a target fragment, as
it contains the image binary etc. A host deploy is no longer required, as the Docker
container is now created outside LAVA.
Similarly, the Example 1 boot looks like a target fragment, and a host boot fragment is not
required, as LAVA simply needs to run the specified Docker container. Then finally, in the
Example 1 test, the scope is the specified Docker container, so no namespace is required.
Is my interpretation of the Example 1 slides correct? I tried some fragments that were producing
fundamental errors, which prompted me to check here. Although, as is often the case, writing it out
helps you get it straighter in your mind.
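For concreteness, the sort of fragments I have been experimenting with, reconstructed from the slides, look roughly like the following. The Docker image name, URLs and test definition are placeholders, and the exact keys are my assumption rather than something I have found documented:

actions:
- deploy:
    to: fastboot
    docker:
      image: my-adb-fastboot-tools        # placeholder image with adb/fastboot installed
    images:
      boot:
        url: https://example.com/artifacts/boot.img
      system:
        url: https://example.com/artifacts/system.img

- boot:
    method: fastboot
    docker:
      image: my-adb-fastboot-tools        # same placeholder image
    timeout:
      minutes: 15

- test:
    docker:
      image: my-adb-fastboot-tools        # same placeholder image
    timeout:
      minutes: 10
    definitions:
    - from: git
      repository: https://example.com/my-test-definitions.git    # placeholder repository
      path: android/boot-test.yaml                               # placeholder test definition
      name: android-boot

No namespaces anywhere, which is what I understood from the slides.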
If you have reasons to want to control the fastboot flashing on the host, is that possible?
For example, if you had the host-side process scripted.
Does the OS of the fastboot Docker image need to be specified?
I'm running 2020.02 on the Worker via lava-docker, with Docker support inside the Worker
container provided by sharing the host Docker socket to gain access to the Docker daemon.
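For reference, the socket sharing is roughly the following (docker-compose style sketch; the image name and service layout are placeholders for our actual lava-docker derived setup):

services:
  lava-dispatcher:
    image: my-lava-dispatcher:2020.02     # placeholder for our lava-docker derived image
    volumes:
      # share the host Docker daemon socket so the worker can start
      # the fastboot/adb containers on the host
      - /var/run/docker.sock:/var/run/docker.sock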
Regards
Steve
Hi,
I'm trialling the fastboot support in Docker introduced in LAVA v2020.02 and getting a fundamental job
error about the deploy action of the job definition. I've checked against the examples from the recent
Linaro Tech Day LAVA presentation but I can't see the source of the error. Could someone familiar with
this new support please take a look at the job definition? I expect it must be something obvious.
Error snippet:
error_msg: None of the deployment strategies accepted your deployment parameters, reasons given:
  overlay: "to" parameter is not "overlay"
  docker: 'docker' not in the device configuration deploy methods
  download: "to" parameter is not "download"
  qemu-nfs:
Full error:
https://lava.genivi.org/scheduler/job/718
Job definition:
https://lava.genivi.org/scheduler/job/718/definition
One possible cause I can think of is that the LAVA Worker is running 2020.02, whilst the Server is running 2019.03+stretch.
My assumption is that the job definition parsing occurs in the dispatcher, but maybe that is not correct? The Server will be
upgraded to match the Worker of course, but we took a two-step approach whilst we first looked into Android support.
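For what it's worth, one possibly wrong reading of the "'docker' not in the device configuration deploy methods" part of the error is that the rendered device configuration would need to list docker among its deploy methods, roughly along these lines (structure inferred from the error text only, not from documentation I have found):

actions:
  deploy:
    methods:
      fastboot:
      docker: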
Thanks for any help,
Steve
Hi Team,
I have recently installed lava-server (master) on my Debian machine.
My goal is to configure lava-server as a master with multiple workers.
Lava-server: master
Debian configuration:
root@LavaServer:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Now I am trying to set up multiple workers on other machines.
Q: Can I configure/install the worker on an operating system other
than Debian?
If not, please describe the process or share a useful doc.
Thanks and regards,
Nagendra S
Hi,
Initially we plan to connect 5 boards to my lava-master and
lava-dispatchers.
Please can someone let me know the server configurations for this setup, as
listed below?
a. Can you let me know the LAVA master server configuration?
(processor/DDR/memory etc. configuration details)
b. Can you let me know the dispatcher server configuration?
(processor/DDR/memory etc. configuration details). (I guess the
dispatcher server configuration may be lower than the master server
configuration.)
c. PDU (I guess we initially plan to connect 5 targets). (If possible,
please share an Amazon link to order from.)
d. Serial port hub (I guess we initially plan to connect 5 targets).
(If possible, please share an Amazon link to order from.)
Regards,
Koti
Hi, there,
Is there somewhere in LAVA where it is possible to find out who cancelled a job? Or whether it was cancelled via lavacli or manually?
Or the detailed reason why the job was cancelled?
We are struggling to find out why our jobs are sometimes cancelled; we are not sure whether someone did it carelessly or whether our external program did it by chance, especially since many people use the same master.
Regards,
Lary
Sometimes, for some devices, when LAVA enters "/lava-55904/bin/lava-test-runner /lava-55904/0"
it looks like it will print:
-sh: /lava-55904/bin/lava-test-runnera-55904/0: No such file or directory
It looks like some characters are missing.
We increased the test_character_delay, which seems to help, but I'm interested in the details, specifically the following, which you mention in the code:
What exactly does "slow serial problems (iPXE)" mean? Could you explain more about it, or point me at any reference material I can look at?
Then I could know whether it is exactly the same problem I have.
>>> Extends pexpect.sendline so that it can support the delay argument which allows a delay
between sending each character to get around slow serial problems (iPXE).
pexpect's own sendline does exactly the same thing: it calls send for the string and then os.linesep.
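For reference, the change we made is roughly the following in the device configuration (our understanding is that this is what test_character_delay in the device dictionary ends up as; the value is just what we picked experimentally):

character_delays:
  boot: 0       # left at the default
  test: 100     # delay between characters for test shell commands (value we picked experimentally)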
Hi, there,
Frequently, I find that when a job becomes Incomplete, a health check job immediately runs on that device automatically to check the device's status.
But strangely, sometimes I don't see that health check job.
So, my question is: when I do see that health check job, is it just by chance? Or is it by design that there will be a health check after an incomplete job? Thanks.
Regards,
Larry
Hello,
I hope that the mail subject sets up an easy, cheerful background for
this discussion ;-). This definitely goes into the "stupid questions"
department. My plan was to collect "evidence"/feedback from my
colleagues, then crash the LAVA/QA rooms at Connect BUD20 and ask
what I/we are doing wrong. As circumstances changed, but the "testing
debt" only builds up, there's little choice but to try to figure out these
matters over a much narrower pipe.
So, before proceeding to the question per se, let me theorize that the
reason such questions come up at all is that there are sufficient differences in
the intended LAVA usage patterns. In other words, how I'd like to use
LAVA on the LITE team side may differ from how the LAVA team intends it to
be used, or from how the QA team (probably the biggest user) uses it. The
issue? How I intend to use it is IMHO one of the most basic ways to use a
test system.
So, what's that usage? Well, I'm not much interested in "interactive"
use (submit jobs manually from my machine). Our interest is in
unattended automated CI, of which the testing system is the second half
after the build system. So let me recap how our build system, Jenkins,
works. Normally, it just builds binaries and uploads them to a
publishing server. It's invisible to me in this phase, and my
engineering work goes uninterrupted. But when a build fails, I get an
email with details about a failure. And I'll continue to get them while
it continues to fail. So, the only option I have is to go see the
failure, investigate, and fix it. When I arrive at Jenkins, I can easily
see which jobs failed and which did not, then within each job, see which
builds failed and which succeeded. That's very easy, because failed
things are red, and successful things are green.
So, we've now arrived at the main question of this email - why don't I
seem to be able to use LAVA in the same way? Why does LAVA offer only
"Incomplete" and "Complete" job statuses? "Incomplete" is understood -
it's an infrastructure failure, so such a job is definitely "failed". But
"Complete" doesn't give me any useful information about whether the job
succeeded or failed, because a "Complete" job may still have any number
of failed test cases. And that's exactly the "last mile" LAVA misses:
for any test job, I want to see the cumulative number of test cases which
failed, straight on a page like
https://lite.validation.linaro.org/scheduler/alljobs . Then, I'd like
to filter out jobs which have this number > 0. Then I'd like to receive a
notification only for "failed" jobs, with "failed" defined as
"status != Complete OR failed_testcases > 0".
So, what am I missing, and how do I make LAVA work like the above?
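For completeness, the closest thing I've found so far is the per-job notify block, which as far as I can tell can trigger on job status but not on the number of failed test cases (sketch below; the email address is a placeholder):

notify:
  criteria:
    status: incomplete
  recipients:
  - to:
      method: email
      email: someone@example.com      # placeholder recipient
  verbosity: verbose

That covers the "Incomplete" half of my definition of "failed", but not the "Complete with failed_testcases > 0" half.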
Thanks,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi,
After the recent Linaro Tech Day LAVA presentation I have been looking into the new
support in LAVA v2020.02 for running fastboot deployment on the host in Docker rather than LXC.
I am running my LAVA worker in a Docker image derived from the Linaro lava-docker images,
and so wanted to ask whether the new feature supports Docker-in-Docker for the fastboot
containerisation?
Regards
Steve
Hello,
I just started working with hardware, firmware and LAVA. You have extensive
documentation, but for me it's sometimes difficult to find things.
I want to submit a job to just one device, such as my "bbb-03", and not to
any beaglebone-black. Is there a device tag to choose a device, similar to
"device_type: "? My next question is whether there is also a worker tag for a
job submission? For example, I just want to run all "beaglebone-black" jobs on my
"worker2".
Is there also a list of all job submission tags, with examples? I have only
found the one for the "actions" part.
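For context, this is the kind of thing I was imagining at the top of a job submission (the tag name is made up; my understanding is that a tag would first have to be attached to the bbb-03 device by an admin):

device_type: beaglebone-black
tags:
- bbb-03-only        # made-up tag assumed to be assigned only to the bbb-03 device
job_name: smoke test on bbb-03 only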
I'm sorry for disturbing you with what may be a stupid question, but thank you
in advance for your answer.
I wish you a nice day