Hello,
we are trying to add external interfaces to the worker to connect with the DUTs, for example a 4-port USB-to-RS232 converter. Our DUTs have multiple RS232 ports, which are to be tested via this remote interface.
We have already figured out how to integrate this hardware into the LAVA environment, so that it can be used within the LAVA LXC (using static_info in the device dictionary, resulting in the four /dev/ttyUSB* devices being visible there).
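For illustration, the relevant line in our device dictionary currently looks roughly like this (the serial number is a placeholder):

    {% set static_info = [{'board_id': 'AB0123456789'}] %}

Presumably a second converter would mean a second entry in that list, but see the next question.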
First question: we need several of these converters attached to the worker. How do we integrate them into LAVA? They all have the same board_id, vendor_id and product_id. If I specify the same board_id multiple times in the device dictionary, the device is still only added once.
Second question: we need a way to specify which of the /dev/ttyUSB* ports a certain RS232 port of the DUT is connected to. The natural place for such information seems to be the device dictionary. But how can we access this information within a LAVA test shell?
The documentation specifies some similar mechanism for energy probes:
https://validation.linaro.org/static/docs/v2/admin-lxc-deploy.html?highligh…
It says "Devices which are not directly attached to the worker can also be supported, for example energy probes which communicate over the network".
As far as I can tell from the code, though, this seems to be a hard-coded feature without any possibility of adding other custom hardware. Is that correct?
If yes, why isn't there a generic mechanism to expose static_info from the device dictionary to the LAVA test shell? Or is there?
How can we implement our scenario described above using LAVA?
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com
WE MAKE IT YOURS!
After gathering statistics on the robustness of my setup, the next step is to get a complete view of the LAVA errors we encounter in incomplete jobs.
From what I see in incomplete jobs, my intention is to query on the test suite "lava" and the name "job".
In the query builder, however, if I use "test suite" as the condition model, I can't use the "job" field name.
Do you have any advice on how to proceed?
Denis
Hi,
Are the Ansible playbooks for setting up LAVA available somewhere? There is
an old migrated issue on GitLab [1] which is closed, but the link to an
implementation in there is dead. Is that playbook only available internally
at Linaro? Is there anything you could share?
It looks like many people are moving to Docker at the moment, but that's
not an option for us (at least not for the dispatchers), as we need LXC for
Android testing.
Cheers,
Karsten
[1] https://git.lavasoftware.org/lava/lava/issues/27
How can more than one LAVA user have the same token secret
(e.g. for a notify callback)?
Example use case:
- LAVA job with notify callbacks using token names
- submitted as user "bob"; "bob"'s token names map to actual token secrets
- job fails
- user "lab-admin" fixes some lab issues, re-submits job
- job passes, but callbacks fail because tokens are associated with user "bob"
Since the re-submitted job runs as user "lab-admin", the same token
names and corresponding secrets don't exist.
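For context, the callback section in such a job definition looks roughly like this (URL and token name are placeholders, and I'm quoting the syntax from memory):

    notify:
      criteria:
        status: complete
      callback:
        url: https://ci.example.com/hook   # placeholder endpoint
        method: POST
        token: bob-token                   # a token *name*, resolved against the submitter's tokens
        content-type: json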
Naively, user "lab-admin" tries to copy the token secrets from user
"bob", keeping the same token names, but this fails with "secret
already exists".
Why can't different users have the same secrets?
I haven't looked at the code, but this limitation suggests that the
secret itself is the key in the DB, which would prevent multiple users
from holding the same secret.
Kevin
I started playing with the official lava images today and wanted to
share my work in progress, in case others are doing something similar or
have feedback. My goal is to deploy a lava lab locally. My architecture
is a single host (for now) that will host both the lava server and one
dispatcher. Once it's all working, I'll start deploying a qemu worker
followed by some actual boards (hopefully).
So far, I have the following docker-compose.yml:
version: '3'
services:
  database:
    image: postgres:9.6
    environment:
      POSTGRES_USER: lavaserver
      POSTGRES_PASSWORD: mysecretpassword
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ${PWD}/pgdata:/var/lib/postgresql/data/pgdata
  server:
    image: lavasoftware/amd64-lava-server:2018.11
    ports:
      - 80:80
    volumes:
      - ${PWD}/etc/lava-server/settings.conf:/etc/lava-server/settings.conf
      - ${PWD}/etc/lava-server/instance.conf:/etc/lava-server/instance.conf
    depends_on:
      - database
  dispatcher:
    image: lavasoftware/amd64-lava-dispatcher:2018.11
    environment:
      - "DISPATCHER_HOSTNAME=--hostname=dispatcher.lava.therub.org"
      - "LOGGER_URL=tcp://server:5555"
      - "MASTER_URL=tcp://server:5556"
With that file, settings.conf, and instance.conf in place, I run 'mkdir
pgdata; docker-compose up' and the 3 containers come up and start
talking to each other. The only thing exposed to the outside world is
lava-server's port 80 at the host's IP, which gives the lava homepage as
expected. The first time they come up, the database isn't up fast enough
(it has to initialize on first run) and lava-server fails to connect. If
you cancel and run again, it connects the second time.
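A possible workaround, which I haven't tested yet, would be to let compose restart the server until the database is ready:

    server:
      restart: on-failure   # retry until the database has finished initializing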
A few things to note here. First, it doesn't seem like a persistent DB
volume is possible with the existing lava-server container, because the
DB is initialized at container build time rather than run-time, so
there's not really a way to mount in a volume for the data. Anyway,
postgres already solves this. In fact, I found their container
documentation and entrypoint interface to be well done, so it may be a
nice model to follow: https://hub.docker.com/_/postgres/
The server mostly works as listed above. I copied settings.conf and
instance.conf out of the original container and into ./etc/lava-server/ and
modified as needed.
The dispatcher then runs and points to the server.
It's notable that docker-compose by default sets up a docker network, allowing
references to "database", "server", "dispatcher" to resolve within the
containers.
Once up, I ran the following to create my superuser:
docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Now, for things I've run into and surprises:
- When I used a local database, I could log in. With the database in a
separate container, I can't. Not sure why yet.
- I have the dreaded CSRF problem, which is unlikely to be related to
docker, but the two vars in settings.conf didn't seem to help. (I'm
terminating https outside of the container context, and then proxying
into the container over http)
- I was surprised there were no :latest containers published
- I was surprised the containers were renamed to include the
  architecture name in the container name. My understanding is that's
  the 'old' way to do it; the better way is to detect the architecture
  transparently using manifests. Again, see postgres/ as an example.
- my pgdata/ directory gets chown'd when I run the postgres container. I
  see the container has some support for running under a different uid,
  which I might try.
- If the entrypoint of the server container supported variables like
  LAVA_DB_PASSWORD, LAVA_DB_SERVER, SESSION_COOKIE_SECURE, etc., I
  wouldn't need to mount in things like instance.conf and settings.conf;
  see the sketch below.
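To illustrate that last point, the service definition could then shrink to something like this (purely hypothetical; these variables don't exist in the current images):

    server:
      environment:
        LAVA_DB_SERVER: database            # hypothetical variable
        LAVA_DB_PASSWORD: mysecretpassword  # hypothetical variable
        SESSION_COOKIE_SECURE: "true"       # hypothetical variable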
I pushed the config used here to
https://github.com/danrue/lava.home.therub.org. A 'git clone' followed by
'docker-compose up' should just work.
Anyway, thanks for the official images! They're a great start and will
hopefully really simplify deploying lava. My next step is to debug some
of the issues I mentioned above, and then start looking at dispatcher
config (hopefully it's just a local volume mount).
Dan
Hi,
In most cases, we don't need a multinode job, as we can control an AOSP
DUT from the LXC via adb over USB. However, here is the use case.
The CTS/VTS tradefed-shell --shards option supports splitting tests and
running them on multiple devices in parallel. To leverage this feature in
LAVA, we need a multinode job, right? And in a multinode job, the
master-node LXC needs access to the DUTs of the slave nodes via adb over
TCP/IP, right?
Karsten shared a job example here [1]. This is probably the most
advanced usage of LAVA, and probably also not encouraged? To make it
clearer, the connectivity should look like this:
master.lxc <----adb over usb----> master.dut
master.lxc <----adb over tcpip ---> slave1.dut
master.lxc <----adb over tcpip ---> slave2.dut
....
I see two options for adb over tcpip.
Option #1: WiFi. adb over WiFi can be enabled easily by issuing adb
commands from the LXC. I am not using it for two reasons.
* WiFi isn't reliable for long CTS/VTS test runs.
* In the Cambridge lab, the WiFi sub-network isn't accessible from the
LXC network. Because of security concerns, there is no plan to change
that.
Option #2: Wired Ethernet. On devices like HiKey, we need to run
'pre-os-command' in the boot action to power off the OTG port so that the
USB Ethernet dongle works. Once the OTG port is off, the LXC has no access
to the DUT, so the test definition has to be executed on the DUT, right? I
am also having the following problems doing this.
* Without context overriding, the overlay tarball is applied to the
'/system' directory and the test job reports "/system/bin/sh:
/lava-247856/bin/lava-test-runner: not found" [2].
* With the following job context, LAVA still runs
'/lava-24/bin/lava-test-runner /lava-24/0' and hangs there. This was
tested on my local LAVA instance; the test job definition and test log
are attached. Maybe my understanding of context overriding is wrong; I
thought LAVA would execute '/system/lava-24/bin/lava-test-runner
/system/lava-24/0' instead. Any suggestions would be appreciated.
context:
  lava_test_sh_cmd: '/system/bin/sh'
  lava_test_results_dir: '/system/lava-%s'
I checked on the DUT directly: '/system/lava-%s' exists, but I cannot
actually run lava-test-runner. The shebang line seems problematic.
--- hacking ---
hikey:/system/lava-24/bin # ./lava-test-runner
/system/bin/sh: ./lava-test-runner: No such file or directory
hikey:/system/lava-24/bin # cat lava-test-runner
#!/bin/bash
#!/bin/sh
....
# /system/bin/sh lava-test-runner
lava-test-runner[18]: .: /lava/../bin/lava-common-functions: No such
file or directory
--- ends ---
I had a discussion with Milosz. He proposed a third option, which will
probably be the most reliable one, but it is not supported in LAVA yet.
Here is the idea. Milosz, feel free to explain more.
**Option #3**: Add support for accessing multiple DUTs in a single-node job.
* Physically, the DUTs need to be connected via USB cable to the same dispatcher.
* In a single-node job, LAVA needs to add the DUTs, either specified
somehow or assigned randomly (say both device type and number are
defined), to the same LXC container. Test definitions can take over from
there; a sketch of the idea follows below.
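Purely as a sketch of the idea (the device_count key is made up and does not exist in LAVA today), a single-node job could perhaps say:

    device_type: hikey
    device_count: 2            # hypothetical key: request two boards in one job
    protocols:
      lava-lxc:
        name: lxc-multi-dut    # all assigned DUTs would be added to this container
        distribution: debian
        release: stretch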
Can this be done in LAVA? Can I request this feature? Any
suggestions on possible implementations?
Thanks,
Chase
[1] https://review.linaro.org/#/c/qa/test-definitions/+/29417/4/automated/andro…
[2] https://staging.validation.linaro.org/scheduler/job/247856#L1888
Dear All,
I'm currently trying to implement a complete validation of my
PSCI solution.
The standard behavior of PSCI is to manage shutdown, reset and low
power modes.
I'm wondering about the best way to manage this through LAVA.
So two questions:
- Is there a proper way to check the reboot behavior on a target (soft
reboot)? Using a shell command is not possible, as there is no return
from reboot. (One untested idea follows below.)
- Shutdown? I would like to test the shutdown command and trigger an
automatic wake-up after x seconds. My wish is to check that no watchdog
event occurred during that time (which is the only way to know whether
the shutdown worked properly). So it would be a behavior similar to the
reboot command.
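One pattern I am considering for the reboot case (an untested sketch; the repository and prompt are placeholders): let the test shell issue the reboot as its last command, then add a second boot action that only waits for the login prompt to come back.

    - test:
        definitions:
          - repository: https://example.com/psci-tests.git  # hypothetical repo; its last step runs 'reboot'
            from: git
            path: reboot.yaml
            name: trigger-reboot
    - boot:
        method: minimal          # reconnect and wait for the prompt
        prompts:
          - 'root@target:'       # placeholder prompt

Whether the power handling of the boot method interferes with a pure soft reboot is exactly what I would like to know.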
Thanks for your support,
BR
Lionel
Hi Rémi,
Thanks for the quick reply. Please find attached the raw job log and the job definition.
Hello,
that may in fact be a bug in LAVA. To help me reproduce the error, could you send:
* the raw job log (click on "actions / plain logs" in the job page)
* the job definition
Thanks
On Thu, 21 Feb 2019 at 04:05, Chenchun (coston) <chenchun7@huawei.com> wrote:
Dear Sir/Madam,
could you please help us analyze the problems encountered in recent LAVA tests?
A detailed log is in the attachment.
lava-dispatcher version: 2018.11+stretch.
The key error message is: "Bug error: argument of type 'NoneType' is not iterable".
Chase Qi's preliminary assessment is that this is a LAVA bug. We look forward to your reply.
Thank you for your assistance.
Best Regards,
Caili Liu
--
Rémi Duraffort
LAVA Team, Linaro
Hello everyone,
I know from the LAVA documentation how to add metadata to jobs and test suites. When I look at test results, I see that test cases have metadata too. E.g. https://validation.linaro.org/results/testcase/9759970 shows the following metadata:
case: linux-linaro-ubuntu-lscpu
definition: 0_smoke-tests-lxc
result: pass
Is there a possibility to add custom metadata to test cases?
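For reference, at job level we simply put a metadata block with arbitrary key/value pairs into the job definition (the values here are placeholders); we are looking for an equivalent per test case:

    metadata:
      build-id: '4711'
      image-version: 'v2.3'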
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!