While trying to run lava-dispatcher inside a docker container and connect a FRDM-K64F board, I ran into some issues related to the fact that udev events aren't seen inside the container, since we aren't typically running systemd/udevd there.
I came across this project, which forwards udev events from the host to a container, and it worked pretty well:
https://github.com/eiz/udevfw
I've re-implemented this in Python for easier development (and added some docker awareness):
https://git.lavasoftware.org/galak/docker-compose/blob/lite/contrib/udev-fo…
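To give a feel for what it does, here is a minimal sketch of the host-side monitoring half, assuming pyudev (the actual udev-forward.py may differ; the forwarding half, which replays events inside the container's network namespace, is omitted):

#!/usr/bin/env python3
# Minimal sketch: watch host udev events with pyudev.
# A forwarder would serialize each event and replay it inside the
# target container's network namespace; that half is omitted here.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem='usb')  # example filter, adjust as needed

for device in iter(monitor.poll, None):
    # device.action is 'add', 'remove', 'change', ...
    print(device.action, device.sys_path)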
Right now, running udev-forward.py is kind of kludgy. I wanted to get input on how people think this should work: should we make a daemon out of it? Should there be some kind of config file? Do we need to filter events (and if so, how)? I also need to look at multicast support (sending to multiple dispatchers). And where should this live, in the docker-compose repo?
Other thoughts?
- k
Hello,
I have a remote worker that needs to add a new device-type on the
master.
The remote worker has an auth token with admin/staff/superuser
privileges, and lavacli configured with that token, but when I attempt
to set the new template, I get permission denied:
# lavacli device-types template set da850-lcdk /tmp/new-device-type.jinja2
Unable to call 'device-types.template': <Fault 400: 'Unable to write device-type configuration: Permission denied'>
What permissions am I missing?
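Is it the on-disk permissions on the master? E.g. (assuming the default layout, where templates live under /etc/lava-server/dispatcher-config/device-types/; that path is my assumption):

# on the master
# ls -ld /etc/lava-server/dispatcher-config/device-types/
# ls -l /etc/lava-server/dispatcher-config/device-types/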
Kevin
Hi!
I'm seeing the following issue with lavasoftware/lava-server:2019.04:
http://localhost/static/docs/v2/ gives "Forbidden". Apache log shows the
following:
[Mon May 06 17:57:42.206713 2019] [autoindex:error] [pid 766:tid 140404225701632] [client 172.18.0.1:46780] AH01276: Cannot serve directory /usr/share/lava-server/static/docs/v2/: No matching DirectoryIndex (index.html,index.cgi,index.pl,index.php,index.xhtml,index.htm) found, and server-generated directory index forbidden by Options directive, referer: http://localhost/
The easiest way to reproduce is to run:
$ docker run --rm -p 80:80 -it lavasoftware/lava-server:2019.04
And load http://localhost/static/docs/v2/
Change 2019.04 to 2019.03 and it works fine.
I didn't see anything about this mentioned in the release announcement.
I guess the apache config needs some update?
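One quick way to tell whether it's the config or missing files, since the Apache error suggests there is no index.html in that directory (--entrypoint just overrides the image's normal startup):

$ docker run --rm --entrypoint ls lavasoftware/lava-server:2019.04 \
    /usr/share/lava-server/static/docs/v2/

If index.html is absent there, it's the docs in the image rather than the Apache config.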
Thanks!
Dan
The name of the action for http download uses a hyphen, not an
underscore. Fix the typos.
Signed-off-by: Kevin Hilman <khilman(a)baylibre.com>
---
doc/v2/timeouts.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/v2/timeouts.rst b/doc/v2/timeouts.rst
index c330f9d89548..50070766abe0 100644
--- a/doc/v2/timeouts.rst
+++ b/doc/v2/timeouts.rst
@@ -259,7 +259,7 @@ block override.
timeouts:
actions:
- http_download:
+ http-download:
minutes: 2
.. _individual_connection_timeout_overrides:
@@ -275,7 +275,7 @@ specific connection timeout which can be longer or shorter than the default.
timeouts:
connections:
- http_download:
+ http-download:
minutes: 2
.. _action_block_timeout_overrides:
--
2.21.0
Hello everyone,
the current LAVA version in stretch-backports is still 2018.11. Is there a reason why it has not been updated since then?
Will newer releases go into buster only? Or will there be updates in stretch-backports in the future?
For stretch users, do you recommend using the LAVA repositories to upgrade to the latest version?
Or should production systems keep using 2018.11 for now?
Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
WE MAKE IT YOURS!
Registered office: D-21079 Hamburg
Commercial register: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz, Marc-Michael Braun
Hello,
as said in the previous email, it's currently not possible to see kernel
crashes outside of the boot action.
That's something we want to improve. I will soon create an issue on our
GitLab instance where you will be able to comment.
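For reference, inside the boot action a panic can already fail the job via the failure_message parameter. A rough sketch (the method, commands and prompts below are placeholders for your device):

- boot:
    method: u-boot
    commands: ramdisk
    failure_message: 'Kernel panic - not syncing'
    prompts:
    - 'root@'
    timeout:
      minutes: 5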
Rgds
On Thu, 11 Apr 2019 at 14:33, Frank, Matthias <Matthias.Frank(a)tq-group.com> wrote:
>
> Hi lava users,
>
> sometimes I hit a kernel panic in memory allocator stress tests. How can I
> evaluate this? Is it possible to set a testcase or job to fail if a kernel
> panic occurs?
>
> Matthias
>
> Sometimes a test triggers a kernel panic and the DUT reboots to U-Boot and
> gets stuck because there is no boot process. LAVA waits until the timeout and
> stops the job.
--
Rémi Duraffort
LAVA Team, Linaro
Hi,
I'd like to know: is this site official? https://github.com/kernelci/lava-docker
Is this project the same as the one on Docker Hub, lavasoftware/lava-server?
I attached a tag to one of my devices, and when I submit a job I add "tags:" to it; that works fine.
But when others submit jobs for the same device-type without specifying "tags", their jobs may still be scheduled onto my device.
How can I avoid this? I want a job that does not specify any tag to never be scheduled onto a device that has a tag.
I tried setting the device as private, but then only I can use it. I have some people in groupA who want to use the device, and others in groupB who do not want to use these devices, because some modules on the device are different.
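For reference, this is the kind of job-level snippet I add (the tag name here is just an example):

tags:
- my-private-board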
Hi,
There is an idea of device type 'alias' in LAVA. I don't quite
understand what the use case for the current implementation was [1]. I
tried using it but it wasn't very useful. My use case is that I need
to submit jobs to a device type under a different device-type name. This
is needed to align device-type naming between different labs in a bigger
project (kernelci.org in this case).
current implementation:
- is anyone using the current implementation?
- if the current implementation is used, how much trouble would it cause
to change the behaviour?
Change in behaviour is quite intrusive and will require database migration.
[1] https://master.lavasoftware.org/static/docs/v2/glossary.html#term-alias
Regards,
milosz