Thanks Rémi, but the result is the same when I start only the db container, even after a full cleanup (docker container rm docker-compose_db_1 && docker volume rm lava-server-pgdata):
ERROR: database "lavaserver" already exists
It seems to be somehow related to "POSTGRES_USER: lavaserver" in docker-compose.yaml.
Reading the postgres entrypoint script (https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh), I thought using PGUSER could help, but I haven't managed to get that working yet.
I will continue digging into this... but if you have a detailed procedure somewhere, please share 😊
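A minimal sketch of one possible procedure (untested; it assumes the compose service is named "db", the database and role are both "lavaserver", and the data volume is "lava-server-pgdata", as in the docker-compose defaults) is to restore with psql after the database has been initialised, instead of dropping the dump into /docker-entrypoint-initdb.d/:

  # dump on the source instance (pg_dump is available inside the postgres container)
  docker-compose exec db pg_dump -U lavaserver -d lavaserver > lavaserver.sql

  # on the target instance: wipe the old data, then start ONLY the db service
  # so that no lava-* service touches the database during the restore
  docker-compose down
  docker volume rm lava-server-pgdata
  docker-compose up -d db

  # restore into the freshly initialised (empty) lavaserver database, then start the rest
  cat lavaserver.sql | docker-compose exec -T db psql -U lavaserver -d lavaserver
  docker-compose up -d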
Thanks a lot!
Philippe
From: Lava-users <lava-users-bounces(a)lists.lavasoftware.org> On Behalf Of Remi Duraffort
Sent: Monday, March 22, 2021 9:25 AM
To: Philippe Mazet (OSS) <philippe.mazet(a)oss.nxp.com>
Cc: lava-users(a)lists.lavasoftware.org <lava-users(a)lavasoftware.org>
Subject: Re: [Lava-users] LAVA master backup/restore
On Thu, 18 Mar 2021 at 16:22, Philippe Mazet (OSS) <philippe.mazet(a)oss.nxp.com> wrote:
Hi all,
I am using the docker-compose solution to run a master: https://git.lavasoftware.org/lava/pkg/docker-compose
I was wondering how to extract a backup of the DB, and re-inject it in a different instance.
I found that postgres image has its own "restore" mechanism, based on initdb.d folder, as mentioned in the documentation:
https://registry.hub.docker.com/_/postgres/
The entrypoint.sh script (https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh) handles the restore whenever the /docker-entrypoint-initdb.d/ folder contains a .sql file.
But when we store a backup in the container's /docker-entrypoint-initdb.d/ folder, and remove both the postgres image and its db-data volume, we get this error on next start:
ERROR: database "lavaserver" already exists
Full startup log attached.
Questions:
- Does Linaro's docker-compose solution modify postgres restore mechanism in any way?
I guess that's only because lava will automatically create an empty lavaserver database. So in your use case, you should only start the db service and not any lava-* services. Once the restoration is done, you can restart all the lava services.
- How do you handle backup/restore at linaro?
Thanks a lot in advance,
Philippe Mazet
NXP Semiconductors - Edge Processing
Email: philippe.mazet(a)nxp.com
--
Rémi Duraffort
LAVA Architect
Linaro
Dear LAVA-Users,
as far as we know, LXC support for LAVA tests is going to be deprecated and docker should be used instead, so we tried to define a test job with docker. The LXC job for this device completed, but we ran into a problem when we tried to follow this tutorial:
https://www.youtube.com/watch?v=y-5DjzQztAk
with the following test job:
timeouts:
  job:
    minutes: 30
  connection:
    minutes: 2
  actions:
    finalize:
      seconds: 60
context:
  test_character_delay: 10
device_type: dragonboard_845_uSOM
job_name: Docker-Test
priority: 50
visibility: public
metadata:
  build-url: ''
actions:
- deploy:
    to: fastboot
    docker:
      image: adb-fastboot
    timeout:
      minutes: 15
    images:
      boot:
        url: https://path/to/image/boot.img
        reboot: hard-reset
- boot:
    method: fastboot
    docker:
      image: adb-fastboot
    prompts:
    - root@(.*):/#
    timeout:
      minutes: 15
- test:
    docker:
      image: adb-fastboot
    timeout:
      minutes: 10
    definitions:
    - repository: https://github.com/Linaro/test-definitions.git
      from: git
      path: automated/linux/android-platform-tools/install.yaml
      name: install-android-platform-tools-r2800
      parameters:
        LINK: https://dl.google.com/android/repository/platform-tools_r28.0.0-linux.zip
We got this error:
[inline screenshot of the error message; image not included in the archive]
Are we doing something wrong or is this a bug?
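For context, the "adb-fastboot" image used in the job above is not an official Docker Hub image; it is usually a small image built locally on the worker that simply provides the adb and fastboot binaries for the per-job container. A minimal hypothetical Dockerfile (the base image and package names are assumptions, not taken from the tutorial):

  FROM debian:bullseye-slim
  # adb and fastboot are the tools the dispatcher invokes inside the per-job container
  RUN apt-get update && \
      apt-get install -y --no-install-recommends adb fastboot && \
      rm -rf /var/lib/apt/lists/*

Built on the worker with something like "docker build -t adb-fastboot ." so that the tag matches the "image: adb-fastboot" entries in the job.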
Regards,
Marcel
Hi Larry,
That is really underpowered for the sort of work you’re doing. We have a guideline of at least 8 cores and 16GB RAM. With only 4 cores you’re going to run into resource starvation issues.
Thanks
Dave
----------------
Dave Pigott
LAVA Lab Lead
Linaro Ltd
t: (+44) (0) 1223 400063
Hi, I'd like to know what Linaro's PC setup is for running Android CTS.
I ask because on one of our PCs, if we run 3 CTS jobs at the same time, after about 8 hours some of the job logs hang in the web UI.
Checking the server log, it looks like the lava worker already sent the "END", the lava server replied with "END OK", and "description.yaml" was received by the server, which means lava-run did not crash.
There was also one "END" resend, and the job/device stays in running status forever.
It looks like the ZeroMQ connection to lava-logs went down? I'm not sure (we still run this device against 2020.08; I don't know whether 2021.03, which no longer uses ZMQ, would help).
Also, dmesg keeps reporting `docker0: port 3(vethb545eb3) entered disabled state`, so I suspect high resource usage on this PC while running CTS may cause connection issues.
Our PC has 8GB memory and an Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 cores). What PC configuration would you suggest for running parallel CTS jobs? Have you ever encountered a similar issue? Thanks.
Hi Lava user,
I compiled the stable AGL from the Gerrit repo.
I am facing an issue: the init process fails. I used the command below to build the ramdisk.
mkimage -A arm64 -T ramdisk -C none -d
[inline screenshot of the boot failure; image not included in the archive]
Please suggest what I am doing wrong.
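For comparison, a typical complete mkimage invocation for an arm64 ramdisk looks like the line below; the input and output file names are hypothetical placeholders, and note the -O linux flag that is normally part of the usual form:

  mkimage -A arm64 -O linux -T ramdisk -C none -n "AGL ramdisk" -d initramfs.cpio.gz uInitrd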
Kind Regards,
Gaurav Kumar
I am reopening this old thread – we switched this morning to version 2021.03 – which seems to work just fine.
However, the annoying problem mentioned below still seems to be there: without admin rights, nobody can modify the health state of any device, not even members of our group who have the proper "can change device" permission.
Is this something experienced by other people, or just us?
Regards,
Seb
From: Sebastien Haezebrouck
Sent: Thursday, January 28, 2021 7:19 PM
To: Remi Duraffort <remi.duraffort(a)linaro.org>
Cc: lava-users(a)lists.lavasoftware.org
Subject: RE: [EXT] Re: [Lava-users] Unable to change device health state without being admin
Hi Remi,
Thanks for your answer.
* You can restrict view/submit/change permissions per device and per group. So go into the specific device admin page (something like https://<instance>/admin/lava_scheduler_app/device/<hostname>/change/).
* At the bottom of the page you will be able to add the restrictions.
This is exactly what I had done previously. Adding the “lava_scheduler_app | device | Can change device” permission to our group does not seem to have any effect.
Regards,
Seb
From: Remi Duraffort <remi.duraffort(a)linaro.org>
Sent: Tuesday, January 26, 2021 9:55 AM
To: Sebastien Haezebrouck <sebastien.haezebrouck(a)nxp.com>
Cc: lava-users(a)lists.lavasoftware.org
Subject: [EXT] Re: [Lava-users] Unable to change device health state without being admin
Hello,
The authorization model changed some time ago. The doc for the new model is here: https://docs.lavasoftware.org/lava/authorization.html
You can restrict view/submit/change permissions per device and per group. So go into the specific device admin page (something like https://<instance>/admin/lava_scheduler_app/device/<hostname>/change/).
At the bottom of the page you will be able to add the restrictions.
Rgds
On Thu, 17 Dec 2020 at 16:47, Sebastien Haezebrouck <sebastien.haezebrouck(a)nxp.com> wrote:
Hi,
We used to set devices access permission using groups. So we would restrict access to a subset of devices to members of a specific group.
After switching to LAVA 2020.10, it seems the permission to change a device no longer allows changing the device health state. This used to be possible in previous versions, and was very convenient for maintenance. With 2020.10 it seems only a LAVA admin is allowed to change a device's health.
Is this the expected behavior? Did I miss an option somewhere?
A bit more details:
All our devices are set to:
* “device owner”->”Group with physical access” -> our_group.
* “group device permissions” -> “lava_scheduler_app | device | Can change device” -> “Group” -> our_group.
* I intentionally did not give any specific permission to the “our_group” group, like “Can change device”, since it would allow members of this group to change any devices. Access control has to work both ways 😊
Kind regards,
Seb
--
Rémi Duraffort
LAVA Architect
Linaro
Dear LAVA-Team,
I'm writing to you again because we are still struggling to get our LAVA infrastructure running. Since last time, we have switched from the more advanced board to a set of three Raspberry Pis, which we want to test with different operating systems to gain more experience in using LAVA.
For the configuration of our Raspberry Pi, running Raspbian, we can't quite get past the login and reboot phase. It is connected via serial and integrated with ser2net; the device dictionary currently looks like this:
{% extends 'bcm2711-rpi-4-b.jinja2' %}
{% set soft_reboot_command = 'sudo reboot' %}
{% set auto_login_prompt = "raspberrypi login:" %}
{% set auto_login_username = "pi" %}
{% set ssh_host = '192.168.1.11' %}
{% set connection_list = ['uart0'] %}
{% set connection_commands = {'uart0': 'telnet <HOST IP> 7102'} %}
{% set connection_tags = {'uart0': ['primary', 'telnet']} %}
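For reference, the matching entry on the ser2net side for that telnet command would look something like the line below (classic ser2net.conf syntax; the serial device path and baud rate are assumptions, and ser2net 4.x uses a YAML config file instead):

  7102:telnet:600:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT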
In the health check we just want to pull some standard images, deploy them with TFTP, then log in and run the Linaro smoke tests. The current health job looks like this:
device_type: bcm2711-rpi-4-b
job_name: RPI-01 (Raspbian) Health Check
timeouts:
  job:
    minutes: 10
  connection:
    minutes: 2
  action:
    minutes: 10
priority: medium
visibility: public
actions:
- deploy:
    timeout:
      minutes: 10
    to: tftp
    kernel:
      url: https://storage.kernelci.org/mainline/master/v5.11/arm64/defconfig/clang-10…
      type: image
    ramdisk:
      url: https://storage.kernelci.org/images/rootfs/buildroot/kci-2020.05/arm64/base…
      compression: gz
    modules:
      url: https://storage.kernelci.org/mainline/master/v5.11/arm64/defconfig/clang-10…
      compression: xz
    dtb:
      url: https://storage.kernelci.org/mainline/master/v5.11/arm64/defconfig/clang-10…
    os: oe
- boot:
    timeout:
      minutes: 2
    method: u-boot
    commands: ramdisk
    prompts:
    - 'pi@raspberrypi:~$'
    auto_login:
      login_prompt: 'raspberrypi login:'
      username: pi
      password_prompt: 'Password:'
      password: 1234567890
      login_commands:
      - sudo su
- test:
    definitions:
    - {from: git, name: smoke-tests, path: lava-test-shell/smoke-tests-basic.yaml,
       repository: 'http://git.linaro.org/lava-team/lava-functional-tests.git'}
    timeout: {minutes: 5}
But with this we are having issues when the DUT is supposed to reboot, because we get stuck at the login prompt. The first problem is probably the auto_login: we have defined the username and password, but for the password definition we get this problem:
[inline screenshot of the auto_login/password error; image not included in the archive]
Can you explain, what we are doing wrong?
Then LAVA sends the reboot commands, but those always seem to fail at the login. The error looks somewhat like this:
[inline screenshot of the login failure; image not included in the archive]
Does this have to do with the deployment through TFTP, or is it the auto_login? Also, the login and password prompts are not recognized and not filled in.
Can you tell us, what we are doing wrong?
Regards
Marcel
Dear LAVA-Users,
We are trying to integrate a new device into our LAVA infrastructure, apart from our work with Raspberry Pis. We still lack experience with LAVA and I wanted to reach out to the community for some help with the setup.
The device we want to integrate is a QC-DB-P10003 from Mouser, running Android 9, with the following specifications:
[inline image of the device specification table; not included in the archive]
Here is also a link to the product page, which contains the complete data sheet for the device: https://www.mouser.de/ProductDetail/515-QC-DB-P10003/
Does anyone have experience with this device and can give some advice on how to create a device template, or say whether one already available in LAVA would work?
Tips for defining a device dictionary and test jobs to run Android on this device are also very welcome. Has anyone perhaps configured a similar device and can suggest some steps on how to proceed?
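For a rough idea of what the integration usually involves: a fastboot/adb-driven Qualcomm board is normally added by extending an existing device-type template and filling in a small device dictionary. The sketch below is purely illustrative; the template name, serial number and telnet port are placeholder assumptions, not verified values for the QC-DB-P10003:

  {% extends 'dragonboard-845c.jinja2' %}
  {% set adb_serial_number = 'ABCDEF01' %}
  {% set fastboot_serial_number = 'ABCDEF01' %}
  {% set connection_list = ['uart0'] %}
  {% set connection_commands = {'uart0': 'telnet localhost 7101'} %}
  {% set connection_tags = {'uart0': ['primary', 'telnet']} %}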
Kind Regards,
Marcel
Hi,
It looks like only the scripts in "lava_dispatcher/lava_test_shell" are overlaid onto the rootfs when LAVA does the deploy.
So, my question is:
Is there some way in the job to define a script file location, e.g. http://site/a.sh, so that LAVA would also fetch http://site/a.sh and overlay it onto the rootfs?
Does LAVA already have this kind of rootfs manipulation function, or do I have to put these scripts in /usr/lib/python3/dist-packages/lava_dispatcher/lava_test_shell on every worker?
Thanks.
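One common workaround (not a built-in rootfs-overlay feature, just a sketch assuming the DUT has network access and wget available) is to fetch the script from within a test definition's run steps and execute it there:

  metadata:
    format: Lava-Test Test Definition 1.0
    name: run-external-script
    description: "fetch http://site/a.sh at run time and execute it"
  run:
    steps:
    - wget http://site/a.sh -O /tmp/a.sh
    - sh /tmp/a.sh

That keeps the workers untouched; only scripts that genuinely need to ship inside the overlay have to live under lava_test_shell.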