Hi,
My test job YAML produces no error when the health check is triggered from the GUI, but gives an error when the same job is submitted from the command line.
You can find the command-line submission error below.
"
lavamaster@lava$ lavacli -i admin@validation jobs submit /tmp/test.yaml
Unable to submit /tmp/target.yaml: <Fault 400: "Problem with submitted job
data: expected str for dictionary value @
data['actions'][0]['boot']['parameters']['shutdown-message']">"
"
I previously reported the same type of bug for "soft-reboot" command support, and the lava-team provided a patch for that issue:
https://git.lavasoftware.org/lava/lava/-/merge_requests/1067/diffs#05da71b3…
Can someone provide a patch for "shutdown-message" support as well?
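For reference, the validation error says the schema expects a plain string at that key, so as a sanity check before any patch, it may be worth quoting the value so the YAML loader cannot parse it as another scalar type. The values below are illustrative only, not taken from my job:

```yaml
- boot:
    method: minimal          # illustrative; use your job's actual boot method
    parameters:
      # quoted so the YAML loader delivers a str, not e.g. a bool or int
      shutdown-message: "shutting down"
```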
Regards,
Koti
Hi Team,
I am using a LAVA job definition that sets the deployment data (OS parameter) to debian. According to deployment_data.py it should export /bin/bash as the shell, but it's exporting /bin/sh. I started observing this after updating LAVA to 2021.01.
Below is the job definition I used. It would be very helpful if anyone could reply on this. Thank you.
Job definition
=============
device_type: x86-simatic-slll
job_name: x86-simatic-ipc227e-slll health-check
timeouts:
job:
minutes: 20
action:
minutes: 20
connection:
minutes: 10
priority: medium
visibility: public
tags:
- slll-simatic-ipc-227e-01
actions:
- deploy:
to: overlay
- boot:
method: minimal
reset: true
failure_retry: 2
auto_login:
login_prompt: 'ebsy-isar login:'
username: root
password_prompt: 'Password:'
password: root
prompts:
- root@ebsy-isar:~#
transfer_overlay:
download_command: wget
unpack_command: tar -C / -xzf
# TEST_BLOCK
- test:
timeout:
minutes: 5
definitions:
- repository:
metadata:
format: Lava-Test Test Definition 1.0
name: kernel-version-basic
description: "check kernel version"
os:
- debian
scope:
- functional
run:
steps:
- lava-test-case uname --shell uname -a
from: inline
name: kernel-version-inline
path: inline/kernel-version-basic.yaml
lava-signal: kmsg
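As a data point for debugging, the inline definition could also report what the overlay environment actually exports at run time; a sketch (the extra test-case names are mine, not part of the original job):

```yaml
run:
  steps:
  - lava-test-case uname --shell uname -a
  # report the shell variable exported by the overlay environment
  - lava-test-case shell-env --shell echo "SHELL=$SHELL"
  # and what /bin/sh actually points at on the target
  - lava-test-case sh-target --shell readlink -f /bin/sh
```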
Regards
Sarath P T
Hi, guys,
I'm currently debugging a permission issue on our production master. It looks like this:
1. I added user "zz" to group "imxcn" and assigned device "imx8mm-evk-sh99" to that group with the "change_device" permission.
But on our production master, "zz" (not an admin) can't change the device status.
What's strange is that I have a local master at hand, and with the same configuration there, the non-admin user can change the device status.
2. I checked the database on our production master as follows:
auth_user:
39 | | 2021-03-30 08:37:32.98708+00 | f | zz
auth_group:
8 | imxcn
auth_user_groups:
290 | 39 | 8
lava_scheduler_app_groupdevicepermission:
91 | imx8mm-evk-sh99 | 8 | 9
92 | imx8mm-evk-sh99 | 8 | 5
auth_permission:
5 | Can submit jobs to device | 2 | submit_to_device
9 | Can change device | 2 | change_device
django_content_type:
2 | lava_scheduler_app | device
All the data looks OK, so based on the above, do you know any possible reason why our production server won't allow the non-admin user to change the device? I'm a little lost, as my local server works but the central production server doesn't... Thanks.
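To compare the two masters above the raw tables, it may help to ask Django directly whether the permission resolves for that user. The has_perm call is standard Django; whether your LAVA version routes the object-level check through it is an assumption on my part:

```shell
# Run on each master; identifiers match the data quoted in my mail.
lava-server manage shell <<'EOF'
from django.contrib.auth.models import User
from lava_scheduler_app.models import Device

user = User.objects.get(username="zz")
device = Device.objects.get(hostname="imx8mm-evk-sh99")

# Model-level check, then object-level check against the device.
# A True/False difference between the two masters narrows down
# where the permission lookup diverges.
print(user.has_perm("lava_scheduler_app.change_device"))
print(user.has_perm("lava_scheduler_app.change_device", device))
EOF
```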
Hello, LAVA maintainer:
I used to use XML-RPC to change the status of our devices. Now I want to switch to the REST API, but I don't know how to do the following with it:
https://validation.linaro.org/static/docs/v2/data-export.html#v0-2
It doesn't give an example, and it also doesn't specify whether to use GET, POST, PUT or DELETE. Could you give me an example, e.g. in curl command format?
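Not an authoritative answer, but the v0.2 API appears to be built on Django REST Framework, so the usual DRF conventions should apply: GET to read, POST to create, PUT/PATCH to update, DELETE to remove, with token auth in the Authorization header. A sketch, in which the server, hostname and field values are placeholders and the writability of any given field is an assumption:

```shell
TOKEN="your-api-token"
SERVER="https://validation.linaro.org"

# Read a single device (GET)
curl -H "Authorization: Token $TOKEN" \
     "$SERVER/api/v0.2/devices/my-device-01/"

# Partially update it (PATCH with a JSON body); whether a given
# field is writable depends on the serializer and your permissions
curl -X PATCH \
     -H "Authorization: Token $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"health": "Good"}' \
     "$SERVER/api/v0.2/devices/my-device-01/"
```

The browsable API at /api/v0.2/ on your own instance should confirm which methods each endpoint accepts.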
Thank you!
Hello,
We have a job definition with multiple test definitions, as shown below.
test:
timeout:
hours: 4
definitions:
- repository: ssh://git@github.com:7999/<MytestRepo>.git
from: git
path: automated/linux/lttng/lttng.yaml
name: lttng
- repository: ssh://git@github.com:7999/<MytestRepo>
from: git
path: automated/linux/docker/docker.yaml
name: docker
All our test definitions are in one repository, so we want to avoid cloning the same repository multiple times.
Do you have any suggestions on how to avoid cloning the same repository more than once?
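One possible approach, as a sketch only: since LAVA clones the repository once per `- repository:` entry, collapsing the entries into a single wrapper test definition leaves only one clone. The wrapper file referenced below is hypothetical; you would have to add it to your repository yourself and have its run steps invoke the individual suites:

```yaml
- test:
    timeout:
      hours: 4
    definitions:
    # single entry -> single clone; the wrapper yaml (hypothetical,
    # added to your own repo) runs both suites itself
    - repository: ssh://git@github.com:7999/<MytestRepo>.git
      from: git
      path: automated/linux/all-tests/all-tests.yaml
      name: all-tests
```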
Regards,
Maanas M D
Thanks Rémi, but the result is the same when I start only the db container, even after a full cleanup (docker container rm docker-compose_db_1 && docker volume rm lava-server-pgdata):
ERROR: database "lavaserver" already exists
It seems to be somehow related to "POSTGRES_USER: lavaserver" in docker-compose.yaml.
Reading the entrypoint script for postgres (https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh), I thought using PGUSER could help, but I haven't managed to get that working yet.
I will continue digging into this... but if you have a detailed procedure somewhere, please share 😊
Thanks a lot!
Philippe
From: Lava-users <lava-users-bounces(a)lists.lavasoftware.org> On Behalf Of Remi Duraffort
Sent: Monday, March 22, 2021 9:25 AM
To: Philippe Mazet (OSS) <philippe.mazet(a)oss.nxp.com>
Cc: lava-users(a)lists.lavasoftware.org <lava-users(a)lavasoftware.org>
Subject: Re: [Lava-users] LAVA master backup/restore
Le jeu. 18 mars 2021 à 16:22, Philippe Mazet (OSS) <philippe.mazet(a)oss.nxp.com> a écrit :
Hi all,
I am using the docker-compose solution to run a master: https://git.lavasoftware.org/lava/pkg/docker-compose
I was wondering how to extract a backup of the DB, and re-inject it in a different instance.
I found that the postgres image has its own "restore" mechanism, based on the initdb.d folder, as mentioned in the documentation:
https://registry.hub.docker.com/_/postgres/
The entrypoint.sh script (https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh) handles the restore whenever the /docker-entrypoint-initdb.d/ folder contains a .sql file.
But when we store a backup in the container's /docker-entrypoint-initdb.d/ folder and remove both the postgres image and its db-data volume, we get this error on the next start:
ERROR: database "lavaserver" already exists
Full startup log attached.
Questions:
- Does Linaro's docker-compose solution modify postgres restore mechanism in any way?
I guess that's only because lava will automatically create an empty lavaserver database. So in your use case, you should start only the db service and not any of the lava-* services. Once the restoration is done, you can restart all the lava services.
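That sequence, as a command sketch; the volume name matches the one quoted earlier in this thread, and the service name `db` is assumed to match the lava/pkg/docker-compose defaults:

```shell
# stop everything and drop the old database volume
docker-compose down
docker volume rm lava-server-pgdata

# place the .sql dump in the folder mounted at /docker-entrypoint-initdb.d/
# inside the db service, then start ONLY the db service so initdb can run
# the restore before lava-server gets a chance to create an empty database
docker-compose up -d db

# once the restore has finished, start the remaining lava services
docker-compose up -d
```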
- How do you handle backup/restore at linaro?
Thanks a lot in advance,
Philippe Mazet
NXP Semiconductors - Edge Processing
Email: philippe.mazet(a)nxp.com
_______________________________________________
Lava-users mailing list
Lava-users(a)lists.lavasoftware.org
https://lists.lavasoftware.org/mailman/listinfo/lava-users
--
Rémi Duraffort
LAVA Architect
Linaro
Dear LAVA-Users,
As far as we know, LXC support for LAVA tests is going to be deprecated and docker should be used instead, so we tried to define a test job with docker. The LXC job for this device was completing, but when we tried to follow this tutorial:
https://www.youtube.com/watch?v=y-5DjzQztAk
with the following test job:
timeouts:
job:
minutes: 30
connection:
minutes: 2
actions:
finalize:
seconds: 60
context:
test_character_delay: 10
device_type: dragonboard_845_uSOM
job_name: Docker-Test
priority: 50
visibility: public
metadata:
build-url: ''
actions:
- deploy:
to: fastboot
docker:
image: adb-fastboot
timeout:
minutes: 15
images:
boot:
url: https://path/to/image/boot.img
reboot: hard-reset
- boot:
method: fastboot
docker:
image: adb-fastboot
prompts:
- root@(.*):/#
timeout:
minutes: 15
- test:
docker:
image: adb-fastboot
timeout:
minutes: 10
definitions:
- repository: https://github.com/Linaro/test-definitions.git
from: git
path: automated/linux/android-platform-tools/install.yaml
name: install-android-platform-tools-r2800
parameters:
LINK: https://dl.google.com/android/repository/platform-tools_r28.0.0-linux.zip
We got this error:
[error screenshot attached as image001.jpg]
Are we doing something wrong or is this a bug?
Regards,
Marcel
_____________________________________________________________________________________
Expleo Germany GmbH
Sitz der Gesellschaft | Corporate Headquarters: München
Handelsregister | Commercial Register: Amtsgericht München HRB 83252
Geschäftsführung | Management: Ralph Gillessen, Marcus Ganguin
This message contains information that may be privileged or confidential. It is intended only for the person to whom it is addressed.
If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof.
If you receive this message in error, please notify the sender immediately and delete all copies of this message.
_________________________________________________________
EMAIL LEGAL MENTION / DISCLAIMER
This message contains information that may be privileged or confidential and is the property of the Expleo Services SAS, RCS Versailles 831 178 348, located, 3 avenue des Prés, 78180 Montigny Le Bretonneux - France. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.
Ce message contient des informations qui peuvent être privilégiées ou confidentielles et elles sont de la propriété d'Expleo Services SAS, RCS Versailles 831 178 348, situé, 3 avenue des Prés, 78180 Montigny le Bretonneux-France. Il est destiné uniquement à la personne à qui il est adressé. Si vous n'êtes pas le destinataire visé, vous n'êtes pas autorisé à lire, imprimer, conserver, copier, diffuser, distribuer ou utiliser ce message ou toute partie de celui-ci. Si vous recevez ce message par erreur, veuillez en avertir immédiatement l'expéditeur et supprimer toutes les copies de ce message.
Hi Larry,
That is really underpowered for the sort of work you’re doing. We have a guideline of at least 8 cores and 16GB RAM. With only 4 cores you’re going to run into resource starvation issues.
Thanks
Dave
----------------
Dave Pigott
LAVA Lab Lead
Linaro Ltd
t: (+44) (0) 1223 400063
Hi, I'd like to know what Linaro's PC setup is for Android CTS.
I ask this question because on one of our PCs, if we run 3 CTS jobs together at the same time, then after about 8 hours some of the job logs hang on the web UI.
Checking the server log, it looks like the lava worker already sent the "END", the lava server also sent the "END OK", and the server received "description.yaml", which means lava-run did not crash.
There was also one resend of "END", and the job/device stays in running status forever.
Did the ZeroMQ connection to lava-logs go down? I'm not sure (we still have the device on 2020.08; I'm not sure whether 2021.03, which doesn't use ZMQ, would help or not).
Also, dmesg keeps reporting `docker0: port 3(vethb545eb3) entered disabled state`, so I suspect high resource usage on this PC when testing CTS may be causing some connection issue.
Our PC has 8 GB of memory and an Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz with 4 cores. What's your suggested PC configuration for running parallel CTS jobs? Have you ever encountered a similar issue? Thanks.
Hi Lava user,
I compiled the stable AGL from the gerrit repo.
I am facing an issue: the init process fails. I used the command below to build the ramdisk.
mkimage -A arm64 -T ramdisk -C none -d
[boot failure screenshot attached as image001.png]
Please suggest what I am doing wrong.
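For comparison, the general shape of a legacy-format ramdisk invocation is below. The -O flag, image name, and file names are placeholders I have added for illustration, not a reconstruction of the truncated command above:

```shell
# -d takes the input cpio archive; the final argument is the output image
mkimage -A arm64 -O linux -T ramdisk -C none \
        -n "AGL initramfs" \
        -d initramfs.cpio.gz uInitrd
```

If the output-file argument is missing, mkimage has nothing to write, which alone could explain a failing init.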
Kind Regards,
Gaurav Kumar