Hi!
By default, LAVA runs this command at the beginning:
nice fastboot -s 261a1c5d reboot-bootloader
Is it possible to avoid this step and not run this command?
Ilya
Hi Milosz,
Please take a look at the first message of this thread. There are two
different pieces of information about this error.
First, this comma error happens when I access the URL (LAVA
WebUI) by manually inserting the job_id in a browser, not through the code.
Second, the previous code is returning:
lavac.server.NoSuchJob: No such job: 68271.0
Even though this job_id is being returned by the LAVA server.
Thanks
On Mon, 2 Sep 2019 at 11:32, Milosz Wasilewski <
milosz.wasilewski(a)linaro.org> wrote:
> On Mon, 2 Sep 2019 at 10:26, Fabiano Ferronato <fabiferro(a)hotmail.com>
> wrote:
> >
> > Hi Milosz,
> >
> > I still couldn't update the server, but just to clarify: the job_id
> pasted in the URL is 68271.0, as we can see in the error message:
> >
> > URL : http://lava.server.net/scheduler/job/68271.0
> >
> > > Trying to access LAVA WebUI using the jobid (68271.0)
> >
> > And then some process is translating that job_id into a comma "," :
> >
> > > Reverse for 'lava.scheduler.job.detail' with arguments '('',)' not
> found. 1 pattern(s) tried: ['scheduler/job/(?P<pk>[0-9]+|[0-9]+\\.[0-9]+)$']
> >
> > By contrast, a genuinely non-existent job_id in the URL, say 99999.9,
> results in "404 Not Found".
> >
> > Here is the job submission code:
> >
> > try:
> >     job_id = self._server.scheduler.submit_job(job_data)
> >     if not isinstance(job_id, list):
> >         job_id = [job_id]
> >     return job_id
> >
> > And then the job_id is used to get the job details:
> >
> > try:
> >     return self._server.scheduler.job_details(job_id)
>
> scheduler.job_details expects a string:
> https://master.lavasoftware.org/api/help/#scheduler.job_details
>
> If I understand your code correctly, you're passing a list to this
> function. The serialized list will contain the "," character.
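>
> A minimal sketch of the fix (illustrative only; job_data is a
> placeholder for the job definition, and str() covers multinode ids
> such as "68271.0"):
>
> import xmlrpc.client
>
> # Placeholder server URL:
> server = xmlrpc.client.ServerProxy("http://lava.server.net/RPC2")
>
> job_ids = server.scheduler.submit_job(job_data)
> if not isinstance(job_ids, list):
>     job_ids = [job_ids]  # multinode submissions return a list
>
> # job_details expects a single job id as a string, so query each
> # entry individually instead of passing the whole list:
> for job_id in job_ids:
>     details = server.scheduler.job_details(str(job_id))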
>
> milosz
>
> >
> > Best Regards,
> > Fabiano
> >
> > On Fri, 23 Aug 2019 at 16:25, Milosz Wasilewski <
> milosz.wasilewski(a)linaro.org> wrote:
> >>
> >> I don't think there was a problem with this feature in 2018.10.
> >> Reading the error message, I think you pasted a "," character in the
> >> URL so the pattern didn't match. As you can see in the regex, "." is
> >> there, and I don't recall any issues with multinode jobs then. Anyway,
> >> even if this is a bug, it won't be fixed in 2018.10. When you migrate
> >> to the latest version and hit the same problem, something can be done.
> >>
> >> If you can post the full script you're using, I can try it on the
> >> latest master and see what happens.
> >>
> >> milosz
> >>
> >> On Fri, 23 Aug 2019 at 13:10, Fabiano Ferronato <fabiferro(a)hotmail.com>
> wrote:
> >> >
> >> > Hi Milosz, thanks for your answer.
> >> >
> >> > Yes, it is a multinode job.
> >> > Is this a known bug in version 2018.10? I need to install the new
> version and keep pipelines running until I get the error in order to answer you.
> >> >
> >> > Fabiano
> >> >
> >> >> On Thu, 22 Aug 2019 at 18:58, Milosz Wasilewski <
> milosz.wasilewski(a)linaro.org> wrote:
> >> >>
> >> >> On Thu, 22 Aug 2019 at 17:30, Fabiano Ferronato <
> fabiferro(a)hotmail.com> wrote:
> >> >> >
> >> >> > Hi,
> >> >> > we have had a LAVA test setup working for some time. Automated
> pipelines are running tests on different devices in parallel.
> >> >> > After updating to version 2018.10+stretch and changing to in-line
> job definitions, we started to get sporadic errors.
> >> >> >
> >> >> > The error message shows up after jobs are submitted; the return
> value from the submission is then used to ask the server for job details:
> >> >> >
> >> >> > res = lava_server.submit_job(lava_test_job_description)
> >> >> > for entry in res:
> >> >> >     job_details = lavasrv.job_details(entry)
> >> >> >     ...
> >> >> >
> >> >> > Resulting in the following error:
> >> >> >
> >> >> > lib/python3.5/site-packages/lavac/server.py", line 272, in
> job_details
> >> >> > raise get_server_error(error, job_id)
> >> >> > lavac.server.NoSuchJob: No such job: 68271.0
> >> >>
> >> >> Are you submitting a multinode job? Does this also happen in a more
> >> >> recent version of LAVA (like 2019.07)?
> >> >>
> >> >> milosz
> >> >>
> >> >> >
> >> >> >
> >> >> > Trying to access LAVA WebUI using the jobid (68271.0):
> >> >> >
> >> >> > 500 Internal Server Error
> >> >> > Reverse for 'lava.scheduler.job.detail' with arguments '('',)' not
> found. 1 pattern(s) tried: ['scheduler/job/(?P<pk>[0-9]+|[0-9]+\\.[0-9]+)$']
> >> >> >
> >> >> > Can you give me a hint about this error?
> >> >> >
> >> >> > Thanks!
> >> >> >
>
Hi!
I'm using LAVA from Debian stretch.
I'm trying to flash an image via fastboot.
At this step the job just gets stuck until the 10-minute timeout:
nice fastboot -s '261a1c5d' flash persist /var/lib/lava/dispatcher/tmp/15/fastboot-deploy-7xj200yq/persist/persist.ext4
If I run the command from a terminal:
fastboot -s 261a1c5d flash persist /tmp/persist.ext4
the device is flashed in 3 seconds.
It looks like LAVA doesn't see my device.
Job desc : https://pastebin.com/ZNeS71Ev
Device desc: https://pastebin.com/rYRyEnjy
Job log : https://pastebin.com/fH0u3bUb
I know that fastboot works better with LXC. I also tried with LXC, and it got stuck at the same place.
Job log with lxc : https://pastebin.com/RnZLNfvN
Ilya
Hello,
I have a question about running a test that I have on my host machine inside a LAVA job.
Basically, I start LAVA and try to submit a job through Scheduler/Submit.
The job description looks like this:
device_type: qemu
job_name: qemu amd64 LTP
timeouts:
  job:
    minutes: 120
  action:
    minutes: 120
  connection:
    minutes: 120
priority: medium
visibility: public
metadata:
  source: https://ci.linaro.org/view/lava-ci/job/lava-debian-stable-amd64-vm/
  path: https://git.linaro.org/ci/job/configs.git/blob/HEAD:/lava-debian-stable-amd…
  build-readme: https://images.validation.linaro.org/snapshots.linaro.org/components/lava/s…
  build-console: https://ci.linaro.org/view/lava-ci/job/lava-debian-stable-amd64-vm/console
  build-log: http://images.validation.linaro.org/snapshots.linaro.org/components/lava/st…
# CONTEXT_BLOCK
context:
  arch: amd64
# ACTIONS_BLOCK
actions:
- deploy:
    timeout:
      minutes: 120
    to: tmpfs
    images:
      rootfs:
        image_arg: -drive format=raw,file={rootfs}
        url: https://images.validation.linaro.org/snapshots.linaro.org/components/lava/s…
        sha256sum: 4ab50cc69fc61faa9bf48edada8bc1a317247f77ced5a815f40e75cef1d62cc7
        compression: gz
# BOOT_BLOCK
- boot:
    method: qemu
    media: tmpfs
    timeout:
      minutes: 120
    prompts:
    - "root@debian:"
    auto_login:
      login_prompt: "login:"
      username: root
- test:
    timeout:
      minutes: 120
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: apache-server
          description: "server installation"
          os:
          - debian
          scope:
          - functional
        run:
          steps:
          # Here I would like to have my test. Something like:
          - make
          - ./home/user/folder/mytest
      from: inline
      name: apache-server
      path: inline/apache-server.yaml
Can you maybe give me a short example of how to do that?
I tried using the "inline" keyword, but to no avail.
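For reference, a minimal sketch of the inline structure I'm aiming for
(untested; the name, path and steps are just the placeholders from above):

- test:
    definitions:
    - from: inline
      name: apache-server
      path: inline/apache-server.yaml
      repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: apache-server
        run:
          steps:
          - make
          - ./home/user/folder/mytest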
Best regards,
Emanuel-Vladut Magas
L4B Software, Iasi, Romania
E-mail: vladut.m(a)l4b-software.com
We've just discovered that the bulk canceling of test jobs in LAVA admin
does not work.
Issue in gitlab: https://git.lavasoftware.org/lava/lava/issues/310
We will fix this ASAP and roll it out with the next release or a hotfix,
whichever comes first.
A workaround is to either use the XML-RPC API or cancel jobs one at a time.
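For example, canceling a set of jobs over XML-RPC could look roughly like
this (a sketch only; the URL, token and job ids are placeholders, and the
exact method name may differ per version, see /api/help/ on your instance):

import xmlrpc.client

# Placeholder credentials and host:
server = xmlrpc.client.ServerProxy(
    "https://user:token@lava.server.net/RPC2")

for job_id in ["1234", "1235"]:  # hypothetical job ids to cancel
    server.scheduler.jobs.cancel(job_id)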
Cheers,
--
Stevan Radaković | LAVA Engineer
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs
Dear LAVA users,
I'm looking for a way to measure, with LAVA, the time between some power mode transitions. Let's say the time between two messages:
- Standby exit trigger
- Actual standby exit from the kernel
The constraint is that we can't rely on kernel timestamps (frozen during standby), and the power sequence involves not only the kernel but also boot stages. We can only rely on the time elapsed between two specific messages on the console.
Basically, if this were done without LAVA, we'd use a tool like grabserial.
Do you have any clue on what can be done?
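In case it helps to frame the problem: outside LAVA, a grabserial-style
measurement is just host-side timestamping of console output. A rough
Python sketch of the idea (pyserial assumed; the device path, baud rate
and match patterns are placeholders):

import re
import time
import serial  # pyserial

START = re.compile(r"Standby exit trigger")  # placeholder pattern
END = re.compile(r"Actual standby exit")     # placeholder pattern

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    t_start = None
    while True:
        line = port.readline().decode(errors="replace")
        now = time.monotonic()  # host clock, unaffected by DUT standby
        if START.search(line):
            t_start = now
        elif END.search(line) and t_start is not None:
            print("Standby exit took %.3f s" % (now - t_start))
            break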
Best regards,
Hello again,
I have several test cases where we use LAVA multinode to test hardware and software interfaces externally. E.g. we have an SFTP server running on our DUT. In order to test that, we submit a test using two nodes:
1. The DUT
2. An LXC container
The LXC device connects to the DUT via SFTP and uploads a file. Both sides determine the MD5 sum and the DUT compares them.
This works as long as both the DUT and the LXC device are in the same network (or at least can reach each other via the network).
Now there are more test cases which require additional hardware connections between the worker and the DUT, e.g. a serial interface test. The serial interface on the DUT is connected via an RS232-USB converter to the worker. The LXC can access this converter and send or receive data from the serial interface.
This works as long as the LXC is running on the worker that the serial interface of the DUT is connected to.
As we are growing our lab, we will add more workers to our setup. There will be LXC devices on all of the workers.
When submitting such a multinode job, which relies on hardware connections between the DUT and the worker, how can I make sure that the LXC part of the job is scheduled on an LXC device on the correct worker?
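One direction I can think of, sketched under the assumption that device
tags can be assigned per worker to the LXC devices (the device type and
tag names below are hypothetical): pin the LXC role to a tag that only
the LXC devices on the correct worker carry.

protocols:
  lava-multinode:
    roles:
      dut:
        device_type: my-dut-type  # hypothetical
        count: 1
      lxc:
        device_type: lxc
        count: 1
        tags:
        - worker1-serial  # hypothetical tag, set only on worker 1's LXC devices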
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
WE MAKE IT YOURS!
Registered office: D-21079 Hamburg
Register court: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz, Marc-Michael Braun