Hi Lava Team,
I am using the LAVA image on Debian Stretch from https://images.validation.linaro.org/production-repo (stretch-backports main), and in it the chart/query API is working.
I then took the latest code from the LAVA release branch https://github.com/Linaro/lava-server/commits/release into my local branch.
Tag taken: c0b431f1b56227f6b97ba0e7859bfcc3fb29b62f refs/tags/2017.7
I built it, but LAVA charts do not appear when I run the LAVA server ("python manage.py runserver") using the latest code from git.
Can you please tell me where the latest code used in Debian Stretch is located?
Thanks,
Varun
Hi,
In LAVA v1, as I remember, I could use either "device" or "device_type" to
request a device for a test.
If I used "device", for example "device": "beaglebone-black03", then that
specific device, beaglebone-black03, would be used for the test.
Can I do the same thing in LAVA v2, i.e. request a device by its name?
Thanks,
Arthur
Hi,
I've got 2 jobs stuck in a canceled state which are preventing any other job from running.
I'm running LAVA (2017.7) in a VM and have tried rebooting the VM to
clear the issue, but without success (i.e. the jobs still block the queue).
An extract from /var/log/lava-server/django.log
is attached.
I get a 500 error when viewing the results for the job.
Is there a manual way of clearing this? The health check has a
notification associated with it (set to verbose), and every time I
reboot I get an email and an IRC message saying that it has finished!
Robert
Hi everyone,
I'm implementing nfsroot for my devices, but it seems that when
extracting my rootfs.tar.xz, LAVA keeps the parent folder "rootfs" and doesn't
extract the files directly into extract-nfsrootfs-XXXXX, so the LAVA test
overlay is placed outside the rootfs folder and an error is raised during
execution.
Is there a way, in the job definition, to ask LAVA to put the test overlay in the
rootfs folder?
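One workaround, assuming the tarball was simply created with the parent directory included, is to repack it from inside that directory so the archive entries have no leading "rootfs/" component (the paths below are illustrative):

```shell
# Build a tiny example rootfs, then pack it from inside the directory so
# entries are ./etc/... rather than rootfs/etc/...
mkdir -p /tmp/demo/rootfs/etc
echo myhost > /tmp/demo/rootfs/etc/hostname
tar -C /tmp/demo/rootfs -cJf /tmp/demo/rootfs.tar.xz .
# list the archive: no entry should start with "rootfs/"
tar -tJf /tmp/demo/rootfs.tar.xz
```

Listing the archive should show entries starting with ./ and none starting with rootfs/, so the test overlay lands inside the root filesystem itself.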
Thanks
Benjamin AUCLAIR
--
- Confidential -
Hi everyone,
hope you're fine! I'm quite stuck in my platform development: I
succeeded in adding my own device type, and I'm able to boot Linux over TFTP
and to perform auto-login actions.
However, I am facing difficulties with the test shell.
I get the following error:
https://pastebin.com/grPcvb14
And the definition of "stage" is:
https://pastebin.com/ArV11Gbb
The stage value seems to be None, and I also realized that my test shell
definition isn't downloaded from git during server-side processing. So I think
stage is empty because the test shell definition isn't among the temporary
files. Am I wrong?
Even if I think I have found my problem, I don't know how to solve it. Could it
be due to my device-type config?
Thanks a lot and have a nice day !
Benjamin AUCLAIR
Hi All,
I am adding a BBB board to my LAVA server and I want to change
"UBOOT_AUTOLOAD_MESSAGE" in constants.py. I used the "interrupt_prompt"
parameter in my job submission, but the job still took the message written in
constants.py. If I change the message in constants.py directly, it works, but I
know this is not the right way to do it. Please suggest, if anyone has an idea,
what I am doing wrong.
Below is my job:
device_type: beaglebone-black
# NFS fails on panda and arndale.
job_name: BBB smoke test

timeouts:
  job:
    minutes: 240
  action:
    minutes: 240
  connection:
    minutes: 2
priority: medium
visibility: public

metadata:
  source: https://git.linaro.org/lava-team/refactoring.git
  path: health-checks/beaglebone-black-health.yaml
  build-readme: http://images.validation.linaro.org/snapshots.linaro.org/components/lava/st…
  build-console: https://ci.linaro.org/view/lava-ci/job/lava-debian-armmp-armhf/1/console
  build-script: http://images.validation.linaro.org/snapshots.linaro.org/components/lava/st…

actions:
- deploy:
    timeout:
      minutes: 40
    to: tftp
    kernel:
      url: file:////home/pi/lava/dl/vmlinuz
    ramdisk:
      url: file:////home/pi/lava/dl/initramfs.cpio.gz
      compression: gz
      # the bootloader needs a u-boot header on the modified ramdisk
      add-header: u-boot
    modules:
      url: file:////home/pi/lava/dl/modules.tar.gz
      compression: gz
    nfsrootfs:
      url: file:////home/pi/lava/dl/jessie-armhf-nfs.tar.gz
      compression: gz
    os: debian
    dtb:
      url: file:////home/pi/lava/dl/am335x-boneblack.dtb

- boot:
    method: u-boot
    commands: nfs
    parameters:
      shutdown-message: 'INIT: Sending processes the TERM signal'
      interrupt_prompt: 'Press SPACE to abort autoboot in 10 seconds'
      interrupt_char: ' '
      send_char: False
    type: bootz
    auto_login:
      login_prompt: 'beaglebone login: '
      username: root
    prompts:
    - 'root@jessie:'
    timeout:
      minutes: 10

- test:
    timeout:
      minutes: 50
    definitions:
    - repository: git://git.linaro.org/qa/test-definitions.git
      from: git
      path: ubuntu/smoke-tests-basic.yaml
      name: smoke-tests
Hi Lava Team
I am trying to mount a directory of the worker (host computer) into the LXC
instance running on it during a job.
I have added this entry in the default LXC configuration file of the worker
computer.
The default configuration file path is /etc/lxc/default.conf, and the entry is:
lxc.mount.entry = /var/lib/nfsdir var/nfsmnt none bind,create=dir 0 0
After restarting the lxc service, I executed a LAVA job, but the worker
directory "/var/lib/nfsdir" is not mounted in the LXC instance at
"/var/nfsmnt".
However, if I manually create and start an LXC instance on the worker, the
directory "/var/lib/nfsdir" is mounted in the LXC instance.
Can anyone assist me with resolving this issue?
--
Thanks & Regards
Chetan Sharma
Hi Lava Team
Can you assist me with this use case: how can I share LXC data with the
DUT?
We have LXC and DUT tests defined in a job. The LXC tests generate
some data and logs which need to be accessed by the DUT tests.
--
Thanks & Regards
Chetan Sharma
There are delays getting packages into stretch-backports, as expected.
In the meantime, this is a reminder of how to use backports: first
start with stable.
So when installing LAVA on Stretch, even if what you want is the
latest release from production-repo or staging-repo (currently
2017.7), then your first step is to install 2016.12 from Stretch.
# apt -y install lava-server
# apt -y install vim
# wget http://images.validation.linaro.org/production-repo/production-repo.key.asc
# apt-key add production-repo.key.asc
# vim /etc/apt/sources.list.d/lava.list
# # edit the file to specify:
# #   deb http://images.validation.linaro.org/production-repo jessie-backports main
Yes, that is jessie-backports - we don't have packages in
stretch-backports at this time.
# apt update
# apt upgrade
If you specify backports too early, you'll get a range of other
packages from backports - you don't actually need jessie-backports or
stretch-backports from Debian at this time.
Packages for jessie-backports are built on jessie. Packages for
stretch-backports are built on stretch. It's the same source code in
each case. Right now, there aren't any problems with installing from
jessie-backports on stretch - particularly if you install lava-server
from stretch in the first place so that the majority of your packages
come from stretch.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Lava Team,
Can anyone assist me with these 2 queries:
1. Can we enable an interface inside the LXC to access the host network, so
that we can reach any device on the host machine's network and access the
internet inside the LXC to execute scripts?
2. Can we pass parameters in a LAVA job which propagate to the
lava-test-action job or YAML?
If possible, can you guide me through the process to perform these
actions, or share a reference job which performs these tasks?
--
Thanks & Regards
Chetan Sharma
Dear all,
Do you have any example or recommendation on how to implement the following use case in LAVA?
- a DUT running Linux
- a PC running Linux, used as a "server" for VLC or iPerf test cases
I guess MultiNode can be used to manage these use cases, but I don't know how to deal with the constraints linked to the PC: no reboot, no dedicated test kernel.
Thanks,
Denis
Hi Neil, Kevin !
> On 20 July 2017 at 02:57, Kevin Hilman <khilman at baylibre.com> wrote:
> > Hello,
> >
> > There are many configurable options when starting QEMU that are
> > controlled by environment variables, and I'm trying to figure out how to
> > pursuade LAVA v2 to set them.
>
> It's not something which should be done with the QEMU device type
> because that is installed on the worker - it is not a device that can
> be reconfigured to suit test writers and much of the functionality of
> QEMU must remain off-limits when QEMU is running on the worker.
>
> > As an example, configuring the host audio driver used by QEMU is set by
> > running QEMU with: QEMU_AUDIO_DRV=<option> qemu-system-x86_64.
>
> The worker is typically a server, it might not have any audio support.
> Even if it does, that functionality is off-limits to test writers. The
> available options to QEMU must be tightly constrained to protect other
> tasks on the worker.
I see. There is a valid point here: there should not be control of the
environment from the 'outside' through a job description.
>
> > Unfortunately, device-types/qemu.jinja2 doesn't provide anyway to
> > override the qemu-system binary used (it's conditional based on arch),
>
> This is deliberate, to protect the worker from aberrant processes and
> to give the admins sufficient control over how the worker behaves.
Now for the above case:
QEMU_AUDIO_DRV=<option> qemu-system-*
I see this as a *very valid env var* for a server-side deployment - exactly as
you said - a server does not have a sound card and QEMU_AUDIO_DRV=none is what
we need to use there to prevent qemu from looking for a host sound card. We
should even set this by default - we don't expect any sound to reach the
worker - do we ;) ? I don't think so.
IMHO this should be an option set on the worker node in
lava-dispatcher.conf (e.g. LAVA_EXTRA_QEMU_ENV="QEMU_AUDIO_DRV=none"),
enabling the admins to make that choice.
>
> No. This is not something that the worker can support. It needs to
> happen only on a test device. The worker is not itself a test device.
As I said, I think this belongs to the worker setup and we should enable the
admins to make their choice.
Dipl.-Ing.
Jan-Simon Möller
jansimon.moeller(a)gmx.de
Hello,
There are many configurable options when starting QEMU that are
controlled by environment variables, and I'm trying to figure out how to
persuade LAVA v2 to set them.
As an example, the host audio driver used by QEMU is configured by
running QEMU as: QEMU_AUDIO_DRV=<option> qemu-system-x86_64.
Unfortunately, device-types/qemu.jinja2 doesn't provide any way to
override the qemu-system binary used (it's conditional based on arch),
but even with a quick hack to allow it to be overridden[1], adding the env
var as a prefix didn't work because LAVA assumes the first item is an actual
binary expected in $PATH. My attempt led to:
Invalid job definition
Cannot find command 'QEMU_AUDIO_DRV=none qemu-system-x86_64' in $PATH
It seems like there should be a more general way to pass environment
variables to QEMU that I must be missing. If there's not, what would be the
recommended way to add this feature?
Kevin
[1]
diff --git a/lava_scheduler_app/tests/device-types/qemu.jinja2 b/lava_scheduler_app/tests/device-types/qemu.jinja2
index 786f53bdb30d..e7c265a3048b 100644
--- a/lava_scheduler_app/tests/device-types/qemu.jinja2
+++ b/lava_scheduler_app/tests/device-types/qemu.jinja2
@@ -41,7 +41,7 @@ actions:
{% elif arch == 'arm' %}
qemu-system-arm
{% elif arch == 'amd64' or arch == 'x86_64' %}
- qemu-system-x86_64
+ {{ qemu_command|default('qemu-system-x86_64') }}
{% elif arch == 'i386' %}
qemu-system-i386
{% endif %}
With that change, I added
context:
qemu_command: "QEMU_AUDIO_DRV=none qemu-system-x86_64"
to the job definition
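The $PATH error above comes from a general property of spawning processes: the "VAR=value binary" prefix is shell syntax, not part of the executable name, so anything that executes the command directly must supply the environment explicitly. A small sketch of the distinction (the qemu invocation line is hypothetical and not executed):

```python
# Why "VAR=value binary" fails outside a shell: a process spawner must
# either go through a shell, prepend env(1), or pass an env dict explicitly.
import os
import subprocess

cmd = ["qemu-system-x86_64", "-version"]      # hypothetical invocation
env = dict(os.environ, QEMU_AUDIO_DRV="none")
# subprocess.run(cmd, env=env)  # not executed here; qemu may be absent

# Demonstrate the explicit env dict with a command that is always present:
out = subprocess.run(
    ["sh", "-c", "echo $QEMU_AUDIO_DRV"], env=env,
    capture_output=True, text=True).stdout.strip()
print(out)  # none
```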
Dear LAVA community,
I want to use Image Charts 2.0 to view my LAVA job results.
I am using "lava-test-case" to check pass/fail, and I am getting results:
steps:
- lava-test-case linux-linaro-ubuntu-pwd --shell pwd
- lava-test-case linux-linaro-ubuntu-uname --shell uname -a
- lava-test-case linux-linaro-ubuntu-vmstat --shell vmstat
I want to know how to get these results into image charts. I can see it asks me to add a chart and then add a filter, but no data is available when I try to add the filter.
Similarly, if I have to use the query API, what kind of query should be used to fetch test data from a LAVA suite?
Detailed info on using image charts, or a link to it, would be appreciated, as I am new to charts and LAVA.
Thanks,
Varun
Hi everyone,
I'm trying to add my own device type to my lab, but I'm facing some
difficulties when running jobs. I get the following error log:
https://pastebin.com/Eznq6RLA
I understand the log, but I'm not able to figure out what to do: I
thought it would be enough to describe the boot/power actions in the device
type, but it seems not... Do I need to create a .conf file in the
/usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types
folder?
By the way, I'm not sure I understand the purpose of the .conf files. Are
they only there as defaults?
I attached my device type and my job; maybe they will help!
Thanks a lot ;)
P.S.: I have already run some jobs on QEMU and BBB, and read the whole
documentation.
Hi,
I'm not entirely sure this job definition is correct, but the only
error I'm getting is "Problem with submitted job data: Invalid
YAML - check for consistent use of whitespace indents.". The YAML
validates just fine, so I'm unsure what is wrong. Any hints? The YAML
in question is enclosed.
milosz
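This class of error is often a stray tab: YAML forbids tabs in indentation, and a tab can look identical to spaces in an editor. A quick local check (this assumes PyYAML is installed, and LAVA applies further schema checks beyond plain YAML parsing):

```python
# YAML forbids tabs for indentation; a document that "looks" fine in an
# editor can still be rejected if a tab snuck in.
import yaml

good = "actions:\n- deploy:\n    to: tftp\n"
bad = "actions:\n- deploy:\n\tto: tftp\n"   # tab used for indentation

yaml.safe_load(good)                        # parses fine
try:
    yaml.safe_load(bad)
    print("parsed")
except yaml.YAMLError:
    print("tab indentation rejected")
```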
Hello,
We use a Python script, LAVA_via_API, to trigger our test jobs.
I must say I don't know whether this script is a purely internal creation or whether it was inspired by a Linaro script.
Its role is simple: create a LAVA job with quite a few parameters (job name, server, worker, kernel, rootfs, dtb, device, device_type, and so on), submit the job, and fetch the results and logs.
In any case, before completely reworking this script, I assume a reference one exists in one of the Linaro git repositories. Can you tell me where to find it?
Thanks,
Denis
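For reference, LAVA exposes job submission over XML-RPC; a minimal sketch of such a script might start like this (the server URL, username, and token are placeholders, and the submit/status calls are shown only in comments):

```python
# Minimal sketch of driving LAVA over XML-RPC. The URI, username and token
# are placeholders; ServerProxy does not connect until a method is called.
import xmlrpc.client

username, token = "myuser", "mytoken"          # hypothetical credentials
uri = "https://%s:%s@validation.example.org/RPC2" % (username, token)
proxy = xmlrpc.client.ServerProxy(uri)

# Submitting a job and checking on it would then be (not executed here):
#   job_id = proxy.scheduler.submit_job(open("job.yaml").read())
#   status = proxy.scheduler.job_status(job_id)
print(type(proxy).__name__)
```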
ALWAYS keep the list in CC please.
On 7 July 2017 at 10:28, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
> Hi Neil
> Thanks for sharing detailed information to work with LXC.
> 1. I am following sample pipeline job with BBB.
> https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/pipeli…
>
> I have modified this job details with following value, but i am getting an
> Error as ['Invalid job - missing protocol']
Check with the valid job:
https://staging.validation.linaro.org/scheduler/job/178130
> As i have defined protocol as "lava-lxc" which is valid protocol, But job
> object does not have any protocol details, i have verified by printing
> details of self.job.protocols is []
Then that is an invalid job. Your modifications have got something
wrong. There are a lot of changes in your file against the example.
Change only one thing at a time.
> 2. How test action execute on lxc and device ?
Run a test action in the same namespace as the LXC. In the case of the
example, namespace: probe.
https://staging.validation.linaro.org/scheduler/job/178130/definition#defli…
> Can we execute test action in this process
> First lxc test action execute ---> Device1 test action execution start
> -> Device2 test action execution start
>
>
> ==================================================
> device_type: beaglebone-black
>
> job_name: LXC and bbb
Please attach files to emails to the list. There's no need to quote
the entire file to the list.
Take time to understand namespaces. The LXC is transparent and the
namespaces are used to tie the different actions together into one
sequence in the LXC and one sequence for the test device.
LXC protocol support is not the same as MultiNode - operations happen
in series. The LXC protocol is not a substitute for MultiNode either.
If you need parallel execution, you have to use MultiNode.
Split up your test shell definitions if necessary.
Also, attach (not include) the full test job log because that contains
details of the package versions being used and other information.
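To illustrate the namespace idea described above, a heavily trimmed job outline (the action contents are elided and the namespace names follow the linked example; this is a sketch, not a runnable job) might look like:

```yaml
actions:
- deploy:
    namespace: probe      # LXC sequence
    to: lxc
- boot:
    namespace: probe
    method: lxc
- test:
    namespace: probe      # test shell runs inside the LXC
- deploy:
    namespace: device     # separate sequence for the test device
    to: tftp
- boot:
    namespace: device
    method: u-boot
- test:
    namespace: device
```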
> On Fri, Jul 7, 2017 at 1:32 PM, Neil Williams <neil.williams(a)linaro.org>
> wrote:
>>
>> On 7 July 2017 at 07:23, Chetan Sharma <chetan.sharma(a)timesys.com> wrote:
>> > Hi Everyone
>> > Hopefully everyone is doing well in this group.
>> > Main intend of writing this email to seek assistance in regard to
>> > one
>> > feature of Lava as lava-lxc which help to create a LXC instance on
>> > worker
>>
>>
>> https://staging.validation.linaro.org/scheduler/job/174215/definition#defli…
>>
>> > and then we can execute any script on worker and propagate its
>> > characteristics and result in another actions of Job on board.
>>
>> That isn't entirely clear. What are you trying to do in the LXC?
>>
>> You need to deploy an LXC, start it with a boot action for the LXC and
>> then start a test shell in the LXC where you can install the tools you
>> need to execute.
>>
>> Talking to the device from the LXC can be difficult depending on how
>> the device is configured. To use networking, you would have to have a
>> fixed IP for the device. To use USB, you need to use the device_info
>> support in the device dictionary to add the USB device to the LXC.
>>
>> > I have gone through documentation shared on Lava V2 instance for
>> > LXC
>> > job creation, but i am not able to successfully execute job on Debian
>> > Jessie
>> > Release.
>> >
>> > https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
>> >
>> > Can you assist me and share a reference process/document to proceed
>> > further to create a job using this feature.
>>
>> More information is needed on exactly what you are trying to do, how
>> the LXC is to connect to the device and what support the device offers
>> to allow for those operations.
>>
>> --
>>
>> Neil Williams
>> =============
>> neil.williams(a)linaro.org
>> http://www.linux.codehelp.co.uk/
>
>
>
>
> --
> Thanks & Regards
> Chetan Sharma
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Everyone
I hope everyone in this group is doing well.
The main intent of this email is to seek assistance with one feature of
LAVA, lava-lxc, which helps to create an LXC instance on the worker
so that we can execute scripts on the worker and propagate their
characteristics and results to other actions of the job on the board.
I have gone through the documentation on the LAVA V2 instance for LXC
job creation, but I am not able to successfully execute a job on a Debian
Jessie release.
https://validation.linaro.org/static/docs/v2/deploy-lxc.html#index-0
Can you assist me and share a reference process/document on how to
create a job using this feature?
Looking forward to a positive response.
--
Thanks & Regards
Chetan Sharma
Hi,
I have been investigating LAVA for use in our organisation, and I'm stuck trying to get a hello-world test case running on our hardware, so I am looking for some help. We looked at the Yocto test tools, but they can only use devices with a fixed IP, which we can't guarantee (or want) during our testing, as we also test network settings. They are also limited in configuration. The LAVA package seems to meet all our requirements, but I'm still unsure how to do a few things.
We use Yocto images and boot with the GRUB bootloader.
All our devices are connected via Ethernet only, and power and peripheral switching is controlled via USB relays.
After reading through all the documentation, I'm still unsure how to set up and actually run a test on our hardware. What tools do I need to install in the test image, and how do I get LAVA to communicate with GRUB? I assume a base image is one that includes nothing but the tools and GRUB. We have a recovery partition with Tiny Core which could facilitate that, but it's not required for the automated testing.
I've used the akbennet/lava-server Docker image and it is up and running, although test jobs are scheduled but never run on the QEMU devices, so I am a little stuck there.
Basically, I need help getting LAVA to connect to one of our devices, load the image, and run tests.
Choosing the image, writing tests, and (mostly) configuring the pipeline I understand.
After 2 weeks I'm posting here hoping someone can assist me.
Regards,
Elric
Elric Hindy
Test Engineer
T +61 2 6103 4700
M +61 413 224 841
E elric.hindy(a)seeingmachines.com
W www.seeingmachines.com<http://www.seeingmachines.com>
Hi All,
I am trying to set up a remote lab using a Raspberry Pi on my local network.
I installed lava-server and a worker on my laptop, and it is working fine.
I installed Raspbian on the R-Pi and followed the instructions on the LAVA
site, but when the slave tries to connect to the master it gets no response.
I am able to ping the master from my R-Pi board, and the default port 3079 is
open on my machine.
I used no encryption and the following URLs to connect to the master:
MASTER_URL="tcp://10.42.0.24:3079"
LOGGER_URL="tcp://10.42.0.24:3079"
I continuously get log messages like:
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
INFO Waiting for the master to reply
DEBUG Sending new HELLO_RETRY message to the master (are they both running
the same version?)
Please, if anyone has an idea why I am not able to connect, please help.
Thanks,
Ankit
On Mon, 3 Jul 2017 23:50:25 +0300
Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions
> however could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
> > > jobs submit a number of tests to LAVA (via
> > > https://qa-reports.linaro.org/) for the following boards:
> > > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
> > > example of cumulative test report for these platforms:
> > > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
> > >
> > > That's really great! (Though the list of tests to run in LAVA
> > > seems to be hardcoded:
> > > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
> > >
> >
> > It is, as I wasn't really sure what to test. The build job needs to
> > prepare the test templates to be submitted to LAVA. In case of
> > zephyr each tests is a separate binary. So we end up with the
> > number of file paths to substitute in the template. Hardcoding was
> > the easiest thing to get things running. But I see no reason why it
> > wouldn't be changed with some smarter code to discover the
> > binaries. The problem with this approach is that some of these
> > tests are just build time. They have no meaning when running on the
> > board and need to be filter out somehow.
Running the build tests within the Jenkins build makes a lot of sense.
Typically, the build tests will have a different command syntax to the
runtime tests (otherwise Jenkins would attempt to run both), so
filtering should be possible. If the build tests are just a different
set of binary blobs from the runtime tests, that may need a fix
upstream in Zephyr to distinguish between the two modes.
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such
> concept, but I'd imagine it comes from the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on
> "samples", not just "tests", as sample can be interactive, etc. it
> makes sense to only build them, not run.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in
> reality, some may need to be filtered out. But then blacklisting
> would be better approach than whitelisting. And I'm not sure if
> Zephyr has concept of "skipped" tests which may be useful to handle
> hardware variations. (Well, I actually dunno if LAVA supports skipped
> tests!)
Yes, LAVA has support for pass, fail, skip, unknown.
For POSIX shell tests, the test writer just calls:
  lava-test-case <name> --result skip
For monitor tests, like Zephyr, it's down to the pattern but skip is as
valid as pass and fail (as is unknown) for the result of the matches
within the pattern.
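A trimmed sketch of a POSIX shell test definition reporting a skip (the test name and the probe condition are invented for illustration):

```yaml
run:
  steps:
    # report a skip when the device lacks the hardware under test
    - if [ -e /dev/snd ]; then lava-test-case audio --shell aplay -l; else lava-test-case audio --result skip; fi
```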
> Anyway, these are rough ideas for the future. I've spent couple of
> weeks of munging with LITE CI setup, there're definitely some
> improvements, but also a Pandora box of other ideas and improvements
> to make. I'm wrapping up for now, but hope to look again in some time
> (definitely hope to look before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and stability of them improves (so far they seem
> to be pretty flaky).
There are known limitations with the USB subsystem and associated
hardware across all architectures, affecting test devices and the
workers which run the automation. LAVA has to drive that subsystem very
hard for both fastboot devices and IoT devices. There are also problems
due to the design of methods like fastboot and some of the IoT support
which result from a single-developer model, leading to buggy
performance when used at scale and added complexity in deploying
workarounds to isolate such protocols in order to prevent interference
between tests. The protocols themselves often lack robust error
handling or retry support.
Other deployment methods which rely on TFTP/network deployments are
massively more reliable at scale, so comparing reliability across
different device types is problematic.
> []
>
> > > - test:
> > > monitors:
> > > - name: foo
> > > start: ""
> > > end: Hello, ZJS world!
> > > pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
> > >
> > > So, the "start" substring is empty, and perhaps matches a line
> > > output by a USB multiplexer or board bootloader. "End" substring
> > > is actually the expected single-line output. And "pattern" is
> > > unused (dunno if it can be dropped without def file syntax
> > > error). Is there a better way to handle single-line test
> > > output?
> >
> > You're making a silent assumption that if there is a matching line,
> > the test is passed. In case of other tests (zephyr unit tests), it's
> > not the case. The 'start' matches some line which is displayed when
> > zephyr is booting. End matches the line which is displayed after all
> > testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't challenge
> that Zephyr unittests need this support, or the way they're handled.
> LITE however needs to test more things than "batch" Zephyr unittests.
> I present another usercase which albeit simple, barely supported by
> LAVA. (That's a question to LAVA team definitely.)
LAVA result handling is ultimately a pattern matching system. Patterns
must have a unique and reliable start string and a unique and reliable
end string. An empty start string is just going to cause misleading
results and bad pattern matches as the reality is that most boards emit
some level of random junk immediately upon connection which needs to be
ignored. So there needs to be a reliable, unique, start string emitted
by the test device. It is not enough to *assume* a start at line zero,
doing so increases the problems with reliability.
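As a concrete illustration of what such a monitor pattern captures, the regex from the quoted job can be applied to one line of invented console output (LAVA's own matching machinery is of course more involved):

```python
# The monitor 'pattern' from the quoted test definition, applied to a
# hypothetical line of device output to show the named groups.
import re

pattern = re.compile(r"(?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.")
line = "PASS - kernel_boot."          # invented console output
m = pattern.search(line)
print(m.group("result"), m.group("test_case_id"))  # PASS kernel_boot
```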
>
> > > Well, beyond a simple output matching, it would be nice even for
> > > the initial "smoke testing" to actually make some input into the
> > > application and check the expected output (e.g., input: "2+2",
> > > expected output: "4"). Is this already supported for LAVA "v2"
> > > pipeline tests? I may imagine that would be the same kind of
> > > functionality required to test bootloaders like U-boot for Linux
> > > boards.
> >
> > I didn't use anything like this in v2 so far, but you're probably
> > best off doing sth like
> >
> > test 2+2=4 PASS.
> >
> > than you can easily create pattern that will filter the output. In
> > case of zephyr pattern is the only way to filter things out as there
> > is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2") into a board.
That will need code changes, so please make a formal request for this
support at CTT
https://projects.linaro.org/servicedesk/customer/portal/1 so that we
can track exactly what is required.
>
> As there were no reply from LAVA team (I may imagine they're busy with
> other things), I decided to create a user story in Jira for them, as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully that won't go
> unnoticed and LAVA team would get to it eventually.
That JIRA story is in the LITE project. Nobody in the LAVA team can
manage those stories. It needs a CTT issue which can then be linked to
the LITE story and from which a LAVA story can also be linked.
Sadly, any story in the LITE project would go completely unnoticed by
the LAVA software team until it is linked to CTT so that the work can
be prioritised and the relevant LAVA story created. That's just how
JIRA works.
>
> >
> > milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
> _______________________________________________
> linaro-validation mailing list
> linaro-validation(a)lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/linaro-validation
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/
Hi,
On 2017-07-04 15:18, Agustin Benito Bethencourt wrote:
> Dear CIP friends,
>
> please check below. This is a use case we will meet in a few
> weeks/months. It is important to see others walking the same route.
My simple tests are running now thanks to the help of the nice people
from the #linaro-lava chat.
1) Crazy me decided to use upstream U-Boot 2017.05 instead of running
some ancient version from 2014 ;)
1.1) It happens to have a different AUTOBOOT_PROMPT than the one LAVA
expects: "Press SPACE to abort autoboot in %d seconds\n" instead of "Hit
any key to stop autoboot". Since I would like to stay as close as
possible to upstream LAVA 2017.06, I patched u-boot[1]. Note that this
could be fixed in LAVA as well: interrupt_prompt: {{
interrupt_prompt|default('Hit any key to stop autoboot') }}
1.2) The SYS_PROMPT of upstream U-Boot is also different from the one
expected by LAVA, and again I made a u-boot patch[2]. Note that this
could be fixed in LAVA as well: {% set bootloader_prompt =
bootloader_prompt|default('U-Boot') %}
2) After some searching it turned out that LAVA sets some funny variables
in U-Boot which made my kernel crash (crazy me decided to use a 4.9.x
uImage without a baked-in load address).
Adding this:
{% set base_high_limits = false %}
to my bbb03.jinja2 file fixed it.
... obviously ;)
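Pulling the pieces above together, the LAVA-side alternative to patching U-Boot would be to override those defaults in the device dictionary, along these lines (a sketch; the base template name and file layout are assumptions):

```jinja2
{# hypothetical bbb03.jinja2 combining the overrides mentioned above #}
{% extends 'beaglebone-black.jinja2' %}
{% set interrupt_prompt = 'Press SPACE to abort autoboot in 10 seconds' %}
{% set bootloader_prompt = 'U-Boot' %}
{% set base_high_limits = false %}
```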
Regards,
Robert
[1]
https://github.com/RobertBerger/meta-mainline/blob/pyro-training-v4.9.x/u-b…
[2]
https://github.com/RobertBerger/meta-mainline/blob/pyro-training-v4.9.x/u-b…
Hello Milosz,
Thanks for routing this thread to lava-users - when I made the initial post
to linaro-validation, I checked my archive and saw that e.g. Neil posts
there frequently, but I missed that it's not the official LAVA list.
On Mon, 3 Jul 2017 22:25:31 +0100
Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
[]
> > So, I'm not exactly sure about build-only tests on real HW boards.
> > The "default" idea would be that they should run, but I imagine in
> > reality, some may need to be filtered out. But then blacklisting
> > would be better approach than whitelisting. And I'm not sure if
> > Zephyr has concept of "skipped" tests which may be useful to handle
> > hardware variations. (Well, I actually dunno if LAVA supports
> > skipped tests!)
>
> As far as I can tell they actually run on the board, but usually output
> just 'Hello world!' or something similar. As we discussed with Kumar,
> this is still OK. What Kumar requested (and I still didn't deliver) is
> that whenever the LAVA test job completes, the test should be considered
> 'passed'. So we wouldn't have to do any parsing of patterns. I'm not
> sure if that will work, but it's worth a try.
Hmm, I wonder what the criteria for being "failed" would be for such
tests... Anyway, thanks for sharing - I'm not familiar with all the
Zephyr tests/samples myself; I'll keep such issues in mind when looking
into them.
[]
> > more boards will be installed in the Lab and stability of them
> > improves (so far they seem to be pretty flaky).
> >
>
> You're absolutely right. This is a pretty big task to work on and IMHO
> requires someone to work full time at least for couple of weeks. The
> second part is also true, the boards don't behave as they should. I
> guess Dave can elaborate more on that. I can only see the result -
> boards (frdm-kw41z) don't run the tests they're requested.
Matt Hart actually showed me a ticket on that, so at least it's a
confirmed/known issue being worked on. But even with arduino_101 and
frdm_k64f, I hit cases more than once where boards were stuck for an
extended time but still had jobs routed to them (which either failed or
timed out). So there may be a problem with the health checks, which
either don't run frequently enough or aren't robust enough. arduino_101
is all alone, so if something happens to it, there's no backup. Etc,
etc.
[]
> > So, the problem, for starters, is how to make LAVA *feed* the
> > input, as specified in the test definition (like "2+2") into a
> > board.
>
> Right. What I proposed was coding all the inputs in the test itself.
Well, that would require a bunch of legwork, but the biggest problem is
that it wouldn't test what's actually required. E.g. both the JerryScript
and MicroPython Zephyr ports are actually interactive apps working over a
serial connection, and functional testing of them would mean feeding
something over this serial connection and checking that the results are
as expected. I'll keep the idea of "builtin" tests in mind though.
Thanks!
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
I was too quick to hit reply. CCing lava-users for comments from LAVA team.
milosz
On 3 July 2017 at 21:50, Paul Sokolovsky <paul.sokolovsky(a)linaro.org> wrote:
> Hello Milosz,
>
> I appreciate getting at least some response ;-). Some questions however
> could use a reply from LAVA team, I guess.
>
> On Mon, 3 Jul 2017 13:34:49 +0100
> Milosz Wasilewski <milosz.wasilewski(a)linaro.org> wrote:
>
> []
>
>> > jobs submit a number of tests to LAVA (via
>> > https://qa-reports.linaro.org/) for the following boards:
>> > arduino_101, frdm_k64f, frdm_kw41z, qemu_cortex_m3. Here's an
>> > example of cumulative test report for these platforms:
>> > https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
>> >
>> > That's really great! (Though the list of tests to run in LAVA seems
>> > to be hardcoded:
>> > https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
>> >
>>
>> It is, as I wasn't really sure what to test. The build job needs to
>> prepare the test templates to be submitted to LAVA. In the case of
>> zephyr, each test is a separate binary. So we end up with a number of
>> file paths to substitute in the template. Hardcoding was the easiest
>> thing to get things running. But I see no reason why it couldn't be
>> replaced with some smarter code to discover the binaries. The problem
>> with this approach is that some of these tests are build-time only.
>> They have no meaning when running on the board and need to be filtered
>> out somehow.
>
> I see, that makes some sense. But thinking further, I'm not entirely
> sure about "build only" tests. Zephyr's sanitycheck test has such
> concept, but I'd imagine it comes from the following reasons: a)
> sanitycheck runs tests on QEMU, which has very bare hardware support,
> so many tests are not runnable; b) sanitycheck can operate on "samples",
> not just "tests", as sample can be interactive, etc. it makes sense to
> only build them, not run.
>
> So, I'm not exactly sure about build-only tests on real HW boards. The
> "default" idea would be that they should run, but I imagine in reality,
> some may need to be filtered out. But then blacklisting would be better
> approach than whitelisting. And I'm not sure if Zephyr has concept of
> "skipped" tests which may be useful to handle hardware variations.
> (Well, I actually dunno if LAVA supports skipped tests!)
>
> Anyway, these are rough ideas for the future. I've spent a couple of
> weeks munging with the LITE CI setup; there are definitely some
> improvements, but also a Pandora's box of other ideas and improvements to
> make. I'm wrapping up for now, but hope to look again in some time
> (definitely hope to look before the Connect, so we can discuss further
> steps there). In the meantime, I hope that more boards will be
> installed in the Lab and stability of them improves (so far they seem
> to be pretty flaky).
>
> []
>
>> > - test:
>> > monitors:
>> > - name: foo
>> > start: ""
>> > end: Hello, ZJS world!
>> > pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
>> >
>> > So, the "start" substring is empty, and perhaps matches a line
>> > output by a USB multiplexer or board bootloader. "End" substring is
>> > actually the expected single-line output. And "pattern" is unused
>> > (dunno if it can be dropped without def file syntax error). Is
>> > there a better way to handle single-line test output?
>>
>> You're making a silent assumption that if there is a matching line,
>> the test is passed. In case of other tests (zephyr unit tests), it's
>> not the case. The 'start' matches some line which is displayed when
>> zephyr is booting. End matches the line which is displayed after all
>> testing is done. The pattern follows the unit test pattern.
>
> Thanks, but I'm not sure I understand this response. I don't challenge
> that Zephyr unittests need this support, or the way they're handled.
> LITE however needs to test more things than "batch" Zephyr unittests. I
> present another use case which, albeit simple, is barely supported by
> LAVA. (That's definitely a question for the LAVA team.)
>
>> > Well, beyond a simple output matching, it would be nice even for the
>> > initial "smoke testing" to actually make some input into the
>> > application and check the expected output (e.g., input: "2+2",
>> > expected output: "4"). Is this already supported for LAVA "v2"
>> > pipeline tests? I may imagine that would be the same kind of
>> > functionality required to test bootloaders like U-boot for Linux
>> > boards.
>>
>> I didn't use anything like this in v2 so far, but you're probably best
>> off doing sth like
>>
>> test 2+2=4 PASS.
>>
>> then you can easily create a pattern that will filter the output. In
>> the case of zephyr, a pattern is the only way to filter things out as
>> there is no shell (?) on the board.
>
> So, the problem, for starters, is how to make LAVA *feed* the
> input, as specified in the test definition (like "2+2") into a board.
>
> As there was no reply from the LAVA team (I imagine they're busy with
> other things), I decided to create a user story in Jira for them; as I
> couldn't create a LAVA-* ticket, I created it as
> https://projects.linaro.org/browse/LITE-175 . Hopefully it won't go
> unnoticed and the LAVA team will get to it eventually.
>
>>
>> milosz
>
> Thanks!
>
> --
> Best Regards,
> Paul
>
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi all,
If a device has already booted successfully, can I skip the deploy and boot steps in job.yaml?
This email is intended only for the named addressee. It may contain information that is confidential/private, legally privileged, or copyright-protected, and you should handle it accordingly. If you are not the intended recipient, you do not have legal rights to retain, copy, or distribute this email or its contents, and should promptly delete the email and all electronic copies in your system; do not retain copies in any media. If you have received this email in error, please notify the sender promptly. Thank you.
Hello LAVA experts,
I am currently working with LAVA and was asked to find out whether there is a way to get data from the device dictionary inside a running job. Details below:
I have a device named demo-01, and its device dictionary contains a line like "{% set my_property = 'my_prop' %}". I then have a job running on the demo-01 device, and I would like to pass the string 'my_prop' into a script during the run. Is it possible to get device dictionary data directly from the job definition (Job Submitter webpage) or the test definition (yaml file)? If yes, how can I do this? If not, is there a good way to solve this problem that you could share?
Thanks and Best Regards,
Yifan
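Not an answer from the LAVA side, but in case it helps: the jinja2 `set` lines in a device dictionary are plain text, so a small helper outside LAVA can extract such a value and hand it to a script by other means (e.g. an environment variable or a job parameter). This is a sketch under the assumption that the raw dictionary text is readable; the helper name and the whole approach are assumptions, not anything LAVA provides:

```python
import re

def read_device_property(dictionary_text, name):
    """Pull a value out of a "{% set name = 'value' %}" line of a device
    dictionary. Workaround sketch, not a LAVA API: assumes access to the
    raw jinja2 dictionary text."""
    pattern = rf"{{%\s*set\s+{re.escape(name)}\s*=\s*'([^']*)'\s*%}}"
    m = re.search(pattern, dictionary_text)
    return m.group(1) if m else None
```

For the example above, `read_device_property("{% set my_property = 'my_prop' %}", "my_property")` returns `'my_prop'`.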
Hi all,
Is there any limit on test duration when using LAVA?
Recently I found that my tests were automatically canceled after running for 24 hours, and I could not find any clue about why the job was canceled. Can anybody give me some help?
(Sorry, my LAVA version is V1.)
12794.0 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM INFO: Cancel operation
12794.1 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM DEBUG: finally status fail
12794.2 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: [ACTION-E] boot_image is finished with error (Cancel).
12794.3 <LAVA_DISPATCHER>2017-06-25 02:31:42 PM WARNING: Target : power_off being call
Thanks.
Hi,
We've hit an issue when running lava-master on Debian Jessie and
lava-slave on Debian Stretch, after a few minutes the slave would
stop working. After some investigation, it turned out to be due
to a difference of the libzmq versions in Jessie (4.0.5+dfsg-2)
and Stretch (4.2.1-4) causing some protocol errors.
The line that detects the error in Stretch is:
https://github.com/zeromq/libzmq/blob/7005f22726d4a6ca527f27560a0a132394fdb…
This appears to be due to how the "once" counter gets written
into memory and into the zmq packets: the libzmq version from
Jessie uses memcpy whereas the one in Stretch calls put_uint64.
As a result the byte endianness has changed from little to big,
causing the packets to work until "once" reaches 255 which
translates into 0xff << 56, after which it overflows to 0 and
causes the error.
This is not a LAVA bug as such, rather a libzmq one, but it
impacts interoperability between Jessie and Stretch for LAVA so
it may need to be documented or resolved somehow. We've
installed the new version of libzmq onto our Jessie servers to
align them with Stretch; doing this does fix the problem.
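For illustration, the mismatch described above is easy to reproduce with Python's struct module standing in for memcpy (host order, little-endian on x86) versus put_uint64/get_uint64 (network order, big-endian); this is a sketch of the effect, not the libzmq code itself:

```python
import struct

counter = 255  # the "once" counter value at which the error showed up
# Jessie's libzmq effectively wrote the 64-bit counter in host order
# (little-endian on x86) via memcpy:
wire = struct.pack("<Q", counter)        # b'\xff\x00\x00\x00\x00\x00\x00\x00'
# Stretch's libzmq reads it back in network order (big-endian):
misread = struct.unpack(">Q", wire)[0]
print(hex(misread))                      # 0xff00000000000000, i.e. 0xff << 56
```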
Best wishes,
Guillaume
Hi All,
We have a Lab, and the DUTs in the Lab will be used for both automation testing (LAVA) and manual usage (development, debugging, manual testing).
We will develop a tool for the manual usage; please find the basic requirements of the tool in the attachment.
I list possible solutions for how to develop the lab tool and let it cooperate with LAVA on the same Lab hardware. Which one is better? Could you give your suggestions?
1. Develop the lab tool based on the LAVA framework.
Need to modify the LAVA DB: add new tables for network ports, port attributes, port connections, usage logs and notes; also add columns to the existing lava_scheduler_app_device table.
Need to modify the LAVA command line, adding functions like "lava-server reserve dut1" and "lava-server connect dut1".
Need to add new code for the features LAVA doesn't support; part of the code may be reusable, and I need to look into LAVA to check how to reuse it.
The tool would be developed based on https://github.com/Linaro/lava-server and installed on the LAVA master, right?
Most probably we would maintain the code changes in a local repository, because it is difficult to upstream thousands of lines of changes to the Linaro master repository; we would then need to merge changes from the master repository into our local one.
2. Develop the lab tool as a separated tool and use separated DB. There are two ways to avoid the DUT conflicts between the lab tool and LAVA:
a) The lab tool DB maintains all DUTs and the LAVA DB also maintains all DUTs; when a user wants to do automation testing on a DUT, he needs to reserve it via the lab tool in advance, and can then run automation testing on the specified DUT.
b) Divide the DUTs into two groups: one group for automation testing, added to the LAVA DB; the other for manual usage, added to the lab tool DB.
Another question is how to link/unlink two network ports dynamically (see requirement #7 in the attachment) during automation testing. I am not sure whether LAVA can support this feature later; the simple way is to support it in the test script:
Do the link/unlink in the test script: subprocess.Popen(["lab-tool","link","dut1.eth_1g_1","dut2.eth_1g_1"]), with the test script getting the DUT names from LAVA in advance.
Does this work?
BR
Yongtao
Hi,
I am trying to run a multinode test job
<https://staging.validation.linaro.org/scheduler/job/175378/definition> in
LAVA v2.
It failed <https://staging.validation.linaro.org/scheduler/job/175378#L555>.
After a short discussion with Chase, I realized that if I want to run a
multinode test job I have to use the "lava-multinode" protocol.
However, I still need "lava-lxc" to deploy the test image for hikey.
I am wondering if it is possible to use two protocols, lava-lxc for
deploying the test image and lava-multinode to run the test, in the same
test job?
If yes, could you please provide some examples?
Thanks,
Arthur
Hi,
I know device_tags can be added via the web UI; is it possible to add device_tags from the command line as well?
Best Regards
XuHongyu
Hello, Neil:
1. Can you give me a deploy flow for an x86 device?
2. I want to use the web UI to submit N jobs at one time (N > 10).
How can I do it?
Thanks.
Amanda
Hi all,
Can multiple databases be used with LAVA? And is there any introduction to this?
Best Regards
XuHongyu
Hi,
Someone with the nickname "Nurav" pinged the #linaro-lava IRC channel
last week regarding a problem installing LAVA on jessie using the
jessie-backports repository. The person also diligently followed up with
me over private messages to check what was wrong with his/her
installation. I found some time to do the testing today. Since I do not
know any contact details for 'Nurav', I am writing my findings here,
based on LAVA installations I did on fresh jessie containers; hopefully
the person is on this mailing list and will see this message.
The fresh installation went fine, both with plain jessie and with
jessie-backports plus the LAVA production-repo. I've put up the complete
logs of the installations I tried in the following files:
* Fresh installation of lava from jessie-backports -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation.…
* Installing 2017.5 lava from jessie to jessie-backports to
production-repo -
http://people.linaro.org/~senthil.kumaran/logs/lava-jessie-bp-installation-…
HTH. Thank You.
--
Senthil Kumaran
http://www.stylesen.org/ - http://www.sasenthilkumaran.com/
In LAVA V1, we could use `target: {device hostname}` to submit a job to one specific device, but I find that V2 does not support this.
------------------
Best Regards.
Bo Wang
Hi,
warning: long mail
CIP is a Linux Foundation[1] initiative to create a commodity Linux-based
platform for railway control systems, power plants, etc., which needs to
be maintained for a very long time; in extreme cases, for 50 years. The
first outcome is the CIP kernel, based on 4.4 LTS and maintained for now
by Ben Hutchings, a colleague of mine.
This week, within the CIP (Civil Infrastructure Platform) project, we
have published a VM where we have integrated LAVAv2 and KernelCI on
Debian so any developer can test a specific kernel in a board attached
to his/her laptop[2]. You have probably read in this mailing list some
questions coming from some colleagues of mine from Codethink.
Since the project is initially focused on the CIP kernel, it was natural
to select LAVA and KernelCI as our default tools. We would like to see
our CIP kernel as part of kernelci.org in the near future. We are moving
slowly though, since CIP is still a young project with very limited
resources; but for now, due to the very unique requirements CIP needs to
address[3], safety-critical requirements among them, we need absolute
control of the testing infrastructure/service we will use.
As a previous step towards building our own testing/reporting
infrastructure and service, understanding LAVAv2 and promoting it among
the developers involved in this initiative is essential. I think that
B@D will be useful for this purpose, allowing us to start testing and
sharing the results among CIP devs. Codethink has invested a significant
amount of effort in creating a step by step deployment-configure-test
guide[4]. Any feedback is welcome.
B@D is meant to significantly reduce the entry-level effort to use both
LAVA and KernelCI. We consider B@D a downstream project of LAVAv2 and
KernelCI. Hopefully, once we start using it within CIP we will be able
to provide meaningful feedback on this list.
The team behind B@D is subscribed to this list and the #linaro-lava IRC
channel. We have our own mailing list, cip-dev, for general support of
CIP-related topics, so B@D too, but our idea is to route users here for
LAVA-specific questions that are unrelated to the setup of the
environment, and to participate in supporting them to the best of our
knowledge, if you think that is a good idea. We are unsure yet about the
level of success we will have with this effort though.
Since probably for many of you this is the first news you get about CIP
and B@D, feel free to ask me or my colleagues. We will do our best to
answer your questions.
You can get all the technical information about B@D at its feature page[5].
Feel free to download it[6]. The integration scripts are located in
gitlab.com under AGPLv3 license[7]. Our cip-dev mailing list is
obviously an open one[8].
I would like to finish thanking the devs for the great work done in LAVA
and to those of you who have helped the Codethink guys to get to this
point. LAVAv2 and KernelCI are complicated beasts, hard to swallow
without support, and at the same time, very useful. I guess that is part
of the beauty.
[1] https://www.cip-project.org/about
[2] https://www.cip-project.org/news/2017/05/30/bd-v0-9-1
[3] https://wiki.linuxfoundation.org/_media/cip/osls2017_cip_v1_0.pdf
[4]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[5]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboar…
[6] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipdownload
[7] https://gitlab.com/cip-project/cip-testing/board-at-desk-single-dev
[8] https://lists.cip-project.org/mailman/listinfo/cip-dev
Best Regards
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito(a)codethink.co.uk
Hi,
I've hit an issue after upgrading LAVA on Debian Stretch from
2017.5-6486.59742fe-1 to 2017.5-6509.d71f267-1: trying to add or
update device dictionaries using the "lava-tool
device-dictionary" command would then always fail.
Between these two revisions, the following change was introduced:
commit ae4d4776ca7b7454f5406159226e3c9327dd207f
Author: Rémi Duraffort <remi.duraffort(a)linaro.org>
Date: Tue May 2 15:22:33 2017 +0200
LAVA-757 Move device dictionaries to file system
The device dictionaries are now saved in:
/etc/lava-server/dispatcher-config/devices
which was previously installed with the root user. The LAVA
server instance runs under the "lavaserver" user, which didn't
have write access to this directory, so I had to manually
change the permissions, and then it all worked.
Could you please add a step in the Debian package installation to
change the permissions and ownership of the files in
/etc/lava-server/ to fix this issue?
Note: It may also be a good idea to propagate the IOError into
the xmlrpclib.Fault error message rather than replacing it with a
rather mysterious "Unable to store the configuration for %s on
disk". I had to do this to find out the root cause of the error;
I can send a patch, let me know.
Best wishes,
Guillaume
Hey all,
I'm currently hitting an interesting issue; We're deploying debian-
based images in our tests which have most of / mounted read-only.
Unfortunately the default deployment data for Debian indicates that the
various lava-test-shell related directories show live in /lava-xxxxx to
which in our booted image ofcourse nothing can write.
It would be good to give test writers the freedom to specify a sensible
base-path for lava's usage to avoid these types of issues. The option
to add a Debian variant to deployment data on our setup is somewhat
less attractive as that requires server configuration. (Changing the
images to mount / as read-write is not an option as that would defeat
the point of the test).
--
Sjoerd Simons
Collabora Ltd.
Hi,
With the May delivery, we can now define a timeout for each test of a job.
Some of our tests are not executed correctly and the timeout then expires.
But we want to execute the next test rather than stop the job with an incomplete verdict.
How can we configure this behavior in the yaml?
See following example of yaml used
Thanks in advance for your answer
Regards
Florence ROUGER-JUNG | TINA: 166 7356 | Tel: +33 2 44 02 73 56 | Mobile: +33 6 13 49 38 02
STMicroelectronics | MCD
Auriga Building | 9-11, rue Pierre-Félix Delarue | 72100 Le Mans | FRANCE
I currently have an issue starting lava-publisher. Whenever I run `python manage.py lava-publisher` I get this output log:
2017-05-22 20:10:28,061 INFO publisher Dropping priviledges
2017-05-22 20:10:28,061 ERROR publisher Unable to lookup the user or the group
2017-05-22 20:10:28,062 ERROR publisher Unable to drop priviledges
I have the same binaries on my own workstation, where I am able to connect to lava-publisher successfully; there the output log reads:
2017-03-27 19:34:43,909 INFO publisher Dropping priviledges
2017-03-27 19:34:43,909 DEBUG publisher Switching to (lavaserver(126), lavaserver(133))
2017-03-27 19:34:43,909 INFO publisher Creating the input socket at ipc:///tmp/lava.events
2017-03-27 19:34:43,910 INFO publisher Creating the publication socket at tcp://*:5500
2017-03-27 19:34:43,910 INFO publisher Creating the additional sockets:
2017-03-27 19:34:43,910 INFO publisher Starting the proxy
I followed the same installation steps from the same repository on both machines. I suspect this has to do with the user or group not being stored in the database, but I am not sure.
BTW: the lava server/dispatcher both work well on both machines; I just want to have an event notification client working on my second machine.
Thank you.
- Randy
Hello,
I am trying to set up Android tests on a B2260 board using LAVA V2
(version 2017-05).
I have read the LAVA documentation about Android tests and investigated
existing Android jobs and devices for which Android tests have been
developed.
My questions:
* Is it essential to use a Linux container to perform Android tests?
* Do you have device-types, devices and jobs that would help me achieve
my Android tests?
Best regards
Philippe
Hello,
Until now I've installed extra test packages from embedded test scripts.
Using the latest V2 version, I'm trying to manage this in my test jobs,
e.g.:
install:
deps:
- python-pytest
- python-lxml
- packagegroup-core-buildessential
- phoronix-test-suite
At execution time, I get an "Error: OPKG package manager not found in the path".
Does this mean that OPKG is the only supported installer? Or is it the default one, meaning that I can select DPKG or aptitude instead?
Best regards,
Denis
Hello Team,
This is just a request from a LAVA user.
It would be better if LAVA supported static IP configuration for deploy
and other operations with the target board.
For users who don't have a DHCP setup, this would be very useful for
working with a target locally.
I hope you will consider this request and support static IP for LAVA
users as well.
Regards,
Guru
Hi Everyone,
In the Beaglebone-Black Health Check, `bbb_debian_ramdisk_test.yaml`,
located in the Linaro master repository
(https://git.linaro.org/lava-team/refactoring.git), there are the
following lines in the "action:" block:
---
kernel:
url:
http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
ramdisk:
url:
http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
compression: gz
# the bootloader needs a u-boot header on the modified ramdisk
add-header: u-boot
modules:
url:
http://snapshots.linaro.org/components/lava/standard/debian/jessie/armhf/4/…
compression: gz
---
How is the `initramfs.cpio.gz` generated? KernelCI's build.py script
doesn't generate it. None of the Lava scripts generate it, yet it is
required to perform the boot test of a kernel on the Beaglebone Black. I
can't find it mentioned anywhere in the documentation either.
How did you generate this so it is compatible with the BBB? We want to
follow Linaro's standards, guidelines and recommendations as close as we
can, but this particular part seems to be missing.
Any help you can offer would be greatly appreciated.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Everyone,
This week one of my teammates discovered that storage was very low on
our LAVA Server. After he investigated, he found that
/var/lib/lava/dispatcher gradually increases in size. He realized that
when a test is run, the files are accumulated in
/var/lib/lava/dispatcher/slave/tmp for each job. However, they are never
removed.
Does LAVA have a setting or does it have some kind of automation that
will remove tests, say, after X days or by some other criteria or, do we
need to remove them manually?
I appreciate any guidance you can offer.
Thank you!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Feature request:
Please add an option to "lava-server manage device-types" to add an
alias. Currently this can be done from the django interface, but the
command-line interface is much more automation friendly and
administrator friendly.
Thanks,
Kevin
Hi , we're attempting to use lava-ci to submit a test to lava, I've
cloned it from
https://github.com/kernelci/lava-ci.git
But when I attempt to submit a simple test
../lava-job-runner.py --username lavauser --token ... --server http://localhost:8080/RPC2
I get
Connecting to Server...
Connection Successful!
connect-to-server : pass
Gathering Devices...
Gathered Devices Successfully!
Gathering Device Types...
Gathered Device Types Successfully!
Submitting Jobs to Server...
but I don't see any submitted jobs in the LAVA v2 web interface. Is there
anything obvious elsewhere I should be checking? Or does the absence of a
'Submitted Jobs Successfully' message, if there should be one, mean
nothing has been submitted?
Robert
Hi,
In LAVA v1, one could declare login commands to be run after logging in and
before starting any of the tests. For example:
"actions": [
{
"command": "deploy_image",
"parameters": {
"image": "https://some-url-to-a-rootfs.ext4.img.gz",
"bootfstype": "ext2",
"login_commands": [
"sudo su"
],
"login_prompt": "login:",
"username": "username",
"password_prompt": "Password:",
"password": "password"
}
},
In this case, "sudo su" is needed to open a regular user session and inherit
the user's environment while also having root privileges.
In LAVA v2, there isn't a way to do anything like this directly. One
could define a test with inline commands, but this is not ideal. The login
commands are not a test but part of how the job sets up the environment in
which the tests are run - i.e. part of the initial conditions. It's also
quite a convoluted and lengthy way of running some commands, and it relies
on the side effects of that "login commands test" persisting when running
subsequent tests.
So I've made a couple of patches to see how this could be implemented in LAVA
v2 with an extra parameter in auto_login:
https://review.linaro.org/#/q/topic:T5703-login-commands
For example:
- boot:
method: qemu
auto_login:
login_prompt: 'login:'
username: user
password_prompt: 'Password:'
password: password
login_commands:
- sudo su
Essentially, this makes auto_login more flexible. At the moment, after logging
in auto_login sets the shell prompt: this is already some kind of hard-coded
login command. Some jobs need to run other things such as "sudo su" to stick
to the same example.
Another login command we've used is "systemctl --failed" to show if any systemd
units (services) failed to load during boot.
Notes from the Gerrit reviews:
* The login commands can't be part of a device definition as they are not
related to the device hardware or the boot configuration. For example, when
running Android, one would not want to run "sudo su" but maybe "setprop ..."
or something else - to be defined in each job.
* The login commands should not be fixed in a given distro / userspace
configuration as each job may need to set up a different initial environment.
For example, some jobs may need to be run with a regular user and would not
use the "sudo su" login command.
* Some documentation and unit tests would need to be added for this to be
merged. This is to first discuss the idea and review the code changes.
Any thoughts? Would it make sense to add this feature or maybe implement it
differently?
Best wishes,
Guillaume
Hi Everyone,
I have a co-worker who wants to use our Kernel CI & LAVA virtual
machine. He wants to boot the VM, log in, and run a command that
downloads a kernel and then tests multiple defconfigs and multiple
versions of the Linux kernel. What is the best tool for this (lava-ci,
lava-tool, or a different tool)?
Can you point me to some examples of the tool you recommend?
Any help you can offer would be greatly appreciated.
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hello,
I'm trying to set one timeout per test in a job. To do so, I'm declaring one test block per test.
Unfortunately, it seems that only the first timeout declaration is taken into account. Did I miss something in my job definition?
Best regards,
Denis
Dear all,
This is my first post on the mailing list, I hope I'm at the right place.
Using Lava V2, I'm trying to install packages in the DUT following the guidelines from
https://validation.linaro.org/static/docs/v2/writing-tests.html#test-defini…
My job looks like this:
metadata:
(...)
install:
  sources:
  - http://<local_network_package_server>/sti
  - http://<local_network_package_server>/all
  - http://<local_network_package_server>/cortexa9hf-neon
  deps:
  - python-pytest
  - python-lxml
  - packagegroup-core-buildessential*
run:
  steps:
  - step1
  - step2
parse:
  pattern: "^(?P<test_case_id>\\w+) RESULT:(?P<result>(pass|fail|unknown))"
  fixupdict:
    FAILED: fail
    SUCCESS: pass
    ABORTED: unknown
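As an aside, the parse pattern above can be checked in isolation with plain Python; a minimal sketch (the sample output lines are hypothetical):

```python
import re

# Pattern from the job definition above (single backslash once unescaped from YAML).
PATTERN = r"^(?P<test_case_id>\w+) RESULT:(?P<result>(pass|fail|unknown))"

match = re.match(PATTERN, "aio_stress RESULT:pass")
assert match is not None
print(match.group("test_case_id"), match.group("result"))  # aio_stress pass

# Note: a line such as "aio_stress RESULT:FAILED" does NOT match this pattern,
# because the result group only accepts pass/fail/unknown. As written, the
# fixupdict entries (FAILED/SUCCESS/ABORTED) would therefore never be applied.
assert re.match(PATTERN, "aio_stress RESULT:FAILED") is None
```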
Running this test, I get the following error:
<LAVA_TEST_RUNNER>: started
<LAVA_TEST_RUNNER>: looking for work in /lava-3715/0/lava-test-runner.conf-1484266027
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS under lava-test-shell...
<LAVA_TEST_RUNNER>: running 0_TC_BENCH_DISK_AIO_STRESS installer ...
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 5: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 6: lava-add-sources: command not found
/lava-3715/0/tests/0_TC_BENCH_DISK_AIO_STRESS/install.sh: line 7: lava-add-sources: command not found
Error: OPKG package manager not found in the path.
It seems lava-add-sources is not copied to the target. Do I understand the log correctly?
Best regards,
Denis
Hello Team,
My name is Guru. I am very new to LAVA and very much interested in using
it for automated deployment and testing of embedded Linux boards.
I tried to set up LAVA for a BBB device. I followed the steps below:
> 1. Installed Debian on a VM, plus lava-server and its components
(jessie-backports, 2016).
> 2. Just for understanding purposes I tried to add a KVM job; it loaded
successfully.
> 3. Now I am trying to add the BBB device to LAVA.
> 4. For that I added the BBB device to the dispatcher. The conf file is
below:
> name: beaglebone-black01.conf
> content:
> device_type = beaglebone-black
> hostname = beaglebone-black01
> connection_command = telnet localhost 2003
> hard_reset: /usr/bin/reset.sh
> power_off: /usr/bin/off.sh
> power_on: /usr/bin/on.sh
> Note: I am not using pduclient; I am using my own scripts for the control
commands.
> However, it is not working when LAVA executes the hard_reset command. See
the log for more details.
>
> 5. My current setup: I am controlling the BBB using a serial-controlled
relay from the VM host machine (Debian).
>
> For that I wrote my own custom Python scripts to switch the relay on/off
and reset it.
> 6. After that I tried to submit the JSON test job below. My job
definition is attached.
> I used the following JSON for reference:
> https://git.linaro.org/lava-team/lava-functional-tests.git/tree/lava-test-shell/single-node/armmp-bbb-daily.json
> 7. Then I submitted the job; see the job log for more details.
> 8. I have no idea what is going on or what went wrong in my setup.
> Please help me boot the BBB using LAVA.
Regards,
Guru
On 27 March 2017 at 14:54, Ковалёв Сергей <SKovalev(a)ptsecurity.com> wrote:
> Thank you Neil for your reply.
Please keep the list in CC:
>
>> Compare with: https://staging.validation.linaro.org/scheduler/job/168802
>
> I have tried https://staging.validation.linaro.org/scheduler/job/168802/definition but iPXE got stuck on it. I have an amd64 machine with UEFI.
"stuck" ? This is a standard amd64 Debian kernel with modules and
initramfs. It is already UEFI-aware. Does the machine run Debian
natively? Is there a Debian kernel you can use in your LAVA
submissions (with modules and ramdisk)?
>> First step is to replace these files with images which work on the x86 DUT on staging.validation.linaro.org
>
> I perform kernel development with my colleagues so I have to load our kernels.
Yes, however, to debug what is going on, you should switch to known
working files so that you have a valid comparison with known working
test jobs. Once debugging has produced some results, then switch back
to the locally built kernels. Change one thing at a time.
>> That just isn't going to work. The initrd needs to come via TFTP but this is an absolute path.
>
> 'initrd' does come via TFTP. In the context section I supply additional kernel boot options.
Your original email quoted:
context:
extra_kernel_args: initrd=/rootfs.cpio.gz root=/dev/ram0
rootfs.cpio.gz does not exist when the machine boots. The initramfs
will have been downloaded by TFTP and loaded directly into memory, it
simply does not exist as a cpio.gz any longer. /dev/ram0 shouldn't be
needed with modern kernels. At best, it would seem that these options
are ignored.
Debian initramfs log:
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
Warning: fsck not present, so skipping unknown file system
mount: can't find /root in /etc/fstab
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... mount: mounting /dev on
/root/dev failed: No such file or directory
mount: mounting /dev on /root/dev failed: No such file or directory
done.
mount: mounting /run on /root/run failed: No such file or directory
run-init: current directory on the same filesystem as the root: error 0
Target filesystem doesn't have requested /sbin/init.
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
run-init: current directory on the same filesystem as the root: error 0
No init found. Try passing init= bootarg.
BusyBox v1.22.1 (Debian 1:1.22.0-19) built-in shell (ash)
Enter 'help' for a list of built-in commands.
Matched prompt #5: \(initramfs\)
> These boot options were worked out before the effort to automate the process with LAVA. Without them we saw a kernel panic; with them we successfully load the kernel and rootfs (from Buildroot). Maybe at Linaro you embed those boot options at compile time?
No, we do not embed anything in V2 (it's one of the key changes from
V1; we don't hide magic like that anymore).
The files were prepared with:
https://git.linaro.org/lava-team/refactoring.git/tree/scripts/x86_64-nfs.sh
You can also see the build log for the original Debian kernel package
if relevant.
https://tracker.debian.org/pkg/linux-signed
https://buildd.debian.org/status/fetch.php?pkg=linux-signed&arch=amd64&ver=…
Running x86_64-nfs.sh in an empty directory will provide access to the
config of the kernel itself as well as the initramfs and supporting
tools.
It's possible these context arguments are hiding some other problem in
the kernel but, as described so far, the options seem to make no
sense.
The command line used in our tests is simply: Command line: ip=dhcp
console=ttyS0,115200n8 lava_mac={LAVA_MAC}
(where LAVA_MAC does not need to be defined for these devices.)
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hello.
I'm trying to start LXC Debian hacking sessions on our V2 LAVA server.
This is the related configuration:
http://pastebin.com/index/DNGpJfc6
And I'm mostly doing what's in here:
https://git.linaro.org/lava-team/hacking-session.git
The problem I'm facing is that inside a script the environment seems to be broken, so there is no way to copy to ~/.ssh.
Regarding the environment I get this output:
$ echo $HOME
$ echo $USER
$ cat /etc/passwd | grep root
root:x:0:0:root:/root:/bin/bash
$ ls -al /root
total 16
drwx------ 2 root root 4096 Dec 16 15:33 .
drwxrwxrwx 19 root root 4096 Dec 23 13:18 ..
-rw-r--r-- 1 root root 570 Jan 31 2010 .bashrc
-rw-r--r-- 1 root root 148 Aug 17 2015 .profile
$ env
TESTRUN_ID=1_hacksession-debian
SHLVL=4
OLDPWD=/
container=lxc
_=defs/hacksession-debian/setup_session
COLUMNS=80
PATH=/lava-248/1/../bin:/usr/local/bin:/usr/local/sbin:/bin:/usr/bin:/usr/sbin:/sbin
LAVA_RESULT_DIR=/lava-248/1/results/1_hacksession-debian-1482499502
LANG=C
LC_ALL=C.UTF-8
PWD=/lava-248/1/tests/1_hacksession-debian
LINES=24
If I mimic the LAVA LXC machine creation commands (lxc-create) and attach to the machine, I get a sane environment.
Is this expected behavior?
BR,
Rafael Gago
Hi,
I've installed LAVA and created 'qemu' device type.
$ sudo lava-server manage add-device-type '*'
$ sudo lava-server manage add-device --device-type qemu qemu01
Then, I downloaded an example of yaml to submit a job for the qemu image.
$ wget
https://staging.validation.linaro.org/static/docs/v2/examples/test-jobs/qem…
./
$ lava-tool submit-job http://<name>@localhost qemu-pipeline-first-job.yaml
The error is found during running 'image.py'.
(http://woogyom.iptime.org/scheduler/job/15)
Traceback (most recent call last):
File "/usr/bin/lava", line 9, in <module>
load_entry_point('lava-tool==0.14', 'console_scripts', 'lava')()
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
153, in run
raise SystemExit(cls().dispatch(args))
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line
143, in dispatch
return command.invoke()
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 216, in invoke
job_runner, job_data = self.parse_job_file(self.args.job_file,
oob_file)
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py",
line 265, in parse_job_file
env_dut=env_dut)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 165, in parse
test_action, counts[name])
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py",
line 66, in parse_action
Deployment.select(device, parameters)(pipeline, parameters)
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/logical.py",
line 203, in select
willing = [c for c in candidates if c.accepts(device, parameters)]
File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/image.py",
line 116, in accepts
if 'image' not in device['actions']['deploy']['methods']:
KeyError: 'actions'
It seems no 'methods' key is found under the actions->deploy block when
parsing the yaml file, but I'm not sure whether this error means my yaml
usage is wrong.
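For context, the failing check reads device['actions']['deploy']['methods'], i.e. it expects the device configuration itself to carry an actions block. A V2 pipeline device for QEMU would contain something of this shape (a sketch only; the exact entries come from the device-type templates, and a device added without a pipeline device dictionary would lack this block entirely, producing exactly this KeyError):

```yaml
# Illustrative fragment of a V2 (pipeline) QEMU device configuration.
actions:
  deploy:
    methods:
      image:     # accepted by the deploy/image strategy class
  boot:
    methods:
      qemu:
```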
Best regards,
Milo
Hi Williams,
I want to get the submitter of a LAVA job via lava-tool.
When I use the command "lava-tool job-details", the submitter info is displayed as "submitter_id". How can I convert the submitter id to the submitter username?
Thanks.
Gitweb (which depends on apache2) and LAVA are installed on the same host,
but port 80 is used by LAVA, so gitweb cannot be visited with a browser.
So I want to change LAVA's port to another one, such as 8088, but after
changing the file
/etc/apache2/sites-enabled/lava-server.conf, LAVA no longer works.
Does anyone know how to make lava-server use another port?
Btw, I cannot find the "DocumentRoot" of lava-server. The config file
defines "DocumentRoot" as "/usr/share/lava-server/static/lava-server/",
but I cannot see a default index.html there. (I only see the template file
at /usr/lib/python2.7/dist-packages/lava_server/templates/index.html.)
Could someone tell me where lava-server's default index page is?
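One way to approach the port change is standard Apache configuration (a sketch, not LAVA-specific documentation): Apache must be told to listen on the new port in ports.conf, and the VirtualHost in lava-server.conf must match it; both files then need an apache2 restart.

```apache
# /etc/apache2/ports.conf — add the new port alongside (or instead of) 80
Listen 8088

# /etc/apache2/sites-enabled/lava-server.conf — change the VirtualHost to match
<VirtualHost *:8088>
    # ... existing lava-server directives unchanged ...
</VirtualHost>
```

followed by `sudo service apache2 restart`. If LAVA stops working after only one of the two files is changed, a mismatch between Listen and the VirtualHost port is the usual cause.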
--
王泽超
TEL: 13718389475
北京威控睿博科技有限公司 <http://www.ucrobotics.com>
Hi Everyone,
I am trying to set up a standalone Lava V2 Server by following the
instructions on the Linaro website and so far things have gone smoothly.
I have Lava installed, a superuser created and I can access the
application through a web browser. But, I have the following issues:
ISSUE #1:
- When I tried to submit a simple Lava V2 test job, I got an error
message stating that the "beaglebone black" device type is not
available.
- I found the directory where the .jinja2 files were stored including
the beaglebone-black.jinja2 file, but regardless of what I tried, I
couldn't get the web application to see the device type definitions.
- It seems like the application isn't pointing to the directory where
those device type files are stored.
- What do I need to do to make the Lava Server "see" those device type
files?
ISSUE #2:
- When I tried to submit a job, I pasted a small .yaml file and the
validation failed because it didn't recognize the data['run'] in the
job. I tried a few others and then I tried a V1 .json file and it
validated just fine.
- What do I have to do to allow Lava to accept V2 .yaml files? Am I
missing something simple?
As always, I greatly appreciate any feedback you may have to help me
out.
Thank you in advance!
--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown(a)codethink.co.uk
Mobile: +1 317-560-0513
Hi Williams,
The submitted time is 8 hours behind my local time. How can I change the job submitted time displayed on the LAVA web page?
I have tried modifying the TIMEZONE setting in "/usr/lib/python2.7/dist-packages/lava_server/settings/common.py" and "/usr/lib/python2.7/dist-packages/django/conf/global_settings", and then restarted the LAVA server, but nothing seemed to change.
Thanks.
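For what it's worth, the LAVA web UI is a Django application and Django renders times according to its TIME_ZONE setting; files under /usr/lib/python2.7/dist-packages are overwritten on upgrade, so the usual place for per-instance overrides is /etc/lava-server/settings.conf. A hedged sketch (whether this particular key is honoured depends on the LAVA version; "Asia/Shanghai" is an example zone):

```json
{
    "TIME_ZONE": "Asia/Shanghai"
}
```

After editing, both apache2 and the lava-server services need to be restarted for the change to take effect.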
Hello.
I have configured a LAVA server and I set up a local Django account to start configuring things:
sudo lava-server manage createsuperuser --username <user> --email=<mail>
Then I want to add LDAP support by adding the relevant fields to /etc/lava-server/settings.conf:
"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636",
"AUTH_LDAP_BIND_DN": "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
"AUTH_LDAP_BIND_PASSWORD": "thepwd",
"AUTH_LDAP_USER_ATTR_MAP": {
"first_name": "givenName",
"email": "mail"
},
"DISABLE_OPENID_AUTH": true
I have restarted both apache2 and lava-server.
I was expecting to get a Sign In page like this one:
https://validation.linaro.org/static/docs/v1/_images/ldap-user-login.png
Unfortunately I'm not familiar with either Django (or web development in general) or LDAP, and I don't know how to debug this. I have tried to grep for ldap|LDAP in /var/log/lava-server but nothing pops up.
Unfortunately I couldn't find a way to browse the mailing list for previous answers; GMANE search doesn't work today.
How should I proceed?
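One common failure mode when editing /etc/lava-server/settings.conf is a JSON syntax error (for example a trailing comma), which can make the settings silently fall back to defaults with nothing in the logs. A quick hedged check, sketched with an inline sample so it is self-contained:

```python
import json

def load_settings(text):
    """Parse LAVA settings.conf content, failing loudly on bad JSON."""
    try:
        return json.loads(text)
    except ValueError as exc:
        raise SystemExit("settings.conf is not valid JSON: %s" % exc)

# Example with a fragment like the one above (values are placeholders):
sample = '{"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636", "DISABLE_OPENID_AUTH": true}'
settings = load_settings(sample)
print(sorted(settings))  # ['AUTH_LDAP_SERVER_URI', 'DISABLE_OPENID_AUTH']
```

To check the real file, run `load_settings(open("/etc/lava-server/settings.conf").read())`; if it parses cleanly, the problem lies elsewhere (LDAP connectivity, bind credentials, apache2 restart).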
I have a multi-node test involving 13 roles that is no longer syncing properly after upgrading to 2016.11 this morning. It seems that 2 or 3 nodes end up waiting for a specific message while the other ones finish the message and move on to the next. Looking at the dispatcher log, I don't see any errors, but it's only logging that it's sending to some of the nodes. For example, I see a message like this for the nodes that work in a run:
2016-11-10 13:10:37,295 Sending wait messageID 'qa-network-info' to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tm
p/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {},
"/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}
2016-11-10 13:10:37,295 Sending wait response to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"message": {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/
device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/l
ib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}, "response": "ack"}
For the nodes that get stuck, there is no message like above.
All of the nodes are qemu type, all on the same host. The nodes that fail are not consistent, but there seems to be always 2 or 3 that fail in every run I tried.
Is there anything I can look at here to figure out what is happening?
--
James Oakley
james.oakley(a)multapplied.net
[Moving to lava-users as suggested by Neil]
On 11/07/2016 03:20 PM, Neil Williams (Code Review) wrote:
> Neil Williams has posted comments on this change.
>
> Change subject: Add support for the depthcharge bootloader
> ......................................................................
>
>
>
> Patch Set 3:
>
> (1 comment)
>
> https://review.linaro.org/#/c/15203/3/lava_dispatcher/pipeline/actions/depl…
>
> File lava_dispatcher/pipeline/actions/deploy/tftp.py:
>
> Line 127: def _ensure_device_dir(self, device_dir):
>> Cannot say that I have fully understood it yet. Would it be correct
>> if the
>
> The Strategy classes must not set or modify anything. The accepts
> method does some very fast checks and returns True or False. Anything
> which the pipeline actions need to know must be specified in the job
> submission or the device configuration. So either this is restricted
> to specific device-types (so a setting goes into the template) or it
> has to be set for every job using this method (for situations where
> the support can be used or not used on the same hardware for
> different jobs).
>
> What is this per-device directory anyway and how is it meant to work
> with tftpd-hpa which does not support configuration modification
> without restarting itself? Jobs cannot require that daemons restart -
> other jobs could easily be using that daemon at the same time.
So each firmware image containing Depthcharge will also contain
hardcoded values for the IP of the TFTP server and for the paths of a
cmdline.txt file and a FIT image. The FIT image contains a kernel and
a DTB file, and optionally a ramdisk.
Because the paths are set when the FW image is flashed, we cannot use
the per-job directory. Thus we add a parameter to the device that is to
be set in the device-specific template of Chrome devices. If that
parameter is present, then a directory in the root of the TFTP files
tree will be created with the value of that parameter.
The TFTP server doesn't need to be restarted because its configuration
is left unchanged, we just create a directory where depthcharge will
look for the files.
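The per-device directory creation described above can be as simple as an idempotent makedirs under the TFTP root; a sketch of the idea (names are illustrative, not the actual patch):

```python
import errno
import os

def ensure_device_dir(tftp_root, device_dir):
    """Create <tftp_root>/<device_dir>, where Depthcharge will look for
    cmdline.txt and the FIT image. A no-op if the directory already
    exists, so tftpd-hpa never needs to be restarted."""
    path = os.path.join(tftp_root, device_dir)
    try:
        os.makedirs(path)
    except OSError as exc:
        # Tolerate concurrent jobs having already created it.
        if exc.errno != errno.EEXIST:
            raise
    return path

# e.g. ensure_device_dir("/srv/tftp", "peach-pi-01")
```

The device_dir value would come from the device-specific template parameter discussed above, so only Chrome-type devices carrying that parameter trigger the directory creation.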
Thanks,
Tomeu
> I think this needs to move from IRC and gerrit to a thread on the
> lava-users mailing list where the principles can be checked through
> more easily.
>
>
Hi everyone,
As I have probably mentioned in previous emails, I'm using the Yocto
project to generate some Linux images that I want to test using LAVA as
part of continuous integration.
So far so good: I can submit the job description to LAVA using lava-tool
and it will start the tests. I'm happy so far with all the results.
Now my question is what the correct way to do this would be. Do you
think it is reasonable to have a lava-tool submit-job followed by a
waiting step using lava-tool job-status to report the final build
result? Or is there a nicer way to do this?
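A sketch of the polling side of such a script, assuming `lava-tool job-status` output containing a line of the form `Job Status: Complete` (the exact output format should be checked against your lava-tool version):

```python
import re

# States after which polling can stop (assumed set of final LAVA states).
FINAL_STATES = {"Complete", "Incomplete", "Canceled"}

def parse_status(output):
    """Extract the job state from `lava-tool job-status` output (format assumed)."""
    match = re.search(r"^Job Status: (\w+)", output, re.MULTILINE)
    return match.group(1) if match else None

def is_finished(output):
    return parse_status(output) in FINAL_STATES

# In a CI script this would wrap repeated calls to:
#   lava-tool job-status http://user@server JOB_ID
# sleeping between polls until is_finished() returns True.
print(parse_status("Job ID: 1234\nJob Status: Complete"))  # Complete
```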
Thanks a lot for your help in advance :)
Best,
Alfonso
By default, a uboot header is automatically added to the ramdisk image.
For bootloaders without INITRD_ATAG support, the ramdisk needs to be
passed on the command line and cannot have the uboot header added.
To enable this feature, add a "ramdisk_raw" option that device files can
set so that a uboot header is not added.
Signed-off-by: Kevin Hilman <khilman(a)baylibre.com>
---
Patch applies on 2016.9
lava_dispatcher/config.py | 1 +
lava_dispatcher/device/bootloader.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 66a9e70021fa..c91c5634280d 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -312,6 +312,7 @@ class DeviceSchema(schema.Schema):
uimage_xip = schema.BoolOption(default=False)
append_dtb = schema.BoolOption(default=False)
prepend_blob = schema.StringOption(default=None)
+ ramdisk_raw = schema.BoolOption(default=False)
# for dynamic_vm devices
dynamic_vm_backend_device_type = schema.StringOption(default='kvm')
diff --git a/lava_dispatcher/device/bootloader.py b/lava_dispatcher/device/bootloader.py
index 634d22ef3311..c88fba8937e6 100644
--- a/lava_dispatcher/device/bootloader.py
+++ b/lava_dispatcher/device/bootloader.py
@@ -208,7 +208,7 @@ class BootloaderTarget(MasterImageTarget):
decompress=False)
extract_overlay(overlay, ramdisk_dir)
ramdisk = create_ramdisk(ramdisk_dir, self._tmpdir)
- if self._is_uboot():
+ if self._is_uboot() and not self.config.ramdisk_raw:
# Ensure ramdisk has u-boot header
if not self._is_uboot_ramdisk(ramdisk):
ramdisk_uboot = ramdisk + ".uboot"
--
2.5.0
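With the patch applied, a device configuration file would opt in via the new option; a sketch of the INI-style V1 device config (the device name is hypothetical):

```
# /etc/lava-dispatcher/devices/my-board-01.conf
device_type = my-board
ramdisk_raw = True
```

With ramdisk_raw unset (the default, False), behaviour is unchanged and the u-boot header is still added.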
Hello everyone,
Can you help me on below two questions?
1. I did email notification settings for sending emails after job complete or incomplete.
How can I get whole logs (where are logs?) about email sending process? I need to debug email sending.
2. I want to use script to control device state periodically.
How can I set device to maintenance state using command, like lava-tool command?
Thanks in advance.
Hello everyone,
just a simple question. I might be wrong, but I understand that
submitting a job through lava-dispatch and lava-tool should lead to the
same process. Now, with the dispatcher you can already specify a
specific target device encoded in YAML format. Does lava-tool at some
point reach a similar target configuration? Does it generate it, or is
it stored somewhere? In the latter case, where is it stored?
thanks in advance :)
Best,
Alfonso
Hello everyone,
just a quick question.
I am trying to run a job to test a QEMU image which is stored remotely.
The image is packed in a tar.xz file with both the kernel and the file
system.
Is there a way to specify in the job description JSON file that, before
the deploy action, it must unpack this tar.xz file and then use the
kernel and filesystem?
Thanks a lot :)
Best,
Alfonso
Hello guys,
I am currently trying to install the lava-server/dispatcher on my local
PC with Ubuntu 16.04. Unfortunately, I had little success installing the
source projects from github.com/linaro. I just wanted to ask if you
could recommend the best approach for me.
Thanks for your help.
Best regards,
Alfonso
Hi,
I'm trying to work out the proper relationship between requested tests
and results in LAVA v2. Here is an example job:
https://validation.linaro.org/scheduler/job/1109234
and results for this job:
https://validation.linaro.org/results/1109234
How can I tell:
- which result matches which test?
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
In LAVA v1 the matching was a very arduous process. One had to
download the job definition, look for lava-test-shell actions, pull
the test definition yaml sources and match the yaml ID to the ID found
in the result bundle. How does this work in v2?
milosz
Hello,
Google has released the latest version of Tradefed with the Android N
release.
https://source.android.com/devices/tech/test_infra/tradefed/index.html
Lots of dispatcher/slave features which LAVA already supports.
Given this update, is LAVA exploring adopting the new mechanism, or will it
continue developing its own architecture?
Thanks
Sandeep
Hi,
Chase did an excellent job and put together a piece of code that
allows local execution of lava-test-shell. This means we can use
our 'lava' test definitions on boards that are not deployed in any
lab. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
The major part of this code is taken directly from lava-dispatcher. There
are slight modifications, but I would like to keep them to a minimum or
remove them altogether (if possible). So the question follows: is there
a way to achieve the same goal with only LAVA code? One of the biggest
problems was the ACKing that lava-test-shell requires.
This makes the tests 'interactive', which isn't best for semi-automated
execution. This bit was removed and now we're able to use the test shell
locally in non-interactive mode. The README file describes the use
cases we're covering. Any comments are welcome. Code can be found
here:
https://git.linaro.org/qa/lava-local-test.git
milosz
Hi Mahesh,
On Tuesday 19 July 2016 05:58 PM, Umamaheswara Rao Lankoti wrote:
> I am Umamaheswara Rao, working for Innominds, and I am trying to evaluate
> the LAVA framework as part of a Continuous Integration job.
Nice to know.
> I am looking at automating the smoke tests with LAVA, downloading a
> newly generated build from Jenkins, flash it on android phone, boot into
> home screen and run a minimal set of usecases, report success/failure
> for the test cases, etc..
There is no direct integration, such as plugins, which does this in LAVA.
But you can submit jobs to LAVA via scripts once the builds are ready
in Jenkins. This is already done as part of many Jenkins-based CI loops
used within Linaro and elsewhere.
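A minimal sketch of such a post-build submission script, using LAVA's XML-RPC API (the scheduler.submit_job method; server name and token below are placeholders):

```python
try:
    import xmlrpclib  # Python 2, matching LAVA deployments of this era
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3 equivalent

def server_uri(user, token, host):
    """Build the token-authenticated XML-RPC endpoint URI."""
    return "http://%s:%s@%s/RPC2" % (user, token, host)

def submit(job_file, user, token, host):
    """Submit a job definition file and return the new job id."""
    server = xmlrpclib.ServerProxy(server_uri(user, token, host))
    with open(job_file) as f:
        return server.scheduler.submit_job(f.read())

# e.g. from a Jenkins post-build step:
# job_id = submit("job.json", "jenkins", "<api-token>", "lava.example.com")
print(server_uri("jenkins", "<api-token>", "lava.example.com"))
```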
> Looking at the documentation, I came to know that Ubuntu support is
> stopped. Would Debian Jessie be supported in future?
Yes, you are right, Ubuntu is deprecated. Debian will be supported in
the future. We support Debian Jessie and Testing as of today.
PS: I've added the lava-users mailing list, so that you get more
input on this topic, in case I've missed anything.
Thank You.
--
Senthil Kumaran S
http://www.stylesen.org/
http://www.sasenthilkumaran.com/
Hi,
I've just made a fresh Debian (Jessie) installation.
Then, I've added jessie-backports and installed lava-server from there.
Once the installation completed, I've rebooted and the GUI desktop
environment doesn't come up.
This happened twice already, so it's definitely the Lava installation
that's breaking something there. Is this a known issue? Any suggestions?
Regards,
matallui
Hi,
I had a boot issue because SPACE was being used to stop the autoboot. A quick fix for this was to add a \" to a variable declaration, such as "setenv dummy variable\"". Do you know of any other fix for this? It seems LAVA's default is "Hit any key to stop autoboot", to which it just sends a "b". In my case, I had to change the device configuration file to look for "Press SPACE to abort autoboot in 2 seconds" and then send a " ". The device configuration looks as follows: http://pastebin.ubuntu.com/18790420/.
Now I have another issue where the job keeps going even though the image booted successfully. I want it to stop once it sees "am335x-evm login:". Is there a way to do this? I changed master_str to "am335x-evm login:".
This is my complete log http://pastebin.ubuntu.com/18790288/.
Thanks,
Josue
Dear Lava team,
We're deploying LAVA V2. So far we've been working on old servers to prototype our installation. We're now almost ready to order our definitive PCs.
We have assessed some key features for our LAVA server, but we're still not 100% sure how powerful it should be.
Could you share the main characteristics of your current LAVA servers (number of cores, RAM, disk size)? That would be helpful.
Thanks and best regards,
Denis Humeau
Hi,
I'm currently trying to connect a switched rack PDU to the LAVA instance.
How do I know whether the driver for my APC AP7901 PDU is supported by the framework?
I have it connected both via serial and telnet. My /etc/lavapdu/lavapdu.conf file is attached.
Also, when I restart lavapdu-runner it fails with the error that is attached.
As for the beaglebone black support, I have a device dictionary, but where should I save it so LAVA can access it? I've seen that the PDU hostname is always something like pdu05. How do I know my PDU's hostname?
I appreciate the help, thanks.
Regards,
Josue Albarran
Hello,
I have a lava setup and right now running two single instance servers.
These are pretty high end servers are handling 4-5 Android devices over USB
I need to expand that to multi node setup with maximum DUTs connected to
one dispatcher.
The target is only to have android DUTs so that implies a very stable USB
hub and stack on the dispatcher machines.
I used a couple of off the shelf desktops in the past and the USB stack
could not handle more than 1 device at a time.
Any suggestions for hardware that is proven to be solid for dispatchers.
Thanks
Sandeep
Hello,
For the 2016.4 release I had created a custom LAVA extension to add 2
commands and 2 xmlrpc methods. The extension mechanism was very
convenient. All I had to do was install an egg on the system and restart
the lava service. When I upgraded to 2016.6 the extension did not work
due to commit b6fd045cc2b320ed34a6fefd713cd0574ed7b376 "Remove the need
for extensions".
I was not able to find a way to add my extension to INSTALLED_APPS and
register the xmlrpc methods without modifying settings/common.py and
urls.py. I looked through common.py and distro.py and could not find
support in settings.conf for extensions. I also looked for a
local_settings import which is referenced on the Internet as a common
way to extend django, but did not find it. If there is a way to extend
LAVA without modifying the LAVA python code, please let me know and I
will be happy to send in a documentation patch.
It would have been nice if functionality such as the extension
mechanism, which is part of the external interface of LAVA, had gone
through a deprecation cycle. A reworked demo app showing how the new
extension mechanism works would have also been helpful.
Thank you for your time.
--
Konrad Scherer, MTS, Linux Products Group, Wind River
---------- Forwarded message ----------
From: Steve McIntyre <steve.mcintyre(a)linaro.org>
Date: 31 May 2016 at 16:46
Subject: Re: pipeline vland help
To: Christian Ziethén <christian.ziethen(a)linaro.org>
Cc: neil.williams(a)linaro.org
On Tue, May 31, 2016 at 03:29:45PM +0200, Christian Ziethén wrote:
>Hi,
Hi Christian,
You'd be much better off asking on the lava-users mailing list rather than
just directly to me, I'll be honest.
>Been struggling with setting up a vland between two arndale targets. I have
>managed to create a multinode job that uses the pipeline model:
>https://lng.validation.linaro.org/scheduler/job/9702/multinode_definition
>Have also managed to create valid yaml that seems to conform to the code in
>lava-server/lava_scheduler_app/schema.py
>https://lng.validation.linaro.org/scheduler/job/9743/multinode_definition
>This one does however not do anything after being submitted, I tried
>putting 100M in the tags for the vland, I also tried requiring that the
>arndale targets in the multinode protocol had the 100M tag, but that
>didn't work. According to the device dictionary for lng-arndale-01, it
>should have a 100M tag on iface1.
Hmmm, OK. I can see that it's sat waiting for devices to be
assigned. Looking at the definition, you've got one group (role)
defined with 2 entries. I believe that for VLANd jobs you need to have
individual roles for each machine. Neil can confirm.
>Also have this job (v1):
>https://lng.validation.linaro.org/scheduler/job/9118/multinode_definition
>Which runs my test using iface1 (I think) but it doesn't use vland.
Right, v1 doesn't do vland.
>I am unsure how to debug this.
>
>It was my assumption that I could create a vlan with the vland protocol and
>then query which interface is on that vlan in my test-definition. That
would be
>my end goal for this exercise.
Sure, that's what we expect to have working for you.
Cheers,
--
Steve McIntyre steve.mcintyre(a)linaro.org
<http://www.linaro.org/> Linaro.org | Open source software for ARM SoCs
Hi,
For the last couple of days I’ve been trying to set up a device in our lab for deployment and installation of a Linux OS.
However, I’ve come across a few issues and lots of questions, and hopefully I can get some answers here.
Considerations:
1. I’ve decided to give it a go with LAVA v2 and try using the pipeline model
2. I’m not testing any “boards”, so it’s hard to find a similar example
3. I am not deploying any Linaro images (not even debian based)
4. My team develops and supports a reference Linux OS, based on RHEL, so that means we have total control of our images, kickstarter scripts, etc.
5. We already have a PXE server in our network, which is where our servers (the targets in this context) get booted from by default
6. Once booted from PXE, we get a PXE menu, where we can either select an option, or press ESC and add a custom command (at this syslinux is running)
7. We have access to the serial console of every device via serial port server (telnet <ipaddr> <port>)
8. We have power control over every device via IPMI
Issues:
1. I couldn’t find the documentation for how to add devices and device types to the server (which location to add the file, which format - yaml/conf)
2. In the above described environment, I suppose we would skip the deployment phase, since the devices already boot into syslinux from PXE (is this correct?). Either way, it would be nice to be able to run ‘ipmitool chassis bootdev pxe’ before rebooting the system.
3. Either way (via boot or deploy), how can I make sure to detect the PXE (syslinux) menu, send the ESC key, and send the command I need to trigger the kickstart installation?
To sum-up, the Workflow I’m trying to achieve after having completed the whole setup sort of goes like this:
1. Reboot target device into PXE (the PXE itself will download and start syslinux here)
2. Wait for PXE menu (expect some sort of string)
3. Send ESC key (to get into the boot shell)
4. Send command (this will trigger our kickstarter script and the installation will take around 5 minutes). The images that are needed are fetched automatically from our sftp server.
5. Wait for boot prompt after system installation succeeds
6. Login using credentials (username, password)
7. Run whatever tests we need from here
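The steps above can be sketched as a pexpect-style console interaction. This is only a sketch of the desired behaviour, not LAVA's own mechanism: the telnet address, menu banner, prompts and kickstart command line are all assumed placeholders to be replaced with your lab's values.

```python
def run_install(child, password="secret"):
    """Drive the PXE/syslinux install over a serial console session."""
    # 2. Wait for the PXE (syslinux) menu banner -- assumed string
    child.expect("Press \\[Esc\\] for boot prompt", timeout=120)
    # 3. Send ESC to drop into the syslinux boot: shell
    child.send("\x1b")
    child.expect("boot:")
    # 4. Trigger the kickstart installation (hypothetical command line;
    #    the images are fetched automatically from the sftp server)
    child.sendline("linux ks=http://pxe.example.com/ks.cfg")
    # 5. Wait for the login prompt after installation (~5 minutes)
    child.expect("login:", timeout=600)
    # 6. Log in with credentials
    child.sendline("root")
    child.expect("Password:")
    child.sendline(password)

# On the dispatcher host this would be driven by something like:
#   import pexpect
#   run_install(pexpect.spawn("telnet 192.168.0.10 7001"))
```

The serial port server address (192.168.0.10 7001) and menu strings are made up; the point is the expect/send sequence that mirrors the workflow.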
Any help here would be much appreciated.
Thanks in advance!
--
matallui
Hello,
I just started to set up a lava server to test custom devices and I've some questions about testing custom or third party boards.
Does someone know a good documentation or tutorial for the integration of custom boards? It seems that there isn't much documented for this topic.
Is there any documentation for setting up a custom device configuration file and the commands that can be used, like client_type?
Thank you and best regards
Stefan
Hi there,
When I run a LAVA hacking session on Juno, I found that sometimes Juno
cannot be allocated an IP properly:
- I created multinode definition for Juno:
https://validation.linaro.org/scheduler/job/845471.0: this
definition is to launch kvm so I can run testing suits on it;
https://validation.linaro.org/scheduler/job/845471.1: this
definition is to launch "deploy_linaro_image" on Juno board;
- After launching these two, the kvm usually works well and I can
smoothly log in to it with ssh;
- But for juno board, it will have below log for ssh:
395.1 ./setup_session_oe: line 38: /etc/init.d/ssh: No such file or directory
395.2 <LAVA_SIGNAL_TESTCASE TEST_CASE_ID=sshd-restart RESULT=fail>
395.3 sshd re-start failed
395.4 Target's Gateway: 10.0.0.1
395.5 ip: RTNETLINK answers: Network is unreachable
395.6
395.7
395.8 *********************************************************************************************
395.9 Please connect to: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@ (juno-07)
395.10 *********************************************************************************************
395.11
So finally I cannot get Juno's IP and cannot log in to it with ssh. I
cannot reproduce this failure every time; sometimes I am lucky and get
a correct IP.
- As a workaround, I found that if I create two separate definitions for
Juno and kvm independently, then Juno's IP issue is resolved:
https://validation.linaro.org/scheduler/job/845552
https://validation.linaro.org/scheduler/job/845561
So could you help give suggestions for this?
Thanks,
Leo Yan
Hi,
I have made a test setup with LAVA to run some test cases on my AM335X
based platform. I want to execute some simple test cases like testing
the status of the "ifconfig eth0" command. I don't want to deploy any
image; my goal is to execute only this test directly (the target is
booted with the root file system mounted). I have submitted the below
job to the lava worker. ethernet.yaml contains the shell script which
actually performs the ifconfig test.
=============================================================================================
{
"actions": [
{
"command": "dummy_deploy",
"parameters": {
"target_type": "oe"
}
},
{
"command": "lava_test_shell",
"parameters": {
"testdef_repos": [
{
"testdef": "/root/lava_tests/yaml/ethernet.yaml"
}
],
"timeout": 60
}
},
{
"command": "submit_results_on_host",
"parameters": {
"server": "http://10.24.252.13:81/RPC2/",
"stream": "/anonymous/lavaserver/"
}
}
],
"device_type": "ELIN",
"health_check": false,
"job_name": "elin-ethernet-test",
"logging_level": "DEBUG",
"timeout": 60
}
=======================================================================================================
Lava dispatcher is able to connect to the target and start the job. But
finally I am getting a timeout. Please find the attached log for
details. I can see that after connecting with the target the dispatcher
is checking/finding some information related to the file system, and
looks like it is getting timed out there. How can I avoid this and make
the dispatcher directly perform the test on the target platform?
Any help would be appreciated.
Best Regards,
Pradeepkumar Soman
Hi all,
We are using LAVA to run jobs from KernelCI which is a test automation
system used to test latest versions of the kernel (upstream, next,
stable, ...). Our main goal is to put in our lab a maximum of boards
which are not yet in KernelCI to provide boot reports for these boards.
However, for some of these, we own a single copy while we definitely
need to work on them to add new features or improve them.
What we could do (and what we are currently doing), is to manually put a
board in maintenance mode, take it physically outside of the lab, work
on it and put it back in the lab when we have finished working on it.
This is not really efficient and during the time the board is physically
outside of the lab, not a single job (from KernelCI or ours) can run.
We would like to create a tool to be able to remotely control our boards
while they still are physically in the lab. We need to be able to do
everything we could do if the board were on our desk. This means
getting the serial connection, powering the board on or off and sending
files (kernel, DTB, initramfs, ...) to it.
For the latter, we just have to copy the files in the directory used by
the TFTP server of LAVA master node.
I would like to know if it is possible to add an endpoint in the API to
power on or off a given board? Is it possible to get the serial
connection over the API?
To put a board virtually outside of the lab, we need to put it into
maintenance mode in LAVA. As of yet, this is only possible from the
browser, right? It would be great if we could add two endpoints to the
API: one for putting a board offline and one for putting a board online,
so we can remotely manage from our program whether a board is in the lab.
We may have a few people working on the same board. Therefore, we need a
way to ensure only one person is able to use this board at a given
time. I've seen the "created_by" attribute in the
DeviceStateTransition[1] which could help us find who last put the
board virtually outside of the lab and thus deny access to other
users. However, we do not have a way to get this information today via
the API. Is it possible to add an endpoint to get the status of a given
device (like in 'all_devices'[2] but for one device) and the associated
user responsible for the last device state transition?
I can help with patches if you agree to add these endpoints to the API.
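To make the proposal concrete, here is a sketch of how such endpoints could be called over XML-RPC once they exist. The method names (put_device_offline, put_device_online, get_device_status) are hypothetical — they are the proposal, not the current API — and the host and token are placeholders.

```python
import xmlrpc.client  # xmlrpclib on python2


def rpc_url(user, token, host="lava.example.com"):
    # LAVA XML-RPC endpoints authenticate with user:token in the URL
    return "https://%s:%s@%s/RPC2" % (user, token, host)


if __name__ == "__main__":
    server = xmlrpc.client.ServerProxy(rpc_url("quentin", "EXAMPLE-TOKEN"))
    # Proposed calls -- these methods do not exist in the current API:
    # server.scheduler.put_device_offline("armada-388-clearfog-01")
    # status = server.scheduler.get_device_status("armada-388-clearfog-01")
    # server.scheduler.put_device_online("armada-388-clearfog-01")
```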
Thanks,
Quentin
[1]
https://github.com/Linaro/lava-server/blob/release/lava_scheduler_app/model…
[2]
https://github.com/Linaro/lava-server/blob/release/lava_scheduler_app/api.p…
The Cambridge LAVA Lab has a transparent squid proxy, which saves having
to configure each dispatcher and device to use it. Outgoing HTTP
traffic from the lab has no choice, as it is intercepted at the internet
gateway.
We did this because even after configuring the dispatcher and devices,
it's almost impossible to make all test shell tasks use the proxy. LAVA
sets a shell environment inside a job, but many of the clients in the
various different types of job simply ignore it.
Chasing every test writer was not feasible as the lab usage is so large,
but might be OK in a smaller lab with tighter control of the jobs.
We don't proxy HTTPS requests because that becomes very complicated,
with faking certificates etc.
>>Marc Titinger <mtitinger at baylibre.com> writes:
>>
>> I had to make this change to get squid3 going with our LAVA 1.0 machine.
>> I thought this could be useful. I did not test extensively though.
>
>FWIW, I had problems getting lava-dispatcher to use my local squid proxy
>also. Seems setting LAVA_PROXY in lava-dispatcher.conf was working for
>the devices (lava set the environment variable after booting the
>target), but lava-dispatcher itself was not using the proxy.
>
>I'll give this a try as well.
>
>Kevin
From: Marc Titinger <mtitinger(a)baylibre.com>
search_substr_from_array would return a false positive
in lava_dispatcher/downloader.py when trying to match
'no_proxy' exclusion list with the download object url.
This now uses a (too?) simple substring matching rather
than a greedy regex.
Signed-off-by: Marc Titinger <mtitinger(a)baylibre.com>
---
lava_dispatcher/utils.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lava_dispatcher/utils.py b/lava_dispatcher/utils.py
index f2fd79a..037f6b5 100644
--- a/lava_dispatcher/utils.py
+++ b/lava_dispatcher/utils.py
@@ -640,7 +640,7 @@ def search_substr_from_array(string, array, sep=','):
Strings in array are separated by default with ','
"""
for obj in array.split(sep):
- if re.search('.*' + str(obj) + '.*', str(string)):
+ if str(obj) in str(string):
return True
return False
--
2.5.0
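The false positive described in the commit message can be reproduced with a small sketch. The URL and no_proxy entry below are made-up values; the point is that '.' in a no_proxy entry acts as a regex wildcard in the old pattern, so an unrelated host matches.

```python
import re

url = "http://10x0y0z1.example.com/image.img"  # hypothetical download URL
no_proxy_entry = "10.0.0.1"                    # dots are regex wildcards

# Old behaviour: the greedy regex treats each '.' in the entry as
# "any character", so the unrelated host above matches (false positive).
old_match = bool(re.search('.*' + no_proxy_entry + '.*', url))  # True

# Plain substring matching does not match the unrelated host.
new_match = no_proxy_entry in url  # False
```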
hi, all,
I am using the new uefi, it use the cursor movement and the Enter to
enter the entry.
So I use the config file to write it, but I found it can not work well.
the config is below.
boot_cmds_ramdisk_sata = expect "Move Highlight",
sendline "\033[B",
sendline "\033[B",
sendcontrol "M",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendcontrol "M",
expect "D02",
sendline "provision {SERVER_IP} -u htsat -p
hello8943217 -f Image_D02 -a 10000",
expect "succ" ,
sendline "norwfmem 100000 100000 1f00000",
expect "OK",
sendline "provision {SERVER_IP} -u htsat -p
hello8943217 -f hip05-d02.dtb -a 100000",
expect "succ",
sendline "spiwfmem 100000 300000 100000",
expect "OK",
expect "OK",
sendline "exit",
expect "Select Language",
sendline "\033[B",
sendline "\033[B",
sendcontrol "M",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendline "\033[B",
sendcontrol "M"
After reading the source code of LAVA, I found it uses pexpect.spawn
to interact with the terminal. So I wrote a python file that uses spawn
directly to interact with the terminal, and it works well.
The source code is below.
import sys
from pexpect import spawn
child = spawn("board_connect 2") # this is similar with minicom
logfile = "/tmp/test_for_terminal.log"
fd = open(logfile, 'a+')
child.logfile = fd
child.expect("to stop automatical booting", timeout=120)
child.sendline("b")
child.expect("Move Highlight")
## boot from sata
child.sendline("\033[B")
child.sendline("\033[B")
child.sendcontrol("M")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendcontrol("M")
child.expect("D02")
child.sendline("provision 192.168.6.100 -u htsat -p hello8943217 -f
Image_D02 -a 10000")
child.expect("succ")
child.sendline("norwfmem 100000 100000 1f00000")
child.expect("OK")
child.sendline("provision 192.168.6.100 -u htsat -p hello8943217 -f
hip05-d02.dtb -a 100000")
child.expect("succ")
child.sendline("spiwfmem 100000 300000 100000")
child.expect("OK")
child.expect("OK")
#child.interact()
child.sendline("exit")
child.expect("Select Language")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendcontrol("M")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendline("\033[B")
child.sendcontrol("M")
child.interact()
So I don't know why it cannot work in LAVA. I have also implemented a
function named sendcursor, similar to sendcontrol, in the
lava_dispatcher package (in
/usr/lib/python2.7/dist-packages/lava_dispatcher/utils.py), and it calls
pexpect.spawn.sendline to send contents to the child process. But it
still shows the same behaviour as using sendline (namely, it won't
select the entry in the uefi menu). Could this problem be caused by the
dashboard display?
The attached files are the log files when I use the python file to
execute the interaction and the one when LAVA execute it. Thanks for
your help.
Elaine
Hisilicon turing software and architecture
Hi,
We are building a LAVA lab which will be soon integrated to KernelCI. We
want to add most of our boards to LAVA that's why we created
configuration files for the old dispatcher (the one used by KernelCI at
this moment) for all device types we own.
I would like to know if you were interested in getting those
configuration files to put them in your git repository (we are using
some from:
https://github.com/Linaro/lava-dispatcher/tree/release/lava_dispatcher/defa…
If yes, what is your naming convention? I've seen sama53d for the
SAMA5D3 Xplained for example.
This is the list of boards currently added to our lab:
Board name                     Current configuration file's name
NextThingCo. CHIP              sun5i-r8-chip
Boundary Devices Nitrogen6x    imx6q-nitrogen6x
ATMEL at91rm9200ek             at91rm9200ek
ATMEL at91sam9261ek            at91sam9261ek
ATMEL at91sam9m10g45ek         at91sam9m10g45ek
ATMEL at91sam9x25ek            at91sam9x25ek
ATMEL at91sam9x35ek            at91sam9x35ek
ATMEL sama5d31ek               sama5d31ek
ATMEL sama5d36ek               sama5d36ek
ATMEL sama5d44ek               sama5d4ek
Marvell Armada 385 AP          armada-385-db-ap
Solidrun Clearfog              armada-388-clearfog
Plathome OpenBlocks AX3        armada-xp-openblocks-ax3-4
Regards,
Quentin
hi,
I have two questions to ask:
1.
I am using lava to boot the D02 board. It could execute successfully
before, but it failed after I updated the uefi and rewrote the config
file. However, when I execute the config file manually, it succeeds in
booting the board, and I can't figure out what is wrong. The version of
LAVA I use is 2015.9.post1, and I use LAVA v1.
From the figure, we can see it is stuck when booting the kernel.
The log file of the lava job, the log of the manual boot and the config
file are attached.
2.
The uefi in use needs to be controlled by cursor movement, so I need to
send KEY-DOWN in the d02.conf file. But when I use 'sendline "\033[A"'
or ‘sendline "\16[A"', I found it does not work in LAVA. So I want to
know if there is any method to achieve this?
Thank you very much.
--
Best Wishes
wuyanjun (Elaine)
Hisilicon Turing Software and Architecture
hi, all,
The uefi on the board has been upgraded, so its existing configuration
can no longer be used in the current environment, because the uefi now
uses a dialog to show the menu.
As far as I know, the expect and sendline in the lava XXX.conf cannot
work well in this situation. Do you know how to configure the XXX.conf
to support this kind of uefi?
The menu of the uefi is shown below.
Thank you very much.
Elaine (wuyanjun)
Hisilicon, Turing Architecture and Software
Hi Neil and all,
how do I add per-action comments or metadata so that I can parse them
from the result bundle:
the following for instance will pass the schema validation, but the
result bundle will degenerate into keeping only the last "comment"
metadata entry, while I'd like a per-action metadata entry.
{
"command": "lava_command_run",
"metadata":
{
"comment": "comment1"
},
"parameters":
{
"commands": [
"uname -a"
]
}
},
{
"command": "lava_command_run",
"metadata":
{
"comment": "comment2"
},
"parameters":
{
"commands": [
"ls -al"
]
}
},
The result bundle will contain the same metadata for each lava-command:
"attributes": {
"comment": "comment2", <=== same value
"image.type": "kernel-ci",
.... snip ...
"logging_level": "DEBUG",
"device.tree": "omap4-panda-es.dtb"
},
"test_id": "lava-command"
IOW, how do I assign a brief description to an action ?
Thanks and regards,
Marc.
Hi Neil,
But the problem is that I want to define a job and use a python script to communicate with the Android DUT; for example, I want to use script.py on the dispatcher to control the DUT. And "lava_test_shell" will copy all the commands in steps to run.sh.
# test_script.yaml
run:
steps:
- echo "hello world"
- ./android/script/script.py
# run.sh in DUT
###default parameters from yaml###
######
###test parameters from json###
######
###other parameters###
LAVA_SERVER_IP='172.16.117.50'
TARGET_TYPE='android'
######
set -e
export TESTRUN_ID=python
cd /data/local/tmp/lava-mstar01/tests/0_c7a6f9e0-ae70-4d8b-8c06-f6da04b7a26c
UUID=`cat uuid`
echo "<LAVA_SIGNAL_STARTRUN $TESTRUN_ID $UUID>"
#wait for an ack from the dispatcher
read
echo "hello world"
./android/script/script.py
echo "<LAVA_SIGNAL_ENDRUN $TESTRUN_ID $UUID>"
#wait for an ack from the dispatcher
read
------------------ Original Message ------------------
From: "Neil Williams" <neil.williams(a)linaro.org>
Sent: Saturday, 30 January 2016, 11:29 PM
To: "wangbo" <wangbo.hfut(a)qq.com>
Cc: "Lava Users Mailman list" <lava-users(a)lists.linaro.org>
Subject: Re: Forward of moderated message
On 29 January 2016 at 04:08, wangbo <wangbo.hfut(a)qq.com> wrote:
> Hi Neil,
>
> Thanks very much to your quickly and warmly answer.
>
> One more question:
>
> LAVA copy all the test scripts defined in the YAML to the DUT at the
> specified directory, then the job will be executed on the DUT.
The LAVA test shell scripts are POSIX shell. If a shell is not
available, the test can be rewritten to use ADB commands, as the CTS
tests do.
> And if I want
> to test one android device, but almost all the android images don't support
> python interpreter,
LAVA does not run any python code on the DUT unless the test writer
puts python commands into their test shell definition YAML. The python
code only runs on the dispatcher and handles communications over the
serial port (or using ADB).
> so it's impossible to write case with python, in most
> cases we should use shell to run the test case. But sometimes we want to use
> Appium/Robotium to test the android image, and use LAVA to deploy the image,
> analyze and record the test results, so LAVA is unavailable in such
> conditions. Do you have any advice to integrate these tools in LAVA or will
> LAVA support such conditions in the future.
You'd need to use multinode (as with CTS) to install the test support
binaries into a VM which then communicates with the device. That VM
then reports results that are retrieved with LAVA.
>
> Best Regards.
>
> ------------------ Original Message ------------------
> From: "Neil Williams" <neil.williams(a)linaro.org>
> Sent: Friday, 29 January 2016, 3:27 AM
> To: "wangbo" <wangbo.hfut(a)qq.com>
> Cc: "Lava Users Mailman list" <lava-users(a)lists.linaro.org>
> Subject: Re: Forward of moderated message
>
>> From: wangbo <wangbo.hfut(a)qq.com>
>> Date: Thu, 28 Jan 2016 20:39:25 +0800
>> Subject: LAVA uninstall and upgrade, build method from the source code
>> Hi all,
>>
>> I installed LAVA on Ubuntu 14.04 in our lab, but later we plan to upgrade
>> the system to Ubuntu 15.10, so what should I do now?
>
> I'm afraid the only long term option is to migrate to Debian Jessie,
> possibly using a VM, and then using jessie-backports. It is uncertain
> whether it will be possible to support LAVA on the upcoming Ubuntu
> Xenial 16.04LTS and support for Trusty has already been frozen. There
> have been problems upgrading from 14.04 - these issues have already
> been discussed on the lists.
>
> https://lists.linaro.org/pipermail/lava-announce/2015-November/000003.html
>
> The next upload of LAVA should be able to migrate into Ubuntu ahead of
> the cut off for the Xenial release but that is outside our control. It
> is also unknown whether it will be possible to maintain a system based
> on Xenial - the LAVA software team will be unable to maintain such
> support and users would need to be proactive in fixing problems that
> may arise.
>
>> Because I have already installed LAVA, so should I uninstall it firstly,
>> and then use apt-get install to reinstall LAVA? But I can't find more
>> information about this in the website, can you give me some advice, thanks
>> very much.
>
> https://staging.validation.linaro.org/static/docs/installing_on_debian.html…
>
> The main problems will be with the database, so building in any way
> will have the same difficulties. Trusty is, sadly, a dead end at the
> moment.
>
>> And, how to install LAVA from the source code?
>
> Not supported - installation is only supported as Debian packages.
> LAVA is much more than a pip install / virtualenv can ever manage.
>
>> LAVA includes different parts, like lava-server, lava-dispatcher,
>> lava-tool, etc. So which one should be installed firstly? By the way in the
>> Git, for example "https://git.linaro.org/lava/lava-server.git/tree", there
>> is no document about the install method. Now, I will modify some code to
>> meet our demands in our lab, and then replace the corresponding file. We
>> plan to manage the code with Git too. So if we know the install method from
>> the source code, it will be better to build the code.
>
> There is no documentation for installing from git because the
> installation method has moved to using packages. We abandoned the old
> install from git methods a long time ago due to interminable bugs and
> insolvable dilemmas.
>
> During the migration to the new dispatcher, things are too complex to
> support more than Debian Jessie (and the next Debian release,
> Stretch). Once the migration is complete (in 2017), the advantages of
> the new design should also make it easier to support other operating
> systems. We are a long way from that at the moment.
>
> --
>
> Neil Williams
> =============
> neil.williams(a)linaro.org
> http://www.linux.codehelp.co.uk/
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Dear LAVA-Team,
I have set up two new LAVA instances (master, worker) based on the production release 2016.02.
The installation worked fine but I can't switch the apache site.
I followed the instructions on:
https://validation.linaro.org/static/docs/installing_on_debian.html#install…
Then I searched for the lava-server.conf file on my system, but didn't find it.
It looks like it isn't installed.
I also tried to add it manually, but it didn't work.
i.A. Marco Felsch
Entwicklung Standort Chemnitz
Tel. +49 371 433151-34, Fax +49 371 433151-22
Zwickauer Strasse 173, 09116 Chemnitz
*******************************************************************
mailto:marco.felsch@tq-group.com
TQ-Systems GmbH
Mühlstraße 2, 82229 Seefeld
Amtsgericht München, HRB-Nr. 105 018, UST-IdNr. DE 252 251 764
WEEE-Reg.-Nr. DE 46311528
Sitz der Gesellschaft: Seefeld
Geschäftsführer: Dipl.-Ing. (FH) Detlef Schneider, Dipl.-Ing. (FH) Rüdiger Stahl
http://www.tq-group.com<http://www.tq-group.com/>
If you are running lava-tool on Debian unstable (sid) or testing
(stretch), the new version of python-keyring (8.4.1-1) will break
lava-tool at versions lower than 0.14-2 as it has moved a previously
supported component into a separate package which did not previously
exist. Sadly, the lava-tool maintainers were not made aware of this
change in functionality in advance - lava-tool simply stopped working
in unstable.
lava-tool 0.14-2 has been uploaded with the only change being a
dependency on python-keyrings.alt which is the new package which
provides the functionality formerly available as part of
python-keyring itself. python-keyrings.alt already exists in Debian
testing and can be safely installed ahead of the upgrade of
python-keyring, so users may wish to install python-keyrings.alt now
in order to maintain functionality.
Version 8.4 of python-keyring will migrate into Debian testing before
version 0.14-2 of lava-tool can migrate, so users of testing may have
a short window where lava-tool fails in Debian testing if
python-keyrings.alt is not installed in advance.
Ubuntu users may also be affected as python-keyring is a "main"
package, so may possibly update in Xenial Xerus - however as lava-tool
is a "universe" package, it may not be possible to upgrade Xenial to
0.14-2 of lava-tool. Users are advised to manually install
python-keyrings.alt to maintain functionality.
lava-tool users running Jessie are not affected (but are advised to
upgrade to 0.14-1~bpo8+1 from jessie-backports for other updates, new
functionality and bug fixes).
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
This is used to generate measurements by interacting with devices that are
not reachable as a node, and to avoid target-specific support.
Each device.conf file can define a host command or script to run when
entering and leaving a lava_command_run, in the style of the PDU
power cycling commands.
host_hook_enter_command = <enter_hook> (command line to pass to host)
host_hook_exit_command = <exit_hook>
Hook calling ABI:
ENTER: arg0 = <enter_hook> arg1 = "output-file-fullpath" &
EXIT: arg0 = <exit_hook> arg1 = "output-file-fullpath" &
The MO is close to lava_test_shell:
the LAVA_SIGNAL_TESTCASE search pattern will be applied to the output of the exit
hook and if a measurement is found, a test_case will be added to the bundle
stream of the job. Any file found in the logdir will be attached, starting with
stdout.log. If a {output-file-fullpath}.mimetype file is found, its content is used
as 'mime_type' for the attached file (as with lava_test_shell).
example, see Job http://lava.baylibre.com:10080/dashboard/streams/anonymous/lab-health/bundl… as an example of unit-test
power statistics record and display as test_cases.
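As an illustration of the hook ABI described above, a hypothetical exit hook might look like the sketch below. The test case id, measurement value and units are made up; only the LAVA_SIGNAL_TESTCASE line follows the existing test shell signal format.

```python
#!/usr/bin/env python
# Hypothetical exit hook: emits a LAVA_SIGNAL_TESTCASE line into the
# output file so the dispatcher's search pattern can pick it up.
import sys


def emit_testcase(test_case_id, measurement, units="mW"):
    # Format matches the lava_test_shell signal convention
    return ("<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=%s RESULT=pass "
            "MEASUREMENT=%s UNITS=%s>" % (test_case_id, measurement, units))


if __name__ == "__main__" and len(sys.argv) > 1:
    output_file = sys.argv[1]  # output-file-fullpath per the hook ABI
    with open(output_file, "w") as fd:
        fd.write(emit_testcase("power-avg", "1234.5") + "\n")
```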
Signed-off-by: Marc Titinger <mtitinger(a)baylibre.com>
---
v2: expect LAVA_SIGNAL_TESTCASE patterns like with lava_test_shell
---
lava_dispatcher/actions/lava_command.py | 46 +++++++++++++++++++++++++++++++--
lava_dispatcher/config.py | 5 ++++
2 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/lava_dispatcher/actions/lava_command.py b/lava_dispatcher/actions/lava_command.py
index 7bf787a..0748b51 100644
--- a/lava_dispatcher/actions/lava_command.py
+++ b/lava_dispatcher/actions/lava_command.py
@@ -33,6 +33,8 @@ from lava_dispatcher.test_data import create_attachment
from lava_dispatcher.utils import read_content
from datetime import datetime
+from lava_dispatcher.actions import lava_test_shell
+from lava_dispatcher.lava_test_shell import parse_testcase_result
class cmd_lava_command_run(BaseAction):
@@ -54,16 +56,30 @@ class cmd_lava_command_run(BaseAction):
_parser = None
_fixupdict = {}
_results_from_log_file = []
+ _cur_record = None
+ _record_index = 0
+ _uuid = None
def run(self, commands, parser=None, iterations=1, fixupdict=None, timeout=-1):
target = self.client.target_device
+ context = self.context
log_dir = tempfile.mkdtemp(dir=target.scratch_dir)
self._logfile = os.path.join(log_dir, 'stdout.log')
+ self._uuid = str(uuid4())
if parser is not None:
self._parser = parser
if fixupdict is not None:
self._fixupdict = fixupdict
logging.info("lava_command logfile: %s" % self._logfile)
+
+ #if there is a host-side hook to call:
+ host_enter_hook = context.device_config.host_hook_enter_command
+ if host_enter_hook:
+ self._cur_record = os.path.join(log_dir, self._uuid.split('-')[0])
+ host_enter_hook = host_enter_hook.rstrip('&') + " " + self._cur_record + " &"
+ logging.warning('Running enter hook on host %s' % host_enter_hook)
+ context.run_command(host_enter_hook)
+
with self.client.tester_session() as session:
for count in range(iterations):
logging.info("Executing lava_command_run iteration: %s" % count)
@@ -81,6 +97,27 @@ class cmd_lava_command_run(BaseAction):
self._results_from_log_file.append(res)
logging.error(e)
+ #if there is a host-side hook to call:
+ host_exit_hook = context.device_config.host_hook_exit_command
+ if host_exit_hook:
+ host_exit_hook = host_exit_hook.rstrip('&') + " " + self._cur_record + " &"
+ logging.warning('Running EXIT hook on dispatcher host %s' % host_exit_hook)
+ output = context.run_command_get_output(host_exit_hook)
+
+ # See https://github.com/BayLibre/iio-capture as an instance of an
+ # app that will produce compatible output when called from the host
+ # Hook.
+ test_pattern = r"<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=(?P<test_case_id>.*)\s+"\
+ "RESULT=(?P<result>(PASS|pass|FAIL|fail|SKIP|skip|UNKNOWN|unknown))\s+"\
+ "UNITS=(?P<units>.*)\s+MEASUREMENT=(?P<measurement>.*)>"
+ test_case_pattern = re.compile(test_pattern)
+
+ for line in output.split(os.linesep):
+ match = test_case_pattern.match(line.strip())
+ if match:
+ res = parse_testcase_result(match.groupdict())
+ self._results_from_log_file.append(res)
+
bundle = self._get_bundle()
self._write_results_bundle(bundle)
@@ -126,12 +163,17 @@ class cmd_lava_command_run(BaseAction):
def _get_test_runs(self):
now = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
- attachment = [create_attachment(os.path.basename(self._logfile), read_content(self._logfile))]
+ attachment_dir = os.path.dirname(self._logfile)
+ attachment = lava_test_shell._attachments_from_dir(os.path.dirname(self._logfile))
+ # fixup default mimetypes, for stdout.log mainly
+ for entry in attachment:
+ if entry['pathname'].endswith(".log"):
+ entry['mime_type'] = "text/plain"
results = self._get_test_results()
return {
'test_id': 'lava-command',
'analyzer_assigned_date': now,
- 'analyzer_assigned_uuid': str(uuid4()),
+ 'analyzer_assigned_uuid': self._uuid,
'time_check_performed': False,
'test_results': results,
'attachments': attachment
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 63aa3c6..10bccc8 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -29,6 +29,11 @@ from configglue import parser, schema
class DeviceSchema(schema.Schema):
+
+ # Host-side hook for lava_command_run
+ host_hook_enter_command = schema.StringOption()
+ host_hook_exit_command = schema.StringOption()
+
master_testboot_dir = schema.StringOption()
master_testboot_label = schema.StringOption()
master_testrootfs_dir = schema.StringOption()
--
2.5.0
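As the commit message above describes, the dispatcher applies the LAVA_SIGNAL_TESTCASE pattern to the exit hook's stdout. A minimal standalone sketch of that parsing, with the regex reproduced from the patch and a made-up sample line (the `avg_power` test case and its values are hypothetical):

```python
import re

# Pattern reproduced from the patch above: it extracts test_case_id,
# result, units and measurement from a LAVA_SIGNAL_TESTCASE line in
# the exit hook's stdout.
TEST_PATTERN = (
    r"<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=(?P<test_case_id>.*)\s+"
    r"RESULT=(?P<result>(PASS|pass|FAIL|fail|SKIP|skip|UNKNOWN|unknown))\s+"
    r"UNITS=(?P<units>.*)\s+MEASUREMENT=(?P<measurement>.*)>"
)
test_case_pattern = re.compile(TEST_PATTERN)


def parse_hook_output(output):
    """Return one dict per line of hook output matching the pattern."""
    results = []
    for line in output.splitlines():
        match = test_case_pattern.match(line.strip())
        if match:
            results.append(match.groupdict())
    return results


# Sample line an exit hook might print; the values are invented.
sample = ("<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=avg_power "
          "RESULT=pass UNITS=mW MEASUREMENT=1234>")
parsed = parse_hook_output(sample)
```

Any host tool that prints lines in this format will have its measurements turned into test_cases; iio-capture, linked in the patch, is cited as one app producing compatible output.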
> From: wangbo <wangbo.hfut(a)qq.com>
> Date: Thu, 28 Jan 2016 20:39:25 +0800
> Subject: LAVA uninstall and upgrade, build method from the source code
> Hi all,
>
> I installed LAVA on Ubuntu 14.04 in our lab, but later we plan to upgrade the system to Ubuntu 15.10, so what should I do now?
I'm afraid the only long term option is to migrate to Debian Jessie,
possibly using a VM, and then using jessie-backports. It is uncertain
whether it will be possible to support LAVA on the upcoming Ubuntu
Xenial 16.04LTS and support for Trusty has already been frozen. There
have been problems upgrading from 14.04 - these issues have already
been discussed on the lists.
https://lists.linaro.org/pipermail/lava-announce/2015-November/000003.html
The next upload of LAVA should be able to migrate into Ubuntu ahead of
the cut off for the Xenial release but that is outside our control. It
is also unknown whether it will be possible to maintain a system based
on Xenial - the LAVA software team will be unable to maintain such
support and users would need to be proactive in fixing problems that
may arise.
> Because I have already installed LAVA, should I uninstall it first and then use apt-get install to reinstall it? I can't find more information about this on the website; can you give me some advice? Thanks very much.
https://staging.validation.linaro.org/static/docs/installing_on_debian.html…
The main problems will be with the database, so building in any way
will have the same difficulties. Trusty is, sadly, a dead end at the
moment.
> And, how to install LAVA from the source code?
Not supported - installation is only supported as Debian packages.
LAVA is much more than a pip install / virtualenv can ever manage.
> LAVA includes different parts, like lava-server, lava-dispatcher, lava-tool, etc. So which one should be installed first? By the way, in the Git repositories, for example "https://git.linaro.org/lava/lava-server.git/tree", there is no document about the install method. Now, I will modify some code to meet our demands in our lab, and then replace the corresponding files. We plan to manage the code with Git too. So if we knew the install method from the source code, it would be easier to build the code.
There is no documentation for installing from git because the
installation method has moved to using packages. We abandoned the old
install-from-git methods a long time ago due to interminable bugs and
insoluble dilemmas.
During the migration to the new dispatcher, things are too complex to
support more than Debian Jessie (and the next Debian release,
Stretch). Once the migration is complete (in 2017), the advantages of
the new design should also make it easier to support other operating
systems. We are a long way from that at the moment.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Hi Neil et al.
I'd like to have the host record power measurements during a job, or
during a unit-test.
Browsing the LAVA manual, I came across shell-hooks, which seem to
enable what I want to do.
Suppose I want to call a script "capture-start.sh" at the top of the
job and "capture-stop.sh" at the end; is this the proper way to do it:
In the job json:
-------------------
```
{
"command": "lava_test_shell",
"parameters": {
"testdef_repos": [
{
"git-repo": "https://github.com....
/lava-test-definitions.git",
"testdef": "capture-start.yaml"
}
],
"timeout": 1800
}
},
.... dispatcher actions of interest ....
{
"command": "lava_test_shell",
"parameters": {
"testdef_repos": [
{
"git-repo": "https://github.com....
/lava-test-definitions.git",
"testdef": "capture-stop.yaml"
}
],
"timeout": 1800
}
},
```
In the YAML, do I need to override all the handlers? If I don't need
end_case or postprocess_test_result, do I need to fill them in and point to
a "nop.sh" script?
I assume that I need to provide at least one step to be done on the
target, even if I am only interested in the host-side commands. Would
the following YAML make sense?
```
metadata:
format: Lava-Test Test Definition 1.0
name: power-capture-start
run:
steps:
- lava-test-case pass-test --shell echo starting power measurements
handler:
handler-name: shell-hooks
params:
device_config_vars:
DEVICE_TYPE: device_type
handlers:
start_testcase: capture-start-start-hook.sh
end_testcase: nop.sh
postprocess_test_result: nop.sh
```
Thanks and Regards,
Marc.
Hi all,
I have a basic practical question: I'm trying to reset a panda-es board
with a suitable master image for lava to use with the default
device-types confs, namely with:
lava-dispatcher/device-types/panda-es.conf:bootloader_prompt = U-Boot
so I wrote panda-master-20150303.img.tgz (linaro master-images) to the
sdcard I'm using.
I'd expect u-boot.bin on the boot partition to be compatible with the
expected LAVA prompt and u-boot command set ('bootz' supported, for
instance), but I keep getting an incompatible u-boot configuration
("Panda" as the prompt, bootz not supported, etc.). What am I missing?
Thanks and Regards,
Marc.
Hi Neil et al,
I'm trying to debug a simple qemu job that goes straight from running to
incomplete without creating a log (it used to work OK, but I reinstalled
everything on a different machine...).
Looking at /var/log/lava-server/lava-scheduler.log I see the following:
2015-12-09 15:22:27,838 [INFO] [lava_scheduler_daemon.job.JobRunner.14]
starting job {u'timeout': 18000, 'health_check': False, u'job_name':
u'qemu-arm-test', u'actions': [{u'command': u'deploy_linaro_kernel',
u'parameters': {u'login_prompt': u'login:', u'kernel': u'
http://images.validation.linaro.org/functional-test-images/qemu-arm/zImage-…',
u'username': u'root', u'rootfs': u'
http://images.validation.linaro.org/functional-test-images/qemu-arm/core-im…'}},
{u'command': u'boot_linaro_image', u'parameters': {u'test_image_prompt':
u'root@qemu-system-arm:~#'}}], u'target': u'qemu0'}
2015-12-09 15:22:27,838 [INFO] [lava_scheduler_daemon.job.MonitorJob]
monitoring "setsid lava-server manage schedulermonitor 14 lava-dispatch
qemu0 /tmp/tmpPd4nGs -l info -f /var/log/lava-server/lava-scheduler.log"
2015-12-09 15:22:29,171 [INFO] [lava_scheduler_daemon.job.Job.qemu0]
executing "lava-dispatch /tmp/tmpFltuQQ --output-dir
/var/lib/lava-server/default/media/job-output/job-14"
2015-12-09 15:22:30,388 [INFO]
[lava_scheduler_daemon.job.DispatcherProcessProtocol] childConnectionLost
for qemu0: 0
2015-12-09 15:22:30,389 [INFO]
[lava_scheduler_daemon.job.DispatcherProcessProtocol] childConnectionLost
for qemu0: 1
2015-12-09 15:22:30,389 [INFO]
[lava_scheduler_daemon.job.DispatcherProcessProtocol] childConnectionLost
for qemu0: 2
2015-12-09 15:22:30,389 [INFO]
[lava_scheduler_daemon.job.DispatcherProcessProtocol] processExited for
qemu0: A process has ended with a probable error condition: process ended
with exit code 1.
2015-12-09 15:22:30,389 [INFO]
[lava_scheduler_daemon.job.DispatcherProcessProtocol] processEnded for
qemu0: A process has ended with a probable error condition: process ended
with exit code 1.
2015-12-09 15:22:30,389 [INFO] [lava_scheduler_daemon.job.Job.qemu0] job
finished on qemu0
2015-12-09 15:22:30,389 [INFO] [lava_scheduler_daemon.job.Job.qemu0] job
incomplete: reported 1 exit code
2015-12-09 15:22:30,422 [INFO]
[lava_scheduler_daemon.dbjobsource.DatabaseJobSource] job 14 completed on
qemu0
I tried to run manually:
setsid lava-server manage schedulermonitor 14 lava-dispatch qemu0
qemu-arm.json
powerci@lab-baylibre:~/POWERCI/scripts/user$ 2015-12-09 15:23:23,285
[ERROR] [lava_scheduler_daemon.job.Job.qemu0] AttributeError: 'Job' object
has no attribute '_protocol'
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line
1203, in mainLoop
self.runUntilCurrent()
File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line
798, in runUntilCurrent
f(*a, **kw)
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line
393, in callback
self._startRunCallbacks(result)
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line
501, in _startRunCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line
588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/lib/python2.7/dist-packages/lava_scheduler_daemon/job.py",
line 226, in _run
self.cancel(exc)
File "/usr/lib/python2.7/dist-packages/lava_scheduler_daemon/job.py",
line 157, in cancel
self._protocol.transport.signalProcess(getattr(signal, signame))
exceptions.AttributeError: 'Job' object has no attribute '_protocol'
Note that I get the same issue with other jobs (boards, kvm): submission is
OK, but the job ends incomplete, with no log.
Any help would be much appreciated!
Many thanks,
Marc.
Hi,
Is it possible to interact with a host tool (running on the dispatcher
host) to gather arbitrary key values and post them into the test result
stream? In other words, how do I add MEASUREMENT/UNITS values using the
output of a host tool (no target shell interaction)?
Thanks,
Marc.
Hi Neil et al.
I'm working on adding my switching device to the lavapdu daemon, so I pulled
Matt's GitHub repository, etc., but when trying to restart the daemon I get
the following error:
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]: Reading
settings from /etc/lavapdu/lavapdu.conf
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]: Traceback
(most recent call last):
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]: File
"/usr/sbin/lavapdu-runner", line 103, in <module>
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]: context =
daemon.DaemonContext(
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]:
AttributeError: 'module' object has no attribute 'DaemonContext'
Dec. 04 15:56:49 powerci-lava-disp lavapdu-runner[3081]: ...fail!
Dec. 04 15:56:49 powerci-lava-disp systemd[1]: lavapdu-runner.service:
Control process exited, code=exited status=1
Dec. 04 15:56:49 powerci-lava-disp systemd[1]: Failed to start LSB:
LAVA PDU Runner.
Dec. 04 15:56:49 powerci-lava-disp systemd[1]: lavapdu-runner.service:
Unit entered failed state.
Dec. 04 15:56:49 powerci-lava-disp systemd[1]: lavapdu-runner.service:
Failed with result 'exit-code'.
I tried to revert to the master branch and re-installed with setup.py,
with no success. The JSON config file seems OK.
Any clue?
Many Thanks,
Marc.
Hi All,
I'm setting up a local LAVA scheduler+dispatcher (standalone).
I'm able to post jobs, but I think I'm missing some initial configuration
for certain actions to complete, namely the
deploy_linaro_image action.
The eMMC flash content of my BBB-RevB6 is pretty standard (debian-7.9 of
2015-11-03, 2GB image).
Based on the log, the dispatcher expects the default "testboot" and
"testrootfs" partitions with offset 2; I assume this means that those
partitions must be manually created the first time with index 3 and 4 in
order for this to work.
I would appreciate some hints, to avoid unnecessary trial and error: what
would be the expected partitioning for a BBB with LAVA?
Thanks in advance,
best regards,
Marc.
Please let us know if you are using OpenID authentication with LAVA.
Newer versions of django will make it impossible to support
django-openid-auth in Debian unstable and testing. The version of
django-openid-auth in Jessie can continue to be used, so we would like
to know how many users want to continue with this support.
OpenID as a protocol has been dying for some time and Linaro has moved
over to LDAP, which is fine if LDAP is already available.
The time pressure for this change is coming from the schedule to get
the latest django and the latest lava packages into Ubuntu Xenial
16.04LTS which means that support needs to be implemented in the
2015.12 or 2016.1 LAVA releases. This is why this is quickly following
the trusty change. We have been aware of the issues with
django-openid-auth for some time; it was only when we had completed
the move of the Cambridge lab to LDAP that changes involving
django-openid-auth could be considered.
If you are using OpenID authentication (e.g. using Launchpad or Google
OpenID), please let us know.
If you would like to see some other forms of authentication supported,
also let us know. We can investigate Python Social Auth
(http://psa.matiasaguirre.net/), if there is interest.
If we don't hear from users who want django-openid-auth support for
use on Debian Jessie, we will drop django-openid-auth support from all
lava builds. This will leave LDAP and local Django accounts in
2015.12.
If anyone has experience of other django authentication modules, also
let us know.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
See also:
https://lists.linaro.org/pipermail/lava-announce/2015-November/000002.html
which was also sent to these lists (except linaro-dev).
So far, nobody has come forward as a Trusty user. The only Trusty
instance of which we are aware is already due to migrate to Debian
Jessie.
The LAVA software team are now applying updates which will freeze LAVA
software support for Ubuntu Trusty at 2015.9 for lava-dispatcher and
2015.9.post1 for lava-server due to the complexities of supporting
both django1.6 and the current django1.7 in Jessie and django1.8,
possibly django1.9 by the time Debian Stretch is released.
The last packages for Ubuntu Trusty 14.04LTS will be:
lava-server 2015.9.post1
lava-dispatcher 2015.9
Once these changes are applied, the Debian packaging used to build
future versions of LAVA packages will prevent builds against django1.6
and prevent installation if django1.6 is found, in order to prevent
database corruption.
This means that Trusty users will not be able to use the results of
the dispatcher refactoring.
Ubuntu Xenial Xerus - which is planned to be the 16.04LTS in April
2016 - is expected to pick up LAVA software releases from Debian up
until the 2016.1 release (possibly 2016.2) and is also expected to be
using django1.8. The next Debian stable release (Stretch), for which
no date has yet been set, may use django1.9.
Initial attempts at migrating a test instance from Trusty to django1.7
did not go well, and the migration from Trusty to Xenial cannot be
supported by the LAVA software team. The recommendation is to go
directly from 2015.9 on Trusty to the same version available for
Debian Jessie, but there will still be work to do to prepare and
implement the migration, which will be instance-dependent.
Documentation is being added to assist with this migration but there
will remain risks of data loss which will need to be managed for each
instance. It is imperative that anyone using Trusty has an up to date
backup of the postgresql database dump before considering any
migration. If the existing data is to be dropped, a new install on
Debian Jessie is recommended.
It is not possible for the LAVA software team to support all versions
of django from 1.6 to 1.9 - particular problems are known when going
from django1.6 to django1.7 as the methods to migrate the lava-server
database changed fundamentally in django1.7.
Notes are being added to the documentation on the trusty branch based
on 2015.9 to be released within lava-server 2015.9.post1 and to the
documentation in the master branch (which will go into 2015.12).
All future builds of LAVA software will now be made and uploaded only
to Debian and releases.linaro.org.
So far, nobody has come forward who is willing to maintain packaging
for LAVA software on any distribution other than Debian. As the
refactoring proceeds, we expect that it will become easier to package
LAVA for other distributions but the migration to the refactoring must
be complete first.
Everyone interested in or using LAVA is encouraged to subscribe to the
lava-announce mailing list which is low volume and only used for
substantial changes like this.
https://lists.linaro.org/mailman/listinfo/lava-announce
See also https://validation.linaro.org/static/docs/support.html
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
This is a call to *all* users of LAVA on Trusty - please let us know
who you are, what your needs are and whether you are able to migrate
to Debian Jessie instead of going from Trusty 14.04LTS to Xenial Xerus
16.04LTS.
If you are using LAVA on any release of Ubuntu older than Trusty, the
only advice from the LAVA team is to immediately migrate to Debian
Jessie.
If you are using LAVA on Wily Werewolf or Vivid Vervet, you should
also consider testing the migration to Xenial and comparing with a
migration to Debian Jessie, as below.
LAVA is looking at a fix for the 2015.9 release, but Django1.9 is
already in beta. There are issues with django1.8 which are being
worked on in development. Currently, we are proposing that this update
will be applied to 2015.9 and made into a frozen release.
The master branch of LAVA will continue to develop and will need to
use more features only available in Django1.7 and later. Documentation
relating to installing Trusty would then be deprecated and removed in
subsequent releases from the master branch. Equally, future database
migrations on the master branch would no longer provide south support,
using the django migration support provided by django1.7 and later. So
these releases would not be built for Trusty - leaving only the frozen
branch.
Due to the complexity of supporting django1.6, it is unlikely that
updates will be available for the frozen branch once this happens.
The changes in the lava-server due to the ongoing refactoring will
mean that users of Trusty will be unable to migrate to pipeline
support until the server has also been migrated to Xenial 16.04LTS.
In addition to this, there is concern that migrating from Trusty and
django1.6 all the way to django1.8 or possibly django1.9 in Ubuntu
Xenial 16.04LTS is going to be problematic and the LAVA team will be
unable to assist in most cases.
The alternative is for someone with a reasonably complex lab running
Ubuntu to take up a role as tester of the frozen branch *and*
responsibility for patches which can maintain trusty support and
migration to Xenial 16.04LTS. The problem then will be that it will be
a very large transition when 16.04LTS actually becomes available -
only for the same lag to start all over again.
I'm unsure when Xenial will close the window for migrations from
Debian into Xenial - I expect that the 2015.12 release of LAVA will
migrate, I expect that 2016.1 will migrate too but I cannot be sure
about 2016.2 or 2016.3. That migration is completely outside the
control of the LAVA software team.
https://launchpad.net/ubuntu/xenial/+source/lava-server
Everyone considering staying on Ubuntu is advised to try a migration
to Xenial *now* - in a VM, with and without a recent backup of your
database and logs. Xenial currently has 2015.11. Also compare with a
migration to Debian Jessie by dumping and reimporting the database. In
each case, ensure that the permissions on /var/lib/lava-server and
sub-directories are retained from the original.
Please talk to us and test out what you are going to do.
https://validation.linaro.org/static/docs/support.html
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Some pointers on this list: Anyone is welcome to subscribe and post their
issues. A few notes which subscribers should follow and recommend to others:
0: Wherever possible, use a pastebin for log output and include a link to
the paste in your post.
1: Paste from the complete log, not the summary, so that you get the
complete lines.
2: Include the job definition in this paste or in another paste.
3: If your job uses URLs which are not visible to the rest of the list,
include a rough outline of how those were built and what versions of tools
were used.
Subscribers are invited to help each other. LAVA developers will be around
but are not the only source of assistance on this list.
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/