Hi,
I have two questions:
1.
I am using LAVA to boot the D02 board. It could boot the board successfully
before, but it fails after I updated the UEFI and rewrote the config
file. When I execute the config file manually, the board boots
successfully, and I can't figure out what is wrong. The version of LAVA I
use is 2015.9.post1, and I use LAVA v1.
From the figure, we can see that it gets stuck when booting the kernel.
The log file of the LAVA job, the log of the manual boot, and the config
file are attached.
2.
The UEFI in use needs to be controlled by cursor movement, so I need to send
KEY-DOWN from the d02.conf file. But when I use 'sendline "\033[A"' or
'sendline "\16[A"', it does not work in LAVA. So I want to
know if there is any way to achieve this.
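(For what it's worth, a minimal pexpect sketch of how cursor-key escape
sequences are usually sent to a serial console; this is only an illustration,
not the actual d02.conf syntax, and the telnet host/port and menu text are
placeholders. Note that "\033[A" is Up and "\033[B" is Down, and that send()
avoids the newline that sendline() appends.)
```
# Minimal sketch, not d02.conf syntax: send ANSI cursor-key escape sequences
# to the board console, here assumed to be reachable over telnet.
import pexpect

UP = "\x1b[A"      # same bytes as "\033[A"
DOWN = "\x1b[B"    # KEY-DOWN

console = pexpect.spawn("telnet 192.168.0.100 7001", timeout=30)
console.expect("Device Manager")   # illustrative UEFI menu text
console.send(DOWN)                 # send() does not append the newline that sendline() would
console.send("\r")                 # press Enter on the highlighted entry
```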
Thank you very much.
--
Best Wishes
wuyanjun (Elaine)
Hisilicon Turing Software and Architecture
Hi all,
The UEFI on the board has been upgraded, so its existing configuration can
no longer be used in the current environment, because the new UEFI uses a
dialog to show the menu.
As far as I know, the expect and sendline directives in the LAVA XXX.conf do
not work well in this situation. Do you know how to configure the XXX.conf to
support this kind of UEFI?
The menu of the UEFI is shown below.
[screenshot of the UEFI menu]
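(Only a sketch under assumptions, not real XXX.conf syntax: one common way to
drive a dialog-style menu is to re-expect the redrawn menu text after every
keypress, for example with pexpect; the console command and menu title below
are placeholders.)
```
# Sketch only: navigate a dialog-style UEFI menu by sending cursor keys and
# re-expecting the menu title, since the dialog is redrawn after each keypress.
import pexpect

console = pexpect.spawn("telnet 192.168.0.100 7001", timeout=60)
console.expect("Boot Manager")        # illustrative menu title
for _ in range(2):                    # move the highlight down two entries
    console.send("\x1b[B")
    console.expect("Boot Manager")
console.send("\r")                    # select the highlighted entry
```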
Thank you very much.
Elaine (wuyanjun)
Hisilicon, Turing Architecture and Software
Hi Neil and all,
how do I add per-action comments or metadata so that I can parse them
from the result bundle? The following, for instance, will pass schema
validation, but the result bundle degenerates into keeping only the last
"comment" metadata entry, while I'd like a per-action metadata entry.
{
    "command": "lava_command_run",
    "metadata": {
        "comment": "comment1"
    },
    "parameters": {
        "commands": [
            "uname -a"
        ]
    }
},
{
    "command": "lava_command_run",
    "metadata": {
        "comment": "comment2"
    },
    "parameters": {
        "commands": [
            "ls -al"
        ]
    }
},
The result bundle will contain the same metadata for each lava-command:
"attributes": {
"comment": "comment2", <=== same value
"image.type": "kernel-ci",
.... snip ...
"logging_level": "DEBUG",
"device.tree": "omap4-panda-es.dtb"
},
"test_id": "lava-command"
IOW, how do I assign a brief description to an action?
Thanks and regards,
Marc.
Hi Neil,
But the problem is that I want to define a job and use a Python script to communicate with the Android DUT; for example, I want to use script.py on the dispatcher to control the DUT. However, "lava_test_shell" copies all of the commands in the steps into run.sh.
# test_script.yaml
run:
  steps:
    - echo "hello world"
    - ./android/script/script.py
# run.sh in DUT
###default parameters from yaml###
######
###test parameters from json###
######
###other parameters###
LAVA_SERVER_IP='172.16.117.50'
TARGET_TYPE='android'
######
set -e
export TESTRUN_ID=python
cd /data/local/tmp/lava-mstar01/tests/0_c7a6f9e0-ae70-4d8b-8c06-f6da04b7a26c
UUID=`cat uuid`
echo "<LAVA_SIGNAL_STARTRUN $TESTRUN_ID $UUID>"
#wait for an ack from the dispatcher
read
echo "hello world"
./android/script/script.py
echo "<LAVA_SIGNAL_ENDRUN $TESTRUN_ID $UUID>"
#wait for an ack from the dispatcher
read
------------------ Original Message ------------------
From: "Neil Williams" <neil.williams(a)linaro.org>
Sent: Saturday, 30 January 2016, 11:29 PM
To: "wangbo" <wangbo.hfut(a)qq.com>
Cc: "Lava Users Mailman list" <lava-users(a)lists.linaro.org>
Subject: Re: Forward of moderated message
On 29 January 2016 at 04:08, wangbo <wangbo.hfut(a)qq.com> wrote:
> Hi Neil,
>
> Thanks very much for your quick and warm answer.
>
> One more question:
>
> LAVA copies all the test scripts defined in the YAML to the specified
> directory on the DUT, and then the job is executed on the DUT.
The LAVA test shell scripts are POSIX shell. If a shell is not
available, the test can be rewritten to use ADB commands, as the CTS
tests do.
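(A rough host-side sketch, purely illustrative and not LAVA's own API: driving
an Android DUT over adb from a Python script running on the dispatcher or a VM
node; the device serial is a placeholder.)
```
# Rough, illustrative host-side sketch (not LAVA's API): drive an Android DUT
# over adb from the dispatcher/VM instead of running a shell on the DUT.
import subprocess

SERIAL = "0123456789ABCDEF"  # placeholder device serial

def adb(*args):
    """Run an adb command against the DUT and return its stdout as text."""
    out = subprocess.check_output(("adb", "-s", SERIAL) + args)
    return out.decode("utf-8", "replace").strip()

adb("wait-for-device")
print(adb("shell", "getprop", "ro.build.version.release"))
```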
> And if I want
> to test an Android device, almost all Android images don't include a
> Python interpreter,
LAVA does not run any python code on the DUT unless the test writer
puts python commands into their test shell definition YAML. The python
code only runs on the dispatcher and handles communications over the
serial port (or using ADB).
> so it's impossible to write the test case in Python; in most cases we
> have to use shell to run the test case. But sometimes we want to use
> Appium/Robotium to test the Android image, and use LAVA to deploy the image
> and analyze and record the test results, so LAVA is unusable in such
> conditions. Do you have any advice on integrating these tools with LAVA, or
> will LAVA support such conditions in the future?
You'd need to use multinode (as with CTS) to install the test support
binaries into a VM which then communicates with the device. That VM
then reports results that are retrieved with LAVA.
>
> Best Regards.
>
> ------------------ Original Message ------------------
> From: "Neil Williams" <neil.williams(a)linaro.org>
> Sent: Friday, 29 January 2016, 3:27 AM
> To: "wangbo" <wangbo.hfut(a)qq.com>
> Cc: "Lava Users Mailman list" <lava-users(a)lists.linaro.org>
> Subject: Re: Forward of moderated message
>
>> From: wangbo <wangbo.hfut(a)qq.com>
>> Date: Thu, 28 Jan 2016 20:39:25 +0800
>> Subject: LAVA uninstall and upgrade, build method from the source code
>> Hi all,
>>
>> I installed LAVA on Ubuntu 14.04 in our lab, but later we plan to upgrade
>> the system to Ubuntu 15.10, so what should I do now?
>
> I'm afraid the only long term option is to migrate to Debian Jessie,
> possibly using a VM, and then using jessie-backports. It is uncertain
> whether it will be possible to support LAVA on the upcoming Ubuntu
> Xenial 16.04LTS and support for Trusty has already been frozen. There
> have been problems upgrading from 14.04 - these issues have already
> been discussed on the lists.
>
> https://lists.linaro.org/pipermail/lava-announce/2015-November/000003.html
>
> The next upload of LAVA should be able to migrate into Ubuntu ahead of
> the cut off for the Xenial release but that is outside our control. It
> is also unknown whether it will be possible to maintain a system based
> on Xenial - the LAVA software team will be unable to maintain such
> support and users would need to be proactive in fixing problems that
> may arise.
>
>> Since I have already installed LAVA, should I uninstall it first and then
>> use apt-get install to reinstall LAVA? I can't find more information about
>> this on the website; can you give me some advice? Thanks very much.
>
> https://staging.validation.linaro.org/static/docs/installing_on_debian.html…
>
> The main problems will be with the database, so building in any way
> will have the same difficulties. Trusty is, sadly, a dead end at the
> moment.
>
>> And, how to install LAVA from the source code?
>
> Not supported - installation is only supported as Debian packages.
> LAVA is much more than a pip install / virtualenv can ever manage.
>
>> LAVA includes different parts, like lava-server, lava-dispatcher,
>> lava-tool, etc. So which one should be installed first? By the way, in the
>> Git repositories, for example "https://git.linaro.org/lava/lava-server.git/tree",
>> there is no documentation about the installation method. Now, I will modify
>> some code to meet our demands in our lab, and then replace the corresponding
>> files. We plan to manage the code with Git too. So if we knew how to install
>> from the source code, it would be easier to build the code.
>
> There is no documentation for installing from git because the
> installation method has moved to using packages. We abandoned the old
> install from git methods a long time ago due to interminable bugs and
> insolvable dilemmas.
>
> During the migration to the new dispatcher, things are too complex to
> support more than Debian Jessie (and the next Debian release,
> Stretch). Once the migration is complete (in 2017), the advantages of
> the new design should also make it easier to support other operating
> systems. We are a long way from that at the moment.
>
> --
>
> Neil Williams
> =============
> neil.williams(a)linaro.org
> http://www.linux.codehelp.co.uk/
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
Dear LAVA-Team,
I have set up two new LAVA instances (master, worker) based on the production release 2016.02.
The installation worked fine, but I can't switch the Apache site.
I followed the instructions on:
https://validation.linaro.org/static/docs/installing_on_debian.html#install…
Then I searched for the lava-server.conf file on my system, but didn't find it.
It looks like it isn't installed.
I also tried to add it manually, but it didn't work.
i.A. Marco Felsch
Development, Chemnitz site
TQ-Systems GmbH
If you are running lava-tool on Debian unstable (sid) or testing
(stretch), the new version of python-keyring (8.4.1-1) will break
lava-tool at versions lower than 0.14-2 as it has moved a previously
supported component into a separate package which did not previously
exist. Sadly, the lava-tool maintainers were not made aware of this
change in functionality in advance - lava-tool simply stopped working
in unstable.
lava-tool 0.14-2 has been uploaded with the only change being a
dependency on python-keyrings.alt which is the new package which
provides the functionality formerly available as part of
python-keyring itself. python-keyrings.alt already exists in Debian
testing and can be safely installed ahead of the upgrade of
python-keyring, so users may wish to install python-keyrings.alt now
in order to maintain functionality.
Version 8.4 of python-keyring will migrate into Debian testing before
version 0.14-2 of lava-tool can migrate, so users of testing may have
a short window where lava-tool fails in Debian testing if
python-keyrings.alt is not installed in advance.
Ubuntu users may also be affected as python-keyring is a "main"
package, so may possibly update in Xenial Xerus - however as lava-tool
is a "universe" package, it may not be possible to upgrade Xenial to
0.14-2 of lava-tool. Users are advised to manually install
python-keyrings.alt to maintain functionality.
lava-tool users running Jessie are not affected (but are advised to
upgrade to 0.14-1~bpo8+1 from jessie-backports for other updates, new
functionality and bug fixes).
--
Neil Williams
=============
neil.williams(a)linaro.org
http://www.linux.codehelp.co.uk/
This is used to generate measurements by interacting with devices that are
not reachable as a node, while avoiding target-specific support.
Each device.conf file can define a host command or script to run when entering
and leaving a lava_command_run, in the style of the PDU power-cycling
commands.
host_hook_enter_command = <enter_hook> (command line to pass to host)
host_hook_exit_command = <exit_hook>
Hook calling ABI:
ENTER: arg0 = <enter_hook> arg1 = "output-file-fullpath" &
EXIT: arg0 = <exit_hook> arg1 = "output-file-fullpath" &
The MO is close to lava_test_shell:
the LAVA_SIGNAL_TESTCASE search pattern is applied to the output of the exit
hook, and if a measurement is found, a test_case is added to the bundle
stream of the job. Any file found in the logdir is attached, starting with
stdout.log. If a {output-file-fullpath}.mimetype file is found, its content is
used as the 'mime_type' for the attached file (as with lava_test_shell).
For an example, see the job http://lava.baylibre.com:10080/dashboard/streams/anonymous/lab-health/bundl… where
power statistics are recorded and displayed as test_cases.
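As an illustration only (not part of this patch, and the measurement value is
a placeholder), an exit hook can be any executable that prints lines matching
the pattern above, e.g.:
```
#!/usr/bin/env python
# Illustrative exit hook, not part of this patch: print a test case in the
# format that cmd_lava_command_run greps from the exit hook's output.
# argv[1] is the "output-file-fullpath" passed by the dispatcher (see the ABI above).
import sys

record = sys.argv[1]   # capture file written by the enter hook (ignored here)
average_mw = 1250.0    # placeholder: a real hook would compute this from `record`

print("<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=power_average RESULT=pass "
      "UNITS=mW MEASUREMENT=%s>" % average_mw)
```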
Signed-off-by: Marc Titinger <mtitinger(a)baylibre.com>
---
v2: expect LAVA_SIGNAL_TESTCASE patterns like with lava_test_shell
---
lava_dispatcher/actions/lava_command.py | 46 +++++++++++++++++++++++++++++++--
lava_dispatcher/config.py | 5 ++++
2 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/lava_dispatcher/actions/lava_command.py b/lava_dispatcher/actions/lava_command.py
index 7bf787a..0748b51 100644
--- a/lava_dispatcher/actions/lava_command.py
+++ b/lava_dispatcher/actions/lava_command.py
@@ -33,6 +33,8 @@ from lava_dispatcher.test_data import create_attachment
from lava_dispatcher.utils import read_content
from datetime import datetime
+from lava_dispatcher.actions import lava_test_shell
+from lava_dispatcher.lava_test_shell import parse_testcase_result
class cmd_lava_command_run(BaseAction):
@@ -54,16 +56,30 @@ class cmd_lava_command_run(BaseAction):
_parser = None
_fixupdict = {}
_results_from_log_file = []
+ _cur_record = None
+ _record_index = 0
+ _uuid = None
def run(self, commands, parser=None, iterations=1, fixupdict=None, timeout=-1):
target = self.client.target_device
+ context = self.context
log_dir = tempfile.mkdtemp(dir=target.scratch_dir)
self._logfile = os.path.join(log_dir, 'stdout.log')
+ self._uuid = str(uuid4())
if parser is not None:
self._parser = parser
if fixupdict is not None:
self._fixupdict = fixupdict
logging.info("lava_command logfile: %s" % self._logfile)
+
+ #if there is a host-side hook to call:
+ host_enter_hook = context.device_config.host_hook_enter_command
+ if host_enter_hook:
+ self._cur_record = os.path.join(log_dir, self._uuid.split('-')[0])
+ host_enter_hook = host_enter_hook.rstrip('&') + " " + self._cur_record + " &"
+ logging.warning('Running enter hook on host %s' % host_enter_hook)
+ context.run_command(host_enter_hook)
+
with self.client.tester_session() as session:
for count in range(iterations):
logging.info("Executing lava_command_run iteration: %s" % count)
@@ -81,6 +97,27 @@ class cmd_lava_command_run(BaseAction):
self._results_from_log_file.append(res)
logging.error(e)
+ #if there is a host-side hook to call:
+ host_exit_hook = context.device_config.host_hook_exit_command
+ if host_exit_hook:
+ host_exit_hook = host_exit_hook.rstrip('&') + " " + self._cur_record + " &"
+ logging.warning('Running EXIT hook on dispatcher host %s' % host_exit_hook)
+ output = context.run_command_get_output(host_exit_hook)
+
+ # See https://github.com/BayLibre/iio-capture as an instance of an
+ # app that will produce compatible output when called from the host
+ # Hook.
+ test_pattern = r"<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=(?P<test_case_id>.*)\s+"\
+ "RESULT=(?P<result>(PASS|pass|FAIL|fail|SKIP|skip|UNKNOWN|unknown))\s+"\
+ "UNITS=(?P<units>.*)\s+MEASUREMENT=(?P<measurement>.*)>"
+ test_case_pattern = re.compile(test_pattern)
+
+ for line in output.split(os.linesep):
+ match = test_case_pattern.match(line.strip())
+ if match:
+ res = parse_testcase_result(match.groupdict())
+ self._results_from_log_file.append(res)
+
bundle = self._get_bundle()
self._write_results_bundle(bundle)
@@ -126,12 +163,17 @@ class cmd_lava_command_run(BaseAction):
def _get_test_runs(self):
now = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
- attachment = [create_attachment(os.path.basename(self._logfile), read_content(self._logfile))]
+ attachment_dir = os.path.dirname(self._logfile)
+ attachment = lava_test_shell._attachments_from_dir(os.path.dirname(self._logfile))
+ # fixup default mimetypes, for stdout.log mainly
+ for entry in attachment:
+ if entry['pathname'].endswith(".log"):
+ entry['mime_type'] = "text/plain"
results = self._get_test_results()
return {
'test_id': 'lava-command',
'analyzer_assigned_date': now,
- 'analyzer_assigned_uuid': str(uuid4()),
+ 'analyzer_assigned_uuid': self._uuid,
'time_check_performed': False,
'test_results': results,
'attachments': attachment
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 63aa3c6..10bccc8 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -29,6 +29,11 @@ from configglue import parser, schema
class DeviceSchema(schema.Schema):
+
+ # Host-side hook for lava_command_run
+ host_hook_enter_command = schema.StringOption()
+ host_hook_exit_command = schema.StringOption()
+
master_testboot_dir = schema.StringOption()
master_testboot_label = schema.StringOption()
master_testrootfs_dir = schema.StringOption()
--
2.5.0
Hi Neil et al.
I'd like to have the host record power measurements during a job, or
during a unit-test.
Browsing the LAVA manual, I came across the shell-hooks, which seem to
enable what I want to do.
Suppose I want to call a script "capture-start.sh" at the top of the
job, and "capture-stop.sh" at the end; is this the proper way to do it?
In the job json:
-------------------
```
{
    "command": "lava_test_shell",
    "parameters": {
        "testdef_repos": [
            {
                "git-repo": "https://github.com..../lava-test-definitions.git",
                "testdef": "capture-start.yaml"
            }
        ],
        "timeout": 1800
    }
},

.... dispatcher actions of interest ....

{
    "command": "lava_test_shell",
    "parameters": {
        "testdef_repos": [
            {
                "git-repo": "https://github.com..../lava-test-definitions.git",
                "testdef": "capture-stop.yaml"
            }
        ],
        "timeout": 1800
    }
},
```
In the YAML, do I need to override all the handlers? If I don't need
end_testcase or postprocess_test_result, do I need to fill them in and
point them to a "nop.sh" script?
I assume that I need to provide at least one step to be run on the
target, even if I am only interested in the host-side commands. Would
the following YAML make sense:
```
metadata:
  format: Lava-Test Test Definition 1.0
  name: power-capture-start

run:
  steps:
    - lava-test-case pass-test --shell echo starting power measurements

handler:
  handler-name: shell-hooks
  params:
    device_config_vars:
      DEVICE_TYPE: device_type
    handlers:
      start_testcase: capture-start-start-hook.sh
      end_testcase: nop.sh
      postprocess_test_result: nop.sh
```
Thanks and Regards,
Marc.