Hi Kumar, Andrei,
From a previous lava-users mailing list thread, I have seen that you
addressed the jlink debugger setup like this:
On Thu, Jan 23, 2020 at 7:33 PM Andrei Gansari <andrei.gansari@nxp.com> wrote:
> From the screenshot it looks like you have a version of LAVA that does
> not support the jlink boot method.
> JLink was added in version 2019.10-1.

On Tue, Nov 26, 2019 at 2:36 PM Andrei Gansari <andrei.gansari@nxp.com> wrote:
> I've tested lava+jlink on Cortex-M with both the onboard debugger and an
> external debugger, like the one you referenced.
> You should change the following if needed:
>
>     address: 0x00000000
>     options:
>     - '-device MK64FN1M0xxx12'
>     - '-if SWD'
>     - '-speed 4000'
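For reference, once a recent enough LAVA is installed, selecting this boot method in a job definition should look roughly like the following minimal sketch (the timeout value is illustrative; address and options come from the device configuration as quoted above):

- boot:
    method: jlink
    timeout:
      minutes: 5    # illustrative value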
Can you please let me know whether the latest LAVA version supports a
Trace32 boot method instead of jlink/cmsis-dap? The Trace32 debugger tool
is made by Lauterbach and is a licensed product.
I am a bit afraid that this support is not available in lava-server yet,
since even JLink support was only added recently.
https://www2.lauterbach.com/pdf/app_t32start.pdf
We need Trace32 debugger support because we are working with a Cortex-M55
processor here.
I am looking forward to your kind support.
thanks
Regards
Nagendra S
Hi,
I am looking for a rootfs.cpio file with the LTP test suite included
(built in) for the Qualcomm Snapdragon (arm64) series.
Could someone let me know where I can find this .cpio file in Linaro's
prebuilt releases?
Regards,
Koti
Oh, the action indeed DOES cause an infrastructure error if it fails; my command simply did not return an error in case of a failure.
Sorry for the noise!
Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Schlachthofstrasse 20
21079 Hamburg
Direct: +49 40 791 899 - 183
Fax: +49 40 791899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com
WE MAKE IT YOURS!
Registered office: D-21079 Hamburg
Register court: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz, Marc-Michael Braun
-----Original Message-----
From: Tim Jaacks
Sent: Thursday, 30 April 2020 16:32
To: lava-users@lists.lavasoftware.org
Subject: Evaluating return value of user_commands
Hello everyone,
I am using a command action with pre-defined user_commands in the device dictionary for switching relays, as described here:
https://master.lavasoftware.org/static/docs/v2/actions-command.html
The return value does not seem to be evaluated, though. The test continues even if my user_command fails. I would assume that this causes an infrastructure failure, resulting in an incomplete job. Why is this not the case?
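For reference, the setup in question looks roughly like this sketch, following the linked documentation (the command name and relay commands are illustrative, not taken from the actual device dictionary):

Device dictionary (illustrative):

{% set user_commands = {'relay_on': {'do': 'relay-ctl --port 1 on',
                                     'undo': 'relay-ctl --port 1 off'}} %}

Job definition:

- command:
    name: relay_on    # must match an entry in the device dictionary's user_commands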
Hi team,
I am trying to log in to the IRC channel #linaro-lava:
https://webchat.freenode.net/#linaro-lava
But there is no registration option for new users; it lets me into the
channel only as an unregistered user. Please share the proper link to
complete the registration for #linaro-lava.
thanks
Regards
Nagendra S
Hi team,
I am new to lava-server. I have written and uploaded a job description to
boot QEMU-arm64, and I can see in the logs that both the kernel
Image-qemuarm64.bin and the rootfs core-image-minimal-qemuarm64.ext4 are
downloaded into the /var/lib/lava/dispatcher/tmp/* directory.
But I am getting the following error:

auto-login-action: Wait for prompt ['Linux version [0-9]'] (timeout 00:02:00)
W: /etc/qemu-ifup: no bridge for guest interface found
qemu-system-aarch64: -kernel /var/lib/lava/dispatcher/tmp/27/deployimages-otsvre77/kernel/Image-qemuarm64.bin: Could not open 'format=raw,file=/var/lib/lava/dispatcher/tmp/27/deployimages-otsvre77/rootfs/core-image-minimal-qemuarm64.ext4': No such file or directory
Connection closed

I am attaching the .yaml file for the job description. Please let me know
if I missed anything.
Note: lava-server version: 2019.01-5
QEMU boot command:

sudo qemu-system-aarch64 -kernel Image-qemuarm64.bin \
  -netdev tap,id=net0,ifname=tap1,script=no,downscript=no \
  -device virtio-net-device,netdev=net0 -machine virt -cpu cortex-a57 \
  -drive id=disk0,file=core-image-minimal-qemuarm64.ext4,if=none,format=raw \
  -device virtio-blk-device,drive=disk0 -no-reboot -nographic -m 512 \
  --append "root=/dev/vda rw console=ttyAMA0,38400 mem=512M highres=off ip=192.168.7.4::192.168.7.3:255.255.255.0 rootfstype=ext4 console=ttyS0"
This command was tested and works fine on my Debian machine (without the LAVA server).
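For comparison, a LAVA QEMU job normally maps each downloaded image onto the QEMU command line through image_arg placeholders; a minimal sketch, assuming a standard tmpfs deploy (URLs and prompt are illustrative):

- deploy:
    to: tmpfs
    images:
      kernel:
        image_arg: -kernel {kernel}
        url: http://example.com/Image-qemuarm64.bin    # illustrative URL
      rootfs:
        image_arg: -drive id=disk0,file={rootfs},if=none,format=raw -device virtio-blk-device,drive=disk0
        url: http://example.com/core-image-minimal-qemuarm64.ext4    # illustrative URL
- boot:
    method: qemu
    media: tmpfs
    prompts:
    - 'root@qemuarm64:~#'    # illustrative prompt

Note that the error above shows QEMU receiving 'format=raw,file=...' as a bare argument, which suggests the rootfs image_arg in the attached job may be missing its '-drive ' prefix.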
thanks
Regards
Nagendra S
Hi Team,
Thanks for your kind support. We are booting ARM Cortex-M targets using
the Lauterbach Trace32 debugger:
http://www.embeddedindia.com/lauterbach-gmbh.html
I would like to know whether Lauterbach Trace32 debugger support exists in
LAVA. If it is available, please share the supported device-type jinja
file, or a reference file.
Please let me know if anything is required from my side.
I am looking forward to your kind support.
thanks
Regards
Nagendra S
Hi Team,
Today I started working with a remote worker. But on the lava-server admin
page I noticed that the default lava-worker went offline, whereas the
remote worker operations are working fine as expected.
1. Is this expected behavior?
On the command prompt I can see the lava-slave status as online:
[image: default_worker_onserver_command_line showing_online.png]
whereas on the web page it is showing offline:
[image: default_worker_onserver_webpage_status_offline.png]
The devices connected to the default worker are also showing offline.
2. Can we rename the hostname of a remote worker?
Thanks !!
Regards
Nagendra S
Hi guys,
I have a question after listening to the talk "Docker feature for Android testing"<https://linarotechdays.sched.com/event/ZZFc/ltd20-103-improved-android-test…>, and the question is a little long.
1. I want to say sorry that I haven't tried it on Android yet, because switching from lxc to docker is currently not urgent for us.
But my real question is related to this feature, at least I think so.
2. The whole story is:
We have a device which uses the "tftpboot (deploy) -> nfs (boot) -> shell (test)" mode for testing; that works fine.
But now we have a team whose test cases need to define other logic, whose behavior is totally different from the current LAVA solution.
There is legacy code on a PC which we want to reuse, so the quickest way for us is to use a docker device, inside which we could do anything to control the device.
What we tried is:
- Deploy (to: docker)
- Boot (method: docker)
- Test
It works: as you can see, we can use the connection from boot in the test. But as you know, not all device types accept docker actions.
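In job terms, that flow is roughly the following sketch (image, command and prompt are illustrative):

- deploy:
    to: docker
    image: ubuntu:18.04    # illustrative image
- boot:
    method: docker
    command: bash          # assumed: the command that provides the shell connection
    prompts:
    - 'root@lava:'         # illustrative prompt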
Then, after you gave the presentation at the Linaro tech share, I found another option: the docker test action.
3. I tried the 2020.02 release; with it, any device type in our job can use the following to do our things:
- test:
    docker:
      image: ubuntu:18.04

Some Android-related log lines are printed, like "- ANDROID_SERIAL='xxx'", but I can bear that.
4. Things broke when you improved this feature in 2020.04; you added the following to the pipeline:
WaitDeviceBoardID(self.get_board_id())
Now the pipeline has to wait for a udev event, but in our case there is none: we control the device over a remote telnet connection, so the job hangs.
5. So my question here is: what is the roadmap of this docker test action?
a) Is it just for the Android scenario?
b) Why can't we make it work for more common scenarios?
My opinion is: with this docker test action, we may no longer even need the old "docker deploy + docker boot".
c) Most of the docker operations are hard-coded in the docker test action. Would it be possible to make them configurable by the user somewhere? For example, if I want to add "-t", I cannot control that, even though the interface is defined in "DockerRun" as "def tty(self):"; the same goes for adding more docker bind mounts, etc.
6. BTW, a side question related to Android (since someday we will definitely need to switch from lxc to docker, I am using this chance to ask).
What happens if there is an adbd restart or a PDU reboot during a "docker test"? You add "--device" for the USB bus (I'm not sure; I just assume it works the same way as lxc), but during an adbd restart or PDU reboot the USB bus will surely change. I don't see that docker has any ability to renew the "--device" mapping. How would that work? Sorry again, I haven't tried it; I just want to understand the mechanism.
7. Anyway, what I currently care about is the roadmap of "docker test".
If your final direction is to make "docker test" more generic, then we are OK to stick on 2020.02 for now. If it is just for Android, and "WaitDeviceBoardID" has to be there without any user control, then we will give up this solution and try to find another way to reuse the device in LAVA.
Your direction determines our next step!
8. Finally, do you have any other suggestions for the scenario I described in item 2?
Regards,
Larry
Hi,
I have a question related to the u-boot boot action's retry settings; our job contains:
- boot:
    failure_retry: 2
    namespace: test_suite_1
    connection-namespace: burning-uboot_1
    method: u-boot
    commands: nfs
    auto_login:
      login_prompt: '(.*) login:'
      username: root
    prompts:
    - 'root@(.*):~#'
    timeout:
      minutes: 10
1. From the code:
"UBootAction" extends RetryAction, while in its internal pipeline there is an action named "UBootRetry" which also extends RetryAction.
If we define a retry, then when an exception happens, it will first cause "UBootRetry" to retry, and then "UBootAction" to retry again.
This sounds confusing; I wonder for what reason we need a nested retry here?
2. In fact, the real issue for us is the following.
Suppose we define failure_retry: 2; our situation is:
1) The first boot times out due to some random blocking issue.
2) Then it starts "Retrying: 4.4 uboot-retry (599 sec)", but times out again.
3) Then it starts "Retrying: 4 uboot-action (599 sec)", but times out again.
4) Then it starts "Retrying: 4.4 uboot-retry (599 sec)"; this time we get a lucky boot. But before we can be happy, right after it finishes the last action "export-device-env" in uboot-retry, the "UBootAction" timeout apparently fires on resume, and the lucky boot becomes useless even though it did in fact boot successfully.
The log is:
start: 4.4.5 expect-shell-connection (timeout 00:07:23) [test_suite_1]
Forcing a shell prompt, looking for ['root@(.*):~#']
root@imx8mnevk:~#
expect-shell-connection: Wait for prompt ['root@(.*):~#'] (timeout 00:10:00)
Waiting using forced prompt support. 299.9747439622879s timeout
end: 4.4.5 expect-shell-connection (duration 00:00:00) [test_suite_1]
start: 4.4.6 export-device-env (timeout 00:07:23) [test_suite_1]
end: 4.4.6 export-device-env (duration 00:00:00) [test_suite_1]
uboot-action timed out after 727 seconds
end: 4.4 uboot-retry (duration 00:02:07) [test_suite_1]
I'm not sure, but it looks like the second "uboot-action" contains two "uboot-retry" runs because of the retry, so that when the "uboot-action" timeout resumes, the remaining time diff becomes less than or equal to zero, which directly triggers the timeout. Is this a bug, or am I misunderstanding it?
duration = round(action_max_end_time - time.time())
if duration <= 0:
    signal.alarm(0)
    parent.timeout._timed_out(None, None)
Any suggestions for this?