On 18 December 2017 at 16:13, Denis HUMEAU <denis.humeau@st.com> wrote:

One last question today on this topic:

We'll try to get familiar with LXC. By default, the examples in the LAVA documentation deal with fastboot. I guess we can use any flashing method, including a proprietary one, provided that we have DFU available, right?

From: Lava-users [mailto:lava-users-bounces@lists.linaro.org] On Behalf Of Neil Williams
Sent: Monday, 18 December 2017 14:51
To: Philippe BEGNIC <philippe.begnic@st.com>
Cc: lava-users@lists.linaro.org
Subject: Re: [Lava-users] Minimal Boot: manage additional parameters

 

On 18 December 2017 at 13:42, Philippe BEGNIC <philippe.begnic@st.com> wrote:

Thank you Neil for your answer,

Find my comments below.

 

On 12/18/2017 02:32 PM, Neil Williams wrote:

On 18 December 2017 at 10:12, Philippe BEGNIC <philippe.begnic@st.com> wrote:

Hello Lava Team,

I have created some LAVA jobs that use our proprietary flasher, based on a DFU connection.
Because our flasher is not a "standard" flasher, I have adapted the boot process to be able to use it.

 

Sounds like you need to use LXC instead. Put the flashing tool inside the LXC.

The board I am testing is set up in a Debian Docker container, and the flashing tool is installed in this container.

 

That doesn't remove the need for LXC.

 

What can LXC bring us?

 


Separation that allows the use of namespaces to set the version of the flashing tool in a test shell. 
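As a minimal sketch of what that separation looks like in a job definition (the container name, distribution release, and package list below are illustrative assumptions, not taken from this thread):

```yaml
protocols:
  lava-lxc:
    name: lxc-flasher        # illustrative container name
    template: debian
    distribution: debian
    release: stretch

actions:
- deploy:
    namespace: tlxc          # actions in this namespace run inside the LXC
    timeout:
      minutes: 5
    to: lxc
    packages:
    - dfu-util               # assumption: your flashing tool or its dependencies
    os: debian

- boot:
    namespace: tlxc
    method: lxc
    prompts:
    - 'root@(.*):/#'
    timeout:
      minutes: 5
```

Each test job then gets a fresh container, so the flashing tool version is configured per job rather than baked into the dispatcher environment.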

 

I use the boot method "minimal" to achieve this.

To call our flasher, I use the script invoked by the "power_on" command, which is defined in the device configuration.

Find below an extract of the device content :
.......................................................................................
..
..
{% set hard_reset_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command reboot --port 1' %}
{% set power_off_command = '/usr/bin/pduclient --daemon localhost --hostname lava_pdu_01.lme.st.com --command off --port 1' %}
{% set power_on_command = '/root/git/lava-config/scripts/flash_stm32_programmer.sh -u lava_pdu_01.lme.st.com -p 1 -d usb1 -b ds378_2.lme.st.com -s 4_5_6 -f /tmp/test' %}
{% set connection_command = 'telnet localhost 2001' %}
..
..
.......................................................................................

This works correctly for a "static" configuration: the settings for the flasher are defined outside LAVA by a script that configures the flashing parameters.

 

That script should be run inside the LXC.

 

However, it looks like you're approaching this in the wrong way anyway - if you are flashing files to the device, that is NOT a power on command. That is a deployment action which needs to be done in the LXC because you need to separate different versions of the flashing tool from each other, based on the test job.

 

Ignore docker in all this, that's just how you've got LAVA installed. You can't have a separate dispatcher for every test job. Use LXC to have a separate environment which the one dispatcher uses to configure the flashing tool for that specific test job.

 

The "power_on" script reads these parameters and launches the flashing on the board.

My problem now is that when I launch jobs simultaneously on several boards that require different flashing binary versions, I am unable to tell each board which binary version our flasher should use.

 

So each needs to set up its own configuration in a container.

Each device under test is set up in its own Docker container.

 

That doesn't make any odds; you should not reconfigure Docker for each test job. That is the role of LXC, transparently within LAVA.


That turned out not to be practical. We expected that LXC would work inside Docker, allowing for minimal code changes to get this working; unfortunately, that does not work, for a variety of reasons. So we are now working on support for two related use cases: wrappers to replace LXC actions, for those who want to replicate a job which previously used LXC, and support for using a customised Docker image which contains exactly the right tool for a specific device, using that image as the container instead of an LXC. That support will take some time to develop.
 
There remains the problem that you are merging the power_on support with deployment support. Those are not the same, and there should be separation between any power command and the deployment operations. That is, support for this device, in common with most other new devices, involves code changes to support the deployment to the device, not bolting things onto the device configuration. This is now an ongoing task internally, and the details of this specific device will be handled off-list.


The best way would be to pass parameters in the job to indicate which binary version has to be used by the flasher.

 

No, that is a very bad way.

What would be the problem with using this method?


Parameters are only passed to a test shell. The flashing tool needs to be configured afresh within each test job - the supported way to do that is LXC.
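A hedged sketch of how a test shell in the LXC namespace could then select the flasher version per job (the `FLASHER_VERSION` parameter, the `install-flasher.sh` script, and the version value are all hypothetical placeholders, not part of this thread):

```yaml
- test:
    namespace: tlxc          # runs inside the LXC created for this job
    timeout:
      minutes: 10
    definitions:
    - from: inline
      name: install-flasher
      path: inline/install-flasher.yaml
      # Parameters are exported as environment variables in the test shell.
      parameters:
        FLASHER_VERSION: "2.1"   # hypothetical per-job value
      repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: install-flasher
          description: "Install a per-job flasher version (hypothetical)"
        run:
          steps:
          # install-flasher.sh is a placeholder for your own install script
          - ./install-flasher.sh "$FLASHER_VERSION"
```

Each job definition can then carry a different version value without touching the dispatcher or the device configuration.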


This could be done in the "deploy" action and passed to the "power_on" command, but I don't know how to implement it.

I also don't know whether it is possible to do that easily.


Find below my job definition.

######  Job definition  ##############

actions:
- deploy:
    timeout:
      minutes: 5
    to: ssh
    os: oe
    device:

- boot:
    method: minimal
    failure_retry: 2
    auto_login:
      login_prompt: 'login:'
      username: root
    prompts:
    - 'root@stm32mp1'
    timeout:
      minutes: 10
    transfer_overlay:
      download_command: sync && sleep 15 && wget
      unpack_command: tar -C / -xzf

- test: ...
#############################

Thanks for your support.

BR

Philippe



_______________________________________________
Lava-users mailing list
Lava-users@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lava-users
