Hello everyone,
we have quite a large test suite (more than 150 test cases, and counting, divided into ~50 groups) which we want to run on our devices. This is required for each software release and would be nice to have for nightly builds as well.
Our deployment method flashes kernel and root filesystem onto the internal flash memory, which takes ~3 minutes. Booting the OS from RAM or using remote filesystems (NFS) is not an option for us. We need to run all tests on a device booted completely from its internal flash memory. Ideally our OS image should be deployed once and then all the tests should run on top of that deployment.
According to the LAVA documentation, best practice is not to put too many tests into one job file, as this would be hard to maintain and logs would become huge and difficult to read.
What is recommended in such a scenario? My initial inclination was to create a job for each test group (each containing ~3 test cases on average), which seems reasonable to me. However, this would result in 50 deployment cycles at 3 minutes each, i.e. 2.5 hours spent on essentially unnecessary work. That, in turn, does not seem reasonable either.
Is there a possibility to combine jobs to run on the same device subsequently, so that the images need to be deployed only once? Or can jobs be nested somehow, so that one job does the deployment and contains sub-jobs which perform the actual tests? Or are there any other recommendations for this?
Mit freundlichen Grüßen / Best regards

Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791 899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
On 19 March 2018 at 12:36, Tim Jaacks <tim.jaacks@garz-fricke.com> wrote:
Hello everyone,
we have quite a large test suite (more than 150 test cases, and counting, divided into ~50 groups) which we want to run on our devices. This is required for each software release and would be nice to have for nightly builds as well.
150 test cases does not necessarily mean large - we have test jobs which produce 50,000 test cases per test job, and that is when a large test set can become a problem, depending on the available resources on the server. It was these test jobs which led to the sections in the documentation which you mention later. If your server doesn't struggle with the current test jobs, you might not have anything to do at this stage.
However, if you have clear groups of test cases, you should investigate using test sets which preserve those boundaries within the reported results: https://staging.validation.linaro.org/static/docs/v2/results-intro.html#test...
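A minimal sketch of what such a test shell definition could look like, purely for illustration - the group names and scripts below are placeholders, not taken from your setup, and the exact lava-test-set helper syntax is described in the documentation linked above:

metadata:
  format: Lava-Test Test Definition 1.0
  name: grouped-tests
  description: "Run each test group inside its own test set"

run:
  steps:
    # everything between start and stop is reported under that test set
    - lava-test-set start group-networking
    - ./networking/run-tests.sh
    - lava-test-set stop
    - lava-test-set start group-storage
    - ./storage/run-tests.sh
    - lava-test-set stop

That keeps your ~50 groups visible as separate sets in the results of a single test job.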
Our deployment method flashes kernel and root filesystem onto the internal flash memory, which takes ~3 minutes. Booting the OS from RAM or using remote filesystems (NFS) is not an option for us. We need to run all tests on a device booted completely from its internal flash memory. Ideally our OS image should be deployed once and then all the tests should run on top of that deployment.
According to the LAVA documentation, best practice is not to put too many tests into one job file, as this would be hard to maintain and logs would become huge and difficult to read.
That is a recommendation based on the amount of log output per test case. The best solution for that is to wrap the output of the commands in another script which can filter out the noise and only report the useful parts.
* If the test job log is too long, it will cause the browser to time out - there are settings in lava-server to limit the visibility of these log files, e.g. LOG_SIZE_LIMIT: https://staging.validation.linaro.org/static/docs/v2/advanced-installation.h...
* If the test job log is large because it contains lots of noise (like progress bars or other temporary data which isn't relevant to debugging a failed test case), then adapt the test shell definitions to use --quiet options or use wrapper scripts to filter out such content (a rough sketch follows below).
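The wrapper-script idea could be as simple as something like this - the script name, log location and the way pass/fail is detected are assumptions for illustration, only lava-test-case itself is a LAVA helper:

#!/bin/sh
# hypothetical wrapper: run one noisy test, keep its full output in a file,
# and report a single pass/fail result to LAVA
NAME="$1"
shift

if "$@" > "/tmp/${NAME}.log" 2>&1; then
    lava-test-case "${NAME}" --result pass
else
    # on failure, echo only the tail of the log so the job log stays small
    tail -n 20 "/tmp/${NAME}.log"
    lava-test-case "${NAME}" --result fail
fi

A test definition step would then call e.g. "./run-quietly.sh networking-ping ./ping_test.sh" instead of invoking the test directly.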
What is recommended in such a scenario? My initial inclination was to create a job for each test group (each containing ~3 test cases on average), which seems reasonable to me. However, this would result in 50 deployment cycles at 3 minutes each, i.e. 2.5 hours spent on essentially unnecessary work. That, in turn, does not seem reasonable either.
Is there a possibility to combine jobs to run on the same device subsequently, so that the images need to be deployed only once? Or can jobs be nested somehow, so that one job does the deployment and contains sub-jobs which perform the actual tests? Or are there any other recommendations for this?
It sounds like you are only looking at this from the perspective of large log files; that's not the only concern.
Test jobs which produce tens of thousands of test cases had issues with storing the results in the database and retrieving them through the API. Both of these have been addressed in 2018.2, but there will always be a practical limit to how many test cases can be handled in one test job - not just in the log file but in subsequent handling.
If it were possible to combine jobs, the same problems would occur when retrieving the combined results. It just moves the problem elsewhere.
MultiNode does offer a possible option, but MultiNode itself is complex and not all test jobs would be able to use it - essentially you can use MultiNode with SSH to have multiple logins to the same device running the same software. Problems include locking of shared resources, load on the device from running tests in parallel, and whether the tests can run in parallel in the first place, among others.
From your enquiry, it does not sound as if you need any of that. You might want to look at a wrapper script which consumes the output of the tests and filters out noise, but apart from that, 150 test cases is really not large. We routinely run functional tests which produce more test cases than that: https://staging.validation.linaro.org/results/214157 https://staging.validation.linaro.org/scheduler/job/214157
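For reference, the "deploy once, then run everything" flow you describe maps onto a single test job with one deploy and one boot action followed by a test action listing all of your groups. A rough outline only - device type, URLs, deploy/boot methods and repository paths below are invented for illustration, so substitute whatever your device integration actually provides:

device_type: your-device-type
job_name: full-suite-single-deploy
visibility: public
timeouts:
  job:
    minutes: 120

actions:
- deploy:
    to: flasher            # placeholder: whichever deploy method flashes your internal storage
    images:
      kernel:
        url: https://example.com/builds/kernel.img
      rootfs:
        url: https://example.com/builds/rootfs.img
- boot:
    method: minimal        # placeholder: boot straight from the flashed image
    prompts:
    - 'root@device:'
- test:
    definitions:
    - repository: https://example.com/tests.git
      from: git
      path: lava/group-networking.yaml
      name: group-networking
    - repository: https://example.com/tests.git
      from: git
      path: lava/group-storage.yaml
      name: group-storage
    # ... one entry per group, all running on the same deployment

How manageable that single job stays then comes down to the log volume per test case, as described above.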