On Wed, 20 Feb 2019 at 17:25, Tim Jaacks <tim.jaacks@garz-fricke.com> wrote:

From: Neil Williams [mailto:neil.williams@linaro.org]
Sent: Tuesday, 12 June 2018 17:26
To: Tim Jaacks <tim.jaacks@garz-fricke.com>
Cc: lava-users@lists.linaro.org
Subject: Re: [Lava-users] Specifying metadata for test cases

> On 12 June 2018 at 16:05, Tim Jaacks <tim.jaacks@garz-fricke.com> wrote:

> > Hello everyone,

> >

> > I know from the LAVA documentation how to add metadata to jobs and test suites. When I look at test results, I see that test cases have metadata, too. E.g. https://validation.linaro.org/results/testcase/9759970 shows the following metadata:

> >

> > case: linux-linaro-ubuntu-lscpu

> > definition: 0_smoke-tests-lxc

> > result: pass

> >

> That data is retrieved directly from the test case. case is the string passed to lava-test-case as the test case ID. definition is the currently running test definition (so an integral part of how the test case is reported) and then there is the result.
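
For illustration, the case string in that record is whatever the test shell passed as the test case ID, along these lines (a sketch using the documented helper; lscpu as the wrapped command is taken from the example above):

    # report a test case named linux-linaro-ubuntu-lscpu, with the
    # result derived from the exit status of the wrapped command
    lava-test-case linux-linaro-ubuntu-lscpu --shell lscpu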

>

> Actions can supply metadata, e.g. https://validation.linaro.org/results/testcase/9759948

>

> This metadata comes from the Python3 code running the action and covers a range of dynamic data which is not otherwise recorded. This data is not modifiable.

>

> Test suites gain metadata from the operation of the test job itself. The content of the Lava Test Shell Definition is not used, other than to identify the test definition.

Why can metadata be defined when it is not used at all?


Because Lava-Test Test Definition 1.0 had to retain compatibility with LAVA V1. Lava-Test Test Definition 1.0 metadata exists for the benefit of test writers and has never been utilised by LAVA itself. This metadata is static and has been deemed to be rarely useful within CI itself. The choice of which test definition files to use (and therefore the choice of static metadata within those files) is made when the test job is created, prior to submission, so the templating / submission tools need to handle that.
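
For reference, the static metadata in question is the block at the top of a Lava-Test Test Definition 1.0 file, roughly like this (field names as in the documentation; the values are illustrative):

    metadata:
        format: Lava-Test Test Definition 1.0
        name: smoke-tests-basic
        description: "Basic smoke tests for the image"
        maintainer:
            - someone@example.com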

Metadata from inline test shell definitions can be read out via the REST API; metadata from external test shell definitions can't. What's the point in defining metadata if you cannot use it afterwards?

>

> The user-controllable metadata needs to be in the Test Job Submission, to then appear in the top level results:

>

> e.g. in

>

> https://validation.linaro.org/results/1882533

>

> path: health-checks/dragonboard-820c.yaml

> source: https://git.linaro.org/lava-team/refactoring.git

>

> come from the test job submission.
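
As a sketch, that top-level metadata comes from a block like this in the job definition, where the keys are free-form strings chosen by the submitter:

    metadata:
        source: https://git.linaro.org/lava-team/refactoring.git
        path: health-checks/dragonboard-820c.yaml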

>

>

>

> > Is there a possibility to add custom metadata to test cases?

> >

> Only as a URL: https://validation.linaro.org/static/docs/v2/writing-tests.html#recording-test-case-references
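
The invocation documented on that page looks roughly like this (the test case name and URL here are illustrative):

    lava-test-reference my-test-case --result pass \
        --reference https://example.com/reports/my-test-case.html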

Why this limitation? Is there any reason, why it has to be an URL? Why not allow any string to be appended to a test case?


TestCases were originally designed only for comparison in charts: numbers or enums. Strings in test cases are planned (https://git.lavasoftware.org/lava/lava/issues/20), but as part of Test Definition 2.0.
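
Numbers are already covered by measurements, e.g. (a sketch; --measurement and --units are existing options of lava-test-case):

    lava-test-case boot-time --result pass --measurement 8.4 --units seconds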

> Once the test job has completed, bug links are also available for test jobs, test suites and test cases.

>

> Extra data can be recorded as separate test cases using the test case name, e.g.

>

> https://staging.validation.linaro.org/results/testcase/5557198
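
That is, the value being recorded is embedded in the test case name itself, along these lines (a sketch; /etc/code-version is a hypothetical location where the image stores its version string):

    # hypothetical: encode the value into the test case ID itself
    lava-test-case "code-version-$(cat /etc/code-version)" --result pass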

This seems like a dirty hack to me, quite contrary to the "keep your data clean" principle.

> (Although I really should tweak that to drop the Python3 string handling artifacts in the test shell scripts. It should read code-version-2018.5+12261.50e347aca.)

>

> We are considering new features for the Lava Test Shell Definition, but changes won't be made any time soon; it's likely to be 2019 before we have a clear plan for that work.

>

Are there any news on this? What new features are planned?


Test Definition 2.0 has taken a back seat to other priorities related to scaling and high availability. We only have enough developer time within the core team to handle one of these at a time.

I suggest that you subscribe to the lava-devel mailing list, where we post updates on the planning process.

Sorry for reviving such an old thread, but the topic is still relevant for us. We would like to attach metadata to test cases and test suites, and there does not seem to be a useful way to achieve this with LAVA. Our actual goal is to include LAVA test results in the release notes for our OS releases. For this purpose, we need some kind of human-readable description of our tests, which should be stored alongside the results.


Then I suggest that you use the existing support and write a custom frontend which can extract the data you need into the format you require. LAVA test results are not designed to be human-readable; the results are for machines to consume and reformat into human-readable items like mailing list emails and tables. Squad is one frontend doing this; KernelCI have their own.

We will take things like this into account with Test Definition 2.0 but the first half of 2019 is going to be taken up by other priorities.


Incidentally, if you are considering that this test case metadata should indicate whether a failing test case is expected to fail, I would recommend instead that you adopt skip lists. Experience with other human-readable reporting is that "expected failures" are too much noise; what generally works better is to report that all tests which were executed passed, and then include the list of skipped tests with reasons. Managing skip lists (to track and fix intermittent failures and expected failures) can then be done in dedicated test jobs with different reporting, directed to the people responsible for reducing the number of skipped tests.
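
As an illustration of the idea (this is a convention maintained by the test writer, not a LAVA feature), a skip list could be a simple YAML file consumed both by the test shell and by the reporting side:

    # hypothetical skip-list format, kept next to the test definitions
    skip:
      - name: usb-hotplug
        reason: intermittent failure, tracked internally
      - name: suspend-resume
        reason: known hardware limitation on this board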

As far as I can see from the database layout, metadata is a data field of test cases. Would you accept code contributions which add functionality for adding custom metadata to this field?


Not at this stage; it would need to be designed into Test Definition 2.0. We already have problems with the number of SQL operations required when creating a TestCase object, and there are jobs which create over 50,000 test cases per test job. The current model is already causing bottlenecks and latency, so adding more operations would not be acceptable at this time.

However, test suites do not have such a field. Instead, actions have metadata, which is not stored in the database but in dedicated metadata YAML files under /var/lib/lava-server/default/media/job-output/. Why is that? Why not add a metadata database field for actions/test suites as well?

Do you have any ideas on how to achieve our goal using LAVA?


You would need a custom frontend which collates the various data sources and creates the data set you need. The information you need to label each test case pass or fail is already known to you, the list of test case names is under your control, and the formatting of any human-readable output needs to be constantly edited and tweaked for different audiences.

The creation of such output is not within scope for a generic application like LAVA. All conversions of LAVA test results into human-readable content should be done outside LAVA, precisely because every such format is different from every other and each needs constant adjustment. It is a mistake to think that LAVA can do everything you describe above. There will always need to be some manipulation of the LAVA output before you can create human-readable output.
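
As a minimal sketch of such a frontend, assuming the documented CSV/YAML results export endpoint (/results/<job_id>/yaml with user and token parameters) and assuming each exported entry carries at least a name and a result field; the description mapping is maintained entirely outside LAVA:

    import requests
    import yaml

    INSTANCE = "https://validation.linaro.org"  # assumption: your own instance

    # human-readable descriptions, under the test writer's control
    DESCRIPTIONS = {
        "linux-linaro-ubuntu-lscpu": "CPU enumeration smoke test",
    }

    def fetch_results(job_id, user, token):
        # fetch the results of one test job via the documented export endpoint
        url = f"{INSTANCE}/results/{job_id}/yaml"
        resp = requests.get(url, params={"user": user, "token": token})
        resp.raise_for_status()
        return yaml.safe_load(resp.text)

    def release_notes(job_id, user, token):
        # map each raw test case name to a human-readable line
        lines = []
        for case in fetch_results(job_id, user, token):
            desc = DESCRIPTIONS.get(case["name"], case["name"])
            lines.append(f"{desc}: {case['result']}")
        return "\n".join(lines)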

> --

>

> Neil Williams

> =============

> neil.williams@linaro.org

> http://www.linux.codehelp.co.uk/

> 

Best regards

Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH

Tempowerkring 2
21079 Hamburg

Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com

WE MAKE IT YOURS!

Registered office: D-21079 Hamburg
Court of registration: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz, Marc-Michael Braun
