On 12 September 2016 at 12:56, Neil Williams neil.williams@linaro.org wrote:
[cut]
I'm not sure what else you want from a REST API other than having all of the data available to build the URL immediately after completion, without needing to do round-trip lookups to find hashes or other generated strings. A single call to the results for a completed testjob provides all the information you need to build URLs for all test cases including the links to the position within the log file for each test case. There is no "matching" required in V2 and no round-trips back to the server with more API calls. One call gets all the data but the job of a REST API is not to build those URLs for you, it's to provide enough information to predict those URLs in a single call. Are you looking for an API call which returns all the URLs pre-assembled?
I don't need URLs at all. All I need is to know which test results come from which 'tests' in job definition
Test suites come from the job definition, following the name specified by the job definition.
and if there is anything missing.
There is a specific calculation for this on the Results page for the test job. This checks that all test-suites defined in the test definition have provided results - that is why inline definitions show as omitted if no lava-test-case is used.
The important part is to know when a test screws something up and produces no results.
That is outside the control of LAVA *except* in the case where the test runner itself fails and thereby stops a subsequent test suite from executing. So if 1_smoke-tests falls over in a heap such that the job terminates or times out, then the job will be Incomplete. If the test runner exits early, this will also be picked up as a test runner failure: "lava-test-runner exited with an error".
We need to be careful with terminology here.
test-suite - maps to the test definition in your git repo or the inline definition. If a test suite fails to execute, LAVA will report that.
test-set and test-case - individual lines within a test definition. If
These are the ones I don't care about. I don't need to know whether there are N or N+1 test-cases in the test-suite. All I'm asking is whether the test-suite (translated from inline test in job definition) reported a result or not. So let's stop worrying about test-cases as they're irrelevant in this context.
[cut]
lava-test-case calls are not that interesting yet as, for example, the test can return a different number of results based on parameters passed.
However, lava-test-case can also be used to report results for things which are "hidden" within the scripts in the remote git repo. It is also the test-case which provides the link into the position in the job log file.
This approach ties tests to LAVA which I don't like as users requested to have ability to run tests 'standalone'. So anything that takes the test in the direction of being 'LAVA specific' can't be used.
Then a custom script is going to be needed which does the parsing - including checking whether the correct number of tests have been run - and then produces data which is reported to LAVA (or something else). I do this for the django unit tests with lava-server. We have a single script, ./ci-run, that everyone runs to execute the tests locally (and in gerrit). The custom script sets up the environment, runs ./ci-run | tee filename and then parses the file. Once it has done the checks it needs, it loops through its own data. At that point, it can check for lava-test-case in $PATH and use that, or dump to some other output, or call something else. This is what provides the standalone support, with LAVA picking up the results once the standalone script has done the execution and parsing of the data.
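A minimal sketch of such a wrapper, for illustration only: the log format ("name: pass|fail"), the test names, and the RESULT fallback output are all assumptions here; the only part taken from the approach above is checking for lava-test-case in $PATH and falling back to plain output when running standalone:

```shell
#!/bin/sh
# Report one result: use lava-test-case when running inside LAVA,
# otherwise print a plain standalone result line.
report() {
    name=$1; result=$2
    if command -v lava-test-case >/dev/null 2>&1; then
        lava-test-case "$name" --result "$result"
    else
        echo "RESULT $name $result"
    fi
}

# Simulated test output for this sketch; a real wrapper would do
#   ./ci-run | tee ci-run.log
printf 'boot: pass\nnetwork: fail\n' > ci-run.log

# Parse the assumed "name: result" lines and report each one.
while IFS=': ' read -r name result; do
    report "$name" "$result"
done < ci-run.log
```

The same script runs unchanged inside and outside LAVA; only the reporting branch differs.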
I'm doing a similar thing now.
In addition, each test job gets a set of LAVA results containing useful information like the commit hash of the test definition when it was cloned for this test job.
- if there are multiple occurrences of the same test with different
parameters, how to recognize the results?
Multiple occurrences show up in the results table: https://staging.validation.linaro.org/results/154736/2_singlenode-advanced (realpath_check occurs twice with separate results)
The question was about multiple occurrences of the same test definition.
Will occur as discrete entries in the results - prefixed with the order.
1_smoke_tests 2_smoke_tests etc.
For example we use subsets of LTP. So I would like to test:
- LTP - syscalls
- LTP - math
As I wrote above the test cases will be different, so they're not that interesting.
That is where test-set is useful. I'll be writing up more documentation on that today.
lava-test-set start syscalls
lava-test-case syscalls ...
lava-test-set stop syscalls
lava-test-set start math
lava-test-case math ...
lava-test-set stop math
This adds a set around those test cases by adding the test set to the URL.
/results/JOB_ID/2_smoke-tests/syscalls/syscall_one_test
This approach ties the test to LAVA which is a 'no go' from my point of view. Beside that there are other params which are important to know (see CTS: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/android/cts-host.y... or hackbench: https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/ubuntu/hackbench.y...).
I disagree. You can do all the processing in the standalone script and still call lava-test-set at particular points if that proves to be useful, as part of the reporting stage at the end of the standalone script. The script still needs to output something sensible when run outside LAVA, so it still needs to do all the same checks and parsing. When it chooses to report to LAVA, it is able to use lava-test-set if that is useful or simply put everything through lava-test-case.
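A sketch of what that reporting stage could look like, assuming the results have already been parsed; the set and test names, the name=result entry format, and the standalone RESULT output are placeholders for illustration:

```shell
#!/bin/sh
# True when the LAVA test shell helpers are available.
in_lava() { command -v lava-test-case >/dev/null 2>&1; }

# Report a group of results, wrapping them in a test set inside LAVA.
# Entries are "name=result" pairs (an assumed format for this sketch).
report_set() {
    set_name=$1; shift
    if in_lava; then lava-test-set start "$set_name"; fi
    for entry in "$@"; do
        name=${entry%%=*}; result=${entry#*=}
        if in_lava; then
            lava-test-case "$name" --result "$result"
        else
            echo "RESULT $set_name/$name $result"
        fi
    done
    if in_lava; then lava-test-set stop "$set_name"; fi
}

# Placeholder parsed results for this sketch.
report_set syscalls open=pass read=pass
report_set math fmod=fail
```

All the parsing and checking stays in the standalone script; only this final loop decides whether results go to lava-test-set/lava-test-case or to plain output.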
That is very impractical. It means the result format for different tests is different, which increases the effort of test maintenance and increases complexity. In most cases lava-test-set is not useful. When the test is run standalone the parameters are known. So why not report the parameters in the results (metadata) from LAVA? They are just a part of the job definition, like the git URL or test definition path. Those are reported, so why not the parameters (at least if they're set)?
Analysis of the data within LAVA will need the relevant elements to be reported to LAVA - we cannot go into the Lava Test Shell Definition and *guess* how many times lava-test-case is meant to be called.
As I mentioned calling lava-test-case is out of context here. I just need to match inline test definition to test-suite in results. I actually got my example running and IMHO there is a bug somewhere: https://validation.linaro.org/scheduler/job/1113516
There are 2 test-suites in job definition but only one is reported in the results (note the parameters are different for test-suite instances).
[cut]
Example of such jobs: https://validation.linaro.org/results/1107487 (not the best as the names are different) https://validation.linaro.org/scheduler/job/1113188/definition (job failed, so no results, but I'm trying to get this working)
That needs to be declared to LAVA via the test suite name or a test-set or via the test case names. LAVA cannot introspect into your remote git repo any more easily than you can.
Hmm, this approach implies there is only one parameter. How do I know if there is more than one?
That is up to the standalone script that does the parsing.
How do I do the parsing if LAVA doesn't give me the parameters back?
So if the default isn't clear, add a lava-test-case which tests that the default is what you expect - smoke-test-default-true: fail.
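A sketch of such a guard; the parameter name SMOKE_TEST_DEFAULT, its expected value, and the standalone fallback line are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical check that a default parameter has the value the test
# expects; SMOKE_TEST_DEFAULT is an assumed parameter name.
expected=true
actual=${SMOKE_TEST_DEFAULT:-true}

if [ "$actual" = "$expected" ]; then result=pass; else result=fail; fi

if command -v lava-test-case >/dev/null 2>&1; then
    lava-test-case smoke-test-default-true --result "$result"
else
    echo "RESULT smoke-test-default-true $result"   # standalone fallback
fi
```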
This looks like regression from v1 which reported all params in result bundles (default and set in job definition).
It's not a regression, it is a different method. Not all test writers always need / want all parameters to be reported. V2 provides the control that V1 just presumed to take whether the writer wanted that or not.
Doing these tests to support standalone testing means putting nearly all the logic of the test into a single script which can be run both standalone and in LAVA - the script simply has options which determine how that work is reported.
I disagree here. The parameters come from job definition and should be reported (in metadata?). Other metadata is reported, like git repo URL or test path. Even the default parameters are available to LAVA before the execution as the test shell scripts are prepared by LAVA. So there isn't much difference from v1 in this process.
milosz