Hey Paul,
I don't want to divert the conversation from the big picture here, but here's a query that can help you if I understand your problem correctly:
https://lite.validation.linaro.org/results/query/+custom?entity=testjob&...
It will show you all your jobs which are completed and have at least one test case failure.
You can use this API call https://lite.validation.linaro.org/api/help/#results.run_query to run queries from the language of your choice and use the results in any way you wish.
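For example, here is a minimal sketch of calling it from Python over XML-RPC. The query name "failed-jobs", the username and the token are placeholders I made up; check the api/help page above for the exact arguments run_query accepts.

import xmlrpc.client

# Placeholders: your username, your API token, and the name under which
# you saved the query linked above.
USER = "your.username"
TOKEN = "your-api-token"
QUERY = "failed-jobs"

proxy = xmlrpc.client.ServerProxy(
    "https://%s:%s@lite.validation.linaro.org/RPC2" % (USER, TOKEN))

# run_query may also accept optional arguments such as a result limit or
# the query owner's username; see the api/help page for the exact signature.
for job in proxy.results.run_query(QUERY):
    print(job)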
Cheers,
On 4/7/20 9:31 PM, Paul Sokolovsky wrote:
Hello Milosz,
On Tue, 7 Apr 2020 14:37:06 +0000 Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
[]
So, what's that usage? Well, I'm not much interested in "interactive" use (submitting jobs manually from my machine). Our interest is in unattended automated CI, of which the testing system is the second half, after the build system. So let me recap how our build system, Jenkins, works. Normally, it just builds binaries and uploads them to a publishing server. It's invisible to me in this phase, and my engineering work goes uninterrupted. But when a build fails, I get an email with details about the failure, and I'll continue to get them while it continues to fail. So, the only option I have is to go see the failure, investigate, and fix it. When I arrive at Jenkins, I can easily see which jobs failed and which did not, then within each job, see which builds failed and which succeeded. That's very easy, because failed things are red, and successful things are green.
This is just one 'test case'. In a way Jenkins executes one test for you - the build test. You can clearly see this test result and associate it with a software version. LAVA executes multiple tests. There may be multiple users running their jobs on a single LAVA instance, and even on a single LAVA 'device'.
But this discussion is not about that. It's about the following:
1. Jenkins clearly allows me to distinguish a "failed" build, and it allows me to receive a notification when a build fails. Neither of these seems to be possible with LAVA.
2. You say "it's just one 'test case'", but I can make a job with one test case in LAVA, that test case can fail, and LAVA will still keep me oblivious of this fact.
So, I'm afraid the difference lies not in the number of "test cases". It lies in the fact that Jenkins provides the following job statuses: SUCCESS, UNSTABLE, FAILURE, NOT_BUILT or ABORTED (note the clear presence of SUCCESS and FAILURE), whereas LAVA provides statuses of Unknown, Complete, Incomplete, Canceled.
Each of them needs to collect the results of these jobs and interpret them for their needs.
Right, so LAVA not only absolves itself from helping the user interpret the results it produces, it simply disallows the user to do that within its bounds, given the statuses listed above.
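For concreteness, below is a rough sketch of the minimal interpretation a user currently has to do outside of LAVA to get a Jenkins-like verdict for a single job. It is only an illustration: the job id and auth details are placeholders, and the results.get_testjob_results_yaml call and its output format should be verified against the api/help page mentioned above.

import xmlrpc.client
import yaml  # PyYAML

# Placeholders: standard LAVA XML-RPC auth scheme and an example job id.
SERVER = "https://USER:TOKEN@lite.validation.linaro.org/RPC2"
JOB_ID = 12345

proxy = xmlrpc.client.ServerProxy(SERVER)

# Fetch all test case results for the job and reduce them to a single
# Jenkins-like verdict.
results = yaml.safe_load(proxy.results.get_testjob_results_yaml(JOB_ID))
failed = [r["name"] for r in results if r.get("result") == "fail"]
print("FAILURE" if failed else "SUCCESS", failed)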
I now have to say that this discussion didn't start with this email; we came to it on Gitlab, and I find this reply from Stevan insightful: https://git.lavasoftware.org/lava/lava/-/issues/394#note_15175. Let me quote a part of it:
This is not something we invented overnight. [...] LAVA users have ALWAYS been asking for something more, something else. Whatever kind of result representation you implement, however generic it is, some percentage of users (sometimes it's even 100%) will find something missing and/or not satisfactory to their needs.
I'm sure this didn't come overnight; that's why I was very keen to do my homework before coming up with emails like this. I can actually imagine that those Complete/Incomplete statuses are an "achievement" of LAVA2 (compared to LAVA1).
I can also very well relate to the fact that users always want more and more and are never satisfied. But it seems to me that you guys concluded that "if we can't satisfy all needs, let's satisfy NONE". And as you can imagine, I cannot agree with that, because, based on my *personal* analysis, this over-simplification on the LAVA side, and over-complication on the user side, goes against the needs of the team I represent (an internal Linaro customer).
[]
So, what am I missing, and how do I make LAVA work like the above?
My take on this is that LAVA is _not_ 'Jenkins for testing'. It's simply a test executor, and you need to postprocess your results yourself.
"Yourself", with which hat, in which role? My story is that I'm an embedded engineer (microcontroller level). My team doesn't have a dedicated test engineer; each engineer is tasked with working on testing as part of their workload, and that always goes into the backlog. I personally finally hit a dead end, where lack of proper testing truly affects development. So for these last few months I've actually been working *as* a test engineer for my team. I'm resolving various issues in the LAVA backend that preclude proper work with our devices (MCU-level, again), all upheld by the hope that afterwards, we (the entire team) will be able to control and monitor all our testing needs. Only to find that the way LAVA offers me to do that is by receiving hundreds of spammy notification emails (that's our job volume, yeah) and grepping each of them manually for the word "failed", which is of course not acceptable.
Or, as an alternative, our team now needs to develop a frontend for LAVA on its own, because LAVA stops one step short of providing a baseline-useful solution. It's not me who does the resource allocation, but I'm almost sure our team doesn't have the resources for that.
So, I'm raising these issues trying to find a suitable solution. The good news is that none of the issues are deep or complex; they're literally "last-mile" style issues. Rather than working on yet another ad hoc frontend, I'd rather work on making LAVA a smoother solution for all its users, offering baseline reporting capabilities out of the box. The risk here is, of course, that there's no agreement on what "baseline capabilities" are.
milosz