On Tue, 7 Apr 2020 at 15:22, Paul Sokolovsky <paul.sokolovsky@linaro.org> wrote:
Hello,
I hope that the mail subject sets up an easy, cheerful background for this discussion ;-). This definitely goes into the "stupid questions" department. My plan was to collect "evidence"/feedback from my colleagues, then at Connect BUD20, crash into the LAVA/QA rooms and ask what I/we are doing wrong. As circumstances changed, but the "testing debt" only builds up, there's little choice but to try to figure out these matters over a much narrower pipe.
So, before proceeding to the question per se, let me theorize that the reason why such questions come up at all is a sufficient difference in intended LAVA usage patterns. In other words, how I'd like to use LAVA on the LITE team side may differ from how the LAVA team intends it to be used, or from how the QA team uses it (that being the biggest user). The issue? How I intend to use it is IMHO one of the most basic ways to use a test system.
So, what's that usage? Well, I'm not much interested in "interactive" use (submitting jobs manually from my machine). Our interest is in unattended automated CI, of which the testing system is the second half, after the build system. So let me recap how our build system, Jenkins, works. Normally, it just builds binaries and uploads them to a publishing server. It's invisible to me in this phase, and my engineering work goes uninterrupted. But when a build fails, I get an email with details about the failure, and I'll keep getting them for as long as it keeps failing. So, the only option I have is to go look at the failure, investigate, and fix it. When I arrive at Jenkins, I can easily see which jobs failed and which didn't, and then, within each job, which builds failed and which succeeded. That's very easy, because failed things are red and successful things are green.
This is just one 'test case'. In a way, Jenkins executes one test for you: the build test. You can clearly see this test result and associate it with a software version. LAVA executes multiple tests. There may be multiple users running their jobs on a single LAVA instance, and even on a single LAVA 'device'. Each of them needs to collect the results of these jobs and interpret them for their own needs.
I know Jenkins can execute tests (other than just the build). But in that case it lets you configure how many tests can fail while the build is still considered 'green'.
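(Purely illustrative, not actual Jenkins configuration: the 'green' decision with such a threshold boils down to something like the following, with the threshold value made up.)

    # Illustrative only: a Jenkins-style "allowed failures" rule.
    # MAX_ALLOWED_FAILURES is a hypothetical, user-chosen threshold.
    MAX_ALLOWED_FAILURES = 0

    def build_is_green(build_succeeded, failed_tests):
        # The build stays "green" as long as it compiled and the number
        # of failing tests does not exceed the configured allowance.
        return build_succeeded and failed_tests <= MAX_ALLOWED_FAILURES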
So, we've now arrived at the main question of this email: why don't I seem to be able to use LAVA in the same way? Why does LAVA offer only "Incomplete" and "Complete" job statuses? "Incomplete" is understood: it's an infrastructure failure, and such a job is definitely "failed". But "Complete" doesn't give me any useful information on whether the job succeeded or failed, because a "Complete" job may still have any number of failed tests. And that's exactly the "last mile" LAVA doesn't go: for any test job, I want to see the cumulative number of test cases which failed, right on a page like https://lite.validation.linaro.org/scheduler/alljobs . Then, I'd like to filter out jobs which have this number >0. Then I'd like to receive a notification only for "failed" jobs, with "failed" defined as "status != Complete OR failed_testcases > 0".
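In pseudo-code, the filter/notification rule I'm after is roughly this (the field names are illustrative; LAVA doesn't expose a "failed_testcases" attribute like this out of the box, which is exactly the ask):

    # Hypothetical job fields, for illustration only.
    def job_is_red(job):
        return job.status != "Complete" or job.failed_testcases > 0

    jobs_to_notify_about = [job for job in all_jobs if job_is_red(job)]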
So, what am I missing, and how do I make LAVA work like the above?
My take on this is that LAVA is _not_ 'Jenkins for testing'. It's simply a test executor, and you need to postprocess your results yourself.
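As a starting point, here is a minimal postprocessing sketch over LAVA's XML-RPC API. The two method calls are part of LAVA's API, but check /api/help on your instance for the exact set it exposes; the username and token below are made up, and you'd need PyYAML installed.

    import xmlrpc.client
    import yaml  # PyYAML

    USER = "paul"                          # made-up username
    TOKEN = "secret-api-token"             # made-up API token
    SERVER = "lite.validation.linaro.org"  # instance from the email above

    proxy = xmlrpc.client.ServerProxy(
        "https://%s:%s@%s/RPC2" % (USER, TOKEN, SERVER))

    def count_failed_testcases(job_id):
        # results.get_testjob_results_yaml returns a YAML list of test
        # cases, each with a "result" of pass/fail/skip/unknown.
        results = yaml.safe_load(
            proxy.results.get_testjob_results_yaml(job_id))
        return sum(1 for case in results if case.get("result") == "fail")

    def job_failed(job_id):
        # Paul's rule: status != Complete OR failed_testcases > 0.
        status = proxy.scheduler.job_status(job_id)["job_status"]
        return status != "Complete" or count_failed_testcases(job_id) > 0

From there, hooking the result into email notifications or a dashboard is ordinary scripting; this aggregation across jobs is also exactly what tools like SQUAD (qa-reports) are built to do.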
milosz
Thanks, Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#%21/linaroorg - http://www.linaro.org/linaro-blog