150 test cases is not necessarily a large test set - we have test jobs which produce 50,000 test cases per job, and that is when a large test set can become a problem, depending on the resources available on the server. It was those test jobs which led to the sections of the documentation you mention later. If your server doesn't struggle with the current test jobs, you might not have anything to do at this stage.
However, if you have clear groups of test cases, you should investigate using test sets which preserve those boundaries within the reported results: https://staging.validation.linaro.org/static/docs/v2/results-intro.html#test-set
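For example, the run steps of a test shell definition can group related cases into named sets with the lava-test-set helper. A rough sketch, where the set names, case names and commands are only placeholders:

    lava-test-set start network
    lava-test-case ping-gateway --shell ping -c 4 192.168.0.1
    lava-test-case dns-lookup --shell nslookup example.com
    lava-test-set stop network
    lava-test-set start storage
    lava-test-case mount-sd --shell mount /dev/mmcblk0p1 /mnt
    lava-test-set stop storage

The cases then appear grouped under "network" and "storage" in the reported results instead of as one flat list.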
Thanks for the information. Perhaps it would be a good idea to add these figures to the documentation, so that new users have an idea of what a “large” test set is.
I am just starting out with LAVA, so I haven’t set up any productive tests yet. At the moment I am working out a concept for how we can integrate LAVA into our workflow. Part of this is the question of how we handle test jobs and how we store them in our SCM.
MultiNode does offer a possible option, but MultiNode itself is complex and not all test jobs would be able to use it - essentially you can use MultiNode with SSH to have multiple logins to the same device running the same software. Problems include locking of resources, load on the device from running tests in parallel, and whether the tests can run in parallel in the first place, amongst others.
From your enquiry, it does not sound as if you need any of that. You might want to look at a wrapper script which consumes the output of the tests and filters out noise, but apart from that, 150 test cases is really not large. We routinely run functional tests which produce more test cases than that: https://staging.validation.linaro.org/results/214157 https://staging.validation.linaro.org/scheduler/job/214157
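Such a wrapper only needs to map the suite's own output onto lava-test-case calls. A minimal sketch, assuming a hypothetical run-my-tests.sh which prints one "name PASS/FAIL" line per case:

    #!/bin/sh
    # Run the existing suite and turn its summary lines into LAVA results,
    # silently dropping everything else as noise.
    ./run-my-tests.sh | while read name result; do
        case "$result" in
            PASS) lava-test-case "$name" --result pass ;;
            FAIL) lava-test-case "$name" --result fail ;;
            *) ;;  # not a result line, treat as noise
        esac
    done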
I actually have to use MultiNode for some of our test cases anyway. These test cases need a remote server or client connected to the DUT (e.g. for testing hardware interfaces like RS485, CAN, etc.).
And this is actually part of the question: when I declare all of my test cases in one test job, I have to declare the remote nodes for ALL of the tests in there as well. I think this makes the test job huge and confusing, though. How do you handle such cases? Do you ever test that kind of interface at all?
Mit freundlichen Grüßen / Best regards

Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks@garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!

Registered office: D-21079 Hamburg
Register court: Amtsgericht Hamburg, HRB 60514
Managing directors: Matthias Fricke, Manfred Garz
On 19 March 2018 at 14:57, Tim Jaacks tim.jaacks@garz-fricke.com wrote:
150 test cases is not necessarily a large test set - we have test jobs which produce 50,000 test cases per job, and that is when a large test set can become a problem, depending on the resources available on the server. It was those test jobs which led to the sections of the documentation you mention later. If your server doesn't struggle with the current test jobs, you might not have anything to do at this stage.
However, if you have clear groups of test cases, you should investigate using test sets which preserve those boundaries within the reported results: https://staging.validation.linaro.org/static/docs/v2/results-intro.html#test-set
Thanks for the information. Perhaps it would be a good idea to add these figures to the documentation, so that new users have an idea of what a “large” test set is.
The problem of picking a number is that it depends a lot on the resources available to the server and the performance of the devices themselves.
I am just starting out with LAVA, so I haven’t set up any productive tests yet. At the moment I am working out a concept for how we can integrate LAVA into our workflow. Part of this is the question of how we handle test jobs and how we store them in our SCM.
Templating.
Check out how Jinja2 is used for the server-side device configuration templates - Jinja2 can output any text-based format; we chose YAML. The same principles are used by the Linaro QA team to produce the test job submissions. Templates live in version control, and the commit hash of the template gets included in the metadata of the output.
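A job template might then look roughly like this (only a sketch, not a complete submittable job; the variable names and metadata keys are illustrative):

    # smoke-test.yaml.jinja2, kept in git next to the test definitions
    job_name: smoke-test {{ build_id }}
    device_type: {{ device_type }}
    visibility: public
    priority: medium
    metadata:
      template.commit: "{{ template_commit }}"   # e.g. from: git rev-parse HEAD
      image.url: "{{ image_url }}"

Rendering the template at submission time fills in the build-specific values, and the commit hash in the metadata lets you trace any result back to the exact template revision.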
MultiNode does offer a possible option, but MultiNode itself is complex and not all test jobs would be able to use it - essentially you can use MultiNode with SSH to have multiple logins to the same device running the same software. Problems include locking of resources, load on the device from running tests in parallel, and whether the tests can run in parallel in the first place, amongst others.
From your enquiry, it does not sound as if you need any of that. You might want to look at a wrapper script which consumes the output of the tests and filters out noise, but apart from that, 150 test cases is really not large. We routinely run functional tests which produce more test cases than that: https://staging.validation.linaro.org/results/214157 https://staging.validation.linaro.org/scheduler/job/214157
I actually have to use MultiNode for some of our test cases anyway. These test cases need a remote server or client connected to the DUT (e.g. for testing hardware interfaces like RS485, CAN, etc.).
And this is actually part of the question: when I declare all of my test cases in one test job, I have to declare the remote nodes for ALL of the tests in there as well. I think this makes the test job huge and confusing, though. How do you handle such cases? Do you ever test that kind of interface at all?
MultiNode is intrinsically complex. The first solution is to prepare the test job submissions using some form of templating and version control, inserting commit hashes etc. into the test job as metadata which can then be picked up as part of the results. Second, make your test shell definitions portable. In the case of MultiNode, this means isolating the synchronisation calls to the LAVA MultiNode API from the test itself in dedicated test shell definitions: one test shell definition does some sync calls, the next runs the tests on this node or that node using a portable test shell script which doesn't make any calls over the MultiNode API, then another sync definition follows, and so on.
This allows you to replicate the test later by running the portable scripts in isolation and doing any synchronisation manually. You can also do these operations using hacking sessions.
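As a rough illustration of that split (the message names and the script are placeholders, not anything we ship): the sync-only definitions contain nothing but MultiNode API calls, and the portable definition in between just calls a plain script:

    # run steps of a sync-only definition (client role): wait for the remote side
    lava-wait server-ready

    # run steps of the portable definition: no MultiNode calls,
    # so the same script can also be run by hand on a bench setup
    ./rs485-loopback-test.sh

    # run steps of another sync-only definition: signal that we are done
    lava-send client-done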
A simple check for the presence of lava-test-case in $PATH is enough to allow the same scripts to execute in LAVA as well as outside it; just use print or echo, depending on the language.
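In a shell script that check can be as small as this (a sketch; the test name and result are placeholders):

    # Report through LAVA when running inside a test job,
    # fall back to plain output when running on a bench.
    report() {
        if command -v lava-test-case >/dev/null 2>&1; then
            lava-test-case "$1" --result "$2"
        else
            echo "$1: $2"
        fi
    }
    report rs485-loopback pass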
We routinely run single node and MultiNode test jobs on staging.validation.linaro.org as functional tests to make sure the software itself is operating correctly.