Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
The major part of this code is taken directly from the LAVA dispatcher. There are slight modifications, but I would like to keep them to a minimum or remove them entirely (if possible). So the question follows: is there a way to achieve the same goal with only LAVA code?
One of the biggest problems was the ACKing that lava-test-shell requires. This makes the tests 'interactive', which isn't ideal for semi-automated execution. This bit was removed, and now we're able to use the test shell locally in non-interactive mode.
The README file describes the use cases we're covering. Any comments are welcome. Code can be found here: https://git.linaro.org/qa/lava-local-test.git
milosz
On 2 August 2016 at 10:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
A few provisos on that:
* This won't be able to support any Multinode API usage; it won't be able to support LXC usage in V2 either.
* LAVA V2 has removed the ACK from lava_test_shell and has changes to the lava_test_shell snippets.
* This re-implements things like the deployment data, which specifies which shell to use, and other items like directory layouts.
The major part of this code is taken directly from the LAVA dispatcher. There are slight modifications, but I would like to keep them to a minimum or remove them entirely (if possible). So the question follows: is there a way to achieve the same goal with only LAVA code?
For the migration to V2, we kept compatibility with the lava test shell definitions - as far as runtime operation is concerned. Once the V1 code is removed, we do have plans for improving lava test shell in ways which will break compatibility, but changes cannot be made until then. In the meantime, the advice being written into the documentation is to prepare custom scripts in the test definition repository which can do all the parsing and other work in a self-contained way and which can be tested locally against log files from the device. This is a more robust way of ensuring that operations happen as expected.

lava-test-shell is primarily intended for very simple command operation. lava-test-case itself will always have problems with anything much more complex than a single command with arguments for passing in parameters from the test job. When things become sufficiently complex that the test definition YAML actually needs to be debugged, it is better to move that logic into another script, written in a language with genuine support for conditionals, logic, arrays and dictionaries. The script/program can then call lava-test-case and other support scripts, or output the results if not running in LAVA. Such scripts can improve the package installation work as well, by only installing something if the parameters actually require it.

The emphasis will shift to doing less in lava-test-shell, in the same way that V2 has removed the assumptions and misguided helpers from the rest of the code, to allow the test shell to have easier support for alternative test platforms and utilities.
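To illustrate the pattern (a minimal, untested sketch - the helper name and layout are invented for illustration, not part of LAVA):

    # report.py - hypothetical helper for a portable test script.
    # Calls lava-test-case when it is on the PATH (i.e. when running
    # inside a LAVA test shell), otherwise just prints the result so
    # the same script can be run locally without LAVA.
    import shutil
    import subprocess

    def report(name, result):
        # result is 'pass' or 'fail'
        if shutil.which('lava-test-case'):
            subprocess.check_call(['lava-test-case', name, '--result', result])
        else:
            print('{}: {}'.format(name, result))

The test logic itself then never needs to know whether it is running inside LAVA or not.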
Lava test shell actually provides little benefit over a common stand-alone script for targets not deployed in LAVA, apart from possibly familiarity for current LAVA test writers.
In LAVA V2, if you have a local instance, there is no need to replicate the code from lava_test_shell - the complete overlay is available as a tarball, and that's exactly the same tarball - with any V2 changes and deployment_data changes - as was used in the test. We're still working out the best way to expose that overlay; currently it is only built on the worker. However, the advice remains the same: test definitions which are sufficiently complex to need prototyping or debugging should go into custom scripts running against log files, which can then be switched to read from stdin or similar.
One of the biggest problems was the ACKing that lava-test-shell requires. This makes the tests 'interactive', which isn't ideal for semi-automated execution. This bit was removed, and now we're able to use the test shell locally in non-interactive mode.
This change is not compatible with LAVA V1. The ACK has been removed from V2 but it is not possible to do so reliably in V1.
On 2 August 2016 at 10:56, Neil Williams neil.williams@linaro.org wrote:
On 2 August 2016 at 10:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
A few provisos on that.
- This won't be able to support any Multinode API usage; it won't be able to support LXC usage in V2 either.
Correct, and this isn't expected.
- LAVA V2 has removed the ACK from lava_test_shell and has changes to the lava_test_shell snippets.
+1
- This re-implements things like the deployment data, which specifies which shell to use, and other items like directory layouts.
Correct. As I wrote, there are a couple of items that were done for convenience.
The major part of this code is taken directly from the LAVA dispatcher. There are slight modifications, but I would like to keep them to a minimum or remove them entirely (if possible). So the question follows: is there a way to achieve the same goal with only LAVA code?
For the migration to V2, we kept compatibility with the lava test shell definitions - as far as runtime operation is concerned. Once the V1 code is removed, we do have plans for improving lava test shell in ways which will break compatibility, but changes cannot be made until then. In the meantime, the advice being written into the documentation is to prepare custom scripts in the test definition repository which can do all the parsing and other work in a self-contained way and which can be tested locally against log files from the device. This is a
erm, do you suggest not to use LAVA at all?
more robust way of ensuring that operations happen as expected. lava-test-shell is primarily intended for very simple command operation. lava-test-case itself will always have problems with anything much more complex than a single command with arguments for passing in parameters from the test job. When things become sufficiently complex that the test definition YAML actually needs to be debugged, it is
I totally disagree here. Practice shows that even pretty simple steps can result in errors that don't happen locally. https://git.linaro.org/qa/test-definitions.git/commit/74dcae69247c5741921ec8...
better to move that logic into another script, written in a language with genuine support for conditionals, logic, arrays and dictionaries. The script/program can then call lava-test-case and other support scripts, or output the results if not running in LAVA. Such scripts can improve the package installation work as well, by only installing something if the parameters actually require it. The emphasis will shift to doing less in lava-test-shell, in the same way that V2 has removed the assumptions and misguided helpers from the rest of the code, to allow the test shell to have easier support for alternative test platforms and utilities.
Lava test shell actually provides little benefit over a common stand-alone script for targets not deployed in LAVA, apart from possibly familiarity for current LAVA test writers.
I'll play devil's advocate here - it's not lava test shell I'm after but the ability to run existing tests. I agree that the tests aren't perfect and we're already trying to remove all dependencies on LAVA. But this will take time and the results are expected 'in the meantime'.
In LAVA V2, if you have a local instance, there is no need to replicate the code from lava_test_shell - the complete overlay is available as a tarball, and that's exactly the same tarball - with any V2 changes and deployment_data changes - as was used in the test. We're still working out the best way to expose that overlay; currently it is only built on the worker. However, the advice remains the same: test definitions which are sufficiently complex to need prototyping or debugging should go into custom scripts running against log files, which can then be switched to read from stdin or similar.
The problem with this approach is that one has to install LAVA V2 somewhere. The problem we're trying to solve is running tests on remote targets that can't be accessed by other means. There is no easy access to PDU/IPMI. Logging to the console requires custom scripts.
One of the biggest problems was the ACKing that lava-test-shell requires. This makes the tests 'interactive', which isn't ideal for semi-automated execution. This bit was removed, and now we're able to use the test shell locally in non-interactive mode.
This change is not compatible with LAVA V1. The ACK has been removed from V2 but it is not possible to do so reliably in V1.
OK :(
milosz
On 2 August 2016 at 11:30, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On 2 August 2016 at 10:56, Neil Williams neil.williams@linaro.org wrote:
On 2 August 2016 at 10:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
The major part of this code is taken directly from the LAVA dispatcher. There are slight modifications, but I would like to keep them to a minimum or remove them entirely (if possible). So the question follows: is there a way to achieve the same goal with only LAVA code?
For the migration to V2, we kept compatibility with the lava test shell definitions - as far as runtime operation is concerned. Once the V1 code is removed, we do have plans for improving lava test shell in ways which will break compatibility, but changes cannot be made until then. In the meantime, the advice being written into the documentation is to prepare custom scripts in the test definition repository which can do all the parsing and other work in a self-contained way and which can be tested locally against log files from the device. This is a
erm, do you suggest not to use LAVA at all?
No - just not to be held back by Lava Test Shell, which is part of the test action support in LAVA.
LAVA gets you to the prompt of a login shell and provides some helpers for LAVA-specific information but I am convinced that the rest of lava-test-shell needs to be improved to make it more flexible for other test suites, other deployments and for non-POSIX systems. The lava-test-shell definition can - and arguably should - become just a few lines calling existing scripts with parameters.
I'm suggesting not writing tests which rely upon Lava Test Shell beyond the bare minimum of what's necessary for automation, and instead writing custom scripts that can do the same thing on a different architecture and come up with similar results, supporting testing locally in a clean environment without needing anything from LAVA to be installed locally.
more robust way of ensuring that operations happen as expected. lava-test-shell is primarily intended for very simple command operation. lava-test-case itself will always have problems with anything much more complex than a single command with arguments for passing in parameters from the test job. When things become sufficiently complex that the test definition YAML actually needs to be debugged, it is
I totally disagree here. Practice shows that even pretty simple steps can result in errors that don't happen locally. https://git.linaro.org/qa/test-definitions.git/commit/74dcae69247c5741921ec8...
? That just looks like the test environment was not clean. If by "local" you mean a dirty user system instead of a clean chroot / VM or another way of emulating what any automation actually does (gives you a clean environment), then this just looks like the remote end changed behaviour and the local test didn't catch it, because it used an existing copy instead of installing fresh as the test must do.
What I'm getting at is this: what's necessary for reproducing a LAVA test is a series of log files of the output that you expect to get from the test operation(s), quite possibly from a completely different architecture. A custom script which uses a sensible language can then do lots of reliable, clever parsing of the log to get exactly the right answers for that test operation. Where possible, use a script that someone else has already written for this or, if none exists, write it so that it can be contributed upstream, without reference to LAVA. That stage can be done without any access to the device itself - the values will change but the format of the output should be the same (or else it's a bug in the test operation).

That script can also do package installation and other setup tasks, equivalent to what would be needed to run up the test on a different architecture in a clean chroot environment. This moves all the complex work into a script which can be easily run in a local clean environment, regardless of the architecture/device. Let LAVA do the device-specific & automation stuff and give you a prompt on the device with your script available and at least the same list of packages installed as in your clean local chroot.

From that point on, the test operation itself - whether it's LTP or something else - is handled by the same script as is used in the local environment. It's trivial to check for lava-test-case in the $PATH or ../bin/ directory, at which point the script can print the results or call lava-test-case --result with the results of the parsing. The Lava Test Shell definition then becomes not much more than calling that script with parameters - 2 lines at best. The script does the download of any code needed, the compilation, the execution and the parsing - exactly as it would in a clean local chroot environment.
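As a rough, untested sketch of that shape of script (the output format being parsed here is invented for illustration - a real script would match the actual test suite's output):

    # process.py - hypothetical custom script: parse test output from
    # a saved log file (locally) or from stdin, and report each result.
    import re
    import shutil
    import subprocess
    import sys

    def report(name, result):
        # Use lava-test-case when available, otherwise print.
        if shutil.which('lava-test-case'):
            subprocess.check_call(['lava-test-case', name, '--result', result])
        else:
            print('{}: {}'.format(name, result))

    def main():
        # Read a log file given as an argument, or fall back to stdin,
        # so the same script works on saved logs and on live output.
        source = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
        # Invented example format: "TEST <name>: PASS" or "TEST <name>: FAIL"
        pattern = re.compile(r'^TEST (\S+): (PASS|FAIL)\b')
        for line in source:
            match = pattern.match(line)
            if match:
                report(match.group(1), match.group(2).lower())

    if __name__ == '__main__':
        main()

Run it against a saved log while developing (./process.py ltp.log) and let the test definition feed it the live output in LAVA.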
It's about testing one thing at a time - when your own custom script does all the work, it is trivial to test that script locally - it is also trivial to use that script in a variety of other test environments, not just LAVA. Lava Test Shell is a helper, an enabler. It was never intended to become a limitation on running the tests elsewhere or to end up needing a version of itself to be available for local installation. That is something we need to fix in Lava Test Shell - it needs to stop tempting people into relying on it to such a degree that it makes the test definition itself non-portable. Portability is a good thing and custom scripts are the way to achieve that - especially as all the necessary support for this is already available and has been for some years.
Tests run in LAVA need to be thought of as code which is intended to be upstreamed, but not into LAVA. LAVA needs to do only the minimum operations required to get the test running under automation and then, critically, get out of your way as test writer. This means that running the same test locally is part of the design and needs nothing from LAVA. We're making progress with this principle in the V2 code and the tests themselves can benefit in the same way.
better to move that logic into another script, written in a language with genuine support for conditionals, logic, arrays and dictionaries. The script/program can then call lava-test-case and other support scripts, or output the results if not running in LAVA. Such scripts can improve the package installation work as well, by only installing something if the parameters actually require it. The emphasis will shift to doing less in lava-test-shell, in the same way that V2 has removed the assumptions and misguided helpers from the rest of the code, to allow the test shell to have easier support for alternative test platforms and utilities.
Lava test shell actually provides little benefit over a common stand-alone script for targets not deployed in LAVA, apart from possibly familiarity for current LAVA test writers.
I'll play devil's advocate here - it's not lava test shell I'm after but the ability to run existing tests. I agree that the tests aren't perfect and we're already trying to remove all dependencies on LAVA. But this will take time and the results are expected 'in the meantime'.
Exactly - lava-test-shell will (eventually) stop getting in the way of running existing tests in LAVA. In the meantime, custom scripts which work in a clean environment locally and in LAVA are the best approach and can already be supported. The lava-test-shell definition then just becomes:

    run:
      steps:
        - ./scripts/process.py --param value --param2 value2
lava-test-shell is just a helper and when that becomes a limitation, the correct solution is to do the work in a more competent language because, as test writer, you control which languages are available. lava-test-shell has to try and get by with not much more than busybox ash as the lowest common denominator. Please don't expect it to do everything - let it provide you with a directory layout containing your scripts, some basic information about the job and a way of reporting test case results - that's about all it should be doing outside of the Multinode API. Then do the rest in ways that allow for simple local testing in a clean environment.
This allows us to then move lava-test-shell and the test action support in LAVA V2 into areas where there is no shell and to make it easier to work with other, established, test harnesses.
There shouldn't be a need for test definitions to rely on LAVA pattern parsing and fixup dictionaries or much more than a basic set of dependencies to install - that is under the control of the test writer and adopting that into the test scripts makes it possible to run that test outside of LAVA too.
This is what we're doing with the LAVA tests on the LAVA software itself - to make it easier to run those tests in Debian and other environments where the software is the same but the architecture / device does not matter so much.
In LAVA V2, if you have a local instance, there is no need to replicate the code from lava_test_shell - the complete overlay is available as a tarball, and that's exactly the same tarball - with any V2 changes and deployment_data changes - as was used in the test. We're still working out the best way to expose that overlay; currently it is only built on the worker. However, the advice remains the same: test definitions which are sufficiently complex to need prototyping or debugging should go into custom scripts running against log files, which can then be switched to read from stdin or similar.
The problem with this approach is that one has to install LAVA V2 somewhere. The problem we're trying to solve is running tests on remote targets that can't be accessed by other means. There is no easy access to PDU/IPMI. Logging to the console requires custom scripts.
That's just to get hold of the overlay.tar.gz. With a custom script that does the work, you just need the output of the test itself, whether it's LTP or something else. You shouldn't need a local lava-test-shell or the overlay - only a few log files of the expected / real output and a local clean environment which can be used to do the setup.
On Tue, Aug 02, 2016 at 11:30:37AM +0100, Milosz Wasilewski wrote:
For the migration to V2, we kept compatibility with the lava test shell definitions - as far as runtime operation is concerned. Once the V1 code is removed, we do have plans for improving lava test shell in ways which will break compatibility, but changes cannot be made until then. In the meantime, the advice being written into the documentation is to prepare custom scripts in the test definition repository which can do all the parsing and other work in a self-contained way and which can be tested locally against log files from the device. This is a
erm, do you suggest not to use LAVA at all?
I think the point here is (as we chatted privately):
Do Not Lock Yourself Out of Your Tests™, i.e.
- don't make your test code depend on the LAVA infrastructure in any way; make sure you can always run your tests by downloading the test code to a target device, installing its dependencies (the test code itself could do this), and running a single script.
- make the LAVA-specific part as small as possible, just enough to e.g. gather any inputs that you get via LAVA, call the main test program, and translate your regular output into ways to tell lava how the test went (if needed).
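(To make that second point concrete - a hypothetical sketch, with the script names invented:)

    # lava_wrapper.py - the entire LAVA-specific part: run the real
    # test program and translate its exit status into a LAVA result.
    # run-tests.sh is the portable script that works without LAVA.
    import subprocess

    status = subprocess.call(['./run-tests.sh'])
    result = 'pass' if status == 0 else 'fail'
    subprocess.check_call(['lava-test-case', 'test-suite', '--result', result])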
That, of course, can't help with existing tests that already assume too much of LAVA to be used without it.
On 2 August 2016 at 12:48, Antonio Terceiro antonio.terceiro@linaro.org wrote:
On Tue, Aug 02, 2016 at 11:30:37AM +0100, Milosz Wasilewski wrote:
For the migration to V2, we kept compatibility with the lava test shell definitions - as far as runtime operation is concerned. Once the V1 code is removed, we do have plans for improving lava test shell in ways which will break compatibility, but changes cannot be made until then. In the meantime, the advice being written into the documentation is to prepare custom scripts in the test definition repository which can do all the parsing and other work in a self-contained way and which can be tested locally against log files from the device. This is a
erm, do you suggest not to use LAVA at all?
I think the point here is (as we chatted privately):
Do Not Lock Yourself Out of Your Tests™, i.e.
- don't make your test code depend on the LAVA infrastructure in any way; make sure you can always run your tests by downloading the test code to a target device, installing its dependencies (the test code itself could do this), and running a single script.
- make the LAVA-specific part as small as possible, just enough to e.g. gather any inputs that you get via LAVA, call the main test program, and translate your regular output into ways to tell lava how the test went (if needed).
Thanks, Antonio. I may well paste that block directly into the documentation!
Nice summary.
On 2 August 2016 at 12:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
Oh, yes, this was a long-time request from me :)
https://bugs.linaro.org/show_bug.cgi?id=1610
Plans to package this so it can just be pulled with apt-get?
Riku
On Aug 18, 2016, at 9:27 PM, Riku Voipio riku.voipio@linaro.org wrote:
On 2 August 2016 at 12:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
Oh, yes, this was a long-time request from me :)
https://bugs.linaro.org/show_bug.cgi?id=1610
Plans to package this so it can just be pulled with apt-get?
Just a heads up. We are working on refactoring qa/test-definitions.
Users will be able to run test scripts directly; they may need to specify parameters manually as needed. The test script handles the test run and result parsing, and it produces a result file in a format that can be easily parsed and sent to LAVA.
Meanwhile, a YAML file for each test case will be provided for test runs in LAVA. The YAML file defines the default params, runs the script with those params, and contains one line at the end using a helper script (using lava-test-case) to send results to LAVA.
A local test runner will be provided as well to run a YAML file or a set of YAML files defined in a test plan. It simply converts the YAML to a run.sh and runs it. It is something similar to lava-local-test, but with the dependency on lava-test-shell removed.
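For illustration, a rough sketch of what that conversion could look like (assuming the PyYAML library and the run: steps: layout shown earlier in the thread; the runner itself is hypothetical):

    # runner.py - hypothetical local runner: convert a test definition
    # YAML into run.sh and execute it, without lava-test-shell.
    import os
    import stat
    import subprocess
    import sys

    import yaml  # PyYAML

    def main():
        with open(sys.argv[1]) as f:
            definition = yaml.safe_load(f)
        steps = definition['run']['steps']
        with open('run.sh', 'w') as f:
            f.write('#!/bin/sh -e\n')
            f.write('\n'.join(steps) + '\n')
        os.chmod('run.sh', os.stat('run.sh').st_mode | stat.S_IXUSR)
        subprocess.check_call(['./run.sh'])

    if __name__ == '__main__':
        main()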
Here is the first attempt https://review.linaro.org/#/c/13773/
-- Thanks, Chase
On 18 August 2016 at 14:27, Riku Voipio riku.voipio@linaro.org wrote:
On 2 August 2016 at 12:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
Hi,
Chase did an excellent job and put together a piece of code that allows us to run lava-test-shell locally. This means we can use our 'lava' test definitions on boards that are not deployed in any LAB. There are 2 main reasons for doing that:
- prototyping tests for LAVA
- semi-automated execution of tests on targets that are not deployed in any LAB
Oh, yes, this was a long-time request from me :)
https://bugs.linaro.org/show_bug.cgi?id=1610
Plans to package this so it can just be pulled with apt-get?
I don't have much experience with maintaining a package. We should have this in good shape by Connect, so we can talk about producing a package then.
milosz