>150 test cases does not necessarily mean large - we have test jobs which
>produce 50,000 test cases per test job, that is when a large test set can
>become a problem, depending on the available resources on the server. It
>was these test jobs which led to the sections in the documentation which
>you mention later. If your server doesn't struggle with the current test
>jobs, you might not have anything to do at this stage.
>
>However, if you have clear groups of test cases, you should investigate
>using test sets which preserve those boundaries within the reported results:
>https://staging.validation.linaro.org/static/docs/v2/results-intro.html#tes…
Thanks for the information. Perhaps it would be a good idea to add these figures to the documentation, so that new users have an idea of what a “large” test set is.
I am just starting out with LAVA, so I haven't set up any production tests yet. At the moment I am creating a concept for how we can integrate LAVA into our workflow. Part of this is the question of how we handle test jobs and how we store them in our SCM.
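Regarding the test sets you mention: my understanding is that a test shell definition can mark those group boundaries with the lava-test-set helper. A minimal sketch (test names and commands are purely illustrative):

metadata:
  format: Lava-Test Test Definition 1.0
  name: grouped-smoke-tests
  description: "Illustrative grouping of test cases into test sets"
run:
  steps:
  - lava-test-set start network
  - lava-test-case ping-gateway --shell ping -c 4 192.168.1.1
  - lava-test-set stop
  - lava-test-set start storage
  - lava-test-case mmc-mount --shell mount /dev/mmcblk0p1 /mnt
  - lava-test-set stop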
>MultiNode does offer a possible option but MultiNode itself is complex and
>not all test jobs would be able to use it - essentially you can use
>MultiNode with SSH to have multiple logins to the same device running the
>same software. Problems include issues with locking resources, load on the
>device from running tests in parallel, problems within the test being able
>to run in parallel in the first place amongst others.
>
>From your enquiry, it does not sound as if you need any of that. You might
>want to look at a wrapper script which consumes the output of the tests and
>filters out noise but apart from that, 150 test cases is really not large.
>We routinely run functional tests which produce more test cases than that:
>https://staging.validation.linaro.org/results/214157
>https://staging.validation.linaro.org/scheduler/job/214157
I actually have to use MultiNode for some of our test cases anyway. These test cases need a remote server or client connected to the DUT (e.g. for testing hardware interfaces like RS485, CAN, etc.).
And this is actually part of the question: when I declare all of my test cases in one test job, I have to declare the remote nodes for ALL of the tests in there as well. I think this makes the test job huge and confusing, though. How do you handle such cases? Do you test that kind of interface at all?
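What I currently picture is declaring the DUT plus one reusable helper role per job, rather than one node per interface test. A rough sketch of just the protocol block (device-type names are placeholders, and the per-action wiring via "role:" lists is omitted):

protocols:
  lava-multinode:
    roles:
      dut:
        device_type: my-board        # placeholder for the real device type
        count: 1
      remote-helper:
        device_type: lxc             # placeholder: whatever hosts the RS485/CAN counterpart
        count: 1
    timeout:
      minutes: 10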
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Sitz der Gesellschaft: D-21079 Hamburg
Registergericht: Amtsgericht Hamburg, HRB 60514
Geschäftsführer: Matthias Fricke, Manfred Garz
Hello,
I have a very interesting problem: I would like to do LAVA testing of
BBB01, but I am not succeeding.
Simply put, my U-Boot scripts somehow get rejected; they are
not executed.
The download works correctly and I have all the correct ingredients in place,
but the U-Boot scripts are not activated at the U-Boot
prompt.
Here is my output from the testing:
https://pastebin.com/hTQQSLU1
Instead, booting happens from the SD card (from /boot on the rootfs).
I have no idea why the U-Boot scripts are not executed.
Help appreciated!
Thank you,
Zoran
Hi Neil,
thanks for your reply.
>>(*) As a side note I'd like to add that using "BootloaderCommandsAction" alone does not work. I had to add "BootloaderCommandOverlay" as well, because the "commands" are set at the end of the "run" function of this class:
>>
>> self.set_namespace_data(action='bootloader-overlay', label=self.method, key='commands', value=subs)
>>
>>Is this by design? It seems like a bug to me, since I did not find any documentation about this dependency.
>
>The majority of classes in lava-dispatcher have dependencies which are entirely determined by the way that the test devices need to operate. A new Strategy class would ensure that both are added for this use case.
>
>You also need to consider whether the test job itself contains a test action and therefore whether an overlay is needed at all. Simple boot testing doesn't have to include a test action at all, just deploy and boot.
That's the point: I don't need the overlay action in my strategy. But I had to include it, because the command action did not work without it. And to clarify things: The overlay you are talking about (adding LAVA files to the rootfs) is not the overlay I am talking about (replacing placeholders in the commands with actual values).
>I assume that simply no one has ever used "BootloaderCommandsAction" without "BootloaderCommandOverlay", so no one ever noticed. In my opinion "BootloaderCommandsAction" should work on its own.
>
>No, it should not. The commands frequently need to be modified by other actions.
That's okay. But why are they set in "BootloaderCommandOverlay" in the first place? This is totally unexpected. I am new to the code, so I don't know anything about the internals. But from a user's perspective, it seems like bad design to have an action's essential variable set in a different action.
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello everyone,
I have a device with a very basic proprietary bootloader and want to automate it with LAVA. I figured out that the "minimal" bootloader class covers basically everything I need, except that it cannot send commands to the bootloader. So for a quick test, I added "self.internal_pipeline.add_action(BootloaderCommandsAction())" to actions/boot/minimal.py. With this modification (*) I was able to create a device class which can run a smoke test successfully on my device.
In a former question on this mailing list concerning the integration of my bootloader, someone recommended that I implement a new boot strategy. Would you accept a code contribution which adds a new boot strategy that differs from the "minimal" strategy only in this one addition? Or would it perhaps make sense to add the "BootloaderCommandsAction" directly to the "minimal" strategy?
(*) As a side note I'd like to add that using "BootloaderCommandsAction" alone does not work. I had to add "BootloaderCommandOverlay" as well, because the "commands" are set at the end of the "run" function of this class:
self.set_namespace_data(action='bootloader-overlay', label=self.method, key='commands', value=subs)
Is this by design? It seems like a bug to me, since I did not find any documentation about this dependency. I assume that simply no one has ever used "BootloaderCommandsAction" without "BootloaderCommandOverlay", so no one ever noticed. In my opinion "BootloaderCommandsAction" should work on its own.
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello everyone,
we have a quite large test suite (>150 test cases and counting, divided into ~50 groups) which we want to run on our devices. This is required for each software release and would be nice for nightly builds as well.
Our deployment method flashes the kernel and root filesystem onto the internal flash memory, which takes ~3 minutes. Booting the OS from RAM or using remote filesystems (NFS) is not an option for us. We need to run all tests on a device booted completely from its internal flash memory. Ideally, our OS image would be deployed once and then all the tests would run on top of that deployment.
According to the LAVA documentation, best practice is not to put too many tests into one job file, as this would be hard to maintain and logs would become huge and difficult to read.
What is recommended in such a scenario? My initial tendency was to create a job for each test group (each containing ~3 test cases on average), which seems reasonable to me. However, this would result in 50 deployment cycles of 3 minutes each, i.e. 2.5 hours spent on basically unnecessary work. This, in turn, does not seem reasonable to me.
Is it possible to combine jobs so that they run consecutively on the same device and the images need to be deployed only once? Or can jobs be nested somehow, so that one job does the deployment and contains sub-jobs which perform the actual tests? Or are there any other recommendations for this?
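To make the second idea more concrete: what I picture is a single job with one deploy/boot pair followed by several test actions (or one test action with several definitions), so that the image is flashed only once. This is only a sketch; the deploy method, image URL and repository paths are placeholders, not our real setup:

actions:
- deploy:
    timeout:
      minutes: 5
    to: flasher                                      # placeholder for our flash deployment method
    images:
      system:
        url: https://example.com/images/system.img   # placeholder
- boot:
    timeout:
      minutes: 2
    method: minimal
    prompts:
    - 'root@(.*):~#'
- test:
    timeout:
      minutes: 20
    definitions:
    - repository: https://example.com/tests.git      # placeholder
      from: git
      path: groups/group-01.yaml
      name: group-01
- test:
    timeout:
      minutes: 20
    definitions:
    - repository: https://example.com/tests.git
      from: git
      path: groups/group-02.yaml
      name: group-02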
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello to everyone,
I need to add some scripts to
/etc/lava-server/dispatcher-config/device-types.
The documentation says to contact the LAVA mailing list in such a case, to get some guidance.
It says here:
https://staging.validation.linaro.org/static/docs/v2/device-integration.html
*"Please* talk to us *before* starting on the integration of a new device
using the Mailing lists
<https://staging.validation.linaro.org/static/docs/v2/support.html#mailing-l…>
."
The device I need to add is Renesas iwg20m:
https://mp.renesas.com/en-us/boards/iW-RainboW-G20D-RZG1M_RZG1N_QsevenDevKi…
On the device I have a working MLO and U-Boot, with a U-Boot environment
that boots from MMC. I also have ser2net working in the LAVA VM; it works
seamlessly. As far as I understand, these are all the prerequisites for adding
the device type, correct?
My LAVA installation is upgraded:
ii  lava-dispatcher  2018.2.post2-1+stretch  amd64  Linaro Automated Validation Architecture dispatcher
ii  lava-server      2018.2-1+stretch        all    Linaro Automated Validation Architecture server
I have the beaglebone-black device type
<http://localhost:8080/scheduler/device_type/beaglebone-black> and added the
bbb01 device to it, which I finally got working with LAVA. I updated the
beaglebone-black.jinja2 template, created bbb01.jinja2 in
/etc/lava-server/dispatcher-config/devices, and added this device to the
beaglebone-black device type.
I am wondering what else I should do, besides writing an iwg20m.jinja2 template
which inherits from base-uboot.jinja2?
The iwg20m is similar to the BeagleBone Black (it is, after all, based on armv7
silicon). So I can model iwg20m.jinja2 closely on beaglebone-black.jinja2.
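My rough picture of such a template is a thin extension of base-uboot.jinja2 with board-specific defaults, something like the sketch below, where every variable name and value is only a guess on my side, not a verified iwg20m configuration:

{# iwg20m.jinja2 - illustrative sketch only, not a working configuration #}
{% extends 'base-uboot.jinja2' %}
{# board-specific defaults; the variable names and values below are guesses #}
{% set console_device = console_device | default('ttySC0') %}
{% set baud_rate = baud_rate | default(115200) %}
{% set bootloader_prompt = bootloader_prompt | default('=>') %}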
And, by the way, how do I add the device type itself? Using the GUI? The CLI? Is there any documentation for this?
Thank you,
Zoran
I encountered this issue recently.
In my experience, it is caused by the postgresql service failing to start.
Have you upgraded from Jessie to Stretch? You can refer to this page:
https://github.com/WindRiver-OpenSourceLabs/lava-base/blob/master/Dockerfile
Regards,
-Yang (Young)
-----Original Message-----
Date: Tue, 13 Mar 2018 17:36:27 +0000
From: Conrad Djedjebi <conrad.djedjebi(a)linaro.org>
To: Lava Users Mailman list <lava-users(a)lists.linaro.org>
Subject: [Lava-users] LAVA V2 | 500 Internal Server Error
Good morning everyone,
I would like to know if someone here has already faced the following message while opening a LAVA results page or the LAVA all-jobs page:
I am looking through the LAVA documentation to check whether this is a known issue, but if someone here already knows something about it, that would also help me.
regards,
Hi Senthil,
>It throws an error for me when trying to submit via the UI "Job submission error: u'device_type'".
>
>Similarly, when trying with command line I get a similar error:
>
><snip>
>$ lava-tool submit-job
>https://senthil.kumaran@staging.validation.linaro.org/RPC2/
>/tmp/multinode-lxc.yaml
>ERROR: <Fault -32603: "Internal Server Error (contact server administrator for details): u'device_type'"> </snip>
>
In my instance it works via the UI as well as via lava-tool.
>BTW, what is your lava-server and lava-dispatcher version?
tim.jaacks@A048:~$ apt-cache policy lava-server lava-dispatcher
lava-server:
Installed: 2018.2-1+stretch
Candidate: 2018.2-1+stretch
Version table:
*** 2018.2-1+stretch 500
500 https://images.validation.linaro.org/production-repo stretch-backports/main amd64 Packages
100 /var/lib/dpkg/status
2017.7-1~bpo9+1 100
100 http://ftp.debian.org/debian stretch-backports/main amd64 Packages
2016.12-2 500
500 http://ftp.de.debian.org/debian stretch/main amd64 Packages
lava-dispatcher:
Installed: 2018.2.post2-1+stretch
Candidate: 2018.2.post2-1+stretch
Version table:
*** 2018.2.post2-1+stretch 500
500 https://images.validation.linaro.org/production-repo stretch-backports/main amd64 Packages
100 /var/lib/dpkg/status
2017.7-1~bpo9+1 100
100 http://ftp.debian.org/debian stretch-backports/main amd64 Packages
2016.12-1 500
500 http://ftp.de.debian.org/debian stretch/main amd64 Packages
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hi Senthil,
thanks for your reply.
>> I want to set up a multinode job containing an LXC device. Unfortunately I always get the following error:
>>
>> Missing protocol 'lava-lxc' in ['lava-multinode']
>
>This is because each role should have a lava-lxc protocol defined.
Even real hardware devices?
>Going through your job, I could see you are requesting a single device
>(role) in your multinode job, which is not the use-case which multinode caters. There should be more than one device (roles) requested as part of the job. For a single device, single node jobs are the way to go.
I know, this was deliberate in order to strip the error down to a minimal example. I started out with one LXC device and one real hardware device (which is actually the setup I want to use), which led to the above error. Then I removed parts of the job for simplification until (I hoped) the error would disappear. But it did not, even with a single LXC device. I assume a multinode job should work even if it only contains one single device, shouldn't it?
>See https://staging.validation.linaro.org/scheduler/job/213532 - which is a sample multinode job run with two lxc devices requested as part of the job. You can see how each role has a lava-lxc protocol defined for it.
>
>The job definition is available here -
>https://git.linaro.org/lava-team/refactoring.git/plain/lxc-multinode.yaml
Thanks, that pointed me in the right direction. I had to add an additional "remote" key under "protocols/lava-lxc", so that the job looks like this:
job_name: lxc-pipeline
timeouts:
  job:
    minutes: 15
  action:
    minutes: 5
priority: medium
visibility: public
protocols:
  lava-multinode:
    roles:
      remote:
        device_type: lxc
        count: 1
    timeout:
      minutes: 6
  lava-lxc:
    remote:
      name: pipeline-lxc-test
      template: debian
      distribution: debian
      release: stretch
      mirror: http://ftp.us.debian.org/debian/
      security_mirror: http://mirror.csclub.uwaterloo.ca/debian-security/
actions:
- deploy:
    timeout:
      minutes: 5
    to: lxc
    os: debian
    role:
    - remote
- boot:
    prompts:
    - 'root@(.*):/#'
    timeout:
      minutes: 5
    method: lxc
    role:
    - remote
- test:
    timeout:
      minutes: 5
    definitions:
    - repository: git://git.linaro.org/qa/test-definitions.git
      from: git
      path: common/dmidecode.yaml
      name: dmidecode
    role:
    - remote
This job runs without errors, even though it has only a single device in a multinode context. From here I can build my actual multinode job by adding my real device to it.
>HTH. Thank You.
>--
>Senthil Kumaran S
>http://www.stylesen.org/
>http://www.sasenthilkumaran.com/
>
Mit freundlichen Grüßen / Best regards
Tim Jaacks
Hello,
I want to set up a multinode job containing an LXC device. Unfortunately I always get the following error:
Missing protocol 'lava-lxc' in ['lava-multinode']
My LXC device works with single-node jobs. I used this example job for reference:
https://github.com/Linaro/lava-dispatcher/blob/release/lava_dispatcher/test…
I only changed "release" from "sid" to "stretch", and this job works correctly.
For a minimal multinode job I added the "lava-multinode" protocol, defined a role and assigned this role to each action:
job_name: lxc-pipeline
timeouts:
  job:
    minutes: 15
  action:
    minutes: 5
priority: medium
visibility: public
protocols:
  lava-multinode:
    roles:
      remote:
        device_type: lxc
        count: 1
    timeout:
      minutes: 6
  lava-lxc:
    name: pipeline-lxc-test
    template: debian
    distribution: debian
    release: stretch
    mirror: http://ftp.us.debian.org/debian/
    security_mirror: http://mirror.csclub.uwaterloo.ca/debian-security/
actions:
- deploy:
    timeout:
      minutes: 5
    to: lxc
    os: debian
    role:
    - remote
- boot:
    prompts:
    - 'root@(.*):/#'
    timeout:
      minutes: 5
    method: lxc
    role:
    - remote
- test:
    timeout:
      minutes: 5
    definitions:
    - repository: git://git.linaro.org/qa/test-definitions.git
      from: git
      path: common/dmidecode.yaml
      name: dmidecode
    role:
    - remote
With this job definition the above error appears. Am I missing anything or is this a bug in LAVA?
Mit freundlichen Grüßen / Best regards
Tim Jaacks