On Tue, 27 Aug 2019 at 15:37, Tim Jaacks tim.jaacks@garz-fricke.com wrote:
-----Original Message-----
From: Milosz Wasilewski milosz.wasilewski@linaro.org Sent: Tuesday, 27 August 2019 16:32 To: Tim Jaacks tim.jaacks@garz-fricke.com Cc: Axel Lebourhis axel.lebourhis@linaro.org; lava-users@lists.lavasoftware.org Subject: Re: [Lava-users] Multinode job: how can I assure that all devices are on the same worker?
On Tue, 27 Aug 2019 at 15:10, Tim Jaacks tim.jaacks@garz-fricke.com wrote:
-----Original Message-----
From: Milosz Wasilewski milosz.wasilewski@linaro.org Sent: Tuesday, 27 August 2019 15:59 To: Tim Jaacks tim.jaacks@garz-fricke.com Cc: Axel Lebourhis axel.lebourhis@linaro.org; lava-users@lists.lavasoftware.org Subject: Re: [Lava-users] Multinode job: how can I assure that all devices are on the same worker?
On Tue, 27 Aug 2019 at 14:44, Tim Jaacks tim.jaacks@garz-fricke.com wrote:
Hi Axel,
thanks for your reply. I don't want to force job submission to a SPECIFIC worker; I just want to make sure that all of the devices are on the SAME worker.
When I submit a job, I don’t care on which specific worker it is scheduled, as long as all of the nodes are on the same worker.
Or to put it differently: as soon as one node is scheduled, can we define scheduling rules for all the remaining nodes?
I don't think this is currently possible. So far the main objective was to grab devices as they become available, in order to avoid deadlocks.
Having said that, I'm wondering why you are using multinode with LXC and HW. Is this for synchronization only? Maybe the test can be rewritten in such a way that it doesn't require multinode? By default the LXC container is launched on the same worker that the DUT is connected to.
Well, we need to access several hardware interfaces and software services running on the DUT from the outside in order to validate their functionality. We use an LXC device in order to achieve this. The LXC is the "outside world" and connects to the DUT, e.g. via telnet to validate the telnet server running on the DUT.
Is this the wrong approach? How would the correct LAVA way be for this?
I don't think it's wrong, but it leads you into trouble with networking. As I wrote, scheduling 'on the same worker' is not implemented in LAVA and I doubt that will change soon. Did you try 'secondary connections'? https://master.lavasoftware.org/static/docs/v2/connections.html#index-12
How would this work in my scenario? As I understand it, a secondary connection is another login shell on the device. I don't need another login shell; I want to actually execute commands on the remote side (which is the worker) as part of the test. Is this possible with secondary connections? If so, do you have an example similar to my use case?
In order to connect from the DUT to the LXC you would need to configure SSH on the worker and in the LXC. I'm not sure whether that can be done dynamically. It certainly adds complexity to the setup. Maybe running the test scripts 'on the other end' is a better approach?
The main use case for LXC was flashing boards with fastboot over USB and accessing Android with adb over USB. This implicitly assumes that a proper USB connection is available in the LXC container running the job.
We are not using the LXC to establish the initial serial connection, as is done with Android devices. The LXC is a separate device from LAVA's perspective, running test scripts in parallel to the test scripts on the DUT. They are synchronized using the MultiNode API. Does that make sense?
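For illustration, the synchronization between the two roles is just the standard MultiNode API calls in the run steps of the test definitions, roughly like this (a simplified sketch; the message names, the service start command and the run-client-side-checks.sh helper are only examples):

DUT role:

  run:
    steps:
    - /etc/init.d/ssh start || systemctl start ssh
    - lava-send server-ready
    - lava-wait client-done

LXC role:

  run:
    steps:
    - lava-wait server-ready
    - ./run-client-side-checks.sh
    - lava-send client-done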
Yes, I got that. I'm just suggesting that you might try to reimplement the test and do the synchronization yourself. This way you avoid the problem of the LXC 'device' being in a different network than your DUT.
But how would I execute commands on the worker then as part of the test?
OK, let's back up one step. In the first email you wrote that the test consists of:
1. DUT running an SFTP server
2. Client (LXC) sending a file
3. Comparison of the MD5 sums of the source file and the transmitted copy
I would do the following (see the sketch below):
1. Boot the DUT.
2. Start the SFTP server on the DUT. The test script has to determine whether the daemon is running (pass) or not (fail).
3. Start the LXC and transfer the file (you should know the IP address of the DUT from its device dictionary).
4. Using a secondary connection/second UART, connect from the LXC to the DUT and check the MD5 checksum of the file.
5. In the LXC, check the checksum of the file and compare it with the checksum from step 4.
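A rough, untested sketch of what the LXC-side test definition could look like (the IP address, credentials and file paths are placeholders, and I'm using ssh here as a stand-in for the secondary connection in step 4):

  run:
    steps:
    # placeholder - in practice take this from the device dictionary / job context
    - DUT_IP=192.168.1.100
    # create a test file and note its checksum
    - dd if=/dev/urandom of=/tmp/testfile bs=1M count=1
    - md5sum /tmp/testfile | cut -d' ' -f1 > /tmp/local.md5
    # step 3: transfer the file to the DUT over SFTP
    - echo "put /tmp/testfile /tmp/testfile" | sftp root@$DUT_IP
    # step 4: read back the checksum from the DUT
    - ssh root@$DUT_IP md5sum /tmp/testfile | cut -d' ' -f1 > /tmp/remote.md5
    # step 5: compare and report the result
    - if diff -q /tmp/local.md5 /tmp/remote.md5; then lava-test-case sftp-md5 --result pass; else lava-test-case sftp-md5 --result fail; fi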
That is what I meant by 'rewrite' the test. Would that work for you?
milosz
Regards,
Tim
From: Axel Lebourhis axel.lebourhis@linaro.org Sent: Tuesday, 27 August 2019 15:34 To: Tim Jaacks tim.jaacks@garz-fricke.com Cc: lava-users@lists.lavasoftware.org Subject: Re: [Lava-users] Multinode job: how can I assure that all devices are on the same worker?
Hi Tim,
I think the best solution is to use tags. I do it to force job submissions to a specific worker.
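I haven't tried this with multinode myself, but as far as I know the roles in the multinode protocol block accept tags as well, so something along these lines should pin both roles to devices carrying the same tag (tag and device type names are just examples, and the tags have to be assigned to the devices by an admin):

  protocols:
    lava-multinode:
      roles:
        dut:
          device_type: your-board
          count: 1
          tags:
          - rack-1
        lxc:
          device_type: lxc
          count: 1
          tags:
          - rack-1
      timeout:
        minutes: 10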
Regards,
Axel
On Tue, 27 Aug 2019 at 15:28, Tim Jaacks tim.jaacks@garz-fricke.com wrote:
Hello again,
I have several test cases where we use LAVA multinode to test hardware and software interfaces externally. For example, we have an SFTP server running on our DUT. In order to test it, we submit a job using two nodes:
- The DUT
- An LXC container
The LXC device connects to the DUT via SFTP and uploads a file. Both sides determine the MD5 sum and the DUT compares them.
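The relevant parts of such a job look roughly like this (simplified; device type names and the repository URL are placeholders):

  protocols:
    lava-multinode:
      roles:
        dut:
          device_type: our-board
          count: 1
        lxc:
          device_type: lxc
          count: 1
      timeout:
        minutes: 30

  actions:
  - test:
      role:
      - dut
      definitions:
      - repository: https://example.com/lava-tests.git
        from: git
        path: sftp-server.yaml
        name: sftp-server
  - test:
      role:
      - lxc
      definitions:
      - repository: https://example.com/lava-tests.git
        from: git
        path: sftp-client.yaml
        name: sftp-client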
This works as long as both the DUT and the LXC device are in the same network (or at least can reach each other via the network).
Now there are more test cases which require additional hardware connections between the worker and the DUT, e.g. a serial interface test. The serial interface on the DUT is connected to the worker via an RS232-to-USB converter. The LXC can access this converter and send data to or receive data from the serial interface.
This works as long as the LXC is running on the worker that the DUT's serial interface is connected to.
As we are growing our lab, we will add more workers to our setup. There will be LXC devices on all of the workers.
When submitting such a multinode job, which relies on hardware connections between the DUT and the worker, how can I make sure that the LXC part of the job is scheduled on an LXC device on the correct worker?
Best regards
Tim Jaacks
Garz & Fricke GmbH