I started playing with the official lava images today and wanted to share my work in progress, in case others are doing something similar or have feedback. My goal is to deploy a lava lab locally. My architecture is a single host (for now) that will host both the lava server and one dispatcher. Once it's all working, I'll start deploying a qemu worker followed by some actual boards (hopefully).
So far, I have the following docker-compose.yml:
version: '3'
services:
  database:
    image: postgres:9.6
    environment:
      POSTGRES_USER: lavaserver
      POSTGRES_PASSWORD: mysecretpassword
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ${PWD}/pgdata:/var/lib/postgresql/data/pgdata
  server:
    image: lavasoftware/amd64-lava-server:2018.11
    ports:
      - 80:80
    volumes:
      - ${PWD}/etc/lava-server/settings.conf:/etc/lava-server/settings.conf
      - ${PWD}/etc/lava-server/instance.conf:/etc/lava-server/instance.conf
    depends_on:
      - database
  dispatcher:
    image: lavasoftware/amd64-lava-dispatcher:2018.11
    environment:
      - "DISPATCHER_HOSTNAME=--hostname=dispatcher.lava.therub.org"
      - "LOGGER_URL=tcp://server:5555"
      - "MASTER_URL=tcp://server:5556"
With that file, settings.conf, and instance.conf in place, I run 'mkdir pgdata; docker-compose up' and the 3 containers come up and start talking to each other. The only thing exposed to the outside world is lava-server's port 80 at the host's IP, which gives the lava homepage as expected. The first time they come up, the database isn't up fast enough (it has to initialize the first time) and lava-server fails to connect. If you cancel and run again it will connect the second time.
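One way to make that first-run race less surprising (a sketch only, not something the published containers do for you) is to give the database service a healthcheck so its readiness at least shows up in 'docker-compose ps'. Note that with the version 3 compose format, depends_on no longer waits on health, so the server would still need its own retry loop (or a second 'docker-compose up'):

  database:
    image: postgres:9.6
    healthcheck:
      # pg_isready ships in the postgres image; -U matches POSTGRES_USER above
      test: ["CMD-SHELL", "pg_isready -U lavaserver"]
      interval: 5s
      timeout: 5s
      retries: 10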
A few things to note here. First, it doesn't seem like a persistent DB volume is possible with the existing lava-server container, because the DB is initialized at container build time rather than run time, so there's not really a way to mount in a volume for the data. The official postgres image already solves this, though. In fact, I found its documentation and entrypoint interface to be well done, so it may be a nice model to follow: https://hub.docker.com/_/postgres/
The server mostly works as listed above. I copied settings.conf and instance.conf out of the original container and into ./etc/lava-server/ and modified as needed.
The dispatcher then runs and points to the server.
It's notable that docker-compose sets up a docker network by default, allowing the names "database", "server", and "dispatcher" to resolve from within the containers.
Once up, I ran the following to create my superuser:
docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Now, for things I've run into and surprises:
- When I used a local database, I could log in. With the database in a separate container, I can't. Not sure why yet.
- I have the dreaded CSRF problem, which is unlikely to be related to docker, but the two vars in settings.conf didn't seem to help. (I'm terminating https outside of the container context, and then proxying into the container over http.)
- I was surprised there were no :latest containers published.
- I was surprised the containers were renamed to include the architecture in the container name. My understanding is that's the 'old' way to do it. The better way is to transparently detect the architecture using manifests. Again, see the postgres image as an example.
- My pgdata/ directory gets chown'd when I run the postgres container. I see the container has some support for running under a different uid, which I might try.
- If the entrypoint of the server container supported variables like LAVA_DB_PASSWORD, LAVA_DB_SERVER, SESSION_COOKIE_SECURE, etc., I wouldn't need to mount in files like instance.conf and settings.conf (see the sketch after this list).
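To illustrate that last point, here is roughly the kind of wrapper entrypoint I have in mind. This is purely a sketch: the variable names, the defaults, the instance.conf keys, and the path of the stock entrypoint are my assumptions, not anything the 2018.11 container actually provides.

#!/bin/sh
# Sketch: render instance.conf from environment variables, then hand off to
# the image's normal entrypoint (the path below is an assumption).
set -e
cat > /etc/lava-server/instance.conf <<EOF
LAVA_DB_NAME="${LAVA_DB_NAME:-lavaserver}"
LAVA_DB_USER="${LAVA_DB_USER:-lavaserver}"
LAVA_DB_PASSWORD="${LAVA_DB_PASSWORD:?LAVA_DB_PASSWORD must be set}"
LAVA_DB_SERVER="${LAVA_DB_SERVER:-database}"
LAVA_DB_PORT="${LAVA_DB_PORT:-5432}"
EOF
exec /usr/local/bin/lava-entrypoint.sh "$@"   # assumed path to the stock entrypoint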
I pushed the config used here to https://github.com/danrue/lava.home.therub.org. Cloning it and running 'docker-compose up' should just work.
Anyway, thanks for the official images! They're a great start and will hopefully really simplify deploying lava. My next step is to debug some of the issues I mentioned above, and then start looking at dispatcher config (hopefully it's just a local volume mount).
Dan
To follow up on my original post which I guess didn't garner much attention, here's a working docker compose config which uses the latest production lava containers, and launches a single qemu worker.
https://github.com/danrue/lava.home.therub.org/tree/b17ed0f8a660f9a8c31f892d...
Running 'docker-compose up' then brings up a postgres container, a lava master container, and a dispatcher container. A few settings files are mounted into the master container, as well as devices and health-checks. The dispatcher itself must be run in privileged mode in order to use (at least) /dev/net/tun.
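For reference, the dispatcher service in that compose file looks roughly like the following. This is a sketch from memory rather than a copy of the repo, so the image tag and hostname may differ:

  dispatcher:
    image: lavasoftware/amd64-lava-dispatcher:2018.11
    privileged: true   # required so the container can use /dev/net/tun
    environment:
      - "DISPATCHER_HOSTNAME=--hostname=dispatcher.lava.therub.org"
      - "LOGGER_URL=tcp://server:5555"
      - "MASTER_URL=tcp://server:5556"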
Once up, run something like the following to add an admin user:

docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...
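As a rough sketch of that last option (the device-type and device names are examples, and the exact flags may differ between releases, so check 'lava-server manage devices add --help' first):

# Re-register a qemu device-type and device on every start-up.
docker-compose exec server lava-server manage device-types add qemu
docker-compose exec server lava-server manage devices add \
    --device-type qemu --worker dispatcher.lava.therub.org qemu01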
Thanks! Dan
[1] https://docs.docker.com/edge/engine/reference/commandline/manifest/
Hi Dan, thanks for the great work!
On Wed, 2 Jan 2019 at 22:51, Dan Rue dan.rue@linaro.org wrote:
To follow up on my original post which I guess didn't garner much attention, here's a working docker compose config which uses the latest production lava containers, and launches a single qemu worker.
https://github.com/danrue/lava.home.therub.org/tree/b17ed0f8a660f9a8c31f892d...
Running 'docker-compose up' then brings up a postgres container, a lava master container, and a dispatcher container. A few settings files are mounted into the master container, as well as devices and health-checks. The dispatcher itself must be run in privileged mode in order to use (at least) /dev/net/tun.
I tried this today and there seems to be an access/race issue with the database. Here is the snippet from the log:

database_1 | waiting for server to shut down....LOG: shutting down
database_1 | LOG: database system is shut down
server_1   | Starting PostgreSQL 9.6 database server: main.
server_1   | done
server_1   |
server_1   | Applying migrations
server_1   | Traceback (most recent call last):
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/base/base.py", line 199, in ensure_connection
server_1   |     self.connect()
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/base/base.py", line 171, in connect
server_1   |     self.connection = self.get_new_connection(conn_params)
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
server_1   |     connection = Database.connect(**conn_params)
server_1   |   File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 164, in connect
server_1   |     conn = _connect(dsn, connection_factory=connection_factory, async=async)
server_1   | psycopg2.OperationalError: could not connect to server: Connection refused
server_1   |     Is the server running on host "database" (172.18.0.2) and accepting
server_1   |     TCP/IP connections on port 5432?

However, when I later checked, all the tables had been created.
Once up, run something like the following to add an admin user:

docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do
what is the use case for this?
milosz
that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...
Thanks! Dan
[1] https://docs.docker.com/edge/engine/reference/commandline/manifest/
On Thu, Jan 03, 2019 at 01:31:06PM +0000, Milosz Wasilewski wrote:
I tried this today and there seems to be an access/race issue with the database. Here is the snippet from the log:

database_1 | waiting for server to shut down....LOG: shutting down
database_1 | LOG: database system is shut down
server_1   | Starting PostgreSQL 9.6 database server: main.
server_1   | done
server_1   |
server_1   | Applying migrations
server_1   | Traceback (most recent call last):
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/base/base.py", line 199, in ensure_connection
server_1   |     self.connect()
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/base/base.py", line 171, in connect
server_1   |     self.connection = self.get_new_connection(conn_params)
server_1   |   File "/usr/lib/python3/dist-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
server_1   |     connection = Database.connect(**conn_params)
server_1   |   File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 164, in connect
server_1   |     conn = _connect(dsn, connection_factory=connection_factory, async=async)
server_1   | psycopg2.OperationalError: could not connect to server: Connection refused
server_1   |     Is the server running on host "database" (172.18.0.2) and accepting
server_1   |     TCP/IP connections on port 5432?

However, when I later checked, all the tables had been created.
It's a race that Remi already fixed in https://git.lavasoftware.org/lava/pkg/docker/commit/98fe3deecbdddfc03e99fdb8..., but the containers have not been republished yet. I updated https://github.com/danrue/lava.home.therub.org/tree/3792145057146d491413e71b... to use the fixed entrypoint.sh locally in the meantime.
Note that the compose file is using a docker volume for pgsql, which you'll have to remove if you want to re-test the first run.
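For example (assuming the pgsql volume is a compose-managed named volume and nothing else in the project needs to survive):

# Stop the containers and delete the named volumes, so the next
# 'docker-compose up' goes through the first-run path again.
docker-compose down --volumes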
Once up, run something like the following to add an admin user:

docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do
what is the use case for this?
kernelci doesn't rely on any data being kept in lava. It's kind of a nice idea - if you can get away from keeping state in lava, it means you can deploy from scratch every time and so it makes trying arbitrary lava versions easy, not to mention easy upgrades and downgrades.
milosz
that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...
Thanks! Dan
[1] https://docs.docker.com/edge/engine/reference/commandline/manifest/
On Thu, 3 Jan 2019 at 16:33, Dan Rue dan.rue@linaro.org wrote:
It's a race that Remi already fixed in https://git.lavasoftware.org/lava/pkg/docker/commit/98fe3deecbdddfc03e99fdb8..., but the containers have not been republished yet. I updated https://github.com/danrue/lava.home.therub.org/tree/3792145057146d491413e71b... to use the fixed entrypoint.sh locally in the meantime.
Note that the compose file is using a docker volume for pgsql, which you'll have to remove if you want to re-test the first run.
I'll try it once I finish with other tasks.
I noticed one more small issue - the device-types and devices are not defined in the database. This could easily be fixed by adding an initial fixture to the database.
kernelci doesn't rely on any data being kept in lava. It's kind of a nice idea - if you can get away from keeping state in lava, it means you can deploy from scratch every time and so it makes trying arbitrary lava versions easy, not to mention easy upgrades and downgrades.
OK, got it. One potential issue is that all the files required to reproduce a test job are gone, so when the lava code changes, the same test job definition might produce a different result. In some cases that's not a problem, though.
milosz
On Thu, 3 Jan 2019 at 17:43, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
I noticed one more small issue - the device-types and devices are not defined in the database. This could easily be fixed by adding an initial fixture to the database.
I don't think fixtures are necessary (and they could cause problems with upgrades that themselves contain database migrations).
lava-server manage has support for adding device-types and devices to the database.
lavacli has device add support, via the XMLRPC API: https://staging.validation.linaro.org/api/help/#scheduler.devices.add
Similarly for device-types: https://staging.validation.linaro.org/api/help/#scheduler.devices.add - these calls (with the relevant authentication) will create device-types and devices in the database.
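For example, with lavacli something along these lines should do it. This is a sketch only: it assumes an identity called 'admin' has already been configured with an API token, and the flag names may vary between lavacli versions.

# Create a device-type and a device over XML-RPC via lavacli (names are examples).
lavacli -i admin device-types add qemu
lavacli -i admin devices add --type qemu --worker example-worker qemu01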
On Mon, 7 Jan 2019 at 09:01, Neil Williams neil.williams@linaro.org wrote:
This is what Dan did in his last version and it works. Thanks for confirming :)
milosz
Hello Dan, thanks for the investigation and the summary!
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
This is something we would want to fix. I created an issue: https://git.lavasoftware.org/lava/lava/issues/195 Feel free to comment in the issue.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...

Not sure about your use case for this, but lavacli has something that should help: "lavacli system export". The reverse ("import") is still missing (I'm waiting for your patch :)).
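For example, assuming a configured identity named 'admin' (the exact options and output layout are whatever your lavacli version provides):

# Dump the instance configuration that 'system export' currently covers.
lavacli -i admin system export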
Cheers.
On Fri, Jan 04, 2019 at 12:11:03PM +0100, Remi Duraffort wrote:
Hello Dan, thanks for the investigation and the summary!
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
This is something we would want to fix. I created an issue: https://git.lavasoftware.org/lava/lava/issues/195 Feel free to comment in the issue.
Thanks! I think there is a slight urgency here, because the containers were originally published to lavasoftware/lava-(server|dispatcher) and then changed to lavasoftware/(aarch64|amd64)-lava-(server|dispatcher), but the most recently released documentation still refers to the former paths. So if this is fixed before the next release, the documentation can stay consistent, and perhaps the (aarch64|amd64)-* containers can be removed altogether.
As it stands, it ends up being rather confusing to a new user looking at https://hub.docker.com/u/lavasoftware/ (hub.lavasoftware.org has no indexing/viewing enabled). There are also containers in there named -master, and I'm not sure of their purpose.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...

Not sure about your use case for this, but lavacli has something that should help: "lavacli system export". The reverse ("import") is still missing (I'm waiting for your patch :)).
Oohh, that is nice. Does lavacli have the primitives to add device-type and device jinja templates (I noticed it exports them)? Is it even possible? I thought the filesystem operations like adding device templates and device-type templates had to be done outside lava. Also, I noticed lavacli export didn't get the health-check files.
Dan
On Wed, 2 Jan 2019 at 22:51, Dan Rue dan.rue@linaro.org wrote:
To follow up on my original post which I guess didn't garner much attention, here's a working docker compose config which uses the latest production lava containers, and launches a single qemu worker.
https://github.com/danrue/lava.home.therub.org/tree/b17ed0f8a660f9a8c31f892d...
Running 'docker-compose up' then brings up a postgres container, a lava master container, and a dispatcher container. A few settings files are mounted into the master container, as well as devices and health-checks. The dispatcher itself must be run in privileged mode in order to use (at least) /dev/net/tun.
Is a tun device absolutely essential? It would be much better to not need privileged mode at all. In the Harston lab, we only really need tun to be able to support hacking sessions. netdevice: user is enough to get network usage within QEMU working.
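For reference, a minimal sketch of the job context this refers to, assuming the standard qemu device-type template honours the "netdevice" context variable:

context:
  arch: amd64
  netdevice: user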
Once up, run something like the following to add an admin user:

docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
Adding devices can be scripted using lava-server manage.
I've continued to work on using the official docker containers with docker-compose.
If you have docker and docker-compose installed, you can clone https://github.com/danrue/lava-docker-compose and run "make". You should end up with LAVA running at http://localhost, with a qemu worker and qemu device which will run a successful health-check automatically.
I tried to keep everything as simple and as upstream native as possible. As the containers improve, this example repository will simplify. For example, currently it builds a new lava-server container (using upstream lava-server as a base) to work around one bug that's fixed but not yet released, and to add the ability to do the initial provisioning.
The dispatcher container runs directly from the released container.
There are two additional containers in use. An official postgres container is used for the lava-server database, which has a nice interface and semantics as you would expect. Lastly, an nginx container is included to serve health-check images. The initial image files are retrieved with a Makefile rule.
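The rule is basically just a download step; roughly like this (the URL and file name are placeholders, not the repo's actual values):

# Fetch a root filesystem image for the qemu health-check into the directory
# that the nginx container serves.
health-check-images/rootfs.ext4.gz:
	mkdir -p health-check-images
	wget -O $@ https://example.com/images/rootfs.ext4.gz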
In a separate branch (named beaglebone-black), I've started adding support for beaglebone-black. As expected, this required a lot more work in the dispatcher container, and it is still not quite working. It is also lab-specific.
My goal here is to try to develop a reference implementation of deploying the official LAVA docker containers to help people get started quickly and without adding any additional layers of complexity/abstraction (there are enough already!)
Dan
On Fri, 11 Jan 2019 at 21:38, Dan Rue dan.rue@linaro.org wrote:
I've continued to work on using the official docker containers with docker-compose.
Thanks for your work on this.
If you have docker and docker-compose installed, you can clone https://github.com/danrue/lava-docker-compose and run "make". You should end up with LAVA running at http://localhost, with a qemu worker and qemu device which will run a successful health-check automatically.
I tried to keep everything as simple and as upstream native as possible. As the containers improve, this example repository will simplify. For example, currently it builds a new lava-server container (using upstream lava-server as a base) to work around one bug that's fixed but not yet released, and to add the ability to do the initial provisioning.
The dispatcher container runs directly from the released container.
There are two additional containers in use. An official postgres container is used for the lava-server database, which has a nice interface and semantics as you would expect. Lastly, an nginx container is included to serve health-check images.
We should be able to support image files on files.lavasoftware.org or other permanent hosting locations. Why do you feel the need to run a container for these files locally? How do you update the files inside the container? Isolating the image files in a temporary location does not generally help triage. Rather than hosting the files, wouldn't it be better to investigate a local proxy container, maybe one which is seeded with a known list of images?
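A minimal sketch of what such a proxy could look like: an nginx config that caches whatever it fetches from a permanent host. files.lavasoftware.org is used purely as an example upstream here, and the cache sizes are arbitrary.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=images:10m max_size=10g inactive=30d;
server {
    listen 80;
    location / {
        # The first request fetches from the upstream; later requests are
        # served from the local cache, so jobs keep working if the upstream
        # is slow or briefly unreachable.
        proxy_pass        https://files.lavasoftware.org/;
        proxy_cache       images;
        proxy_cache_valid 200 30d;
    }
}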
The initial image files are retrieved with a Makefile rule.
In a separate branch (named beaglebone-black), I've started adding support for beaglebone-black. As expected, this required a lot more work in the dispatcher container, and it is still not quite working. It is also lab-specific.
https://git.lavasoftware.org/lava/lava/issues/114 should provide some improvements here.
My goal here is to try to develop a reference implementation of deploying the official LAVA docker containers to help people get started quickly and without adding any additional layers of complexity/abstraction (there are enough already!)
There is also work going on in https://projects.linaro.org/browse/LSS-243
Dan
On Mon, Jan 21, 2019 at 11:14:42AM +0000, Neil Williams wrote:
On Fri, 11 Jan 2019 at 21:38, Dan Rue dan.rue@linaro.org wrote:
I've continued to work on using the official docker containers with docker-compose.
Thanks for your work on this.
We should be able to support image files on files.lavasoftware.org or other permanent hosting locations. Why do you feel the need to run a container for these files locally? How do you update the files inside the container? Isolating the image files themselves in a temporary location does not generally help triage. Rather than hosting the files, wouldn't it be better to investigate a local proxy container, maybe which is seeded with a known list of images?
It's a hack. I just didn't want to abuse images.v.l.o for every run, and it's slow, so I did something quick and simple. I'll look at switching it to squid, which is hopefully just as simple.
Thanks for the links to the other works in progress. I'll keep an eye on them.
Dan
More updates! Described below, referenced links go to source:
- beaglebone-black is now working for me [1]
- ser2net containerized [2]
- LAVA upgrade process is documented [3]
- Squid container added; nginx images hack removed [4]
The beaglebone-black branch represents what's now an actual working docker-compose environment for my bbb, using a recent u-boot (this turned out to be the hardest part - totally unrelated to docker). I ended up running NFS and TFTP on the host and mounting the paths into the dispatcher. I'd like to containerize those still, but NFS is a bit difficult in particular and I just wanted to see things work.
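For illustration, the host-side NFS/TFTP approach amounts to bind-mounting the relevant host directories into the dispatcher service, roughly like this (the paths are assumptions, not necessarily the exact ones used in the branch):

    services:
      dispatcher:
        image: lavasoftware/amd64-lava-dispatcher:2019.01
        privileged: true
        volumes:
          # host-side TFTP tree and dispatcher scratch/NFS area made visible in the container
          - /srv/tftp:/srv/tftp
          - /var/lib/lava/dispatcher/tmp:/var/lib/lava/dispatcher/tmp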
The beaglebone-black branch is back to using the dispatcher without rebuilding it. I did this by breaking ser2net into its own container that can be found at danrue/ser2net and used as follows:
version: '3.4'
services:
  ser2net:
    image: danrue/ser2net:3.5
    volumes:
      - ./ser2net/ser2net.conf:/etc/ser2net.conf
    devices:
      - /dev/serial/by-id/usb-Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_0001-if00-port0
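For context, a ser2net 3.x configuration line mapping that serial adapter to the TCP port used for the telnet access below might look something like this (a sketch; the baud rate and options depend on the board):

    # port:state:timeout:device:options
    5001:telnet:600:/dev/serial/by-id/usb-Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_0001-if00-port0:115200 8DATABITS NONE 1STOPBIT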
The best part is running something like this to spy on the serial port during testing:
docker-compose exec dispatcher telnet ser2net 5001
The LAVA upgrade has been documented in the README, but it's simple enough I'll reproduce it here:
1. Stop containers.
2. Back up pgsql from its docker volume:
   sudo tar cvzf lava-server-pgdata-$(date +%Y%m%d).tgz /var/lib/docker/volumes/lava-server-pgdata
3. Change e.g. `lavasoftware/amd64-lava-server:2018.11` to `lavasoftware/amd64-lava-server:2019.01` and `lavasoftware/amd64-lava-dispatcher:2018.11` to `lavasoftware/amd64-lava-dispatcher:2019.01` in docker-compose.yml.
4. Change the FROM line if any containers are being rebuilt, such as ./server-docker/Dockerfile.
5. Start containers.
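Scripted, the same procedure could look roughly like this (a sketch, assuming docker-compose.yml sits in the current directory and the image tags are the only lines that change; step 4, rebuilding any derived images, is left out):

    docker-compose down                                # 1. stop containers
    sudo tar cvzf lava-server-pgdata-$(date +%Y%m%d).tgz \
        /var/lib/docker/volumes/lava-server-pgdata     # 2. back up the postgres volume
    sed -i 's/2018\.11/2019.01/g' docker-compose.yml   # 3. bump the image tags
    docker-compose pull                                # fetch the new images
    docker-compose up -d                               # 5. start containers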
Please note the implication there. Since the containers have no stateful data, we don't care about 'upgrading' their contents. We can just stop the old ones and start the new ones. A downgrade is simple: restore the DB and start the old containers. One issue that I ran into (where I use https) is the following error:
lava_server | ERROR 2019-01-25 21:11:08,479 exception Invalid HTTP_HOST header: 'lava.therub.org'. You may need to add 'lava.therub.org'
I added the following line to /etc/lava-server/settings.conf to fix it:
"ALLOWED_HOSTS": ["lava.therub.org", "127.0.0.1"],
Other than that, no changes were necessary to use 2019.01 LAVA!
Finally, I removed the hacky 'images' container that was hosting health check images behind nginx, and replaced it with a general-purpose squid container, backed by a docker volume. Note that in a real production-type environment, both approaches might be necessary. I do like the idea of health check images being statically hosted as they are in validation.l.o.
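A minimal sketch of what such a caching-proxy service could look like (the image name and cache path are assumptions; the dispatcher is then pointed at it via the usual http_proxy environment variable):

    services:
      squid:
        image: sameersbn/squid      # assumption: any maintained squid image would do
        volumes:
          # persist the cache across container restarts
          - squid-cache:/var/spool/squid
    volumes:
      squid-cache: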
The sources can be found at https://github.com/danrue/lava-docker-compose/, and I also "tweeted" about some of what I've learned at https://twitter.com/mndrue/status/1088627889426350080.
Thanks again for reading this far! Dan
[1] https://github.com/danrue/lava-docker-compose/tree/beaglebone-black [2] https://github.com/danrue/lava-docker-compose/commit/af1f62b22dbfd60757ec8a7... [3] https://github.com/danrue/lava-docker-compose/commit/76d69783a3ea7ace6d85538... [4] https://github.com/danrue/lava-docker-compose/commit/bb37d9cb42d490b61c9c7ff...
On Fri, Jan 11, 2019 at 03:38:01PM -0600, Dan Rue wrote:
I've continued to work on using the official docker containers with docker-compose.
If you have docker and docker-compose installed, you can clone https://github.com/danrue/lava-docker-compose and run "make". You should end up with LAVA running at http://localhost, with a qemu worker and qemu device which will run a successful health-check automatically.
I tried to keep everything as simple and as upstream native as possible. As the containers improve, this example repository will simplify. For example, currently it builds a new lava-server container (using upstream lava-server as a base) to work around one bug that's fixed but not yet released, and to add the ability to do the initial provisioning.
The dispatcher container runs directly from the released container.
There are two additional containers in use. An official postgres container is used for the lava-server database, which has a nice interface and semantics as you would expect. Lastly, an nginx container is included to serve health-check images. The initial image files are retrieved with a Makefile rule.
In a separate branch (named beaglebone-black), I've started adding support for beaglebone-black. As expected, this required a lot more work in the dispatcher container, and it is still not quite working. It is also lab-specific.
My goal here is to try to develop a reference implementation of deploying the official LAVA docker containers to help people get started quickly and without adding any additional layers of complexity/abstraction (there are enough already!)
Dan
On Wed, Jan 02, 2019 at 04:51:25PM -0600, Dan Rue wrote:
To follow up on my original post which I guess didn't garner much attention, here's a working docker compose config which uses the latest production lava containers, and launches a single qemu worker.
https://github.com/danrue/lava.home.therub.org/tree/b17ed0f8a660f9a8c31f892d...
Running 'docker-compose up' then brings up a postgres container, a lava master container, and a dispatcher container. A few settings files are mounted into the master container, as well as devices and health-checks. The dispatcher itself must be run in privileged mode in order to use (at least) /dev/net/tun.
Once up, run something like the following to add an admin user: docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...
Thanks! Dan
[1] https://docs.docker.com/edge/engine/reference/commandline/manifest/
On Mon, Dec 10, 2018 at 04:58:23PM -0600, Dan Rue wrote:
I started playing with the official lava images today and wanted to share my work in progress, in case others are doing something similar or have feedback. My goal is to deploy a lava lab locally. My architecture is a single host (for now) that will host both the lava server and one dispatcher. Once it's all working, I'll start deploying a qemu worker followed by some actual boards (hopefully).
So far, I have the following docker-compose.yml:
version: '3' services: database: image: postgres:9.6 environment: POSTGRES_USER: lavaserver POSTGRES_PASSWORD: mysecretpassword PGDATA: /var/lib/postgresql/data/pgdata volumes: - ${PWD}/pgdata:/var/lib/postgresql/data/pgdata server: image: lavasoftware/amd64-lava-server:2018.11 ports: - 80:80 volumes: - ${PWD}/etc/lava-server/settings.conf:/etc/lava-server/settings.conf - ${PWD}/etc/lava-server/instance.conf:/etc/lava-server/instance.conf depends_on: - database dispatcher: image: lavasoftware/amd64-lava-dispatcher:2018.11 environment: - "DISPATCHER_HOSTNAME=--hostname=dispatcher.lava.therub.org" - "LOGGER_URL=tcp://server:5555" - "MASTER_URL=tcp://server:5556"
With that file, settings.conf, and instance.conf in place, I run 'mkdir pgdata; docker-compose up' and the 3 containers come up and start talking to each other. The only thing exposed to the outside world is lava-server's port 80 at the host's IP, which gives the lava homepage as expected. The first time they come up, the database isn't up fast enough (it has to initialize the first time) and lava-server fails to connect. If you cancel and run again it will connect the second time.
A few things to note here. First, it doesn't seem like a persistent DB volume is possible with the existing lava-server container, because the DB is initialized at container build time rather than run-time, so there's not really a way to mount in a volume for the data. Anyway, postgres already solves this. In fact, I found their container documentation and entrypoint interface to be well done, so it may be a nice model to follow: https://hub.docker.com/_/postgres/
The server mostly works as listed above. I copied settings.conf and instance.conf out of the original container and into ./etc/lava-server/ and modified as needed.
The dispatcher then runs and points to the server.
It's notable that docker-compose by default sets up a docker network, allowing references to "database", "server", "dispatcher" to resolve within the containers.
Once up, I ran the following to create my superuser:
docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Now, for things I've run into and surprises:
- When I used a local database, I could log in. With the database in a separate container, I can't. Not sure why yet.
- I have the dreaded CSRF problem, which is unlikely to be related to docker, but the two vars in settings.conf didn't seem to help. (I'm terminating https outside of the container context, and then proxying into the container over http)
- I was surprised there were no :latest containers published
- I was surprised the containers were renamed to include the architecture name was in the container name. My understanding is that's the 'old' way to do it. The better way is to transparently detect arch using manifests. Again, see postgres/ as an example.
- my pgdata/ directory gets chown'd when I run postgres container. I see the container has some support for running under a different uid, which I might try.
- If the entrypoint of server supported some variables like LAVA_DB_PASSWORD, LAVA_DB_SERVER, SESSION_COOKIE_SECURE, etc, I wouldn't need to mount in things like instance.conf, settings.conf.
I pushed my config used here to https://github.com/danrue/lava.home.therub.org. Git clone and then run 'docker-compose up' should just work.
Anyway, thanks for the official images! They're a great start and will hopefully really simplify deploying lava. My next step is to debug some of the issues I mentioned above, and then start looking at dispatcher config (hopefully it's just a local volume mount).
Dan
-- Linaro - Kernel Validation
On Fri, 25 Jan 2019 at 23:00, Dan Rue dan.rue@linaro.org wrote:
Is the content of /var/lib/lava-server/default/media/job-output/ also preserved in this scenario? If not, this dir should also probably be mapped into a volume so it is moved between migrated versions.
milosz
On Mon, Jan 28, 2019 at 09:24:26AM +0000, Milosz Wasilewski wrote:
Oh, good catch. Fixed with a docker volume @ https://github.com/danrue/lava-docker-compose/blob/master/docker-compose.yml...
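For reference, the fix amounts to mapping a named volume over the job output directory in docker-compose.yml, roughly like this (the volume name here is illustrative; see the repository for the real one):

    services:
      server:
        volumes:
          - lava-job-output:/var/lib/lava-server/default/media/job-output
    volumes:
      lava-job-output: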
Dan
Today I managed to get LAVA and SQUAD working together in a containerized setup. Here is the repository with the docker-compose files: https://github.com/mwasilew/lava-docker-compose I haven't updated the README yet. It's still not ideal, as there may be race conditions when starting SQUAD (only the first time, when the DB is not yet populated). One big issue is the lack of command line tools for user management in SQUAD. This means an admin user can be added but its password can't be set. I'm planning to copy this code from LAVA to provide the same options. So far I've managed to submit one QEMU job to LAVA via the SQUAD proxy and retrieve the results once the job finished.
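One common way to avoid that kind of first-start race is to wait for the database before launching the dependent service, e.g. with a small entrypoint wrapper along these lines (a sketch; it assumes the postgres client tools are available in the image and that the host/user names match the compose file):

    #!/bin/sh
    # wait-for-db.sh - hypothetical wrapper: block until postgres accepts connections, then exec the real command
    until pg_isready -h "${DATABASE_HOST:-database}" -U "${POSTGRES_USER:-lavaserver}"; do
        echo "waiting for postgres..."
        sleep 2
    done
    exec "$@"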
Comments are welcome :)
milosz
Hello Milosz,
I've also been playing with the LAVA docker containers and docker-compose. I now provide a docker-compose.yaml file and some configuration files for lava-server at http://git.lavasoftware.org/lava/pkg/docker-compose/. This docker-compose file allows running the lava-server services with each service in its own container.
Hope that helps. Cheers
On Tue, 19 Feb 2019 at 10:10, Remi Duraffort remi.duraffort@linaro.org wrote:
What is the difference between lava-server and lava-master in your example? Also, why does the apache2 container use the LAVA image rather than the httpd image?
milosz
Hello Milosz,
On Sun, 24 Feb 2019 at 11:43, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
What is the difference between lava-server and lava-master in your example? Also, why does the apache2 container use the LAVA image and not an httpd image?
lava-server is the gunicorn process serving the HTML pages. lava-master is the scheduler and master process (master as in master <-> slave).
apache2 serves the static files (lava css, js, ...) that come from the lava-server image. Maybe there is a better way?
Cheers
On Mon, 25 Feb 2019 at 10:57, Remi Duraffort remi.duraffort@linaro.org wrote:
Hello Milosz,
On Sun, 24 Feb 2019 at 11:43, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On Tue, 19 Feb 2019 at 10:10, Remi Duraffort remi.duraffort@linaro.org wrote:
Hello Milosz,
I've also been playing with LAVA docker containers and docker-compose. I now provide a docker-compose.yaml file and some configuration files for lava-server at http://git.lavasoftware.org/lava/pkg/docker-compose/ This docker-compose file allows running the lava-server services, with each service running in its own container.
What is the difference between lava-server and lava-master in your example? Also, why does the apache2 container use the LAVA image and not an httpd image?
lava-server is the gunicorn process serving the HTML pages. lava-master is the scheduler and master process (master as in master <-> slave).
apache2 serves the static files (lava css, js, ...) that come from the lava-server image. Maybe there is a better way?
I have these 3 in a single container. What is the benefit of separating gunicorn from lava-master? Did you possibly try running several gunicorn containers to balance the load? I'm not sure if it's (currently) possible to have more than one lava-master running.
milosz
Cheers
-- Rémi Duraffort LAVA Team, Linaro
On Mon, 25 Feb 2019 at 12:09, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On Mon, 25 Feb 2019 at 10:57, Remi Duraffort remi.duraffort@linaro.org wrote:
Hello Milosz,
On Sun, 24 Feb 2019 at 11:43, Milosz Wasilewski milosz.wasilewski@linaro.org wrote:
On Tue, 19 Feb 2019 at 10:10, Remi Duraffort remi.duraffort@linaro.org wrote:
Hello Milosz,
I've also been playing with LAVA docker containers and docker-compose. I now provide a docker-compose.yaml file and some configuration files for lava-server at http://git.lavasoftware.org/lava/pkg/docker-compose/ This docker-compose file allows running the lava-server services, with each service running in its own container.
What is the difference between lava-server and lava-master in your example? Also, why does the apache2 container use the LAVA image and not an httpd image?
lava-server is the gunicorn process serving the HTML pages. lava-master is the scheduler and master process (master as in master <-> slave).
apache2 serves the static files (lava css, js, ...) that come from the lava-server image. Maybe there is a better way?
I have these 3 in a single container. What is the benefit of separating gunicorn from lava-master? Did you possibly try running several gunicorn containers to balance the load?
In this docker-compose setup, every service runs in its own container. The only benefit would be to have many gunicorn processes running in parallel for load balancing (and maybe HA). That's the next thing I wanted to evaluate.
I'm not sure if it's (currently) possible to have more than one lava-master running.
Currently, only one instance each of lava-master, lava-logs and lava-publisher can run at a time. But we have evaluated the possibility of running multiple lava-logs processes, because that *might* be a bottleneck in some really large labs.
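For readers following along, a rough sketch of what the one-service-per-container split being discussed could look like in a compose file (the service list, image names and port mapping are assumptions, not the contents of the lava/pkg/docker-compose repository):

  services:
    lava-server:              # gunicorn, serves the web UI and API
      image: lavasoftware/amd64-lava-server:2019.01
    lava-master:              # scheduler / master process, talks to slaves over ZMQ
      image: lavasoftware/amd64-lava-server:2019.01
    apache2:                  # serves the static css/js shipped inside the lava-server image
      image: lavasoftware/amd64-lava-server:2019.01
      ports:
        - 80:80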
A few smaller updates, but things continue to get better!
- tftpd and nfsd are now in docker
- x15 board added
- Added restart policy to each container
At this point, the only container that gets built locally is lava-server, and that need goes away once the next version of LAVA is released (just need it to run the provision script in the entrypoint).
Just today, I installed Debian on a fresh host. I installed docker, cloned my repository, and ran 'docker-compose up -d'. Everything came up, including lava (obviously), squid, postgres, tftp, nfs, and ser2net. No packages other than docker and docker-compose are installed on the host.
By using docker-compose up -d, the containers start in the background. With the restart policy defined in docker-compose.yml, everything comes up after a reboot automatically.
My working setup can be found at https://github.com/danrue/lava-docker-compose/tree/lava.therub.org and seen at https://lava.therub.org/.
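For anyone wanting to reproduce that bring-up, here is a minimal sketch, assuming a stock Debian host (package names may differ on other distributions, and the lab-specific config lives on the lava.therub.org branch mentioned above):

  sudo apt-get install docker.io docker-compose
  git clone https://github.com/danrue/lava-docker-compose
  cd lava-docker-compose
  docker-compose up -d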
Specifically, with regard to tftp, I created a container at danrue/tftpd-hpa[1] and use it like this:
  tftpd-hpa:
    image: danrue/tftpd-hpa:5.2
    container_name: lava_tftp
    ports:
      - 69:69/udp
    volumes:
      - tftp:/srv/tftp
    restart: always
For NFS, I found one[2] that works great so no work was needed:
  nfsd:
    image: erichough/nfs-server
    container_name: lava_nfsd
    ports:
      - 2049:2049   # nfsv4
    volumes:
      - nfsd:/var/lib/lava/dispatcher/tmp
      - ./nfsd/exports:/etc/exports
    cap_add:
      - SYS_ADMIN
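The exports file itself isn't reproduced in this thread; as a minimal sketch, ./nfsd/exports might contain something like the following (the export options are assumptions, adjust to your lab's network):

  # export the dispatcher's scratch directory to the boards
  /var/lib/lava/dispatcher/tmp *(rw,no_root_squash,no_subtree_check)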
Both of these are using a docker volume for storage, and that docker volume is shared with the dispatcher container.
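A minimal sketch of how that sharing looks on the dispatcher side (service fragment only; the named volumes are declared at the top level of docker-compose.yml):

  dispatcher:
    volumes:
      - tftp:/srv/tftp                      # boot files written here are served by lava_tftp
      - nfsd:/var/lib/lava/dispatcher/tmp   # rootfs extracted here is exported by lava_nfsd

  volumes:
    tftp:
    nfsd: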
So now, all configuration lives in the repository, and all storage is in docker volumes.
The other trick with it is to expose the ports to the host, and then use the host's static IP, which has to be hard coded into a couple of places:
drue@lava1:~/lava-docker-compose$ grep -R 10.100.0.60 *
dispatcher-overlay/etc/lava-dispatcher/lava-slave:NFS_SERVER_IP="10.100.0.60"
server-overlay/etc/lava-server/dispatcher.d/dispatcher.yaml:dispatcher_ip: 10.100.0.60
server-overlay/etc/dispatcher.d/dispatcher.yaml:dispatcher_ip: 10.100.0.60 # used for nfs and tftp
Feedback/questions/complaints welcome! Thanks for reading this far, and thanks for your help and ideas.
Dan
[1] https://hub.docker.com/r/danrue/tftpd-hpa [2] https://hub.docker.com/r/erichough/nfs-server
On Fri, Jan 25, 2019 at 04:59:59PM -0600, Dan Rue wrote:
More updates! Described below, referenced links go to source:
- beaglebone-black is now working for me [1]
- ser2net containerized [2]
- LAVA upgrade process is documented [3]
- Squid container added; nginx images hack removed [4]
The beaglebone-black branch represents what's now an actual working docker-compose environment for my bbb, using a recent u-boot (this turned out to be the hardest part - totally unrelated to docker). I ended up running NFS and TFTP on the host and mounting the paths into the dispatcher. I'd like to containerize those still, but NFS is a bit difficult in particular and I just wanted to see things work.
The beaglebone-black branch is back to using the dispatcher without rebuilding it. I did this by breaking ser2net into its own container that can be found at danrue/ser2net and used as follows:
version: '3.4'
services:
  ser2net:
    image: danrue/ser2net:3.5
    volumes:
      - ./ser2net/ser2net.conf:/etc/ser2net.conf
    devices:
      - /dev/serial/by-id/usb-Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_0001-if00-port0
The best part is running something like this to spy on the serial port during testing:
docker-compose exec dispatcher telnet ser2net 5001
The LAVA upgrade has been documented in the README, but it's simple enough I'll reproduce it here:
1. Stop containers.
2. Back up pgsql from its docker volume:
   sudo tar cvzf lava-server-pgdata-$(date +%Y%m%d).tgz /var/lib/docker/volumes/lava-server-pgdata
3. Change e.g. `lavasoftware/amd64-lava-server:2018.11` to `lavasoftware/amd64-lava-server:2019.01` and `lavasoftware/amd64-lava-dispatcher:2018.11` to `lavasoftware/amd64-lava-dispatcher:2019.01` in docker-compose.yml.
4. Change the FROM line if any containers are being rebuilt, such as ./server-docker/Dockerfile
5. Start containers.
Please note the implication there. Since the containers have no stateful data, we don't care about 'upgrading' their contents. We can just stop the old ones and start the new ones. A downgrade is simple: restore the DB and start the old containers. One issue that I ran into (where I use https) is the following error:
lava_server | ERROR 2019-01-25 21:11:08,479 exception Invalid HTTP_HOST header: 'lava.therub.org'. You may need to add 'lava.therub.org'
I added the following line to /etc/lava-server/settings.conf to fix:
"ALLOWED_HOSTS": ["lava.therub.org", "127.0.0.1"],
Other than that, no changes were necessary to use 2019.01 LAVA!
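For the downgrade path mentioned above, a minimal sketch of restoring the pgsql backup from step 2 (stop the containers first; the date in the filename is a placeholder for whatever the backup command produced):

  # the backup was created with an absolute path, so GNU tar stripped the leading '/';
  # extract relative to / to put the volume contents back in place
  sudo tar xvzf lava-server-pgdata-YYYYMMDD.tgz -C /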
Finally, I removed the hacky 'images' container that was hosting health-check images behind nginx, and replaced it with a general-purpose squid container, backed by a docker volume. Note that in a real production-type environment, both approaches might be necessary. I do like the idea of health-check images being statically hosted as they are in validation.l.o.
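A rough sketch of what such a squid service could look like (the image name and cache path are assumptions, not necessarily what the repository uses):

  squid:
    image: ubuntu/squid                # assumed image; substitute whichever squid image you prefer
    ports:
      - 3128:3128
    volumes:
      - squid-cache:/var/spool/squid   # cache persists in a named docker volume
    restart: always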
The sources can be found at https://github.com/danrue/lava-docker-compose/, and I also "tweeted" about some of what I've learned at https://twitter.com/mndrue/status/1088627889426350080.
Thanks again for reading this far! Dan
[1] https://github.com/danrue/lava-docker-compose/tree/beaglebone-black [2] https://github.com/danrue/lava-docker-compose/commit/af1f62b22dbfd60757ec8a7... [3] https://github.com/danrue/lava-docker-compose/commit/76d69783a3ea7ace6d85538... [4] https://github.com/danrue/lava-docker-compose/commit/bb37d9cb42d490b61c9c7ff...
On Fri, Jan 11, 2019 at 03:38:01PM -0600, Dan Rue wrote:
I've continued to work on using the official docker containers with docker-compose.
If you have docker and docker-compose installed, you can clone https://github.com/danrue/lava-docker-compose and run "make". You should end up with LAVA running at http://localhost, with a qemu worker and qemu device which will run a successful health-check automatically.
I tried to keep everything as simple and as upstream native as possible. As the containers improve, this example repository will simplify. For example, currently it builds a new lava-server container (using upstream lava-server as a base) to work around one bug that's fixed but not yet released, and to add the ability to do the initial provisioning.
The dispatcher container runs directly from the released container.
There are two additional containers in use. An official postgres container is used for the lava-server database; it has a nice interface and behaves as you would expect. Lastly, an nginx container is included to serve health-check images. The initial image files are retrieved with a Makefile rule.
In a separate branch (named beaglebone-black), I've started adding support for beaglebone-black. As expected, this required a lot more work in the dispatcher container, and it is still not quite working. It is also lab-specific.
My goal here is to try to develop a reference implementation of deploying the official LAVA docker containers to help people get started quickly and without adding any additional layers of complexity/abstraction (there are enough already!)
Dan
On Wed, Jan 02, 2019 at 04:51:25PM -0600, Dan Rue wrote:
To follow up on my original post which I guess didn't garner much attention, here's a working docker compose config which uses the latest production lava containers, and launches a single qemu worker.
https://github.com/danrue/lava.home.therub.org/tree/b17ed0f8a660f9a8c31f892d...
Running 'docker-compose up' then brings up a postgres container, a lava master container, and a dispatcher container. A few settings files are mounted into the master container, as well as devices and health-checks. The dispatcher itself must be run in privileged mode in order to use (at least) /dev/net/tun.
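A minimal sketch of the corresponding compose fragment (only the relevant key shown; the service name follows the rest of this thread):

  dispatcher:
    privileged: true   # lets the container use /dev/net/tun for qemu networking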
Once up, run something like the following to add an admin user: docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Then, you may go into the admin interface to add the devices.
The only real gap that I can see is that the docker containers have the architecture hard coded in the container name. This should use a manifest[1] instead so that running docker on ARM "just works". A lot of work has been done to make architecture transparent when running docker (for the benefit of ARM users like us) - we should use it.
Next step - I spoke with Matt Hart about the idea of having lava be more ephemeral. If you don't actually need your historical runs, you could have lava re-configure itself every time on start-up, based on the contents of devices/ and health-checks/. I'm not sure the best way to do that - perhaps as an additional setup script that's supported by entrypoint.sh, or perhaps as an outside thing using lavacli. Or perhaps it can be done with docker exec and lava-server manage...
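As a rough sketch of the docker exec + lava-server manage route (the manage sub-commands are the stock ones to the best of my knowledge, but the device-type, worker and device names here are purely illustrative):

  # register a device type, then a device attached to a given worker
  docker-compose exec server lava-server manage device-types add qemu
  docker-compose exec server lava-server manage devices add --device-type qemu --worker dispatcher.example.com qemu-01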
Thanks! Dan
[1] https://docs.docker.com/edge/engine/reference/commandline/manifest/
On Mon, Dec 10, 2018 at 04:58:23PM -0600, Dan Rue wrote:
I started playing with the official lava images today and wanted to share my work in progress, in case others are doing something similar or have feedback. My goal is to deploy a lava lab locally. My architecture is a single host (for now) that will host both the lava server and one dispatcher. Once it's all working, I'll start deploying a qemu worker followed by some actual boards (hopefully).
So far, I have the following docker-compose.yml:
version: '3'
services:
  database:
    image: postgres:9.6
    environment:
      POSTGRES_USER: lavaserver
      POSTGRES_PASSWORD: mysecretpassword
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ${PWD}/pgdata:/var/lib/postgresql/data/pgdata
  server:
    image: lavasoftware/amd64-lava-server:2018.11
    ports:
      - 80:80
    volumes:
      - ${PWD}/etc/lava-server/settings.conf:/etc/lava-server/settings.conf
      - ${PWD}/etc/lava-server/instance.conf:/etc/lava-server/instance.conf
    depends_on:
      - database
  dispatcher:
    image: lavasoftware/amd64-lava-dispatcher:2018.11
    environment:
      - "DISPATCHER_HOSTNAME=--hostname=dispatcher.lava.therub.org"
      - "LOGGER_URL=tcp://server:5555"
      - "MASTER_URL=tcp://server:5556"
With that file, settings.conf, and instance.conf in place, I run 'mkdir pgdata; docker-compose up' and the 3 containers come up and start talking to each other. The only thing exposed to the outside world is lava-server's port 80 at the host's IP, which gives the lava homepage as expected. The first time they come up, the database isn't up fast enough (it has to initialize the first time) and lava-server fails to connect. If you cancel and run again it will connect the second time.
A few things to note here. First, it doesn't seem like a persistent DB volume is possible with the existing lava-server container, because the DB is initialized at container build time rather than run-time, so there's not really a way to mount in a volume for the data. Anyway, postgres already solves this. In fact, I found their container documentation and entrypoint interface to be well done, so it may be a nice model to follow: https://hub.docker.com/_/postgres/
The server mostly works as listed above. I copied settings.conf and instance.conf out of the original container and into ./etc/lava-server/ and modified as needed.
The dispatcher then runs and points to the server.
It's notable that docker-compose by default sets up a docker network, allowing references to "database", "server", "dispatcher" to resolve within the containers.
Once up, I ran the following to create my superuser:
docker-compose exec server lava-server manage users add --staff --superuser --email dan.rue@linaro.org --passwd foo drue
Now, for things I've run into and surprises:
- When I used a local database, I could log in. With the database in a separate container, I can't. Not sure why yet.
- I have the dreaded CSRF problem, which is unlikely to be related to docker, but the two vars in settings.conf didn't seem to help. (I'm terminating https outside of the container context, and then proxying into the container over http)
- I was surprised there were no :latest containers published
- I was surprised that the architecture name was included in the container name. My understanding is that's the 'old' way to do it. The better way is to transparently detect arch using manifests. Again, see postgres/ as an example.
- my pgdata/ directory gets chown'd when I run postgres container. I see the container has some support for running under a different uid, which I might try.
- If the entrypoint of server supported some variables like LAVA_DB_PASSWORD, LAVA_DB_SERVER, SESSION_COOKIE_SECURE, etc, I wouldn't need to mount in things like instance.conf, settings.conf.
I pushed my config used here to https://github.com/danrue/lava.home.therub.org. Git clone and then run 'docker-compose up' should just work.
Anyway, thanks for the official images! They're a great start and will hopefully really simplify deploying lava. My next step is to debug some of the issues I mentioned above, and then start looking at dispatcher config (hopefully it's just a local volume mount).
Dan
-- Linaro - Kernel Validation
Hello,
The other trick with it is to expose the ports to the host, and then use the host's static IP, which has to be hard coded into a couple of places:
drue@lava1:~/lava-docker-compose$ grep -R 10.100.0.60 *
dispatcher-overlay/etc/lava-dispatcher/lava-slave:NFS_SERVER_IP="10.100.0.60"
server-overlay/etc/lava-server/dispatcher.d/dispatcher.yaml:dispatcher_ip: 10.100.0.60
server-overlay/etc/dispatcher.d/dispatcher.yaml:dispatcher_ip: 10.100.0.60 # used for nfs and tftp
Normally, you should only need to add the dispatcher_ip in dispatcher.yaml.
Thanks for the update on your progress.
Cheers