We have two agenda items [1]:
- A quick update on the progress of the virtio interface work
- A discussion about collaborating on a demo to prove the value of
  standardizing on the virtio interfaces
[1]
https://collaborate.linaro.org/display/STR/2021-04-01+Project+Stratos+Sync+…
On Mon, Mar 29, 2021 at 8:08 AM Mike Holmes via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> Hi All
>
> The Stratos call is this Thursday; do we have specific agenda topics
> for this week's call?
>
> Mike
>
> --
> Mike Holmes | Director, Foundation Technologies, Linaro
> Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
> "Work should be fun and collaborative; the rest follows."
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hello,
This is an initial implementation of a generic vhost-user backend for
the I2C bus, based on the (already merged) virtio specification for
the I2C bus.
The kernel virtio I2C driver is still under review, here is the latest
version (v10):
https://lore.kernel.org/lkml/226a8d5663b7bb6f5d06ede7701eedb18d1bafa1.16164…
The backend is implemented as a vhost-user device because we want to
experiment in making portable backends that can be used with multiple
hypervisors. We also want to support backends isolated in their own
separate service VMs with limited memory cross-sections with the
principal guest. This is part of a wider Linaro initiative called
"Project Stratos", for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
I mention this to explain the decision to write the daemon as a fairly
pure glib application that depends only on libvhost-user.
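To give a feel for the shape of such a daemon, here is a minimal,
hypothetical sketch (socket path and handler invented for this
illustration, error handling omitted; the real code is in
tools/vhost-user-i2c/main.c): a plain GLib main loop that listens on
the vhost-user socket, with a comment marking where libvhost-user
would take over.

  #include <glib.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  static gboolean accept_cb(GIOChannel *src, GIOCondition cond, gpointer data)
  {
      /* A connection from qemu: in the real daemon the accepted fd is
       * handed to libvhost-user, which drives the vhost-user protocol
       * and the I2C virtqueue from then on. */
      int conn_fd = accept(g_io_channel_unix_get_fd(src), NULL, NULL);

      (void)conn_fd;  /* ... hand off to libvhost-user here ... */
      return TRUE;    /* keep watching the listening socket */
  }

  int main(void)
  {
      struct sockaddr_un addr = { .sun_family = AF_UNIX };
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      GMainLoop *loop = g_main_loop_new(NULL, FALSE);

      /* Equivalent of --socket-path=vi2c.sock */
      strncpy(addr.sun_path, "vi2c.sock", sizeof(addr.sun_path) - 1);
      unlink(addr.sun_path);
      bind(fd, (struct sockaddr *)&addr, sizeof(addr));
      listen(fd, 1);

      g_io_add_watch(g_io_channel_unix_new(fd), G_IO_IN, accept_cb, NULL);
      g_main_loop_run(loop);
      return 0;
  }

The point of keeping the skeleton this small is that nothing above
assumes qemu: any VMM that speaks vhost-user over a unix socket can
connect to it.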
We are not sure where the vhost-user backend should be queued, in qemu
itself or in a separate repository. Similar questions were raised in
an earlier thread by Alex Bennée about his RPMB work:
https://lore.kernel.org/qemu-devel/20200925125147.26943-1-alex.bennee@linar…
Testing:
--------
I didn't have access to real hardware with an I2C client device (like
an RTC, EEPROM, etc.) attached, so to verify the working of the
backend daemon I tested it on my x86 box with a hierarchy of two
ARM64 guests.
The first ARM64 guest was passed the "-device ds1338,address=0x20"
option, so it could emulate a ds1338 RTC device connected to an I2C
bus. Once the guest came up, a ds1338 device instance was created
within the guest kernel by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[
Note that this may end up binding the ds1338 device to its driver,
which won't let our i2c daemon talk to the device. In that case we
need to manually unbind the device from its driver:
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
]
After this is done, you will get /dev/rtc1. This is the device we want
to emulate; it will be accessed by the vhost-user-i2c backend daemon
via the /dev/i2c-0 file present in the guest VM.
At this point we need to start the backend daemon and give it a socket
path over which qemu will talk to it (pass -v to get more detailed
messages):
vhost-user-i2c --socket-path=vi2c.sock --device-list 0:20
[ Here, 0:20 is the bus/device mapping: 0 for /dev/i2c-0 and 20 for
the client address of the ds1338 device we created earlier. ]
Now we need to start the second-level ARM64 guest (from within the
first guest) to get the i2c-virtio.c Linux driver up. The second-level
guest is passed the following options to connect to the same socket:
-chardev socket,path=vi2c.sock,id=vi2c \
-device vhost-user-i2c-pci,chardev=vi2c,id=i2c
Once the second-level guest boots, we will see the i2c-virtio bus at
/sys/bus/i2c/devices/i2c-X/. From there we can make it emulate the
ds1338 device again by doing:
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
[ This time we do want the ds1338 driver to bind to the device, so it
should be enabled in the kernel as well. ]
And we get the /dev/rtc1 device again, this time in the second-level
guest. Now we can play with the RTC device using the hwclock utility,
and we can see the following sequence of transfers happening when we
update the RTC's time from system time:
hwclock -w -f /dev/rtc1 (in guest2) ->
  reaches i2c-virtio.c (the Linux bus driver in guest2) ->
  transfer over virtio ->
  reaches qemu's vhost-user-i2c device emulation (running in guest1) ->
  reaches the backend daemon vhost-user-i2c started earlier (in guest1) ->
  ioctl(/dev/i2c-0, I2C_RDWR, ..); (in guest1) ->
  reaches qemu's hw/rtc/ds1338.c (running on the host)
I hope I was able to give a clear picture of my test setup here :)
Thanks.
Viresh Kumar (5):
hw/virtio: add boilerplate for vhost-user-i2c device
hw/virtio: add vhost-user-i2c-pci boilerplate
tools/vhost-user-i2c: Add backend driver
docs: add a man page for vhost-user-i2c
MAINTAINERS: Add entry for virtio-i2c
MAINTAINERS | 9 +
docs/tools/index.rst | 1 +
docs/tools/vhost-user-i2c.rst | 75 +++
hw/virtio/Kconfig | 5 +
hw/virtio/meson.build | 2 +
hw/virtio/vhost-user-i2c-pci.c | 79 +++
hw/virtio/vhost-user-i2c.c | 286 +++++++++
include/hw/virtio/vhost-user-i2c.h | 37 ++
include/standard-headers/linux/virtio_ids.h | 1 +
tools/meson.build | 8 +
tools/vhost-user-i2c/50-qemu-i2c.json.in | 5 +
tools/vhost-user-i2c/main.c | 652 ++++++++++++++++++++
tools/vhost-user-i2c/meson.build | 10 +
13 files changed, 1170 insertions(+)
create mode 100644 docs/tools/vhost-user-i2c.rst
create mode 100644 hw/virtio/vhost-user-i2c-pci.c
create mode 100644 hw/virtio/vhost-user-i2c.c
create mode 100644 include/hw/virtio/vhost-user-i2c.h
create mode 100644 tools/vhost-user-i2c/50-qemu-i2c.json.in
create mode 100644 tools/vhost-user-i2c/main.c
create mode 100644 tools/vhost-user-i2c/meson.build
--
2.25.0.rc1.19.g042ed3e048af
Hi All,
We've been discussing various ideas for Stratos in and around STR-7
(common virtio library). I'd originally de-emphasised the STR-7 work
because I wasn't sure if it was duplicate effort, given we already had
libvhost-user as well as interest in rust-vmm for portable backends in
user-space. However, we have seen from Wind River's hypervisor-less
virtio, NXP's Zephyr/Jailhouse work and the requirements for the SCMI
server that there is a use case for a small, liberally licensed C
library that is suitable for embedding in lightweight backends without
a full Linux stack behind them. These workloads would run in simple
command loops, RTOSes or unikernels.
Given the multiple interested parties, I'm hoping we have enough
people who can devote time to collaborating on the project to make the
following realistic over the next cycle, culminating in a demo in 6
months:
Components
==========
portable virtio backend library
-------------------------------
* source-based library (so you include it directly in your project)
* liberally licensed (Apache? to facilitate the above)
* tailored for non-POSIX, limited-resource setups
  - e.g. avoid malloc/free, provide abstraction hooks where needed
  - don't assume OS facilities (so embeddable in an RTOS or unikernel)
  - static virtio configuration supplied from outside the library
    (location of queues etc)
* hypervisor agnostic
  - provide a hook to signal when queues need processing
I suspect this should be a from-scratch implementation, but it's
certainly worth investigating the BSD implementation, as Wind River
have suggested.
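To make that concrete, here is a rough sketch of what the public
surface of such a library might look like. Every name here is invented
for illustration - it's a sketch of the constraints above
(caller-allocated state, externally supplied queue locations, OS
facilities behind hooks), not a proposed API:

  #include <stddef.h>
  #include <stdint.h>

  /* All state is caller-allocated; the library never calls malloc/free. */
  struct vq {
      void *desc;    /* descriptor table - location supplied from outside */
      void *avail;   /* available ring */
      void *used;    /* used ring */
      uint16_t num;  /* queue size, fixed by the static configuration */
  };

  /* Hooks supplied by the embedding environment (RTOS, unikernel, bare
   * command loop) so the core stays hypervisor and OS agnostic. */
  struct vdev_ops {
      void (*notify_peer)(void *opaque);  /* kick the front-end */
      void *(*map)(void *opaque, uint64_t addr, size_t len); /* guest -> local */
  };

  struct vdev {
      struct vq *queues;           /* static configuration from outside */
      unsigned int nqueues;
      const struct vdev_ops *ops;
      void *opaque;
  };

  /* Called from the embedder's event loop or interrupt handler when
   * the peer signals that queue 'idx' needs processing. */
  int vdev_process_queue(struct vdev *dev, unsigned int idx);

The embedder fills in struct vdev from whatever out-of-band channel
the deployment provides (DTB, Jailhouse cell config, Xen tools), and
the only OS-facing dependencies are the two function pointers.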
SCMI server
-----------
This is the work product of STR-4, embedded in an RTOS. I'm suggesting
Zephyr makes the most sense here given the previous work done by Peng
and Akashi-san, but I guess an argument could be made for a unikernel.
I would suggest that demonstrating portability to a unikernel be a
stretch goal for the demo.
The server would be *build*-time selectable to deploy either in a
Jailhouse partition or as a Xen DomU. Either way there will need to be
out-of-band communication of the location of the virtqueues and memory
maps - I assume via DTB.
I'm unfamiliar with the RTOS build process, but I guess this would be
a single git repository with the RTOS and glue code, plus git
sub-projects for the virtio and SCMI libraries?
Deployments
===========
To demonstrate portability we would have:
- Xen hypervisor
  - Dom0 with Linux and the Xen tools
  - DomU running Linux with a virtio-scmi front-end
  - DomU running the RTOS/SCMI server with a virtio-scmi back-end
The Dom0 in this case is just for the convenience of the demo, as we
don't currently have a fully working dom0-less setup. The key
communication is between the two DomU guests.
- Jailhouse
  - Linux kernel partition with a virtio-scmi front-end
  - RTOS/SCMI server partition with a virtio-scmi back-end
This is closer to Wind River's hypervisor-less virtio deployment, as
Jailhouse is not a "proper" hypervisor in this case, just a way of
partitioning up the resources. There will need to be some way for the
kernel and server partitions to signal each other when queues are
updated.
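Purely as an illustration of how little that signalling needs to be,
something like a pair of doorbell words in the shared region could do
(addresses and layout invented here; a real deployment would more
likely use Jailhouse's ivshmem doorbell interrupts than polling):

  #include <stdint.h>

  /* Invented addresses: one word per direction in a memory region
   * visible to both partitions. */
  #define TX_DOORBELL ((volatile uint32_t *)0x7fe00000u) /* we ring, peer polls */
  #define RX_DOORBELL ((volatile uint32_t *)0x7fe00004u) /* peer rings, we poll */

  static void kick_peer(void)
  {
      *TX_DOORBELL = 1;  /* tell the other partition its queues changed */
  }

  static int take_pending(void)
  {
      uint32_t v = *RX_DOORBELL;
      if (v) {
          *RX_DOORBELL = 0;  /* acknowledge before processing the queues */
      }
      return v != 0;
  }

Either way, a library like the sketch above only needs its notify_peer
hook pointed at kick_peer() and its queue-processing entry point
called whenever take_pending() returns true.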
Platform
========
We know we have Xen working on the SynQuacer and Jailhouse on the
i.MX. Should we target those as well as QEMU's -M virt for those who
wish to play without hardware?
Stretch Goals
=============
* Integrate Arnd's fat virtqueues
  Hopefully this will be ready early enough in the cycle that we can
  add it to the library and prototype the minimal memory cross-section.
* Port the server/library to another RTOS/unikernel
  This would demonstrate that the core code hasn't grown any
  assumptions about what it is running in.
* Run the server blob on another hypervisor
  Running in KVM is probably boring at this point. Maybe investigate
  running it under Hafnium? Or in an R-profile safety-island setup?
So what do people think? Thoughts? Comments? Volunteers?
--
Alex Bennée
Hi All
The Stratos call is this Thursday; do we have specific agenda topics
for this week's call?
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi All
The next Stratos call is in a week; do we have any additional agenda
items? [1]
Currently, we have one topic, from Srivatsa Vaddagiri:
- Qualcomm presentation on their interests in RustVMM
Mike
[1]
https://collaborate.linaro.org/display/STR/2021-03-18+Project+Stratos+Sync+…
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi,
Trying to get my ducks in a row for a merge of this before soft
freeze, so this is my pre-PR posting of the Xen guest-loader support.
Everything apart from the loader itself has been reviewed, and given
it's been tested in other patches and I'm going to maintain it, I
don't see a reason to hold it up from going in. However, if you would
like to review it, please do ;-)
The only real change is a tweak to the final patch where I've added a
stable archive URL for the Debian Xen packages.
Alex Bennée (7):
hw/board: promote fdt from ARM VirtMachineState to MachineState
hw/riscv: migrate fdt field to generic MachineState
device_tree: add qemu_fdt_setprop_string_array helper
hw/core: implement a guest-loader to support static hypervisor guests
docs: move generic-loader documentation into the main manual
docs: add some documentation for the guest-loader
tests/avocado: add boot_xen tests
docs/generic-loader.txt | 92 ---------
docs/system/generic-loader.rst | 117 +++++++++++
docs/system/guest-loader.rst | 54 +++++
docs/system/index.rst | 2 +
hw/core/guest-loader.h | 34 ++++
include/hw/arm/virt.h | 1 -
include/hw/boards.h | 1 +
include/hw/riscv/virt.h | 1 -
include/sysemu/device_tree.h | 17 ++
hw/arm/virt.c | 356 +++++++++++++++++----------------
hw/core/guest-loader.c | 145 ++++++++++++++
hw/riscv/virt.c | 20 +-
softmmu/device_tree.c | 26 +++
MAINTAINERS | 9 +-
hw/core/meson.build | 2 +
tests/acceptance/boot_xen.py | 118 +++++++++++
16 files changed, 719 insertions(+), 276 deletions(-)
delete mode 100644 docs/generic-loader.txt
create mode 100644 docs/system/generic-loader.rst
create mode 100644 docs/system/guest-loader.rst
create mode 100644 hw/core/guest-loader.h
create mode 100644 hw/core/guest-loader.c
create mode 100644 tests/acceptance/boot_xen.py
--
2.20.1
In practice the protocol negotiation between the vhost master and
slave occurs before the final feature negotiation between the backend
and frontend. This has led to an inconsistency between the rust-vmm
vhost implementation and the libvhost-user library in their approaches
to checking whether all the requirements for REPLY_ACK processing were
met.
As this is purely a function of the protocol negotiation and not of
interest to the frontend, let's make the language clearer about the
requirements for a successfully negotiated protocol feature.
Signed-off-by: Alex Bennée <alex.bennee(a)linaro.org>
Cc: Jiang Liu <gerry(a)linux.alibaba.com>
---
docs/interop/vhost-user.rst | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index d6085f7045..3ac221a8c7 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -301,12 +301,22 @@ If *slave* detects some error such as incompatible features, it may also
close the connection. This should only happen in exceptional circumstances.
Any protocol extensions are gated by protocol feature bits, which
-allows full backwards compatibility on both master and slave. As
-older slaves don't support negotiating protocol features, a feature
+allows full backwards compatibility on both master and slave. As older
+slaves don't support negotiating protocol features, a device feature
bit was dedicated for this purpose::
#define VHOST_USER_F_PROTOCOL_FEATURES 30
+However, as the protocol negotiation is something that occurs only
+between parts of the backend implementation, it is permissible for
+the master to mask the feature bit from the guest. As noted for the
+``VHOST_USER_GET_PROTOCOL_FEATURES`` and
+``VHOST_USER_SET_PROTOCOL_FEATURES`` messages, this occurs before a
+final ``VHOST_USER_SET_FEATURES`` comes from the guest. So the
+enabling of protocol features need only require the slave advertising
+the feature bit and a successful get/set protocol features
+sequence.
+
Starting and stopping rings
---------------------------
--
2.20.1