Hello,
This was earlier sent as part of a patch series [1] adding support for GPIO/I2C
virtio devices. The device-specific patches will require some rework and
possibly several more versions, so this series separates out the generic,
device-independent patches into a series of their own.
This series makes some of the generic code independent of the disk device, since
it can be used for other device types later on.
Rebased over staging branch from today.
V5->V6:
- Separated into a patch series of their own.
- Updated commit log of 1st patch to cover all changes.
- Renamed make_virtio_mmio_node_simple() to make_virtio_mmio_node().
- New patch 3/3, with code separated out from a device-specific patch.
--
Viresh
Viresh Kumar (3):
libxl: arm: Create alloc_virtio_mmio_params()
libxl: arm: Split make_virtio_mmio_node()
libxl: arm: make creation of iommu node independent of disk device
tools/libs/light/libxl_arm.c | 83 +++++++++++++++++++++++++-----------
1 file changed, 57 insertions(+), 26 deletions(-)
--
2.31.1.272.g89b43f80a514
[1] https://lore.kernel.org/all/cover.1661159474.git.viresh.kumar@linaro.org/
On Thu, Aug 25, 2022 at 3:44 PM Harald Mommer
<harald.mommer(a)opensynergy.com> wrote:
>
> - CAN Control
>
> - "ip link set up can0" starts the virtual CAN controller,
> - "ip link set up can0" stops the virtual CAN controller
>
> - CAN RX
>
> Receive CAN frames. CAN frames can be standard or extended, classic or
> CAN FD. Classic CAN RTR frames are supported.
>
> - CAN TX
>
> Send CAN frames. CAN frames can be standard or extended, classic or
> CAN FD. Classic CAN RTR frames are supported.
>
> - CAN Event indication (BusOff)
>
> The bus off handling is considered code complete but is so far largely
> untested.
>
> Signed-off-by: Harald Mommer <hmo(a)opensynergy.com>
This looks nice overall, but as you say there is still some work needed
in all the details. I've done a rough first pass at reviewing it, but I
have no specific understanding of CAN, so these are mostly generic
comments about coding style or network drivers.
> drivers/net/can/Kconfig | 1 +
> drivers/net/can/Makefile | 1 +
> drivers/net/can/virtio_can/Kconfig | 12 +
> drivers/net/can/virtio_can/Makefile | 5 +
> drivers/net/can/virtio_can/virtio_can.c | 1176 +++++++++++++++++++++++
> include/uapi/linux/virtio_can.h | 69 ++
Since the driver is just one file, you probably don't need the subdirectory.
> +struct virtio_can_tx {
> + struct list_head list;
> + int prio; /* Currently always 0 "normal priority" */
> + int putidx;
> + struct virtio_can_tx_out tx_out;
> + struct virtio_can_tx_in tx_in;
> +};
Having a linked list of these appears to add a little extra complexity.
If they are always processed in sequence, using an array would be
much simpler, as you just need to remember the index.
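Something like this (an untested sketch, name hypothetical):

        /* one pre-allocated slot per putidx instead of a list */
        struct virtio_can_tx tx_msgs[VIRTIO_CAN_ECHO_SKB_MAX];

so the completion path can address the entry directly by its index
rather than walking and unlinking list entries.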
> +#ifdef DEBUG
> +static void __attribute__((unused))
> +virtio_can_hexdump(const void *data, size_t length, size_t base)
> +{
> +#define VIRTIO_CAN_MAX_BYTES_PER_LINE 16u
This seems to duplicate print_hex_dump(), maybe just use that?
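Something like (untested)

        print_hex_dump(KERN_DEBUG, "virtio_can: ", DUMP_PREFIX_OFFSET,
                       16, 1, data, length, false);

should give equivalent output without the local helper.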
> +
> + while (!virtqueue_get_buf(vq, &len) && !virtqueue_is_broken(vq))
> + cpu_relax();
> +
> + mutex_unlock(&priv->ctrl_lock);
A busy loop is probably not what you want here. Maybe just
wait_for_completion() until the callback happens?
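Roughly (an untested sketch; ctrl_done would be a new struct completion
in your private data):

        reinit_completion(&priv->ctrl_done);
        /* ...kick the control queue as before... */
        wait_for_completion(&priv->ctrl_done);
        mutex_unlock(&priv->ctrl_lock);

with complete(&priv->ctrl_done) called from the control virtqueue
callback once virtqueue_get_buf() has returned the buffer.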
> + /* Push loopback echo. Will be looped back on TX interrupt/TX NAPI */
> + can_put_echo_skb(skb, dev, can_tx_msg->putidx, 0);
> +
> + err = virtqueue_add_sgs(vq, sgs, 1u, 1u, can_tx_msg, GFP_ATOMIC);
> + if (err != 0) {
> + list_del(&can_tx_msg->list);
> + virtio_can_free_tx_idx(priv, can_tx_msg->prio,
> + can_tx_msg->putidx);
> + netif_stop_queue(dev);
> + spin_unlock_irqrestore(&priv->tx_lock, flags);
> + kfree(can_tx_msg);
> + if (err == -ENOSPC)
> + netdev_info(dev, "TX: Stop queue, no space left\n");
> + else
> + netdev_warn(dev, "TX: Stop queue, reason = %d\n", err);
> + return NETDEV_TX_BUSY;
> + }
> +
> + if (!virtqueue_kick(vq))
> + netdev_err(dev, "%s(): Kick failed\n", __func__);
> +
> + spin_unlock_irqrestore(&priv->tx_lock, flags);
There should not be a need for a spinlock or disabling interrupts
in the xmit function. What exactly are you protecting against here?
As a further optimization, you may want to use the xmit_more()
function, as the virtqueue kick is fairly expensive and can be
batched here.
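Roughly (untested):

        if (!netdev_xmit_more() || netif_queue_stopped(dev)) {
                if (!virtqueue_kick(vq))
                        netdev_err(dev, "%s(): Kick failed\n", __func__);
        }

i.e. only kick once the stack indicates no further frames are pending,
similar to what virtio_net does.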
> + kfree(can_tx_msg);
> +
> + /* Flow control */
> + if (netif_queue_stopped(dev)) {
> + netdev_info(dev, "TX ACK: Wake up stopped queue\n");
> + netif_wake_queue(dev);
> + }
You may want to add netdev_sent_queue()/netdev_completed_queue()
based BQL flow control here as well, so you don't have to rely on the
queue filling up completely.
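That would be roughly (untested) a

        netdev_sent_queue(dev, skb->len);

in the xmit path, with a matching

        netdev_completed_queue(dev, pkts, bytes);

once the TX interrupt/NAPI poll has reclaimed the corresponding buffers.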
> +static int virtio_can_probe(struct virtio_device *vdev)
> +{
> + struct net_device *dev;
> + struct virtio_can_priv *priv;
> + int err;
> + unsigned int echo_skb_max;
> + unsigned int idx;
> + u16 lo_tx = VIRTIO_CAN_ECHO_SKB_MAX;
> +
> + BUG_ON(!vdev);
Not a useful debug check; just remove the BUG_ON(!vdev), here and elsewhere.
> +
> + echo_skb_max = lo_tx;
> + dev = alloc_candev(sizeof(struct virtio_can_priv), echo_skb_max);
> + if (!dev)
> + return -ENOMEM;
> +
> + priv = netdev_priv(dev);
> +
> + dev_info(&vdev->dev, "echo_skb_max = %u\n", priv->can.echo_skb_max);
Also remove the prints; I assume these are left over from initial
debugging.
> + priv->can.do_set_mode = virtio_can_set_mode;
> + priv->can.state = CAN_STATE_STOPPED;
> + /* Set Virtio CAN supported operations */
> + priv->can.ctrlmode_supported = CAN_CTRLMODE_BERR_REPORTING;
> + if (virtio_has_feature(vdev, VIRTIO_CAN_F_CAN_FD)) {
> + dev_info(&vdev->dev, "CAN FD is supported\n");
> + } else {
> + dev_info(&vdev->dev, "CAN FD not supported\n");
> + }
Same here. There should be a way for an interactive user to query CAN FD
support, but there is no point in printing it to the console.
> +
> + register_virtio_can_dev(dev);
> +
> + /* Initialize virtqueues */
> + err = virtio_can_find_vqs(priv);
> + if (err != 0)
> + goto on_failure;
Should the register_virtio_can_dev() be done here? I would expect this to be
the last thing after setting up the queues.
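i.e. something like (untested, and presumably also checking the return
value):

        err = virtio_can_find_vqs(priv);
        if (err != 0)
                goto on_failure;
        ...
        err = register_virtio_can_dev(dev);
        if (err != 0)
                goto on_failure;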
> +static struct virtio_driver virtio_can_driver = {
> + .feature_table = features,
> + .feature_table_size = ARRAY_SIZE(features),
> + .feature_table_legacy = NULL,
> + .feature_table_size_legacy = 0u,
> + .driver.name = KBUILD_MODNAME,
> + .driver.owner = THIS_MODULE,
> + .id_table = virtio_can_id_table,
> + .validate = virtio_can_validate,
> + .probe = virtio_can_probe,
> + .remove = virtio_can_remove,
> + .config_changed = NULL,
> +#ifdef CONFIG_PM_SLEEP
> + .freeze = virtio_can_freeze,
> + .restore = virtio_can_restore,
> +#endif
You can remove the #ifdef here and above, and replace that with the
pm_sleep_ptr() macro in the assignment.
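i.e. (untested):

        .freeze = pm_sleep_ptr(virtio_can_freeze),
        .restore = pm_sleep_ptr(virtio_can_restore),

The #ifdef around the callback definitions can then go as well, since
the compiler still sees the references and discards the dead code.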
> diff --git a/include/uapi/linux/virtio_can.h b/include/uapi/linux/virtio_can.h
> new file mode 100644
> index 000000000000..0ca75c7a98ee
> --- /dev/null
> +++ b/include/uapi/linux/virtio_can.h
> @@ -0,0 +1,69 @@
> +/* SPDX-License-Identifier: BSD-3-Clause */
> +/*
> + * Copyright (C) 2021 OpenSynergy GmbH
> + */
> +#ifndef _LINUX_VIRTIO_VIRTIO_CAN_H
> +#define _LINUX_VIRTIO_VIRTIO_CAN_H
> +
> +#include <linux/types.h>
> +#include <linux/virtio_types.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/virtio_config.h>
Maybe a link to the specification here? I assume the definitions in this file
are all lifted from that document, rather than specific to the driver, right?
Arnd
Hi All,
There have been discussions about virtio-camera before and more recently
I've heard the term virtio-sensor used. I think using "sensor" alludes to
the fact that there is a whole class of devices that provide some sort
of 2d plane view of the world (cameras, fingerprint readers, LIDAR?)
that would benefit in being consumed by a workload in a standard
non-bespoke way.
Why not virtio-video?
=====================
There is already a specification and various implementations of
virtio-video in various states of up-streaming. It is tempting to think
of a camera as a simplified subset of processing video streams. However,
while virtio-video allows the consumption and display of various video
formats, it offers no direct control of the source itself.
Complex control plane
=====================
Modern cameras are more than a simple CCD recording photons. Aside from
controlling things like f-stop/exposure/position there are also more
complex computational photography aspects. The camera SoC might be
capable of doing edge or object detection or even facial and feature
recognition. Cameras are no longer simple webcams and have long since
gone past the relatively simple API that V4L presents (c.f. libcamera).
Cloud native
============
One of the drivers for these virtio devices is the concept of cloud
native development. That is, developing your workload in the cloud and
feeding it data through standardised VirtIO interfaces. Once you are
happy with its behaviour you can take that workload and run the same
binaries in your edge device but this time with data being provided by a
real sensor which is exposed via the same VirtIO interface.
Competing Requirements?
=======================
I've heard about use cases across a wide range of deployment scenarios
including:
Virtualised Mobile Devices
Here the backend containing the vendor's secret sauce exists in its own
isolated VM with access to the real camera HW and exports virtio-camera
to a standardised main OS.
Desktop Virtualisation
Here the aim is to expose host camera devices (such as webcams) to
a guest system, which would be an otherwise sandboxed VM that needs access
to the system camera for a particular task.
Automotive
Cars are rapidly gaining cameras, both as driver aids and for more
advanced use cases such as self-driving. The cloud native case is
particularly strong here as a lot of validation and iteration will be
taking place in the relatively limitless resources of the cloud before
being run in the carefully controlled and isolated safety critical
environs of the car itself.
Do these use-cases have competing demands? Can a solution be found that
satisfies all of them?
So for the next Stratos sync-up call I'd like to discuss virtio-camera
and if there is enough interest to specify a work package to define and
upstream the device. I'm casting a wide net for people who are
interested in the topic so we can talk through the issues and see if we
can arrive at consensus for a minimal viable product.
To help with that I would welcome people submitting ahead of time any:
- use-cases and user stories
- concrete requirements
and also any:
- previous work and RFCs
Please forward this email to anyone else who you think might find this
discussion relevant.
The next Stratos meeting will be on the 14th October, 15:00 UTC / 16:00
BST @ https://meet.google.com/rct-xgqm-woi
The meeting notes will be:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28771778789/2022-10-14+P…
And as ever the project jumping off page is at:
https://linaro.atlassian.net/wiki/spaces/STR/overview
Thanks,
--
Alex Bennée
Hi Everyone,
As we finish our summer holidays (at least in the northern hemisphere)
and people's availability returns to normal it is time to consider what
next steps we should be taking for project Stratos.
As a brief reminder, over the last year we have been involved in:
- up-streaming VirtIO specifications
- reviewing and implementing kernel drivers
- writing a number of vhost-user daemons
- enabling more QEMU VirtIO stubs
- developing the rust based Xen Vhost Master
While there is plenty of ongoing maintainer work to do in the various
projects we need to consider what projects to look at next. As a
reminder in our original project goals we listed 4 areas of
interest:
- High-performance Virtio interfaces
- Virtual Machine Monitors with a safety island
- Boot Orchestration
- Written Standards for the hypercalls
So far we have been mostly focused on VirtIO itself. Is there still
interest in pursuing the other parts? The safety island work could for
example involve extending our rust-vmm work to write a VMM monitor for
restarting VMs on a statically configured system. Boot Orchestration may
be something to defer or feed into other projects such as SOAFEE
which has plans for both system orchestration and cloud native support
for testing these workloads.
I think we should at least review the project description at:
https://linaro.atlassian.net/wiki/spaces/STR/overview
and ensure it is updated to reflect the current goals and aspirations of
the project.
As to more concrete potential work areas:
- Expand rust-vmm to support Xen in vmm-reference?
- Test rust-vmm and its devices with pKVM?
- Investigate bare metal VirtIO backends with Rust?
- Push forward up-streaming more VirtIO devices?
- where does inter-VM comms fit in here?
And of course as this is an open collaborative project we encourage
people to join in the effort. This is not meant to be merely a way of
watching Linaro engineers work ;-)
As finding time slots that the majority of people can attend is a pain I
propose the following poll to select our new time; those who aren't
interested in attending need not vote:
https://doodle.com/meeting/participate/id/aKZB8Qne
I've also decided to close our regular Stratos Rust sync in favour of
attending the upstream rust-vmm call which we participate in anyway.
Looking forward to next week,
--
Alex Bennée
Hi all,
I came across this work [1] from Oleksandr today while shuffling
through patches on LKML. I haven't looked at the details but from the
cover letter is seems to provide the same kind of functionality as
P-KVM. I will monitor the progress of this patchset.
Regards,
Mathieu
[1]. https://www.spinics.net/lists/arm-kernel/msg970906.html
Hi Mathieu, Viresh,
Talking to the pKVM guys about accessing guest private memory I was
pointed toward this lkml thread:
Subject: [PATCH v7 00/14] KVM: mm: fd-based approach for supporting KVM guest private memory
Date: Wed, 6 Jul 2022 16:20:02 +0800
Message-Id: <20220706082016.2603916-1-chao.p.peng(a)linux.intel.com>
Its focus is currently very much KVM and Intel's TDX private guest
support, but Will thinks it could be something that evolves into a
standardised interface for accessing guest private memory. It's being
actively iterated, so now would be the time to comment if there is
something we could use.
I've only skimmed over it so far and I'll try and talk to some of the
other mm/kvm hackers at KVM Forum to get a better understanding. The
first question I'm trying to answer is whether this could be used for
kernel pages as well, or whether we are going to run into the same
issues as with the privcmd mmap approach?
Any thoughts?
--
Alex Bennée
Hello,
This patchset adds toolstack support for I2C and GPIO virtio devices. This is
inspired by the work done by Oleksandr for the disk device.
This is developed as part of Linaro's Project Stratos, where we are working
towards a hypervisor-agnostic Rust-based backend [1].
This is based on origin/staging (commit f6cd15188e09 ("amd/msr: implement
VIRT_SPEC_CTRL for HVM guests using legacy SSBD")), which already has Oleksandr's
patches applied.
V4->V5:
- Fixed indentation in a few places.
- Removed/added blank lines.
- Added a few comments.
- Added review tags from Oleksandr.
- Rebased over latest staging branch.
V3->V4:
- Update virtio_enabled independently of all devices, so we don't miss setting
it to true.
- Add iommu handling for i2c/gpio and move it as part of
make_virtio_mmio_node_common(), which gets backend_domid parameter as a
result.
V2->V3:
- Rebased over latest tree and made changes according to changes in Oleksandr's
patches from sometime back.
- Minor cleanups.
V1->V2:
- Patches 3/6 and 4/6 are new.
- Patches 5/6 and 6/6 updated based on the above two patches.
- Added link to the bindings for I2C and GPIO.
- Rebased over latest master branch.
Thanks.
--
Viresh
[1] https://lore.kernel.org/xen-devel/20220414092358.kepxbmnrtycz7mhe@vireshk-i…
Viresh Kumar (6):
libxl: Add support for Virtio I2C device
libxl: Add support for Virtio GPIO device
libxl: arm: Create alloc_virtio_mmio_params()
libxl: arm: Split make_virtio_mmio_node()
libxl: Allocate MMIO params for I2c device and update DT
libxl: Allocate MMIO params for GPIO device and update DT
tools/golang/xenlight/helpers.gen.go | 212 ++++++++++++++++++++
tools/golang/xenlight/types.gen.go | 54 ++++++
tools/include/libxl.h | 64 ++++++
tools/include/libxl_utils.h | 6 +
tools/libs/light/Makefile | 2 +
tools/libs/light/libxl_arm.c | 175 ++++++++++++++---
tools/libs/light/libxl_create.c | 26 +++
tools/libs/light/libxl_dm.c | 34 +++-
tools/libs/light/libxl_gpio.c | 226 ++++++++++++++++++++++
tools/libs/light/libxl_i2c.c | 226 ++++++++++++++++++++++
tools/libs/light/libxl_internal.h | 2 +
tools/libs/light/libxl_types.idl | 48 +++++
tools/libs/light/libxl_types_internal.idl | 2 +
tools/ocaml/libs/xl/genwrap.py | 2 +
tools/ocaml/libs/xl/xenlight_stubs.c | 2 +
tools/xl/Makefile | 2 +-
tools/xl/xl.h | 6 +
tools/xl/xl_cmdtable.c | 30 +++
tools/xl/xl_gpio.c | 142 ++++++++++++++
tools/xl/xl_i2c.c | 142 ++++++++++++++
tools/xl/xl_parse.c | 160 +++++++++++++++
tools/xl/xl_parse.h | 2 +
tools/xl/xl_sxp.c | 4 +
23 files changed, 1539 insertions(+), 30 deletions(-)
create mode 100644 tools/libs/light/libxl_gpio.c
create mode 100644 tools/libs/light/libxl_i2c.c
create mode 100644 tools/xl/xl_gpio.c
create mode 100644 tools/xl/xl_i2c.c
--
2.31.1.272.g89b43f80a514
Hi,
This email is driven by a brain storming session at a recent sprint
where we considered what VirtIO devices we should look at implementing
next. I ended up going through all the assigned device IDs hunting for
missing spec discussion and existing drivers so I'd welcome feedback
from anybody actively using them - especially as my suppositions about
device types I'm not familiar with may be way off!
Work so far
===========
The devices we've tackled so far have been relatively simple ones and
more focused on the embedded workloads. Both the i2c and gpio virtio
devices allow for a fairly simple backend which can multiplex multiple
client VM requests onto a set of real HW presented via the host OS.
We have also done some work on a vhost-user backend for virtio-video and
have a working PoC although it is a couple of iterations behind the
latest submission to the virtio spec. Continuing work on this is
currently paused while Peter works on libcamera related things (although
more on that later).
Upstream first
==============
We've been pretty clear about the need to do things in an upstream
compatible way which means devices should be:
- properly specified in the OASIS spec
- have at least one driver up-streamed (probably in Linux)
- have a working public backend
For Stratos I think we are pretty happy to implement all new backends in
Rust under the auspices of the rust-vmm project and the vhost-device
repository.
We obviously also need a reasonable use case for why abstracting a HW
type is useful. For example i2c was envisioned as useful on mobile
devices where a lot of disparate auxiliary HW is often hanging off an i2c
bus.
Current reserved IDs
====================
Looking at the spec there are currently 42 listed device types in the
reserved ID table. While there are quite a lot that have Linux driver
implementations, a number are nothing more than reserved numbers:
ioMemory / 6
------------
No idea what this was meant to be.
rpmsg / 7
---------
Not formalised in the specification but there is a driver in the Linux
kernel. AFAIUI it's a fairly simple wrapper around the existing
rpmsg bus. I think this has also been used for OpenAMP's hypervisor-less
VirtIO experiments to communicate between processor domains.
mac80211 wlan / 10
mac80211 hwsim wireless simulation device / 29
----------------------------------------------
When the discussion about a virtio-wifi comes up there is inevitably a
debate about what the use case is. There are usually two potential use
cases:
- simulation environment
Here the desire is to have something that looks like a real WiFi
device in simulation so the rest of the stack (up from the driver)
can be the same as when running on real HW.
- abstraction environment
Devices with WiFi are different from fixed networking as they need
to deal with portability events like changing networks and reporting
connection status and quality. If the guest VM is responsible for
the UI it needs to gather this information and generally wants its
userspace components to use the same kernel APIs to get it as it
would with real HW.
Neither of these have up-streamed the specification to OASIS but there
is an implementation of the mac80211_hwsim in the Linux kernel. I found
evidence of a plain 80211 virtio_wifi.c existing in the Android kernel
trees. So far I've been unable to find backends for these devices but I
assume they must exist if the drivers do!
Debates about what sort of features and control channels need to be
supported often run into questions about why existing specifications
can't be expanded (for example expand virtio-net with a control channel
to report additional wifi related metadata) or use pass through sockets
for talking to the host netlink channel.
rproc serial / 11
-----------------
Again this isn't documented in the standard. I'm not sure if this is
related to rpmsg but there is an implementation as part of the kernel
virtio_console code.
virtio CAIF / 12
----------------
Not documented in the specification although there is a driver in the
kernel as part of the orphaned CAIF networking subsystem. From the
kernel documentation this was a sub-system for talking to modem parts.
memory balloon / 13
-------------------
This seems like an abandoned attempt at a next generation version of the
memory ballooning interface.
Timer/Clock device / 17
-----------------------
This looks like a simple reservation with no proposed implementation.
I don't know if there is a case for this on most modern architectures
which usually have virtualised architected timers anyway.
Access to RTC information may be something that is mediated by
firmware/system control buses. For emulation there are a fair number of
industry standard RTC chips modelled and RTC access tends not to be
performance critical.
Signal Distribution Module / 21
-------------------------------
This appears to be an intra-domain communication channel for which an RFC
was posted:
https://lists.oasis-open.org/archives/virtio-dev/201606/msg00030.html
It came with references to kernel and QEMU implementations. I don't know
if this approach has been obviated by other communication channels like
vsock or SCMI.
pstore device / 22
------------------
This appears to be a persistent storage device that was intended to
allow guests to dump information like crash dumps. There was a proposed
kernel driver:
https://lwn.net/Articles/698744/
and a proposed QEMU backend:
https://lore.kernel.org/all/1469632111-23260-1-git-send-email-namhyung@kern…
which were never merged. As far as I can tell there was no proposal for the virtio spec itself.
Video encoder device / 30
Video decoder device / 31
-------------------------
This is an ongoing development which has iterated several versions of
the spec and the kernel side driver.
NitroSecureModule / 33
----------------------
This is a stripped down Trusted Platform Module (TPM) intended to expose
TPM functionality such as cryptographic functions and attestation to
guests. This looks like it is closely tied with AWS's Nitro Enclaves.
I haven't been able to find any public definition of the spec or
implementation details. How would this interact with other TPM
functionality solutions?
Watchdog / 35
-------------
Discussion about this is usually conflated with reset functionality as
the two are intimately related.
An early interest in this was providing well specified reset
functionality for firmware running on the -M virt machine model in QEMU. The
need has been reduced somewhat with the provision of the sbsa-ref model
which does have a defined reset pin.
Other questions that would need to be answered include how the
functionality would interact with the hypervisor given a vCPU could
easily not be scheduled by it and therefore miss its kick window.
Currently there have been no proposals for the spec or implementations.
CAN / 36
--------
This is a device of interest to the Automotive industry as it looks to
consolidate numerous ECUs into VM based workloads. There was a proposed
RFC last year:
https://markmail.org/message/hdxj35fsthypllkt?q=virtio-can+list:org%2Eoasis…
and it is presumed there are frontend and backend drivers in vendor
trees. At the last AGL virtualization expert meeting the Open Synergy
guys said they hoped to post new versions of the spec and kernel driver
soon:
https://confluence.automotivelinux.org/pages/viewpage.action?spaceKey=VE&ti…
During our discussion it became clear that while the message bus itself
was fairly simple, real HW often has a vendor-specific control plane to
enable specific features. Being able to present this flexibility via the
virtio interface without baking in a direct mapping of the HW would be
the challenge.
Parameter Server / 38
---------------------
This is a proposal for a key-value parameter store over virtio. The
exact use case is unclear but I suspect for Arm at least there is
overlap with what is already supported by DT and UEFI variables.
The proposal only seems to have been partially archived on the lists:
https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg07201.html
It may be Android related?
Audio policy device / 39
------------------------
Again I think this stems from the Android world and provides a policy
and control device to work in concert with the virtio-sound device. The
initial proposal to the list is here:
https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg07255.html
The idea seems to be to have a control layer for dealing with routing
and priority of multiple audio streams.
Bluetooth device / 40
---------------------
Bluetooth suffers from similar complexity problems as 802.11 WiFi.
However the virtio_bt driver in the kernel concentrates on providing a
pipe for a standardised Host Control Interface (HCI) albeit with support
for a selection of vendor specific commands.
I could not find any submission of the specification for standardisation.
Specified but missing backends?
===============================
GPU device / 16
---------------
This is now a fairly mature part of the spec and has implementations in
the kernel, QEMU and a vhost-user backend. However as is commensurate
with the complexity of GPUs there is ongoing development moving from the
VirGL OpenGL encapsulation to a thing called GFXSTREAM which is meant to
make some things easier.
A potential area of interest here is working out what the differences
are in use cases between virtio-gpu and virtio-wayland. virtio-wayland
is currently a ChromeOS only invention so hasn't seen any upstreaming or
specification work but may make more sense where multiple VMs are
drawing only elements of a final display which is composited by a master
program. For further reading see Alyssa's write-up:
https://alyssa.is/using-virtio-wl/
I'm not sure how widely used the existing vhost-user backend is for
virtio-gpu but it could present an opportunity for a more beefy rust-vmm
backend implementation?
Audio device / 25
-----------------
This has a specification and a working kernel driver. However there
isn't a working backend for QEMU although one has been proposed:
Subject: [RFC PATCH 00/27] Virtio sound card implementation
Date: Thu, 29 Apr 2021 17:34:18 +0530
Message-Id: <20210429120445.694420-1-chouhan.shreyansh2702(a)gmail.com>
this could be a candidate for a rust-vmm version?
Other suggestions
=================
When we started Project Stratos there was a survey amongst members on
where there was interest.
virtio-spi/virtio-greybus
-------------------------
Yet another serial bus. We chose to do i2c but doing another similar bus
wouldn't be pushing the state of the art. We could certainly
mentor/guide someone else who wants to get involved in rust-vmm though.
virtio-tuner/virtio-radio
-------------------------
These were early automotive requests. I don't know where these would sit
in relation to the existing virtio-sound and audio policy devices.
virtio-camera
-------------
We have a prototype of virtio-video but as the libcamera project shows
interfacing with modern cameras is quite a complex task these days.
Modern cameras have all sorts of features powered by complex IP blocks
including various amounts of AI. Perhaps it makes more sense to leave
this to see how the libcamera project progresses before seeing what
common features could be exposed.
Conclusion
==========
Considering the progress we've made so far and our growing confidence
with rust-vmm I think the next device we implement a backend for should
be a more complex device. Discussing this with Viresh and Mathieu
earlier today we thought it would be nice if the device was more demo
friendly, as CLIs don't often excite.
My initial thought is that a rust-vmm backend for virtio-gpu would fit
the bill because:
- already up-streamed in specification and kernel
- known working implementations in QEMU and C based vhost-user daemon
- ongoing development would be a good test of Rust's flexibility
I think virtio-can would also be a useful target for the automotive use
case. Given there will be a new release of the spec soon we should
certainly keep an eye on it.
Anyway I welcome peoples thoughts.
--
Alex Bennée
See also:
Remaining Xen enabling work for rust-vmm - 87pmk472ii.fsf(a)linaro.org
vhost-device outstanding tasks - 87zgj87alq.fsf(a)linaro.org
I am pretty sure the reasons have to do with old x86 PV guests, so I am
CCing Juergen and Boris.
> Hi,
>
> While we've been working on the rust-vmm virtio backends on Xen we
> obviously have to map guest memory info the userspace of the daemon.
> However following the logic of what is going on is a little confusing.
> For example in the Linux backend we have this:
>
> void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
> uint32_t dom, void *addr,
> int prot, int flags, size_t num,
> const xen_pfn_t arr[/*num*/], int err[/*num*/])
> {
> int fd = fmem->fd;
> privcmd_mmapbatch_v2_t ioctlx;
> size_t i;
> int rc;
>
> addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED,
> fd, 0);
> if ( addr == MAP_FAILED )
> return NULL;
>
> ioctlx.num = num;
> ioctlx.dom = dom;
> ioctlx.addr = (unsigned long)addr;
> ioctlx.arr = arr;
> ioctlx.err = err;
>
> rc = ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &ioctlx);
>
> Where the fd passed down is associated with the /dev/xen/privcmd device
> for issuing hypercalls on userspace's behalf. What is confusing is why
> the function does its own mmap - one would assume the passed addr would
> be associated with an anonymous or file backed mmap region already that
> the calling code has set up. Applying a mmap to a special device seems a
> little odd.
>
> Looking at the implementation on the kernel side it seems the mmap
> handler only sets a few flags:
>
> static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
> {
> /* DONTCOPY is essential for Xen because copy_page_range doesn't know
> * how to recreate these mappings */
> vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTCOPY |
> VM_DONTEXPAND | VM_DONTDUMP;
> vma->vm_ops = &privcmd_vm_ops;
> vma->vm_private_data = NULL;
>
> return 0;
> }
>
> So can I confirm that the mmap of /dev/xen/privcmd is being called for
> side effects? Is it so when the actual ioctl is called the correct flags
> are set on the pages associated with the user space virtual address
> range?
>
> Can I confirm there shouldn't be any limitation on where and how the
> userspace virtual address space is set up for the mapping in the guest
> memory?
>
> Is there a reason why this isn't done in the ioctl path itself?
>
> I'm trying to understand the differences between Xen and KVM in the API
> choices here. I think the equivalent is the KVM_SET_USER_MEMORY_REGION
> ioctl for KVM which brings a section of the guest physical address space
> into the userspace's vaddr range.
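For comparison, the KVM flow being referred to looks roughly like this
(an untested userspace sketch; gpa, size and addr are placeholders):

        struct kvm_userspace_memory_region region = {
                .slot = 0,
                .guest_phys_addr = gpa,
                .memory_size = size,
                /* any ordinary anonymous or file-backed mapping */
                .userspace_addr = (__u64)addr,
        };
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

with no device-specific mmap() involved at all.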