Hello,
This patchset adds toolstack support for I2C and GPIO virtio devices. It is
inspired by the work done by Oleksandr for the Disk device.
This is developed as part of Linaro's Project Stratos, where we are working
towards hypervisor-agnostic Rust-based backends [1].
This is based on origin/staging (commit 01ca29f0b17a ("sched: dom0_vcpus_pin
should only affect dom0")), which already has Oleksandr's patches applied.
V2->V3:
- Rebased over the latest tree and updated to match the changes made to
Oleksandr's patches some time back.
- Minor cleanups.
V1->V2:
- Patches 3/6 and 4/6 are new.
- Patches 5/6 and 6/6 updated based on the above two patches.
- Added link to the bindings for I2C and GPIO.
- Rebased over latest master branch.
Thanks.
--
Viresh
[1] https://lore.kernel.org/xen-devel/20220414092358.kepxbmnrtycz7mhe@vireshk-i…
Viresh Kumar (6):
libxl: Add support for Virtio I2C device
libxl: Add support for Virtio GPIO device
libxl: arm: Create alloc_virtio_mmio_params()
libxl: arm: Split make_virtio_mmio_node()
libxl: Allocate MMIO params for I2c device and update DT
libxl: Allocate MMIO params for GPIO device and update DT
tools/golang/xenlight/helpers.gen.go | 212 ++++++++++++++++++++
tools/golang/xenlight/types.gen.go | 54 ++++++
tools/include/libxl.h | 64 ++++++
tools/include/libxl_utils.h | 6 +
tools/libs/light/Makefile | 2 +
tools/libs/light/libxl_arm.c | 138 +++++++++++--
tools/libs/light/libxl_create.c | 26 +++
tools/libs/light/libxl_dm.c | 34 +++-
tools/libs/light/libxl_gpio.c | 226 ++++++++++++++++++++++
tools/libs/light/libxl_i2c.c | 226 ++++++++++++++++++++++
tools/libs/light/libxl_internal.h | 2 +
tools/libs/light/libxl_types.idl | 48 +++++
tools/libs/light/libxl_types_internal.idl | 2 +
tools/ocaml/libs/xl/genwrap.py | 2 +
tools/ocaml/libs/xl/xenlight_stubs.c | 2 +
tools/xl/Makefile | 2 +-
tools/xl/xl.h | 6 +
tools/xl/xl_cmdtable.c | 30 +++
tools/xl/xl_gpio.c | 142 ++++++++++++++
tools/xl/xl_i2c.c | 142 ++++++++++++++
tools/xl/xl_parse.c | 160 +++++++++++++++
tools/xl/xl_parse.h | 2 +
tools/xl/xl_sxp.c | 4 +
23 files changed, 1509 insertions(+), 23 deletions(-)
create mode 100644 tools/libs/light/libxl_gpio.c
create mode 100644 tools/libs/light/libxl_i2c.c
create mode 100644 tools/xl/xl_gpio.c
create mode 100644 tools/xl/xl_i2c.c
--
2.31.1.272.g89b43f80a514
Hi All,
I'll be on holiday (moving house) next week so I won't be able to chair
the Stratos sync meeting. As it is the middle of summer I'm going to
propose we skip next week's sync and re-convene on the 3rd of August. Any
objections?
While on the subject of sync meetings, are there any topics to discuss?
We've had some discussions on the next rust-vmm device to implement, but
beyond a vague "maybe virtio-gpu to help with demos" I don't think we've
nailed it down. virtio-can also keeps getting mentioned; while useful,
I'm wary that it doesn't push our exploration of the possibilities of
virtio much further.
I did a talk at GST22 last week which was an overview of VirtIO and what
we had done so far as well as discussing some future directions. You can
see the talk at:
https://huawei-events.de/en/gsts22-j83dco-vod.htm
(Day 2 stream, Chapter 6/TS 05:13:00)
In it, the potential future areas of exploration were:
Improve Xen API
═══════════════
• More standard mmap
• direct irqfd/eventfd routing
Memory Isolation
════════════════
• fat virtqueue
• iommu/grants vs regions
(x-over with pKVM/CCA?)
Bare metal rust
═══════════════
• re-use existing VirtIO logic
• but without POSIX layer
I'd like to get a better steer on what we should focus on next after
we've demoed our existing rust-vmm daemons and the Xen vhost-master
work.
--
Alex Bennée
Hello,
We verified our hypervisor-agnostic Rust-based vhost-user backends with a
QEMU-based setup earlier, and there was growing concern about whether they
were truly hypervisor-agnostic.
In order to prove that, we decided to give it a try with Xen, a type-1
bare-metal hypervisor.
We are happy to announce that we were able to make progress on that front and
have a working setup where we can test our existing Rust based backends, like
I2C, GPIO, RNG (though only I2C is tested as of now) over Xen.
Key components:
--------------
- Xen: https://github.com/vireshk/xen
Xen requires MMIO and device-specific support in order to populate the
required devices in the guest. This tree contains four patches on top of
mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).
- libxen-sys: https://github.com/vireshk/libxen-sys
We currently depend on the userspace tools/libraries provided by Xen, like
xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate provides Rust
wrappers over those calls, generated automatically with the help of Rust's
bindgen utility, that allow us to use the installed Xen libraries. We plan
to replace this with the Rust-based "oxerun" (see below) in the longer run.
- oxerun (WIP): https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls
This is a Rust-based implementation of the ioctls and hypercalls to Xen. It
is a work in progress and should eventually replace the "libxen-sys" crate
(which wraps the C-based implementation of the same) entirely.
- vhost-device: https://github.com/vireshk/vhost-device
These are Rust-based vhost-user backends, maintained inside the rust-vmm
project. This already contains support for I2C and RNG, while GPIO is under
review. These do not need to be modified for a particular hypervisor and
are truly hypervisor-agnostic.
Ideally the backends are hypervisor-agnostic, as explained earlier, but
because of the way Xen currently maps guest memory, we need a minor update
for the backends to work. Xen maps the memory via a kernel file,
/dev/xen/privcmd, which needs a call to mmap() followed by an ioctl() to
make the mapping work. For this a hack has been added to one of the
rust-vmm crates, vm-memory, which is used by vhost-user.
https://github.com/vireshk/vm-memory/commit/54b56c4dd7293428edbd7731c4dbe57…
The update to vm-memory performs the ioctl() after the already present
mmap().
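To make the two-step mapping contract concrete, here is a toy model in
Rust. This is not the real privcmd API (all names here are invented for
illustration); it only models the behaviour described above, where the
mapping is unusable until the ioctl() has populated it:

```rust
// Toy model of Xen's two-step foreign mapping (names invented, not the
// real privcmd API): mmap() on /dev/xen/privcmd only reserves a VA
// range, and a follow-up IOCTL_PRIVCMD_MMAPBATCH-style ioctl() actually
// populates it with guest pages.
#[derive(Debug)]
struct ForeignMapping {
    populated: bool,
}

impl ForeignMapping {
    /// Step 1: stands in for mmap() -- the range exists but has no pages.
    fn mmap_reserve() -> Self {
        ForeignMapping { populated: false }
    }

    /// Step 2: stands in for the ioctl() that maps the guest pages into
    /// the reserved range.
    fn ioctl_populate(&mut self) {
        self.populated = true;
    }

    /// On a real system, touching the range before the ioctl would fault.
    fn read(&self) -> Result<&'static str, &'static str> {
        if self.populated {
            Ok("guest data")
        } else {
            Err("fault: pages not populated yet")
        }
    }
}

fn main() {
    let mut m = ForeignMapping::mmap_reserve();
    assert!(m.read().is_err()); // mmap() alone is not enough on Xen
    m.ioctl_populate();
    println!("after ioctl: {:?}", m.read());
}
```

This is why a generic vm-memory that assumes mmap() alone produces a
usable mapping needs the extra hook for Xen.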
- vhost-user-master (WIP): https://github.com/vireshk/vhost-user-master
This implements the master-side interface of the vhost protocol, and is
analogous to the vhost-user-backend crate
(https://github.com/rust-vmm/vhost-user-backend) maintained inside the
rust-vmm project, which provides similar infrastructure for the backends to
use. This shall be hypervisor-independent and provide APIs for the
hypervisor-specific implementations. It will eventually be maintained
inside the rust-vmm project and used by all Rust-based hypervisors.
- xen-vhost-master (WIP): https://github.com/vireshk/xen-vhost-master
This is the Xen-specific implementation and uses the APIs provided by the
"vhost-user-master", "oxerun" and "libxen-sys" crates for its functioning.
It is designed based on EPAM's "virtio-disk" repository
(https://github.com/xen-troops/virtio-disk/) and is pretty similar to it.
One can see the analogy as:
Virtio-disk == "Xen-vhost-master" + "vhost-user-master" + "oxerun" + "libxen-sys" + "vhost-device".
Test setup:
----------
1. Build Xen:
$ ./configure --libdir=/usr/lib --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --disable-docs --disable-golang --disable-ocamltools --with-system-qemu=/root/qemu/build/i386-softmmu/qemu-system-i386;
$ make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
2. Run Xen via Qemu on X86 machine:
$ qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 -serial mon:stdio \
-device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
-device virtio-scsi-pci -drive file=/home/vireshk/virtio/debian-bullseye-arm64.qcow2,index=0,id=hd0,if=none,format=qcow2 -device scsi-hd,drive=hd0 \
-display none -m 8192 -smp 8 -kernel /home/vireshk/virtio/xen/xen \
-append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
-device guest-loader,addr=0x46000000,kernel=/home/vireshk/kernel/barm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
-device ds1338,address=0x20 # This is required to create a virtual I2C based RTC device on Dom0.
This should get Dom0 up and running.
3. Build rust crates:
$ cd /root/
$ git clone https://github.com/vireshk/xen-vhost-master
$ cd xen-vhost-master
$ cargo build
$ cd ../
$ git clone https://github.com/vireshk/vhost-device
$ cd vhost-device
$ cargo build
4. Set up the I2C based RTC device
$ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device; echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
5. Let's run everything now
# Start the I2C backend in one terminal (open new terminal with "ssh
# root@localhost -p8022"). This tells the I2C backend to hook up to
# "/root/vi2c.sock0" socket and wait for the master to start transacting.
$ /root/vhost-device/target/debug/vhost-device-i2c -s /root/vi2c.sock -c 1 -l 0:32
# Start the xen-vhost-master in another terminal. This provides the path of
# the socket to the master side and the name of the device to look for from
# Xen, which is I2C here.
$ /root/xen-vhost-master/target/debug/xen-vhost-master --socket-path /root/vi2c.sock0 --name i2c
# Start guest in another terminal, i2c_domu.conf is attached. The guest kernel
# should have Virtio related config options enabled, along with i2c-virtio
# driver.
$ xl create -c i2c_domu.conf
# The guest should boot fine now. Once the guest is up, you can create the
# I2C RTC device and use it. The following will create /dev/rtc0 in the
# guest, which you can configure with the 'hwclock' utility.
$ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
Hope this helps.
--
viresh
Hi,
In my last survey of assigned device numbers I went through all the
currently assigned virtio device numbers and attempted to glean their
current status. However, we currently don't have any devices that might be
useful in a cloud-native development environment.
To define terms: "cloud native" is the idea that you can build a
workload-processing element as a VM and run it in the cloud. It consumes
data from virtio devices and processes it in some way. This VM can then be
moved from being hosted in the cloud onto a real platform which still
provides its data via a virtio device. The idea is that you get the same
behaviour (as well as allowing data to be recorded so future
debugging/tuning work can be done in the cloud).
Currently most of the virtio devices are actually data sinks - for
example, with virtio-video the guest pushes data to the video device for
it to process. What we need is a device (or devices) to act as a source of
data to feed these workloads.
Why virtio-media-source? Well, rather than creating a device for every
data type, maybe it would make more sense to have a generic device which
can advertise the data stream info in its configuration space. This would
allow the kernel driver to route the data to the appropriate kernel
subsystem (e.g. v4l or alsa).
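As a purely illustrative sketch (the struct layout and names below are
invented for this email, not taken from any virtio spec), the routing idea
could look something like:

```rust
// Invented example: a generic media-source device advertises the stream
// type in its config space, and the guest driver routes the stream to
// the matching kernel subsystem.
#[derive(Debug, Clone, Copy, PartialEq)]
enum StreamKind {
    Video,
    Audio,
}

/// Hypothetical contents of the device's configuration space.
struct MediaSourceConfig {
    kind: StreamKind,
    rate_hz: u32,
    channels: u8,
}

/// The driver-side routing decision described above.
fn subsystem_for(cfg: &MediaSourceConfig) -> &'static str {
    match cfg.kind {
        StreamKind::Video => "v4l",
        StreamKind::Audio => "alsa",
    }
}

fn main() {
    let cfg = MediaSourceConfig {
        kind: StreamKind::Audio,
        rate_hz: 48_000,
        channels: 2,
    };
    println!(
        "route {:?} stream ({} Hz, {} ch) -> {}",
        cfg.kind, cfg.rate_hz, cfg.channels,
        subsystem_for(&cfg)
    );
}
```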
Would having a virtio driver potentially feeding different sub-systems
based on configuration be a problem?
What do people think?
--
Alex Bennée
Hi,
This is one of several emails to follow up on Linaro's internal KWG
sprint last week in Cambridge where a number of Project Stratos hackers
discussed what next steps we have and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.
Intro
=====
We've made good progress over the last year and have up-streamed a number
of device models as vhost-user daemons. We have also gotten our first
proof of concept build of the xen-vhost-master which has allowed us to
reuse these backends on the Xen hypervisor.
https://github.com/vireshk/xen-vhost-master
Remaining Work
==============
Scope out the remainder of APIs needed for oxerun
-------------------------------------------------
The current xen-vhost-master uses a combination of the native Rust oxerun
crate, a bindgen import of libxen-sys
(https://github.com/vireshk/libxen-sys) and a number of Xen libraries
built directly in the xen-vhost-master repository.
Our intention for the Stratos work is to remove any C dependency for the
rust backend and use native rust bindings to talk to the hypervisor
control ioctl.
Identifying what is needed should be easy enough, as we can see where in
the master repository C calls are being made. This work should be broken
down into groups in JIRA so it can be efficiently divided up.
Currently our focus for the rust-vmm repo is to support the vhost-user
daemons but a wider conversation needs to be had with the community
about the rest of the tooling involved in the creation and control of
DomU guests. For Stratos we would like to explore the possibilities of
bare metal monitor programs for dom0-less (or dom0-light?) setups.
Strategy for testing oxerun in the rust-vmm project
---------------------------------------------------
Currently the rust-vmm projects rely heavily on unit tests and a (mostly)
x86 build farm. While building for non-x86 architectures isn't
insurmountable, doing blackbox testing on real hypervisors isn't currently
supported. Given the low-level nature of the interactions, simply mocking
the ioctl interface to the kernel is unlikely to exercise things
sufficiently.
We need a way to execute tests on a real system with a real Xen
hypervisor and dom0 setup. We can either:
- somehow add Xen hosts to the Buildkite runner pool for rust-vmm
or
- investigate using QEMU TCG as a portable system in a box to run Xen
and guests
Currently this is blocking wider up-streaming of the oxerun code to
https://github.com/rust-vmm/xen-sys in the same way other rust-vmm repos
work.
See also
========
Other subjects discussed will be the subject of other emails today with
different distribution lists. These are:
- Remaining work for vhost-device
- Additional virtio devices
- Integrating rust-vmm with QEMU
Happy reading ;-)
--
Alex Bennée
Hi,
This is one of several emails to follow up on Linaro's internal KWG
sprint last week in Cambridge where a number of Project Stratos hackers
discussed what next steps we have and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.
Intro
=====
We've made good progress over the last year and have up-streamed a number
of device models as vhost-user daemons. We have also gotten our first
proof of concept build of the xen-vhost-master which has allowed us to
reuse these backends on the Xen hypervisor.
Outstanding work
================
vm-virtio definitions
---------------------
Given that our vhost-user daemons were not re-implementing existing virtio
device models, a number of the queue handling definitions live in the
vhost-device repository itself. As discussed before, now that we have
these working we should migrate the common definitions to the vm-virtio
crate so that in-VMM virtio emulation can re-use this code.
Get outstanding vsock PR merged
-------------------------------
We actually have two outstanding PRs against the vhost-device repository
which implement virtio-vsock and virtio-scsi. They were done as GSoC
projects but didn't get merged at the time due to lack of review. They
currently have outstanding requests for code changes, but due to the
nature of GSoC it looks like the original authors don't have time to make
them, which is understandable given the changes the repository has gone
through over the last two years.
I'm agnostic about virtio-scsi but given the usefulness of virtio-vsock
it seems a shame to leave an implementation to wither on a branch.
There has been some work on vm-virtio to improve the queue handling and
with Andreea's help I have a branch that uses that. Should we just pick
up the branch and finish the pull request process?
Sort out an official vhost-master repository in rust-vmm
--------------------------------------------------------
The rust-vmm project has the vhost-user-backend crate, which implements
the core backend behaviour for handling vhost-user messages. There is also
an abstraction for vhost (user and kernel handling) from the VMM side in
the vhost repository. However it doesn't provide everything needed to
implement a full vhost-master. Currently Viresh is using
https://github.com/vireshk/vhost-user-master
for the xen-vhost-master project, which is constructed from the in-VMM
vhost-master bits from Cloud Hypervisor. We should get this properly
up-streamed into the rust-vmm project.
Should this be merged into the existing rust-vmm/vhost repository or
does it require its own repository?
Properly document and support cross-compilation
-----------------------------------------------
Currently most of our testing is on Arm systems, where we are either:
- hacking up the local repo for cross-compilation
or
- doing a "native" build in a QEMU-emulated AArch64 system
The second option is potentially quite slow, at least for the first build.
Given that building backends for non-x86 systems is core to Linaro's
goals, we should properly support cross-compilation for the vhost-device
repository and document it. This should also be enabled in the CI to
ensure the configuration doesn't bitrot.
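For reference, a minimal cargo-level setup could be as small as pointing
cargo at a cross linker (the target triple is the standard one for 64-bit
Arm Linux; the linker name below assumes an aarch64-linux-gnu- GCC
toolchain is installed locally, so adjust to taste):

```toml
# .cargo/config.toml -- assumes an aarch64-linux-gnu- GCC toolchain
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
```

$ rustup target add aarch64-unknown-linux-gnu
$ cargo build --target aarch64-unknown-linux-gnu

Exactly which knobs we standardise on is part of the documentation work.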
See Also
========
Other subjects discussed will be the subject of other emails today with
different distribution lists. These are:
- Xen specific enabling work
- Additional virtio devices
- Integrating rust-vmm with QEMU
Happy reading ;-)
--
Alex Bennée
When we introduced FEAT_LPA to QEMU's -cpu max we discovered older
kernels had a bug where the physical address range was copied directly
from the ID_AA64MMFR0_EL1.PARange field. The early cpu_init code of Xen commits
the same error by blindly copying across the max supported range.
Unsurprisingly when the page tables aren't set up for these greater
ranges hilarity ensues and the hypervisor crashes fairly early on in
the boot-up sequence. This happens when we write to the control
register in enable_mmu().
Attempt to fix this the same way as the Linux kernel does by gating
PARange to the maximum the hypervisor can handle. I also had to fix up
code in p2m which panics when it sees an "invalid" entry in PARange.
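For illustration, the clamping logic in the head.S hunk below boils down
to the following (a sketch, not Xen code; PARange encoding 5 is the
48-bit PA range, encoding 6 is the 52-bit FEAT_LPA range):

```rust
// Sketch of the fix: extract the PARange field (ID_AA64MMFR0_EL1[3:0])
// and clamp it to encoding 5 (48-bit PA), mirroring the
// ubfm/mov/cmp/csel sequence in the head.S hunk.
fn clamp_parange(id_aa64mmfr0: u64) -> u64 {
    let parange = id_aa64mmfr0 & 0xf; // ubfm x1, x1, #0, #3
    parange.min(5)                    // mov/cmp/csel against #5
}

fn main() {
    // Encoding 6 (52-bit, FEAT_LPA) gets clamped to the 48-bit encoding...
    assert_eq!(clamp_parange(0x6), 5);
    // ...while smaller ranges pass through unchanged.
    assert_eq!(clamp_parange(0x2), 2);
    println!("PARange 6 is programmed as {}", clamp_parange(0x6));
}
```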
Signed-off-by: Alex Bennée <alex.bennee(a)linaro.org>
Cc: Richard Henderson <richard.henderson(a)linaro.org>
Cc: Stefano Stabellini <sstabellini(a)kernel.org>
Cc: Julien Grall <julien(a)xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk(a)epam.com>
Cc: Bertrand Marquis <bertrand.marquis(a)arm.com>
---
v2
- clamp p2m_ipa_bits = PADDR_BIT instead
---
xen/arch/arm/arm64/head.S | 6 ++++++
xen/arch/arm/p2m.c | 10 +++++-----
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index aa1f88c764..057dd5d925 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -473,6 +473,12 @@ cpu_init:
ldr x0, =(TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(64-48))
/* ID_AA64MMFR0_EL1[3:0] (PARange) corresponds to TCR_EL2[18:16] (PS) */
mrs x1, ID_AA64MMFR0_EL1
+ /* Limit to 48 bits, 256TB PA range (#5) */
+ ubfm x1, x1, #0, #3
+ mov x2, #5
+ cmp x1, x2
+ csel x1, x1, x2, lt
+
bfi x0, x1, #16, #3
msr tcr_el2, x0
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fb71fa4c1c..3349b464a3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -32,10 +32,10 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
#define P2M_ROOT_PAGES (1<<P2M_ROOT_ORDER)
/*
- * Set larger than any possible value, so the number of IPA bits can be
+ * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
* restricted by external entity (e.g. IOMMU).
*/
-unsigned int __read_mostly p2m_ipa_bits = 64;
+unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
/* Helpers to lookup the properties of each level */
static const paddr_t level_masks[] =
@@ -2030,7 +2030,7 @@ void __init setup_virt_paging(void)
unsigned int root_order; /* Page order of the root of the p2m */
unsigned int sl0; /* Desired SL0, maximum in comment */
} pa_range_info[] = {
- /* T0SZ minimum and SL0 maximum from ARM DDI 0487A.b Table D4-5 */
+ /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
/* PA size, t0sz(min), root-order, sl0(max) */
[0] = { 32, 32/*32*/, 0, 1 },
[1] = { 36, 28/*28*/, 0, 1 },
@@ -2038,7 +2038,7 @@ void __init setup_virt_paging(void)
[3] = { 42, 22/*22*/, 3, 1 },
[4] = { 44, 20/*20*/, 0, 2 },
[5] = { 48, 16/*16*/, 0, 2 },
- [6] = { 0 }, /* Invalid */
+ [6] = { 52, 12/*12*/, 3, 3 },
[7] = { 0 } /* Invalid */
};
@@ -2069,7 +2069,7 @@ void __init setup_virt_paging(void)
}
}
- /* pa_range is 4 bits, but the defined encodings are only 3 bits */
+ /* pa_range is 4 bits but we don't support all modes */
if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
--
2.30.2
Hello,
This patchset adds toolstack support for I2C and GPIO virtio devices. It is
inspired by the work done by Oleksandr for the Disk device [1].
The first two patches can be applied right away, while the last four need
Oleksandr's series [1] to be applied first.
This is developed as part of Linaro's Project Stratos, where we are working
towards hypervisor-agnostic Rust-based backends [2].
I must admit that I am a beginner to Xen and developed this patchset based
on the support for existing devices like Disk and Keyboard. There may be
bits I missed, or bits I added which aren't really required.
Thanks.
--
Viresh
[1] https://lore.kernel.org/xen-devel/1651598763-12162-1-git-send-email-oleksty…
[2] https://lore.kernel.org/xen-devel/20220414092358.kepxbmnrtycz7mhe@vireshk-i…
Viresh Kumar (6):
libxl: Add support for Virtio I2C device
libxl: Add support for Virtio GPIO device
libxl: arm: Create alloc_virtio_mmio_params()
libxl: arm: Split make_virtio_mmio_node()
libxl: Allocate MMIO params for I2c device and update DT
libxl: Allocate MMIO params for GPIO device and update DT
tools/golang/xenlight/helpers.gen.go | 220 ++++++++++++++++++++
tools/golang/xenlight/types.gen.go | 54 +++++
tools/include/libxl.h | 64 ++++++
tools/include/libxl_utils.h | 6 +
tools/libs/light/Makefile | 2 +
tools/libs/light/libxl_arm.c | 132 ++++++++++--
tools/libs/light/libxl_create.c | 26 +++
tools/libs/light/libxl_dm.c | 34 +++-
tools/libs/light/libxl_gpio.c | 236 ++++++++++++++++++++++
tools/libs/light/libxl_i2c.c | 236 ++++++++++++++++++++++
tools/libs/light/libxl_internal.h | 2 +
tools/libs/light/libxl_types.idl | 52 +++++
tools/libs/light/libxl_types_internal.idl | 2 +
tools/ocaml/libs/xl/genwrap.py | 2 +
tools/ocaml/libs/xl/xenlight_stubs.c | 2 +
tools/xl/Makefile | 2 +-
tools/xl/xl.h | 6 +
tools/xl/xl_cmdtable.c | 30 +++
tools/xl/xl_gpio.c | 143 +++++++++++++
tools/xl/xl_i2c.c | 143 +++++++++++++
tools/xl/xl_parse.c | 160 +++++++++++++++
tools/xl/xl_parse.h | 2 +
tools/xl/xl_sxp.c | 4 +
23 files changed, 1539 insertions(+), 21 deletions(-)
create mode 100644 tools/libs/light/libxl_gpio.c
create mode 100644 tools/libs/light/libxl_i2c.c
create mode 100644 tools/xl/xl_gpio.c
create mode 100644 tools/xl/xl_i2c.c
--
2.31.1.272.g89b43f80a514