Viresh,
Apologies for the messy nature of my docker repo, but there is a new
image:
https://github.com/stsquad/dockerfiles/tree/master/crossbuild/bullseye-arm64
which I have confirmed can build the current Xen master:
make[4]: Leaving directory '/home/alex.bennee/lsrc/xen/xen.build.arm64-xen-master-for-bullseye/tools/pygrub'
make[3]: Leaving directory '/home/alex.bennee/lsrc/xen/xen.build.arm64-xen-master-for-bullseye/tools'
make[2]: Leaving directory '/home/alex.bennee/lsrc/xen/xen.build.arm64-xen-master-for-bullseye/tools'
make[1]: Leaving directory '/home/alex.bennee/lsrc/xen/xen.build.arm64-xen-master-for-bullseye/tools'
fakeroot sh ./tools/misc/mkdeb /home/alex.bennee/lsrc/xen/xen.build.arm64-xen-master-for-bullseye $(make -C xen xenversion --no-print-directory)
dpkg-deb: building package 'xen-upstream' in 'xen-upstream-4.17-unstable.deb'.
🕙14:00:03 alex.bennee@4372e168fe23:xen.build.arm64-xen-master-for-bullseye on xen.build.arm64-xen-master-for-bullseye [?] took 31s
➜ ls -l dist/
total 24667
-rw-r--r-- 1 alex.bennee alex.bennee 20830 Jun 7 2021 COPYING
-rw-r--r-- 1 alex.bennee alex.bennee 8632 Jun 7 2021 README
drwxr-xr-x 6 alex.bennee alex.bennee 6 Feb 11 13:59 install/
-rwxr-xr-x 1 alex.bennee alex.bennee 658 Jun 7 2021 install.sh*
-rw-r--r-- 1 alex.bennee alex.bennee 25213092 Feb 11 14:00 xen-upstream-4.17-unstable.deb
--
Alex Bennée
This series adds support for virtio-video decoder devices in Qemu
and also provides a vhost-user-video vmm implementation.
The vhost-user-video vmm currently parses the virtio-video v3 protocol
(as that is what the Linux frontend driver implements).
It then converts that to a v4l2 mem2mem stateful decoder device.
Currently this has been tested using the v4l2 vicodec test driver in Linux
[1], but it is intended to be used with Arm SoCs which often implement
v4l2 stateful decoder/encoder drivers for their video accelerators.
The primary goal so far has been to allow continued development
of the virtio-video Linux frontend driver and testing with Qemu. Using
vicodec on the host allows a purely virtual development environment, and
allows for CI integration in the future by KernelCI etc.
This series also adds the virtio_video.h header and the
FWHT format used by the vicodec driver.
I have tested this VMM using v4l2-ctl from v4l-utils in the guest
to do a video decode to a file. This can then be validated using ffplay.
The v4l2-compliance tool has also been run in the guest, which stresses
the interface and issues lots of syscall-level tests.
See the README.md for example commands on how to configure the guest kernel
and do a video decode using Qemu and vicodec with this VMM.
Linux virtio-video frontend driver code:
https://github.com/petegriffin/linux/commits/v5.10-virtio-video-latest
Qemu vmm code:
https://github.com/petegriffin/qemu/tree/vhost-virtio-video-master-v1
This is part of a wider initiative by Linaro called
"project Stratos" for which you can find information here:
https://collaborate.linaro.org/display/STR/Stratos+Home
Applies cleanly to git://git.qemu.org/qemu.git master (a3607def89).
Thanks,
Peter.
[1] https://lwn.net/Articles/760650/
Peter Griffin (8):
vhost-user-video: Add a README.md with cheat sheet of commands
MAINTAINERS: Add virtio-video section
vhost-user-video: boiler plate code for vhost-user-video device
vhost-user-video: add meson subdir build logic
standard-headers: Add virtio_video.h
virtio_video: Add Fast Walsh-Hadamard Transform format
hw/display: add vhost-user-video-pci
tools/vhost-user-video: Add initial vhost-user-video vmm
MAINTAINERS | 8 +
hw/display/Kconfig | 5 +
hw/display/meson.build | 3 +
hw/display/vhost-user-video-pci.c | 82 +
hw/display/vhost-user-video.c | 386 ++++
include/hw/virtio/vhost-user-video.h | 41 +
include/standard-headers/linux/virtio_video.h | 484 +++++
tools/meson.build | 9 +
tools/vhost-user-video/50-qemu-rpmb.json.in | 5 +
tools/vhost-user-video/README.md | 98 +
tools/vhost-user-video/main.c | 1680 ++++++++++++++++
tools/vhost-user-video/meson.build | 10 +
tools/vhost-user-video/v4l2_backend.c | 1777 +++++++++++++++++
tools/vhost-user-video/v4l2_backend.h | 99 +
tools/vhost-user-video/virtio_video_helpers.c | 462 +++++
tools/vhost-user-video/virtio_video_helpers.h | 166 ++
tools/vhost-user-video/vuvideo.h | 43 +
17 files changed, 5358 insertions(+)
create mode 100644 hw/display/vhost-user-video-pci.c
create mode 100644 hw/display/vhost-user-video.c
create mode 100644 include/hw/virtio/vhost-user-video.h
create mode 100644 include/standard-headers/linux/virtio_video.h
create mode 100644 tools/vhost-user-video/50-qemu-rpmb.json.in
create mode 100644 tools/vhost-user-video/README.md
create mode 100644 tools/vhost-user-video/main.c
create mode 100644 tools/vhost-user-video/meson.build
create mode 100644 tools/vhost-user-video/v4l2_backend.c
create mode 100644 tools/vhost-user-video/v4l2_backend.h
create mode 100644 tools/vhost-user-video/virtio_video_helpers.c
create mode 100644 tools/vhost-user-video/virtio_video_helpers.h
create mode 100644 tools/vhost-user-video/vuvideo.h
--
2.25.1
Hi,
To start the new year I thought I would dump some of my thoughts on
zero-copy between VM domains. For project Stratos we've gamely avoided
thinking too hard about this while we've been concentrating on solving
more tractable problems. However, we can't put it off forever, so let's
work through the problem.
Memory Sharing
==============
For any zero-copy to work there has to be memory sharing between the
domains. For traditional KVM this isn't a problem as the host kernel
already has access to the whole address space of all its guests.
However type-1 setups (and now pKVM) are less promiscuous about sharing
their address space across the domains.
We've discussed options like dynamically sharing individual regions in
the past (maybe via IOMMU hooks). However, given the performance
requirements I think that is ruled out in favour of sharing
appropriately sized blocks of memory. Either one of the two domains has
to explicitly share a chunk of its memory with the other or the
hypervisor has to allocate the memory and make it visible to both. What
considerations do we have to take into account to do this?
* the actual HW device may only have the ability to DMA to certain
areas of the physical address space.
* there may be alignment requirements for HW to access structures (e.g.
GPU buffers/blocks)
Which domain should do the sharing? The hypervisor itself likely doesn't
have all the information to make the choice, but in a distributed driver
world it won't always be the Dom0/Host equivalent. While the domain with
the HW driver in it will know what the HW needs, it might not know whether
the GPAs being used are actually backed by PAs the HW can reach.
I think this means for useful memory sharing we need the allocation to
be done by the HW domain but with support from the hypervisor to
validate the region meets all the physical bus requirements.
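To make those constraints concrete, here is a rough C sketch (purely
illustrative, no such interface exists in Xen or KVM today and every name
is made up) of what the HW domain might hand to the hypervisor for
validation:

  /* Hypothetical sketch only: the request a HW domain could make so the
   * hypervisor can check a candidate region against the physical bus
   * constraints before it is shared with another domain. */
  #include <stdint.h>

  struct shared_region_req {
      uint64_t gpa;         /* guest-physical base of the candidate region */
      uint64_t size;        /* size of the region in bytes */
      uint64_t dma_mask;    /* addresses the backing device can actually reach */
      uint32_t align;       /* alignment the HW needs (e.g. GPU buffer blocks) */
      uint32_t dest_domid;  /* domain the region will be shared with */
  };

  /* Returns 0 if the GPAs are backed by PAs that satisfy dma_mask and align,
   * otherwise an error so the HW domain can try a different allocation. */
  int hyp_validate_shared_region(const struct shared_region_req *req);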
Buffer Allocation
=================
Ultimately I think the majority of the work that will be needed comes
down to how buffer allocation is handled in the kernels. This is also
the area I'm least familiar with so I look forward to feedback from
those with deeper kernel knowledge.
For Linux there already exists the concept of DMA reachable regions that
take into account the potentially restricted set of addresses that HW
can DMA to. However, we are now adding a second constraint: where the
data is eventually going to end up.
For example the HW domain may be talking to a network device, but the
packet data from that device might be going to two other domains. We
wouldn't want to share a single region for received network packets between
both domains because that would leak information, so the network driver
needs to know which shared region to allocate from and hope the HW
allows us to filter the packets appropriately (maybe via a VLAN tag). I
suspect the pure HW solution of presenting two HW virtual functions
directly into each domain is going to remain the preserve of expensive
enterprise kit for some time.
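As a very rough illustration of the extra piece of information the driver
would need (again entirely hypothetical, no such Linux API exists), the
allocation call would have to carry a destination-domain handle alongside
the usual DMA constraints:

  /* Hypothetical sketch: a destination-aware buffer allocator. The point is
   * only that the caller names the domain the data will end up in, so the
   * buffer can come from the region already shared with that domain. */
  #include <stddef.h>
  #include <stdint.h>

  struct domain_region;   /* opaque handle to a region shared with one domain */

  void *shared_alloc(struct domain_region *dst, size_t len, uint64_t dma_mask);

  /* e.g. an rx buffer for a packet we know is destined for domain A:
   *     buf = shared_alloc(region_for_domain_a, mtu, netdev_dma_mask); */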
Should the work be divided up between sub-systems? Both the network and
block device sub-systems have their own allocation strategies and would
need some knowledge about the final destination for their data. What
other driver sub-systems are going to need support for this sort of
zero-copy forwarding? While it would be nice for every VM transaction to
be zero-copy, we don't really need to solve it for low-speed transports.
Transparent fallback and scaling
================================
As we know, memory is always a precious resource that we never have
enough of. The more we start carving up memory regions for particular
tasks, the less flexibility the system has as a whole to make efficient
use of it. We can almost guarantee whatever number we pick for a given
VM-to-VM conduit will be wrong. Any memory allocation system based on
regions will have to be able to fall back gracefully to using other
memory in the HW domain and rely on traditional bounce buffering
approaches while under heavy load. VirtIO backends will then need to
understand when data destined for the FE domain needs this bounce buffer
treatment. This will involve tracking destination domain metadata
somewhere in the system so it can be queried quickly.
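A minimal sketch of what that per-buffer metadata could look like
(hypothetical again, building on the shared_alloc() idea sketched above):

  /* Hypothetical sketch: record whether a buffer came from the shared region
   * or from ordinary HW-domain memory, so the VirtIO backend knows it must
   * bounce the data into frontend-visible memory before completing. */
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdlib.h>

  struct domain_region;   /* as in the earlier sketch */
  extern void *shared_alloc(struct domain_region *dst, size_t len, uint64_t mask);
  extern uint32_t region_domid(const struct domain_region *dst);

  struct fwd_buf {
      void    *data;
      size_t   len;
      uint32_t dest_domid;
      bool     needs_bounce;  /* true when not allocated from the shared region */
  };

  static struct fwd_buf alloc_fwd_buf(struct domain_region *dst, size_t len)
  {
      struct fwd_buf b = { .len = len, .dest_domid = region_domid(dst) };

      b.data = shared_alloc(dst, len, UINT64_MAX);
      if (!b.data) {
          /* shared region exhausted: fall back and mark for bouncing */
          b.data = malloc(len);
          b.needs_bounce = true;
      }
      return b;
  }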
Is there a cross-over here with the kernel's existing support for NUMA
architectures? It seems to me there are similar questions about the best
place to put memory; perhaps we can treat multiple VM domains as
different NUMA zones?
Finally there is the question of scaling. While mapping individual
transactions would be painfully slow, we need to think about how dynamic
a modern system is. For example do you size your shared network region
to cope with a full HD video stream of data? Most of the time the
user won't be doing anything nearly as network intensive.
Of course the dynamic addition (and removal) of shared memory regions
brings in more potential synchronisation problems: ensuring shared
memory isn't accessed by either side when it is taken down. We would
need some sort of assurance that the sharee has finished with all the
data in a given region before the sharer brings the share down.
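One way of expressing that assurance, as a hedged sketch rather than any
existing API, is a simple in-flight count the sharer must see drain to
zero before it revokes the region:

  /* Hypothetical sketch: the sharee holds a count of buffers it is still
   * reading; the sharer may only tear the region down once teardown has
   * been requested and the in-flight count has drained to zero. */
  #include <stdatomic.h>
  #include <stdbool.h>

  struct shared_region_state {
      atomic_int  inflight;            /* buffers currently lent to the sharee */
      atomic_bool teardown_requested;  /* set by the sharer, seen by the sharee */
  };

  static bool can_tear_down(struct shared_region_state *s)
  {
      atomic_store(&s->teardown_requested, true);  /* stop new borrows */
      return atomic_load(&s->inflight) == 0;       /* safe only once drained */
  }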
Conclusion
==========
This long text hasn't even attempted to come up with a zero-copy
architecture for Linux VMs. I'm hoping as we discuss this we can capture
all the various constraints any such system is going to need to deal
with. So my final questions are:
- what other constraints do we need to take into account?
- can we leverage existing sub-systems to build this support?
I look forward to your thoughts ;-)
--
Alex Bennée
+Bill Mills <bill.mills(a)linaro.org>
I confirm that the patch gets Ubuntu 21.10 "mostly" working (an SD card
access issue related to IRQs, I believe) as Dom0 for Xen 4.16 (booted
SystemReady-IR) on MacchiatoBin.
The patch allows the Linux ComPhy driver to make a SiP-specific SMC call to
initialize the ComPhy chip (SerDes in my mind).
Investigation shows that EDK2 initializes the ComPhy (PCI, SATA... lanes)
prior to booting Linux and thus does not need the patch (the SMC call is
done before Xen takes over).
Cheers
FF
Additional information:
Documentation: auto-init of ComPhy
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
Comphy configuration in Marvell.dec
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
MvComphyInit called in PlatInitDxe
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
MvComPhyInit
<https://github.com/andreiw/MacchiatoBin-edk2/blob/uefi-2.7-armada-18.12-and…>
:
BoardDescProtocol->BoardDescComPhyGet (BoardDescProtocol, &ComPhyBoardDesc);
    /* gets the configuration above in a C structure */
for each chip {
    InitComPhyConfig (PtrChipCfg, LaneData, &ComPhyBoardDesc[Index]);
    /* configure each lane based on Pcd data */
}
On Thu, 22 Oct 2020 at 23:46, Stefano Stabellini via Stratos-dev <
stratos-dev(a)op-lists.linaro.org> wrote:
> On Thu, 22 Oct 2020, Alex Bennée wrote:
> > Stefano Stabellini <stefano.stabellini(a)xilinx.com> writes:
> > (XEN) Check compatible for node /a
> > (XEN) Loading d0 initrd from 00000000aefac000 to 0x0000000028200000-0x0000000029eedb66
> > (XEN) Loading d0 DTB to 0x0000000028000000-0x0000000028005ed9
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Scrubbing Free RAM in background
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) ***************************************************
> > (XEN) PLEASE SPECIFY dom0_mem PARAMETER - USING 512M FOR NOW
> > (XEN) ***************************************************
>
> This is a problem, especially if you are booting Debian: 512MB is not
> going to be enough. It might also be the reason for the hang below.
>
> Try to add something like: dom0_mem=2G to the Xen command line.
>
>
> > (XEN) 3... 2... 1...
> > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> > (XEN) Check compatible for node /chosen/module@b0c9b000
> > (XEN) cplen 17
> > (XEN) multiboot,module
> > (XEN) Check compatible for node /chosen/module@aefac000
> > (XEN) cplen 17
> > (XEN) multiboot,module
> > (XEN) Freed 340kB init memory.
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER32
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER36
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER40
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v2: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v3: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000001 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000002 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000004 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000008 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000010 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000020 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000040 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000080 to ICPENDR8
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000100 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000200 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000000400 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000000800 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000001000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000002000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000004000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000008000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000010000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000020000 to ICPENDR8
> > (XEN) d0v2 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000040000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000080000 to ICPENDR8
> > (XEN) d0v2 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000100000 to ICPENDR8
> > (XEN) d0v2: vGICD: unhandled word write 0x00000000200000 to ICPENDR8
> > (XEN) d0v0: vGICD: unhandled word write 0x00000000400000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000800000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000001000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000002000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000004000000 to ICPENDR8
> > (XEN) d0v1: vGICD: unhandled word write 0x00000008000000 to ICPENDR8
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v3 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v3: vGICD: unhandled word write 0x00000010000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000020000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000040000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000080000000 to ICPENDR8
> > (XEN) d0v3: vGICD: unhandled word write 0x00000000000001 to ICPENDR12
> > (XEN) d0v3 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000002
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000002 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000004 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000008 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000010 to ICPENDR12
> > (XEN) d0v1: vGICD: unhandled word write 0x00000000000020 to ICPENDR12
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000001
> > (XEN) d0v1 Unhandled SMC/HVC: 0x82000002
>
> I suspect the ICPENDR problem is not an issue anymore. We are seeing
> another hang, maybe due to the lack of dom0_mem or something else.
>
> The "Unhandled SMC/HVC" messages are interesting: Xen blocks SMC calls
> by default. The dom0 kernel here is trying to make two SiP calls; Xen
> blocks them and returns "unimplemented". I don't know whether this causes
> any issues for the kernel, but I can imagine that the kernel driver might
> refuse to continue. It is typically a firmware driver
> (drivers/firmware).
>
> If the two calls are actually required to boot, then Xen should have a
> 'mediator' driver to filter the calls that are allowed from the ones that
> are not. See for instance xen/arch/arm/platforms/xilinx-zynqmp-eemi.c.
> As a test, the appended patch allows all SMC calls for dom0:
>
>
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index a36db15fff..821c15852a 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -286,10 +286,32 @@ static bool vsmccc_handle_call(struct cpu_user_regs *regs)
>
> if ( !handled )
> {
> - gprintk(XENLOG_INFO, "Unhandled SMC/HVC: %#x\n", funcid);
> + if ( is_hardware_domain(current->domain) )
> + {
> + struct arm_smccc_res res;
> +
> + arm_smccc_1_1_smc(get_user_reg(regs, 0),
> + get_user_reg(regs, 1),
> + get_user_reg(regs, 2),
> + get_user_reg(regs, 3),
> + get_user_reg(regs, 4),
> + get_user_reg(regs, 5),
> + get_user_reg(regs, 6),
> + get_user_reg(regs, 7),
> + &res);
> +
> + set_user_reg(regs, 0, res.a0);
> + set_user_reg(regs, 1, res.a1);
> + set_user_reg(regs, 2, res.a2);
> + set_user_reg(regs, 3, res.a3);
> + }
> + else
> + {
> + gprintk(XENLOG_INFO, "Unhandled SMC/HVC: %#x\n", funcid);
>
> - /* Inform caller that function is not supported. */
> - set_user_reg(regs, 0, ARM_SMCCC_ERR_UNKNOWN_FUNCTION);
> + /* Inform caller that function is not supported. */
> + set_user_reg(regs, 0, ARM_SMCCC_ERR_UNKNOWN_FUNCTION);
> + }
> }
>
> return true;
> --
> Stratos-dev mailing list
> Stratos-dev(a)op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
>
--
François-Frédéric Ozog | *Director Business Development*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
Hello,
I submitted https://lore.kernel.org/all/CAKycSdDMxfto6oTqt06TbJxXY=S7p_gtEXWDQv8mz0d9zt…
and my attention was drawn here, and I have a few comments.
Firstly, I was wondering why you didn't create a separate *-sys crate
for these bindings?
see https://doc.rust-lang.org/cargo/reference/build-scripts.html#-sys-packages
for more information.
Secondly, I noticed when developing my aforementioned patch that
`bindgen` adds quite a few dependencies that probably aren't needed by
the average consumer of this crate.
So I was wondering what your thoughts are about generating and
committing a bindings.rs, then optionally using these dependencies via
a feature flag?
Lastly, with your `make` integration, it looks like we could also
remove the `cc` dependency by allowing `make` to build libgpiod
and just linking with that, instead of compiling libgpiod twice.
Kind regards,
Gerard.
Hi GIC experts,
This came up last week in the Stratos sync call when we were discussing
Vincent's SCMI setup:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28665741503/2021-12-09+P…
With the shared memory between the two guests, the only reason we
currently exit to guest userspace (QEMU) is to forward notifications
of virtqueue kicks from one guest to the other. I'm vaguely aware from
my time looking at the GIC code that it can be configured for IPI IRQs
between two cores. Do we have that ability between two vCPUs from
different guests?
If not, what would it take to enable such a feature?
--
Alex Bennée
Hi,
So the following is what I had in my head for a demo setup based on what
we talked about this morning. Does it make sense?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATOS TSN NETWORKING WITH VMS
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Abstract
.. 1. Hardware
.. 2. Setup
1 Abstract
══════════
In a multi-zone/multi-VM setup it is important that higher priority
(i.e. safety critical) workloads are not impaired by lower priority
ones. In a modern automotive setup which uses something like
Ethernet to link a number of units, we need to ensure the networking
also allows for effective and timely delivery of important packets.
This demo intends to show how Time Sensitive Networking (TSN) can be
combined with an accelerated AF_XDP path from host to guest to ensure
correct behaviour. The primary guest will be ingesting a video stream
representing a reversing camera, which has low latency requirements,
while a second guest loads a large amount of mapping data for a
navigation app. We will run two scenarios. The first will use
in-kernel TSN support using software to handle the packet scheduling.
The second will use dedicated HW with TSN support which should achieve
much lower latency than relying on SW.
1.1 Hardware
────────────
• 2 x MacchiatoBin with 10G copper SFP
• 2 x PCIe network cards with TSN offload (Intel I255 or equivalent)
• 1 x PCIe video card
• 1 x USB Audio
1.2 Setup
─────────
The setup is the same as used for previous AF_XDP measurements except
that *ethT* is either using the builtin SoC networking or the TSN
accelerated PCIe network device.
The target machine (representing an automotive display console) runs
two VMs: a low priority one for the navigation function, which is
fetching large amounts of non-time-critical data for the map display,
and a higher priority one which receives and decompresses the stream
from the reversing camera.
Both VMs display their output via a virtio-gpu device which is
composited to a single display on the host. For simplicity we shall
assume KVM virtualisation.
--
Alex Bennée