On Mon, Sep 27, 2021 at 3:06 AM Alex Bennée via Stratos-dev <stratos-dev@op-lists.linaro.org> wrote:
Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> writes:
On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Bennée wrote:
Hi,
2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
───────────────────────────────────────────────────────────────
Currently the foreign memory mapping support only works for dom0 due to reference counting issues. If we are to support backends running in their own domains, this will need to be fixed.
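(For context, a minimal sketch of what a dom0 backend does today, assuming libxenforeignmemory; the frontend domid and frame number here are purely illustrative:)

    /* Minimal sketch, assuming libxenforeignmemory: roughly what a dom0
     * backend does today.  The frontend domid and guest frame number
     * passed in are purely illustrative. */
    #include <stdint.h>
    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    static void *map_frontend_page(uint32_t frontend_domid, xen_pfn_t gfn)
    {
        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        int err = 0;
        void *va;

        if (!fmem)
            return NULL;

        /* Map one guest frame of the frontend into our address space. */
        va = xenforeignmemory_map(fmem, frontend_domid,
                                  PROT_READ | PROT_WRITE, 1, &gfn, &err);

        /* For brevity the handle is not closed here; a real backend would
         * keep it and later call xenforeignmemory_unmap() and _close(). */
        return va;
    }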
Estimate: 8w
I'm pretty sure it was discussed before, but I can't find the relevant (part of the) thread right now: does your model assume the backend (running outside of dom0) will gain the ability to map (or otherwise access) an _arbitrary_ memory page of a frontend domain? Or worse: of any domain?
The aim is for some DomUs to host backends for other DomUs instead of all backends being in Dom0. Those backend DomUs would have to be considered trusted because, as you say, the default memory model of VirtIO is to have full access to the frontend domain's memory map.
I share Marek's concern. I believe that there are Xen-based systems that will want to run guests using VirtIO devices without extending this level of trust to the backend domains.
That is a significant regression in terms of the security model Xen provides. It would give the backend domain _a lot more_ control over the system than it normally has with Xen PV drivers - negating a significant part of the security benefits of using driver domains.
It's part of the continual trade-off between security and speed. For things like block and network backends there is a penalty if data has to be bounce-buffered before it ends up in the guest address space.
I think we have significant flexibility in being able to modify several layers of the stack here to make this efficient, and it would be beneficial to avoid bounce buffering if possible without sacrificing the ability to enforce isolation. I wonder if there's a viable approach with some implementation of a virtual IOMMU (which enforces access control) that would allow a backend to commission I/O on a physical device on behalf of a guest, where the data buffers do not need to be mapped into the backend, avoiding the need for a bounce buffer?
So, does the above require the frontend to agree (explicitly or implicitly) to the backend accessing specific pages? Several approaches to that have been discussed, including using grant tables (as PV drivers do), vIOMMU(?), or even a drastically different model with no shared memory at all (Argo). Can you clarify which (if any) approach your VirtIO-on-Xen work will use?
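(For reference, the grant-table model mentioned above is the one where the backend can only map pages the frontend has explicitly granted. A hedged sketch of the backend side, assuming libxengnttab; the domid and grant reference are illustrative:)

    /* Sketch of the grant-table model (as PV drivers use), assuming
     * libxengnttab: the mapping only succeeds for pages the frontend has
     * explicitly granted.  The domid and grant reference are illustrative. */
    #include <stdint.h>
    #include <sys/mman.h>
    #include <xengnttab.h>

    static void *map_granted_page(uint32_t frontend_domid, uint32_t gref)
    {
        xengnttab_handle *xgt = xengnttab_open(NULL, 0);
        void *va;

        if (!xgt)
            return NULL;

        /* Only works if the frontend granted this reference to our domain. */
        va = xengnttab_map_grant_ref(xgt, frontend_domid, gref,
                                     PROT_READ | PROT_WRITE);

        /* xengnttab_unmap(xgt, va, 1) and xengnttab_close(xgt) omitted. */
        return va;
    }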
There are separate strands of work in Stratos looking at how we could further secure VirtIO for architectures with distributed backends (e.g. you may accept the block backend having access to the whole of memory, but an I2C multiplexer has different performance characteristics).
Currently the only thing we have prototyped is "fat virtqueues", which Arnd has been working on. Here the only shared memory actually required is the VirtIO config space and the relevant virtqueues.
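(To illustrate the idea only - this is not the actual prototype layout - a "fat" virtqueue carries its payload inline in the shared queue memory rather than pointing at buffers elsewhere in guest RAM, so the queue region is the only memory that needs sharing:)

    /* Hypothetical illustration only, not the real prototype layout: a
     * "fat" virtqueue element holds its payload inline in the shared queue
     * memory instead of referencing guest physical addresses. */
    #include <stdint.h>

    #define FAT_VQ_SLOT_BYTES 4096           /* illustrative fixed slot size */

    struct fat_vq_elem {
        uint32_t len;                        /* bytes of inline payload used */
        uint32_t flags;                      /* e.g. request vs. response */
        uint8_t  data[FAT_VQ_SLOT_BYTES];    /* copied in, never pointed to */
    };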
I think the "fat virtqueues" work is a positive path for investigation and I don't think shared memory between front and backend is hard requirement for those to function: a VirtIO-Argo transport driver would be able to operate with them without shared memory.
Other approaches have been discussed too, including using virtio-iommu to selectively make areas available to the backend, or using memory zoning so that, for example, network buffers are only allocated in a certain region of guest physical memory that is shared with the backend.
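(A rough sketch of the zoning idea, with a hypothetical pre-shared window: the frontend only hands out buffers from inside that window, so nothing outside it is ever exposed to the backend:)

    /* Rough sketch of memory zoning with a hypothetical pre-shared window:
     * the frontend allocates network buffers only from inside the window,
     * so the backend never needs access to any other guest memory. */
    #include <stddef.h>
    #include <stdint.h>

    struct shared_zone {
        uint8_t *base;   /* start of the region pre-shared with the backend */
        size_t   size;   /* size of that region */
        size_t   next;   /* simple bump-allocator cursor (illustrative) */
    };

    static void *zone_alloc(struct shared_zone *z, size_t len)
    {
        if (z->next + len > z->size)
            return NULL;                  /* out of shared space */
        void *buf = z->base + z->next;
        z->next += len;
        return buf;                       /* buffer lives inside the zone */
    }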
A more general idea: can we collect info on the various VirtIO-on-Xen approaches (since there is more than one) in a single place, including:
- key characteristics, differences
- who is involved
- status
- links to relevant threads, maybe
I'd propose to revive https://wiki.xenproject.org/wiki/Virtio_On_Xen
Thanks for the reminder, Marek -- I've just overhauled that page to give an overview of the several approaches in the Xen community to enabling VirtIO on Xen, and have included a first pass at the content you describe. I'm happy to be involved in improving it further.
From the Stratos point of view Xen is a useful proving ground for general VirtIO experimentation, being both a type-1 hypervisor and open source. Our ultimate aim is to have a high degree of code sharing for backends regardless of the hypervisor choice, so a guest can use a VirtIO device model without having to be locked into KVM.
Thanks, Alex - this context is useful.
If your technology choice is already fixed on the Xen hypervisor and portability isn't a concern, you might well just stick to the existing, well-tested Xen PV interfaces.
I wouldn't quite agree; there are additional reasons beyond portability to be looking at options other than the traditional Xen PV interfaces: e.g. an Argo-based interdomain transport for PV devices will enable fine-grained enforcement of Mandatory Access Control over the frontend/backend communication, and will not depend on XenStore, which is advantageous for Hyperlaunch / dom0less Xen deployment configurations.
thanks,
Christopher