On Thu, 16 Sep 2021 15:01:08 +0100 Lorenzo Pieralisi lorenzo.pieralisi@arm.com wrote:
On Thu, Sep 16, 2021 at 02:51:14PM +0100, Jonathan Cameron wrote:
On Thu, 16 Sep 2021 11:32:44 +0100 Lorenzo Pieralisi lorenzo.pieralisi@arm.com wrote:
On Tue, Aug 31, 2021 at 01:46:19PM +0100, Jonathan Cameron wrote:
On Thu, 26 Aug 2021 14:16:02 +0000 Jonathan Cameron via Linaro-open-discussions linaro-open-discussions@op-lists.linaro.org wrote:
On Fri, 20 Aug 2021 11:48:41 +0100 Lorenzo Pieralisi lorenzo.pieralisi@arm.com wrote:
On Wed, Aug 18, 2021 at 08:05:18PM +0800, Jammy Zhou wrote:
> If we move it to 1st or 2nd, is there any topic to discuss? Otherwise,
> maybe we can cancel it for this month.
I would be _extremely_ grateful if Jonathan could run a session on his series:
https://lore.kernel.org/linux-pci/20210804161839.3492053-1-Jonathan.Cameron@...
in preparation for LPC.
Sure - would be good to get my thoughts in order on this and doing it for next week will stop me leaving it all to the last minute.
Actually, I wanted to ask if there is a kernel tree/branch I can pull in order to review those patches, I am struggling to find a commit base to apply them.
My bad. I got lazy in the build-up to vacation and didn't put one up anywhere. The various trees involved are rather too dynamic for me to just point at them and say "apply these series" (there are merge conflicts to resolve). Naughty me....
Currently wading through the message backlog, but I will aim to have a branch up sometime tomorrow, ideally with some more detailed instructions on getting it up and running.
Hi Lorenzo / All,
https://github.com/hisilicon/kernel-dev/tree/doe-spdm-v1 rebased to 5.14-rc7
https://github.com/hisilicon/qemu/tree/cxl-hacks rebased to qemu/master as of earlier today.
For the qemu side of things you need to be running spdm_responder_emu --trans PCI_DOE from https://github.com/DMTF/spdm-emu first (it acts as the server, with qemu acting as a client). Various parameters allow you to change the algorithms advertised, and the kernel code should work for all the ones CMA mandates (but nothing beyond that for now).
For the cxl device the snippet of qemu command line needed is:

-device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true
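To tie the two halves together, the launch sequence might look something like the sketch below. The spdm-emu build path, the qemu binary name, and the elided qemu options are assumptions about a local checkout, not instructions from the series; only the --trans PCI_DOE flag and the -device snippet come from this thread.

```shell
# Start the SPDM responder first; it acts as the server, and the
# SPDM-enabled qemu device connects to it as a client.
cd spdm-emu/build/bin          # assumed build layout
./spdm_responder_emu --trans PCI_DOE &

# Then launch qemu built from the cxl-hacks branch with the
# SPDM-enabled CXL type 3 device. Other -machine/-object/-device
# options for the CXL topology are elided here.
qemu-system-x86_64 \
    ... \
    -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true
```

Ordering matters: if the responder isn't listening when the guest device comes up, the DOE exchanges have nothing to talk to.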
Otherwise much the same as https://people.kernel.org/jic23/
Build at least the cxl_pci driver as a module, as we need to poke the certificate into the keyring before the driver loads (find the cert in the spdm_emu tree). Instructions to do that with keyctl and evmctl are in the cover letter of the patch series.
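The authoritative steps are in the cover letter; as a very rough sketch of the shape of it, something like the following, where the keyring name and certificate path are illustrative assumptions and not the actual instructions:

```shell
# Create a keyring and load the responder's certificate into it
# before the driver binds; "_cma" and the cert path are examples only.
keyctl newring _cma @u
keyctl padd asymmetric "" %keyring:_cma < path/to/spdm_emu/cert.der

# Only then load the module so CMA can find the cert at probe time.
modprobe cxl_pci
```

If the cert isn't in place before probe, the CMA authentication step will have nothing to verify the responder against.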
Hopefully I'll find some time next week to put together some proper instructions and email them in reply to the original posting.
Hi Jonathan,
Hi Lorenzo,
coming back to this, it'd be great if, within the LPC session, we could at least float some ideas on how a VM can use the CMA/SPDM sessions (eg PV view). There are things we won't be able to discuss but at least getting a cross-arch feeling would be ideal.
Just to check, do you mean for use with VFs that have their own DOE mailboxes? That case should look much like devices in the hypervisor / a non-virtualized OS: we just pass that bit of config space straight through, so no MITM attacks on the exchange beyond denial of service. (I'm hoping you mean this one, as otherwise I might need a few more hours :)
There might be some value in emulating a DOE to chat to the hypervisor and request measurements etc. even when it does not provide a per-VF CMA DOE. We could do this in lots of other ways, but perhaps reusing what we have to implement anyway makes sense and avoids having to define a new PV interface. It 'might' be possible to have multiple end-to-end secure sessions running at the same time, but I still have work to do to understand that bit of the SPDM spec (chapter 11). From an initial read I don't think CMA defines the transport-specific stuff necessary, as there isn't a session ID field.
There was a little bit of discussion on one of the lists about using SPDM as a means to establish that an emulated device provided by a hypervisor (particularly when secure VMs are involved) was indeed being emulated by whoever the guest thinks it is. If you were thinking along those lines, I'm far from clear how it would actually work unless the emulation is in a Trusted Compute Base rather than the normal hypervisor, and I'm not sure how the secure VM would be able to tell that no part of the exchange was intercepted.
Were you thinking one of the above, or something different?
I was thinking about all of the above - to understand the direction of travel we are taking (and this from a cross-arch perspective).
There are other bits we _can't_ talk about that are relevant in this space too - CMA/SPDM is a way for me to at least understand the direction we are likely to take.
Reworded: CMA/SPDM in a VM - what for and how.
It seems like it is something you will have to cover anyway, so all good.
Also, on my side, DOE ownership across SW exception levels is a key question to be discussed.
Topic number 1 in the draft slides :)
Great, looking forward to that ;-)
Slides are up on the plumbers site. I'm not happy with the potential solutions for ACPI lockout of DOEs and how they will interact with kernel first hotplug management. I'll try and find some time to revise that section before Thursday.
https://linuxplumbersconf.org/event/11/contributions/1089/
All feedback of course welcome. Note I'm assuming we won't get to the last few topics of the main deck, even ignoring the backup slides, but Dan and I are keen to push things forward so we cover at least some of the CMA topic.
The aim here is probably to get people thinking about the topics, then to reconvene at some later date and carry on the discussion on the mailing lists in order to make real progress.
I will probably suggest we separate out the DOE ACPI etc topic as a discussion to take forward with a small group to refine a suitable code first proposal to whichever specifications need it.
Thanks,
Jonathan
Thanks, Lorenzo
Talk soon :), Lorenzo