Hi,
These patches have been sitting around as part of a larger series to
improve the support of Xen on AArch64. The second part of the work is
currently awaiting other re-factoring and build work to land, which will
make building a pure-Xen-capable QEMU easier. As that might take
some time and these patches are essentially ready I thought I'd best
push for merging them.
There are no fundamental changes since the last revision. I've
addressed most of the comments although I haven't expanded the use of
the global *fdt to other device models. I figured that work could be
done as and when models have support for type-1 hypervisors.
I have added some documentation to describe the feature and an
acceptance test which checks that the various versions of Xen can boot.
The only minor wrinkle is using a custom compiled Linux kernel due to
missing support in the distro kernels. If anyone can suggest a distro
which is currently well supported for Xen on AArch64 I can update the
test.
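For anyone wanting to try it, an invocation along the lines of the
guest-loader documentation in this series looks roughly like the
following; the load addresses are only examples and depend on the
board's memory map, and the Xen/kernel/initrd artefacts are of course
your own:

```console
$ qemu-system-aarch64 -machine virt,virtualization=on \
    -cpu cortex-a57 -m 4096 -serial mon:stdio \
    -kernel xen \
    -append "dom0_mem=1G,max:1G loglvl=all" \
    -device guest-loader,addr=0x47000000,kernel=Image,bootargs="console=hvc0" \
    -device guest-loader,addr=0x4a000000,initrd=rootfs.cpio
```

The guest-loader devices place the Dom0 kernel and initrd into guest
memory and describe them in the fdt chosen node for Xen to find.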
The following patches still need review:
- tests/avocado: add boot_xen tests
- docs: add some documentation for the guest-loader
- docs: move generic-loader documentation into the main manual
- hw/core: implement a guest-loader to support static hypervisor guests
Alex Bennée (7):
hw/board: promote fdt from ARM VirtMachineState to MachineState
hw/riscv: migrate fdt field to generic MachineState
device_tree: add qemu_fdt_setprop_string_array helper
hw/core: implement a guest-loader to support static hypervisor guests
docs: move generic-loader documentation into the main manual
docs: add some documentation for the guest-loader
tests/avocado: add boot_xen tests
docs/generic-loader.txt | 92 ---------
docs/system/generic-loader.rst | 117 +++++++++++
docs/system/guest-loader.rst | 54 +++++
docs/system/index.rst | 2 +
hw/core/guest-loader.h | 34 ++++
include/hw/arm/virt.h | 1 -
include/hw/boards.h | 1 +
include/hw/riscv/virt.h | 1 -
include/sysemu/device_tree.h | 17 ++
hw/arm/virt.c | 356 +++++++++++++++++----------------
hw/core/guest-loader.c | 145 ++++++++++++++
hw/riscv/virt.c | 20 +-
softmmu/device_tree.c | 26 +++
MAINTAINERS | 9 +-
hw/core/meson.build | 2 +
tests/acceptance/boot_xen.py | 117 +++++++++++
16 files changed, 718 insertions(+), 276 deletions(-)
delete mode 100644 docs/generic-loader.txt
create mode 100644 docs/system/generic-loader.rst
create mode 100644 docs/system/guest-loader.rst
create mode 100644 hw/core/guest-loader.h
create mode 100644 hw/core/guest-loader.c
create mode 100644 tests/acceptance/boot_xen.py
--
2.20.1
Hi All
We discussed low-speed devices, and there is a proposal to talk about the
SCMI server and perhaps RustVMM on the next call; these have been added to
the agenda.
There were some updates on the action item copied below; this item will now
be moved to Jira
- Ilias Apalodimas
<https://collaborate.linaro.org/display/~ilias.apalodimas@linaro.org> , Arnd
Bergmann <https://collaborate.linaro.org/display/~arnd@linaro.org> , Alex
Bennée <https://collaborate.linaro.org/display/~alex.bennee@linaro.org> -
chase Kernel on RPMB API, propose spec update to handle eMMC passthrough:
4th Feb 2021: need to talk to Ulf. Arnd says there is a hold-up; a proper
patch is needed for USFS but the author never followed up (this was up to
v7), and there is no proper interface. Arnd/Ilias to continue the
discussion with Ulf and perhaps Intel. Feb 18th: the API from Intel is too
detailed and not useful for RPMB; proposed to the list this week. We will
probably have to respin with a higher-level API for RPMB. If this does not
go in, OASIS might have to change. Alex: changing the driver model of
Linux is the best answer
Full notes can be found here
https://collaborate.linaro.org/display/STR/2021-02-18+Project+Stratos+Sync+…
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi Gerd,
I was in a discussion with the AGL folks today talking about approaches
to achieving zero-copy when running VirGL virtio guests. AIUI (which is
probably not very much) copies can be needed for a number of
reasons:
- the GPA not being mapped to a HPA that is accessible to the final HW
- the guest allocation of a buffer not meeting stride/alignment requirements
- data needing to be transformed for consumption by the real hardware?
any others? Is there an impedance mismatch between the different buffer
resource allocators in the guest and the host? Is that just a problem for
non-FLOSS blob drivers in the kernel?
I'm curious whether it's possible to measure the effect of these extra
copies and where they occur. Do all resources get copied from the guest
buffer to the host, or does this only occur when there is a mismatch in
the buffer requirements?
Are there any functions where I could add trace points to measure this?
If this occurs in the kernel I wonder if I could use an eBPF probe to
count the number of bytes copied?
Apologies for the wall of questions I'm still very new to the 3D side of
things ;-)
--
Alex Bennée
Hi All
We don't have an anchor topic for the agenda this week; that does not
appear to stop us from making good use of the time, but does anyone have
anything they would like to raise?
https://collaborate.linaro.org/display/STR/Stratos+Home
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi All
Thanks for a good call yesterday, meeting notes and Robin's RustVMM slides
available at
https://collaborate.linaro.org/display/STR/2021-02-04+Project+Stratos+Sync+…
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
Hi All
We have an interesting agenda item for the call this week;
- *Robin Randhawa (Arm) will talk about RustVMM*
Are there any other topics people want to raise?
Meeting: Thursday, 4 February
4:00 – 5:00 pm (GMT+00:00) United Kingdom Time
Virtual meeting: meet.google.com/uak-yrcj-tyd
https://collaborate.linaro.org/display/STR/Stratos+Home
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
On 15-01-21, 14:16, Arnd Bergmann via Stratos-dev wrote:
> You need a driver in the guest though that understands the
> device specific signaling and additional data.
> Let's look at a moderately complex example I picked at random,
> drivers/leds/leds-lp8501.c:
>
> I assume the idea would be to not replace the entire driver
> with a greybus specific one, but to reused as much as possible
> from the existing code. The driver has no interrupts but it needs
> to access a gpio line and some device specific configuration
> data, which it can get either from a platform_data or from DT
> properties.
>
> Passing such a device through greybus then requires at least
> these steps:
>
> * allocate a unique device:vendor ID pair
> * create a lp8501 specific manifest binding for that ID
> * for the host, create an lp8501 specific greybus host driver to
> - read the device tree in the host, convert into
> manifest format according to the binding
> - open the raw i2c device and gpio line from user space
> - create virtual devices for these two, describe them
> in the manifest
> * for the guest, create an lp8501 greybus device driver for the
> vendor:device ID pair, to
> - interpret the manifest, convert data into lp55xx_platform_data
> - instantiate a gpio controller with one gpio line,
> - allocate a gpio number for that controller, add it to the platform data
> - instantiate a i2c host
> - instantiate a i2c device on that host, using the platform_data
> and the "lp8501" i2c_device_id string.
>
> If a device has no DT properties or platform_data, and no gpio,
> reset, regulator, clock, or other dependendencies, some of the
> steps can be skipped, but at the minimum you still need device
> specific code in the guest to map the vendor:device ID to
> an i2c_device_id.
Right, I misunderstood it earlier when I thought you were talking about
the controller.
Greybus takes care of how an i2c message gets transferred and
translated to the host's controller driver, but the device sits above
this layer.
The device (like i2c memory or touchscreen) has its own driver and
protocols to follow, which don't have much to do with i2c (it can just
be spi as well with almost exactly the same driver). i2c here is just
a bus, as we have amba buses for ARM platforms (where we can directly
access registers). And the end device's driver will be there at the
guest and not host.
I don't think copying their properties to manifest adds any real value
here. We should do that part over DT itself, as the guest is going to
receive one from the host anyway. We just need to see how we divide
this information between DT and manifests, or maybe we can make
greybus work with DT as well (at the client side).
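For reference, a minimal Greybus manifest for an I2C-only module carries
very little device information anyway, which supports doing the rest over
DT. A sketch in the source format used by the gbsim tool; the descriptor
numbers, strings, and class/protocol values here are purely illustrative
and would need checking against greybus_protocols.h:

```ini
; Minimal manifest for a module exposing a single I2C connection
; (gbsim-style manifest source; all IDs invented for illustration)
[manifest-header]
version-major = 0
version-minor = 1

[interface-descriptor]
vendor-string-id = 1
product-string-id = 2

[string-descriptor 1]
string = Example Vendor

[string-descriptor 2]
string = Example I2C Module

[bundle-descriptor 1]
class = 0xa

; CPort speaking the Greybus I2C protocol
[cport-descriptor 1]
bundle = 1
protocol = 0x03
```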
At this point I would also like to ask whether we should get this
discussion going on the greybus list, especially with people like Greg KH
and Johan Hovold, as any changes to greybus would require their
approval anyway and they may be able to offer some ideas as well.
--
viresh
Hi All (inc upstream authors CC'ed),
There have been various discussions over the last few weeks about where
the development priorities for Stratos should be. I wanted to lay out a
summary of those discussions and where I think the focus is and what
open questions remain.
Multimedia
==========
With virtio-video approaching standardisation:
Subject: [RFC PATCH v5] virtio-video: Add virtio video device specification
Date: Wed, 20 Jan 2021 17:31:43 +0900
Message-Id: <20210120083143.766189-1-acourbot(a)chromium.org>
we think enabling this would be a good introduction to the challenges of
high bandwidth multimedia. We considered more advanced devices such as
cameras but thought that given the Linux kernel API is still evolving it
was too soon to try and stabilise a VirtIO specification - especially if
we want to avoid just making it ape the Linux API. virtio-gpu (including
virtio-wayland) already has a number of implementations across various
VMMs and hypervisors, so it doesn't make sense to add yet another
one to the mix. However virtio-video does share some similar problems
including needing to solve the management of memory across virtual
domains where the final location and alignment of memory are important.
Peter Griffin is leading this work and creating some cards shortly.
Broadly this will cover:
- Helping get the Linux FE (from Google's ChromeOS) up-streamed
- Implementing a standalone Backend (vhost-user, via QEMU)
- Architecture document for more complex deployments
The initial demo will involve terminating the backend on a KVM Host or
Xen Dom0. The architecture work will consider how the more complex
deployments would work (splitting domains, mapping to secure world etc)
and form the basis for future work.
Memory Isolation
================
We did a bunch of investigative work last cycle but generated rather
more questions than concrete answers. There are a number of avenues to
explore but currently there isn't a clear way forward for a general
purpose solution for the problem. There is ongoing work in the community
on solving the specific zero-copy problem for virtio-gpu and we hope to
learn more lessons with our virtio-video work. In the meantime a
potential copy-based solution has been proposed that works for
low-performance interfaces. Currently described as "Fat VirtQueues"
(name subject to
change) this embeds all data inside the virtqueues themselves. The major
limitation is that any data frames passed this way must be fully self
contained and not reference memory outside the queue.
This makes the isolation problem more tractable as the queue itself will
be the only thing that needs to be shared between virtual domains.
Arnd Bergmann will be leading this work which is currently captured in
the STR-25 card:
https://projects.linaro.org/browse/STR-25
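To make the idea concrete, here is a minimal sketch in C of what a "fat"
queue slot might look like. The structure names, sizes and fields are all
invented for illustration; they are not from any VirtIO spec draft:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: names and sizes are illustrative only. */
#define FATQ_SLOTS     8
#define FATQ_DATA_MAX  256   /* a payload must fit entirely in one slot */

struct fatq_slot {
    uint32_t len;                  /* bytes of inline payload */
    uint16_t flags;                /* e.g. device-writable; unused here */
    uint8_t  data[FATQ_DATA_MAX];  /* payload lives inside the queue itself */
};

struct fatq {
    struct fatq_slot slots[FATQ_SLOTS];
    uint16_t avail_idx;            /* next slot the driver will fill */
};

/*
 * Copy a request into the next slot; returns 0 on success, -1 if the
 * frame does not fit. The copy is the cost being traded for isolation:
 * the frame must be fully self-contained, so the struct fatq pages are
 * the only memory ever shared between the two virtual domains.
 */
int fatq_push(struct fatq *q, const void *buf, uint32_t len)
{
    if (len > FATQ_DATA_MAX)
        return -1;
    struct fatq_slot *s = &q->slots[q->avail_idx % FATQ_SLOTS];
    memcpy(s->data, buf, len);
    s->len = len;
    q->avail_idx++;
    return 0;
}
```

The point of the sketch is that fatq_push() always copies, but no
descriptor ever references memory outside the queue.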
Xen Work
========
We did a bit of work on Xen last cycle which was mostly housekeeping
work to fix regressions and issues booting up on ARM64 systems. We want
to continue the work here to make Xen our reference type-1 hypervisor
for VirtIO work. There is currently a patch series:
Subject: [PATCH V4 00/24] IOREQ feature (+ virtio-mmio) on Arm
Date: Tue, 12 Jan 2021 23:52:08 +0200
Message-Id: <1610488352-18494-1-git-send-email-olekstysh(a)gmail.com>
which we are helping review and test. It currently comes with its own
virtio-block device backend which can replace the Xen block device
approach. We plan to build on this work and enable QEMU as a generic
virtio backend for Xen ioreq devices as a general proving ground for
virtio backends. While it won't allow for the fastest virtio it will
give access to a broad range of backends thanks to QEMU's general-purpose
approach.
I'll be taking the lead on this work which is covered by STR-19:
https://projects.linaro.org/browse/STR-19
We are also looking at implementing a Xen mediator for the Arm Firmware
Framework. This is a general-purpose framework through which a hypervisor
can communicate with the system firmware using a common API. This avoids
the need for multiple firmware-aware implementations in the hypervisor
for accessing secure services. As long as the firmware provides the
for accessing secure services. As long as the firmware provides the
interface the hypervisor will be able to run on it.
Ruchika Gupta is leading this work under STR-23 which is part of the
broader trusted substrate initiative:
https://projects.linaro.org/browse/STR-23
SCMI Server
===========
The System Control and Management Interface (SCMI) provides a mechanism
for clients (e.g. kernels needing resources) to request hardware
resources from the system. The server usually sits in the secure
firmware layer and responds to secure calls from the kernel to turn
resources on and off. It is key to efficient power management as you
might for example want to turn clock sources off between decoded video
frames.
In a multi-domain system you have to mediate between a number of
potential users of these resources. For the non-primary domain you can
use a virtio-scmi device:
Subject: [PATCH v5] Add virtio SCMI device specification
Date: Wed, 27 May 2020 19:43:25 +0200
Message-ID: <20200527174325.9529-1-peter.hilber(a)opensynergy.com>
There is already a proposal for the kernel driver to go along with the
specification:
Subject: [RFC PATCH v2 00/10] firmware: arm_scmi: Add virtio transport
Date: Thu, 5 Nov 2020 22:21:06 +0100
Message-ID: <20201105212116.411422-1-peter.hilber(a)opensynergy.com>
So our work would be focused on helping those get upstream and working
on an open source reference implementation of the server in the backend.
The question of where the SCMI server should be implemented is an open
one.
The simplest would be a proof of concept user-space server which extends
the existing testing build. This would demonstrate the connection but
wouldn't be usable in production as there isn't currently a method for
user space to access the resource hierarchy maintained by the kernel.
Another option would be to terminate virtio-scmi inside the host kernel
where it could then be merged with the hosts own requests. However this
does seem like a horrific hack that embeds policy decisions in the
kernel.
The other two options are to enable the virtio backend for OP-TEE (where
the SCMI server can live) or to enable the SCMI server in the Zephyr RTOS,
which already has some experimental virtio support in preparation for a
Zephyr Dom0.
This work is being led by Vincent Guittot and can be followed from:
https://projects.linaro.org/browse/STR-4
VirtIO serial devices
=====================
There is a desire to implement more serial-like interfaces for virtio,
which are common for exposing hardware on embedded and mobile devices.
There are several options available, although currently only virtio-i2c
has a proposal for the standard:
Date: Fri, 8 Jan 2021 15:39:08 +0800
Message-Id: <dfb21780647c69519f01fb0afbbd18f780963af9.1610091344.git.jie.deng(a)intel.com>
Subject: [virtio-comment] [PATCH v7] virtio-i2c: add the device specification
however there have been a number of alternative proposals including
using virtio-greybus or virtio-rpmsg as general purpose multiplexer
transports for these sorts of low-bandwidth datagram services. Having a
virtio-i2c implementation would be useful for testing the fat virtqueue
concept, although both the existing virtio-rpmb and proposed virtio-scmi
daemons could also be pressed into service for this.
Currently we don't have anyone assigned to look at this so I think this
needs someone to step forward with a proposed use case to take this up.
Housekeeping
============
I'm planning on closing out STR-7 (Create a common virtio library for
use by programs implementing a backend) as I'm not sure what it would
achieve. We have implemented one C based backend using the libvhost code
inside the QEMU repository. Although not totally separate from the rest
of the source tree it could be made so with minimal effort if needed. In
the meantime Takahiro has enabled VirtIO inside Zephyr by adapting the
current Linux code into it.
The main contender for a common library comes from the rust-vmm project:
https://github.com/rust-vmm
and specifically the vhost-user-backend crate:
https://github.com/rust-vmm/vhost-user-backend/
There are a number of backends that have been implemented with it but it
probably requires someone with good Rust background to evaluate the
current state of the libraries. To my untrained eye there is still some
commonality in the handling that could be moved from the individual
daemons to make the core libraries easier to use. If we want to go
forward with Rust we should create a specific card that a Rust
expert could work on.
Summary
=======
Apologies for the long read and the delay getting this out, but hopefully
that gives a good overview of the thinking for the next cycle. Do shout
if I missed anything, and please come with your questions and comments
on the list and at the Stratos sync tomorrow afternoon.
--
Alex Bennée
On Wed, Jan 13, 2021 at 12:21 AM Stefano Stabellini via Stratos-dev
<stratos-dev(a)op-lists.linaro.org> wrote:
> On Tue, 12 Jan 2021, Alex Bennée wrote:
> >
> > I wanted to bounce some ideas around about our aim of limited memory
> > sharing.
> >
> > At the end of last year Windriver presented their shared memory approach
> > for hypervisor-less virtio. We have also discussed QC's iotlb approach.
> > So far there is no proposed draft of the VirtIO spec and there are
> > questions about how these shared memory approaches fit within the
> > existing Virtio memory model and how they would interact with a Linux
> > guest driver API to minimise the amount of copying needed as data moves
> > from a primary guest to a back-end.
> >
> > Given the performance requirements for high bandwidth multimedia devices
> > it feels like we need to get some working code published so we can
> > compare behaviour and implementations details. I think we are still a
> > fair way off in being able to propose any updates to the standard until
> > we can see the changes needed across guest APIs and get some measure of
> > performance and bottlenecks.
> >
> > However there are a range of devices we are interested in that are less
> > performance sensitive - e.g. SPI, I2C and other "serial" buses. They
> > would also benefit from having a minimal memory profile. Is it worth
> > considering addressing a separate simpler and less performance
> > orientated solution?
> >
> > Arnd suggested something that I'm going to call fat VirtQueues. The
> > idea being that both data and descriptor are stored in the same
> > VirtQueue structure. While it would necessitate copying data from guest
> > address space to the queue and back it could be kept to the lower levels
> > of the driver stack without the drivers themselves having to worry too
> > much about the details. With everything contained in the VirtQueue there
> > is only one bit of memory to co-ordinate between the primary guest and
> > service OS which makes isolation a lot easier.
> >
> > Of course this doesn't solve the problem for the more performance
> > sensitive applications but it would be a workable demonstration of
> > memory isolation across VMs and a useful suggestion in its own right.
> >
> > What do people think?
>
> I think it is a good idea: everyone will agree that the first step is to
> implement a solution that relies on memcpys. Anything smarter is best
> done as a second step and probably requires new hypervisor interfaces.
>
> From a performance perspective, whether we use a separate pre-shared
> buffer, or "fat VirtQueues" as Arnd suggested, the results should be
> very similar. So I think fat VirtQueues are a good way forward to me.
Ok, sounds good to me, too. I would also expect to see similar
performance between any approach using memcpy(). Even with the
current case of communication between two guests using a copy
in the hypervisor, it's probably not much faster than doing a copy in the
guest. Between various options for doing a copy in the guest, the
modified virtqueue should be conceptually cleaner than the others.
I'm still suspicious of page-flipping approaches or anything
that relies on an IOMMU to avoid the memcpy() for general virtqueues,
as that sounds like it will be slower and less secure than the memcpy()
because of the added complexity. The best approach that I imagine
using for some cases where the copy is too slow would be to extend
those devices to use "shared memory regions"[1] in addition to
virtqueues. This is already allowed in virtio-fs, virtio-video and
virtio-gpu, and may work for some, but not all, other device types
(e.g. not virtio-net or virtio-block, which do not lean towards this
model).
Arnd
[1] https://github.com/oasis-tcs/virtio-spec/blob/master/shared-mem.tex
On 1/12/21 8:47 AM, Arnd Bergmann via Stratos-dev wrote:
> On Tue, Jan 12, 2021 at 12:24 PM Bill Mills <bill.mills(a)linaro.org> wrote:
>> On 1/12/21 5:57 AM, Viresh Kumar via Stratos-dev wrote:
>>> On 16-12-20, 14:54, Arnd Bergmann wrote:
I've been staying out of this for a bit but I'll offer a
few cents' worth now.
>>>> The problem we get into though is once we try to make this
>>>> work for arbitrary i2c or spi devices. The kernel has around
>>>> 1400 such drivers, and usually we use a device tree description
>>>> based on a unique string for every such device plus additional
>>>> properties for things like gpio lines or out-of-band interrupts.
I think what you're saying is that we already have DT-based
drivers for existing hardware, and to abstract them with
Greybus would likely require a special Greybus shim or something
for every one of those.
>>>> If we want to use greybus manifests in place of device trees,
>>>> that means either needing to find a way to map DT properties
>>>> into the manifest, or have the same data in another format
>>>> for each device we might want to use behind greybus, and
>>>> adapting the corresponding drivers to understand this additional
>>>> format.
Maybe, maybe not. It depends on what level of abstraction
the guest/client needs to represent the device (e.g. i2c or
spi).
Greybus assumes the device hardware is on a module and not
"directly" accessible by the AP. That is, if the AP wants
to send a byte over an I2C device on a module, the only way
to do that is by encapsulating that request in a Greybus
message. It can't use (say) a register interface to cause
the byte to be sent.
In a virtualized environment though, you might *want* to
expose a more "raw" interface to the hardware. What I
mean is you might want the host/server to grant exclusive
access to a guest/client to the register space that controls
the actual hardware, avoiding the need for a shim layer
(and most likely extra memory copies).
For low speed peripherals that probably isn't critical,
but I think it's worth considering whether you want to
use the latter approach (exclusively, or in addition to
something that abstracts hardware like Greybus does).
Note: these comments are about how the Greybus protocols
work; the "raw" approach is different from Greybus in
that respect. The *discovery* of devices available to
guests is a different issue though, and would be more
focused on Greybus *manifests* (or some other mechanism).
>>> I am a bit confused about this. I don't think we need to expose all
>>> that information over the manifest.
Viresh is saying that Greybus abstracts the hardware,
so there's no need to expose the details. The Greybus
driver on the host is the only one that needs to know
the DT-like details of specific implementations.
>>> The SPI controller will be accessible by the host OS (lets say both
>>> host and guest run Linux), the host will give/pass a manifest to the
>>> guest and the guest will send simple commands like read/write to the
>>> host. The Guest doesn't need minute details of how the controller is
>>> getting programmed, which is the responsibility of the host side
>>> controller driver which will have all this information from the DT
>>> passed from bootloader anyway. And so the manifest shouldn't be
>>> required to have equivalent of all DT properties.
>>>
>>> Isn't it ?
>>>
>>
>> Yes for the interrupt for the SPI or I2C controller.
>>
>> I presume Arnd is talking about side band signals from the devices.
>> I2C and SPI don't have a client side concept of interrupt and the
>> originator side (trying to not use the word master) would have to poll
>> otherwise. So many devices hook up a side band interrupt request to a
>> GPIO. Likewise an I2C EEPROM may have a write protect bit that is
>> controlled by a GPIO.
>>
>> Coordinating this between a virtual I2C and a virtual GPIO bank would be
>> complicated to do in the manifest if each is a separate device.
I might be misunderstanding here but I *think* in the Greybus
case, the details of how all signals (including interrupts)
are implemented are the host's responsibility, and do not need
to be visible to the guest.
>> However if we expand the definition of "I2C virtual device" to have an
>> interrupt request line and a couple outputs, the details are in fact on
>> the host side and the guest does not need to understand it all.
Yes, for the Greybus I2C protocol (for example) an
interrupt is represented as a message originated from
the owner of the "real" hardware directed at the user
of the hardware. (From the module to the AP, but in
this case it would be from the host to the guest.) So
these details would be hidden and abstracted.
>> What would this mean for the 1400 devices in the kernel? Would we need
>> to add a Greybus binding to the existing DT binding? That sounds like
>> the wrong way. It would be nice to leverage the DT binding that was
>> already in the kernel.
>
> I believe the majority of the devices are fairly simple, and the main
> thing they need is a character string for identification to work around
> the lack of a device/vendor ID mechanism that PCI or USB use.
If Greybus protocols are used I think all I2C devices would
simply be Greybus I2C devices.
-Alex
> The kernel supports three string based identification methods
> at the moment. Take a minimal wrapper like
> drivers/iio/imu/bmi160/bmi160_i2c.c as an example, where we have
>
> static const struct i2c_device_id bmi160_i2c_id[] = {
> {"bmi160", 0},
> {}
> };
> MODULE_DEVICE_TABLE(i2c, bmi160_i2c_id);
>
> static const struct acpi_device_id bmi160_acpi_match[] = {
> {"BMI0160", 0},
> { },
> };
> MODULE_DEVICE_TABLE(acpi, bmi160_acpi_match);
>
> #ifdef CONFIG_OF
> static const struct of_device_id bmi160_of_match[] = {
> { .compatible = "bosch,bmi160" },
> { },
> };
> MODULE_DEVICE_TABLE(of, bmi160_of_match);
> #endif
>
> static struct i2c_driver bmi160_i2c_driver = {
> .driver = {
> .name = "bmi160_i2c",
> .acpi_match_table = ACPI_PTR(bmi160_acpi_match),
> .of_match_table = of_match_ptr(bmi160_of_match),
> },
> .probe = bmi160_i2c_probe,
> .id_table = bmi160_i2c_id,
> };
> module_i2c_driver(bmi160_i2c_driver);
>
> The "i2c_device_id" structure has a list of strings that is unique in
> the Linux kernel but not standardized. I assume a greybus driver would
> be used to map the numbers from the manifest into this OS specific
> string, but this has to be done for each supported device that can
> be attached to a greybus device.
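That per-device mapping could be as simple as a static table in the
guest-side greybus driver. A sketch of the idea; the numeric vendor and
product IDs are invented, and real kernel code would feed the result into
an i2c_board_info rather than returning a bare string:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical vendor:product -> i2c_device_id-name mapping;
 * all of the numeric IDs here are made up for illustration. */
struct gb_i2c_id_map {
    uint32_t vendor;
    uint32_t product;
    const char *i2c_name;   /* matches i2c_device_id.name in the driver */
};

static const struct gb_i2c_id_map gb_i2c_ids[] = {
    { 0x1234, 0x0001, "bmi160" },
    { 0x1234, 0x0002, "lp8501" },
};

/* Return the Linux i2c device-id string for a manifest ID pair,
 * or NULL if the device is not known to this table. */
const char *gb_i2c_lookup(uint32_t vendor, uint32_t product)
{
    for (size_t i = 0; i < sizeof(gb_i2c_ids) / sizeof(gb_i2c_ids[0]); i++) {
        if (gb_i2c_ids[i].vendor == vendor &&
            gb_i2c_ids[i].product == product)
            return gb_i2c_ids[i].i2c_name;
    }
    return NULL;
}
```

The downside Arnd mentions is visible here: every supported device needs
an entry added to a table like this one.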
>
> The "of_device_id" is a similar list but meant to be globally unique
> through the DT binding.
>
>> I hear that ACPI can bind to SPI and I2C. Is this true and how does
>> that work?? (I ask for a reference. I am NOT suggesting we bring ACPI
>> into this.)
>
> The acpi_device_id has the same purpose as of_device_id, but uses
> a different namespace and is only used on PC-style machines.
>
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | wc -l
> 1424
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | xargs git
> grep -l of_device_id | wc -l
> 876
> $ git grep -wl "i2c_driver\|spi_driver" drivers sound | xargs git
> grep -l acpi_device_id | wc -l
> 145
>
> Arnd
>