On 1/12/21 5:57 AM, Viresh Kumar via Stratos-dev wrote:
> On 16-12-20, 14:54, Arnd Bergmann wrote:
>> The problem we get into though is once we try to make this
>> work for arbitrary i2c or spi devices. The kernel has around
>> 1400 such drivers, and usually we use a device tree description
>> based on a unique string for every such device plus additional
>> properties for things like gpio lines or out-of-band interrupts.
>>
>> If we want to use greybus manifests in place of device trees,
>> that means either finding a way to map DT properties into the
>> manifest, or having the same data in another format for each
>> device we might want to use behind greybus, and adapting the
>> corresponding drivers to understand this additional format.
>
> I am a bit confused about this. I don't think we need to expose all
> that information over the manifest.
>
> The SPI controller will be accessible by the host OS (let's say both
> host and guest run Linux); the host will pass a manifest to the
> guest, and the guest will send simple commands like read/write to the
> host. The guest doesn't need the minute details of how the controller
> is programmed; that is the responsibility of the host-side controller
> driver, which has all of this information from the DT passed by the
> bootloader anyway. So the manifest shouldn't need to carry the
> equivalent of all DT properties.
>
> Isn't it?
>
>
Yes, that holds for the interrupt of the SPI or I2C controller itself.
I presume Arnd is talking about side-band signals from the devices.
I2C and SPI have no client-side concept of an interrupt, so the
originator side (trying not to use the word master) would otherwise
have to poll. Many devices therefore hook up a side-band interrupt
request to a GPIO. Likewise, an I2C EEPROM may have a write-protect
bit that is controlled by a GPIO.
Coordinating this between a virtual I2C device and a virtual GPIO bank
would be complicated to do in the manifest if each is a separate
device. However, if we expand the definition of an "I2C virtual
device" to include an interrupt request line and a couple of outputs,
the details stay on the host side and the guest does not need to
understand them all.
What would this mean for the 1400 such drivers in the kernel? Would we
need to add a Greybus binding alongside each existing DT binding? That
sounds like the wrong approach; it would be nicer to leverage the DT
bindings already in the kernel.
I hear that ACPI can bind to SPI and I2C devices. Is this true, and
how does that work? (I ask for a reference; I am NOT suggesting we
bring ACPI into this.)
It would be great if we had connector-based DT in the kernel. Then you
could generate a DT fragment for the virtual multifunction vPCI device
and apply it at Greybus enumeration time.
Didn't Project Ara have this problem? Did it have a solution?
-- Bill
Hi,
I wanted to bounce around some ideas about our aim of limited memory
sharing.
At the end of last year Wind River presented their shared-memory
approach for hypervisor-less virtio. We have also discussed QC's IOTLB
approach. So far there is no proposed draft of changes to the VirtIO
spec, and there are questions about how these shared-memory approaches
fit within the existing VirtIO memory model and how they would
interact with a Linux guest driver API to minimise the amount of
copying needed as data moves from a primary guest to a back-end.
Given the performance requirements for high-bandwidth multimedia
devices, it feels like we need to get some working code published so
we can compare behaviour and implementation details. I think we are
still a fair way off being able to propose any updates to the standard
until we can see the changes needed across guest APIs and get some
measure of performance and bottlenecks.
However, there is a range of devices we are interested in that are
less performance-sensitive - e.g. SPI, I2C and other "serial" buses.
They would also benefit from having a minimal memory profile. Is it
worth considering a separate, simpler and less performance-orientated
solution for them?
Arnd suggested something that I'm going to call a fat VirtQueue. The
idea is that both data and descriptors are stored in the same
VirtQueue structure. While it would necessitate copying data from
guest address space into the queue and back, the copying could be kept
to the lower levels of the driver stack without the drivers themselves
having to worry too much about the details. With everything contained
in the VirtQueue there is only one region of memory to co-ordinate
between the primary guest and the service OS, which makes isolation a
lot easier.
Of course this doesn't solve the problem for the more
performance-sensitive applications, but it would be a workable
demonstration of memory isolation across VMs and a useful suggestion
in its own right.
What do people think?
--
Alex Bennée
On 12/10/20 2:20 PM, Arnd Bergmann via Stratos-dev wrote:
> On Tue, Dec 8, 2020 at 8:12 AM Viresh Kumar via Stratos-dev
> <stratos-dev(a)op-lists.linaro.org> wrote:
>>
>> Hi Guys,
>>
>> There are offline discussions going on to assess the possibility of
>> re-using the Linux kernel Greybus framework for Virtio [1] use case,
>> where we can control some of the controllers on the back-end (Host),
>> like SPI, I2C, GPIO, etc, from front-end (VM), using the already well
>> defined Greybus specification [2].
>>
>> The Greybus specification and kernel source were initially developed
>> for Google's Project ARA [3], and the source code was merged into
>> the mainline kernel a long time ago (in drivers/greybus/ and
>> drivers/staging/greybus/). You can find more information about how
>> Greybus works in this LWN article [4].
>>
>> Greybus broadly provides two distinct features:
>>
>> - Device discovery: at runtime, with the help of a manifest file
>> (think of it like DT, though it has a different format). This helps
>> the user of the hardware to identify the capabilities of the remote
>> hardware, which it can use.
>>
>> - Remote control/operation: of the IPs present on the remote hardware,
>> using firmware/OS independent operations, these are already well
>> defined for a lot of device types and can be extended if required.
>>
>> We wanted to share this over email to get some discussion going, so it
>> can be discussed later on the call.
>>
>> Alex Elder (Cc'd) is one of the maintainers of the Greybus core in
>> the Linux kernel; I have worked on a wide variety of it and maintain
>> some of it. Both of us worked on Project ARA, would like to see
>> Greybus used in other applications, and would like to contribute
>> towards that.
>
> I think the main capability this would add, compared to having a
> simple virtio device per bus, is that you can have a device composed
> of multiple back-ends, e.g. an i2c slave plus a set of GPIOs tied
> together for one functionality; this is something we did not discuss
> in the call today. The downside is that for each remote device we'd
> still need to add a binding and a driver to make use of it.
In fact, Greybus has the notion of a "bundle" of connections
exactly for this purpose. So really, a device is represented
by a bundle of one or more connections (CPorts). Each connection
uses a protocol that is specific to a service it provides. Some
services represent primitive hardware (like I2C or GPIO or UART).
But for example the camera has one CPort representing management
and another representing data from the camera.
Greybus drivers register with the Greybus core, and they provide
a match table that defines what bundles (devices) should be
associated with the driver when they are probed. The bundles
and connections, etc. are defined in a module's manifest; for
a bundle this includes its vendor id, product id, and class,
which are used in matching it with a Greybus device driver.
So basically the manifest provides an encapsulated description
of hardware functionality, and built into its design is a way
to match that hardware with a (Greybus) driver. This could be
adapted for other environments.
As an aside, let me highlight something:
- A Greybus manifest describes the hardware available in a module
- A manifest describes one or more Greybus bundles, each of
  which represents a device
- A Greybus device driver has a way to identify which Greybus
  bundle it should be bound with
- A Greybus bundle (device) is implemented with one or more
  connections, each using a particular protocol
All, some, or none of these might be what's really wanted
here, but all are part of and possibly implied by the
term "Greybus." This is why I ask for clarity and precision
about what is really required.
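For concreteness, a Greybus manifest source in the INI-style .mnfs format consumed by Project Ara's "manifesto" tool looks roughly like the following. The descriptor section names are from memory, and the class/protocol numbers should be checked against the Greybus specification before relying on them:

```
[manifest-header]
version-major = 0
version-minor = 1

[interface-descriptor]
vendor-string-id = 1
product-string-id = 2

[string-descriptor 1]
string = Example Vendor

[string-descriptor 2]
string = Example Module

; One bundle (device); 0x0a is the bridged-PHY class in the spec
[bundle-descriptor 1]
class = 0x0a

; One connection within the bundle, speaking the I2C protocol
[cport-descriptor 1]
bundle = 1
protocol = 0x03
```

The tool compiles this into the binary manifest that the module hands over at enumeration time.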
Anyway, the questions I have are more about whether Greybus as it
exists now aligns well with this application.
Does the Greybus manifest address what's required to provide
virtualized access to real hardware via VirtIO? Does it
limit what could be done? Does it provide functionality
beyond what is needed? How is this better or worse than
using Device Tree (for example)? Is there a more natural
way for VirtIO to advertise available hardware?
To be clear, I'm not trying to discourage using Greybus here.
But as I said, I'm viewing things through a Greybus lens. I'm
working to understand what the Stratos "model" looks like so I
can bridge the divide in my mind between that and Greybus.
-Alex
> The alternative is to use a device tree to describe these to
> the guest kernel at boot time. The advantage is that we only need a
> really simple driver for each type of host controller (i2c, spi,
> ...), and describing the actual devices works just as before, using
> the existing DT bindings to bind the attached device to a driver and
> to add auxiliary information with references to other devices (gpio,
> irq, device settings), at the cost of needing to configure them at
> boot time.
>
> Arnd
>
Hi All
Do we have agenda items for this week's call?
https://collaborate.linaro.org/display/STR/Stratos+Home
Mike
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative, the rest follows"
Hi All
Thanks for the great call yesterday [1]
A reminder we will skip the December 24th call, and I am looking for agenda
items and any updates to the notes.
If Linaro members could think about the potential value of Stratos and
other major themes for our mid-cycle review, which is coming up in January,
it would help. I think that with your support we could achieve a lot more
in the next cycle.
Mike
[1]
https://collaborate.linaro.org/display/STR/2020-12-10+Project+Stratos+Sync+…
--
Mike Holmes | Director, Foundation Technologies, Linaro
Mike.Holmes(a)linaro.org <mike.holmes(a)linaro.org>
"Work should be fun and collaborative; the rest follows."
On Tue, Dec 8, 2020 at 8:12 AM Viresh Kumar via Stratos-dev
<stratos-dev(a)op-lists.linaro.org> wrote:
>
> Hi Guys,
>
> There are offline discussions going on to assess the possibility of
> re-using the Linux kernel Greybus framework for Virtio [1] use case,
> where we can control some of the controllers on the back-end (Host),
> like SPI, I2C, GPIO, etc, from front-end (VM), using the already well
> defined Greybus specification [2].
>
> The Greybus specification and kernel source were initially developed
> for Google's Project ARA [3], and the source code was merged into
> the mainline kernel a long time ago (in drivers/greybus/ and
> drivers/staging/greybus/). You can find more information about how
> Greybus works in this LWN article [4].
>
> Greybus broadly provides two distinct features:
>
> - Device discovery: at runtime, with the help of a manifest file
> (think of it like DT, though it has a different format). This helps
> the user of the hardware to identify the capabilities of the remote
> hardware, which it can use.
>
> - Remote control/operation: of the IPs present on the remote hardware,
> using firmware/OS independent operations, these are already well
> defined for a lot of device types and can be extended if required.
>
> We wanted to share this over email to get some discussion going, so it
> can be discussed later on the call.
>
> Alex Elder (Cc'd) is one of the maintainers of the Greybus core in
> the Linux kernel; I have worked on a wide variety of it and maintain
> some of it. Both of us worked on Project ARA, would like to see
> Greybus used in other applications, and would like to contribute
> towards that.
I think the main capability this would add, compared to having a
simple virtio device per bus, is that you can have a device composed
of multiple back-ends, e.g. an i2c slave plus a set of GPIOs tied
together for one functionality; this is something we did not discuss
in the call today. The downside is that for each remote device we'd
still need to add a binding and a driver to make use of it.
The alternative is to use a device tree to describe these to
the guest kernel at boot time. The advantage is that we only need a
really simple driver for each type of host controller (i2c, spi,
...), and describing the actual devices works just as before, using
the existing DT bindings to bind the attached device to a driver and
to add auxiliary information with references to other devices (gpio,
irq, device settings), at the cost of needing to configure them at
boot time.
Arnd
Hi Guys,
There are offline discussions going on to assess the possibility of
re-using the Linux kernel Greybus framework for Virtio [1] use case,
where we can control some of the controllers on the back-end (Host),
like SPI, I2C, GPIO, etc, from front-end (VM), using the already well
defined Greybus specification [2].
The Greybus specification and kernel source were initially developed
for Google's Project ARA [3], and the source code was merged into
the mainline kernel a long time ago (in drivers/greybus/ and
drivers/staging/greybus/). You can find more information about how
Greybus works in this LWN article [4].
Greybus broadly provides two distinct features:
- Device discovery: at runtime, with the help of a manifest file
  (think of it like a DT, though it has a different format). This
  helps the user of the hardware identify the capabilities of the
  remote hardware.
- Remote control/operation: of the IPs present on the remote
  hardware, using firmware/OS-independent operations; these are
  already well defined for a lot of device types and can be extended
  if required.
We wanted to share this over email to get some discussion going, so it
can be discussed later on the call.
Alex Elder (Cc'd) is one of the maintainers of the Greybus core in
the Linux kernel; I have worked on a wide variety of it and maintain
some of it. Both of us worked on Project ARA, would like to see
Greybus used in other applications, and would like to contribute
towards that.
--
viresh
[1] https://collaborate.linaro.org/display/STR/Virtio+Interfaces
[2] https://github.com/projectara/greybus-spec
[3] https://en.wikipedia.org/wiki/Project_Ara
[4] https://lwn.net/Articles/715955/
Nataliya Korovkina via Stratos-dev <stratos-dev(a)op-lists.linaro.org> writes:
> Hello,
>
> I'm going to look into the STR-11 task, specifically into Zephyr Dom0
> on Cortex-A. Will be glad to synchronize with other people who watch
> the task as well.
Hi Nataliya,
That's awesome :-)
Can I introduce you to Akashi-san (cc'ed), who has also been looking
at Zephyr on Xen and, I believe, already has some patches to make
things work better? I assume you already know Stefano, who has done a
bunch of the work scoping out this use case in the STR-11 card.
I have a few questions:
- Are you also interested in the R-profile deployments (where the VMM
  sits in its own dedicated safety island)?
- What platform are you considering for your implementation?
We also have a regular open fortnightly Stratos call if you want to sync up
with others in the community and discuss any technical issues.
Thanks,
--
Alex Bennée