Hi All,
I suspect this week's Stratos sync will be the last one of the year, as we
are about to head into the holiday season. Does anyone have any topics
they want to discuss?
--
Alex Bennée
Currently the GPIO Aggregator does not support interrupts. This means
that kernel drivers going from a GPIO to an IRQ using gpiod_to_irq(),
and userspace applications using line events do not work.
Add interrupt support by providing a gpio_chip.to_irq() callback, which
just calls into the parent GPIO controller.
Note that this does not implement full interrupt controller (irq_chip)
support, so using e.g. gpio-keys with "interrupts" instead of "gpios"
still does not work.
Signed-off-by: Geert Uytterhoeven <geert+renesas(a)glider.be>
---
I would prefer to avoid implementing irq_chip support until there is a
real use case for it.
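For reference, this is the path a consumer driver would take to pick up the
forwarded interrupt. A minimal sketch, assuming a hypothetical "frob" device
(the device, handler, and con_id names are made up for illustration; only
gpiod_to_irq()/devm_request_irq() are the real APIs involved):

```
/* Hypothetical consumer of an aggregated line's IRQ. With the new
 * to_irq() callback in place, gpiod_to_irq() on the aggregator
 * resolves to the parent GPIO controller's interrupt. */
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t frob_irq_handler(int irq, void *data)
{
	/* ... handle the edge ... */
	return IRQ_HANDLED;
}

static int frob_probe(struct platform_device *pdev)
{
	struct gpio_desc *desc;
	int irq;

	desc = devm_gpiod_get(&pdev->dev, "frob", GPIOD_IN);
	if (IS_ERR(desc))
		return PTR_ERR(desc);

	irq = gpiod_to_irq(desc);	/* forwarded by gpio_fwd_to_irq() */
	if (irq < 0)
		return irq;

	return devm_request_irq(&pdev->dev, irq, frob_irq_handler,
				IRQF_TRIGGER_FALLING, "frob", NULL);
}
```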
This has been tested with gpio-keys and gpiomon on the Koelsch
development board:
- gpio-keys, using a DT overlay[1]:
$ overlay add r8a7791-koelsch-keyboard-controlled-led
$ echo gpio-aggregator > /sys/devices/platform/frobnicator/driver_override
$ echo frobnicator > /sys/bus/platform/drivers/gpio-aggregator/bind
$ gpioinfo frobnicator
gpiochip12 - 3 lines:
line 0: "light" "light" output active-high [used]
line 1: "on" "On" input active-low [used]
line 2: "off" "Off" input active-low [used]
$ echo 255 > /sys/class/leds/light/brightness
$ echo 0 > /sys/class/leds/light/brightness
$ evtest /dev/input/event0
- gpiomon, using the GPIO sysfs API:
$ echo keyboard > /sys/bus/platform/drivers/gpio-keys/unbind
$ echo e6055800.gpio 2,6 > /sys/bus/platform/drivers/gpio-aggregator/new_device
$ gpiomon gpiochip12 0 1
[1] "ARM: dts: koelsch: Add overlay for keyboard-controlled LED"
https://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers.git/c…
---
drivers/gpio/gpio-aggregator.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/gpio/gpio-aggregator.c b/drivers/gpio/gpio-aggregator.c
index e9671d1660ef4b40..869dc952cf45218b 100644
--- a/drivers/gpio/gpio-aggregator.c
+++ b/drivers/gpio/gpio-aggregator.c
@@ -371,6 +371,13 @@ static int gpio_fwd_set_config(struct gpio_chip *chip, unsigned int offset,
return gpiod_set_config(fwd->descs[offset], config);
}
+static int gpio_fwd_to_irq(struct gpio_chip *chip, unsigned int offset)
+{
+ struct gpiochip_fwd *fwd = gpiochip_get_data(chip);
+
+ return gpiod_to_irq(fwd->descs[offset]);
+}
+
/**
* gpiochip_fwd_create() - Create a new GPIO forwarder
* @dev: Parent device pointer
@@ -411,7 +418,8 @@ static struct gpiochip_fwd *gpiochip_fwd_create(struct device *dev,
for (i = 0; i < ngpios; i++) {
struct gpio_chip *parent = gpiod_to_chip(descs[i]);
- dev_dbg(dev, "%u => gpio-%d\n", i, desc_to_gpio(descs[i]));
+ dev_dbg(dev, "%u => gpio %d irq %d\n", i,
+ desc_to_gpio(descs[i]), gpiod_to_irq(descs[i]));
if (gpiod_cansleep(descs[i]))
chip->can_sleep = true;
@@ -429,6 +437,7 @@ static struct gpiochip_fwd *gpiochip_fwd_create(struct device *dev,
chip->get_multiple = gpio_fwd_get_multiple_locked;
chip->set = gpio_fwd_set;
chip->set_multiple = gpio_fwd_set_multiple_locked;
+ chip->to_irq = gpio_fwd_to_irq;
chip->base = -1;
chip->ngpio = ngpios;
fwd->descs = descs;
--
2.25.1
Hi Bartosz,
This patch adds Rust bindings for libgpiod v2.0. They are already partially
tested with the virtio Rust backend I am developing, which uses them to talk
to the host kernel.
This is based off the next/post-libgpiod-2.0 branch.
I haven't added any mock tests for this yet, and I am not sure exactly how I
am expected to add them. I did see what you mentioned in your patchset about
mock tests vs the gpio-sim stuff. Rust also has its own test framework, and I
am not sure if that should be used instead, or something else.
Since I am posting this publicly for the first time, it is still named V1. I
have not made significant changes to the code since last time, but have just
divided it into multiple files.
--
Viresh
Viresh Kumar (2):
libgpiod: Generate rust FFI bindings
libgpiod: Add rust wrappers
.gitignore | 6 +
bindings/rust/Cargo.toml | 14 +
bindings/rust/build.rs | 60 ++++
bindings/rust/src/bindings.rs | 16 ++
bindings/rust/src/chip.rs | 197 +++++++++++++
bindings/rust/src/edge_event.rs | 78 +++++
bindings/rust/src/event_buffer.rs | 59 ++++
bindings/rust/src/info_event.rs | 70 +++++
bindings/rust/src/lib.rs | 268 +++++++++++++++++
bindings/rust/src/line_config.rs | 431 ++++++++++++++++++++++++++++
bindings/rust/src/line_info.rs | 186 ++++++++++++
bindings/rust/src/line_request.rs | 218 ++++++++++++++
bindings/rust/src/request_config.rs | 118 ++++++++
bindings/rust/wrapper.h | 2 +
14 files changed, 1723 insertions(+)
create mode 100644 bindings/rust/Cargo.toml
create mode 100644 bindings/rust/build.rs
create mode 100644 bindings/rust/src/bindings.rs
create mode 100644 bindings/rust/src/chip.rs
create mode 100644 bindings/rust/src/edge_event.rs
create mode 100644 bindings/rust/src/event_buffer.rs
create mode 100644 bindings/rust/src/info_event.rs
create mode 100644 bindings/rust/src/lib.rs
create mode 100644 bindings/rust/src/line_config.rs
create mode 100644 bindings/rust/src/line_info.rs
create mode 100644 bindings/rust/src/line_request.rs
create mode 100644 bindings/rust/src/request_config.rs
create mode 100644 bindings/rust/wrapper.h
--
2.31.1.272.g89b43f80a514
Hi Ilias/Akashi-san,
So after the call earlier this week, Mike asked me to write up an EPIC for
the measurement work. I think this covers the deployments we want to
measure, but any feedback is welcome. The Xen documentation talks rather
euphemistically about Open vSwitch, but I don't know if that is just the
management name for using the same network routing internals as the
other eBPF ones.
Anyway any thoughts?
━━━━━━━━━━━━━━━━━━━━━━━
STRATOS XDP MEASURING
Alex Bennée
━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
─────────────────
1. Abstract
2. Networking Setups
.. 1. Test Setup
.. 2. Potential Packet Paths
.. 3. Host Networking
.. 4. KVM Guest with vhost networking
.. 5. Pass-through (SR-IOV or virtualised HW)
.. 6. Open vSwitch routing (Xen)
1 Abstract
══════════
As we move network endpoints into different VM configurations we need
to understand the costs and latency effects those choices will have.
With that understanding we can then consider various approaches to
optimising packet flow through virtual machines.
2 Networking Setups
═══════════════════
2.1 Test Setup
──────────────
The test setup will require two machines. The test controller will be the
source of the test packets and will measure the round-trip latency of
getting a reply from the test client. The test client will be set up in
multiple configurations so the latency can be compared.
+-----------------------+ +-----------------------+
|c1AB | |c1AB |
| Test Control | | Test Client |
| | | |
+-+--------+-+--------+-+ +-+--------+-+--------+-+
|{mo} | |{mo} | |{mo} | |{mo} |
| eth0 | | eth1 | | eth1 | | eth0 |
|cRED | |cPNK | |cPNK | |cRED |
+--------+ +--------+ +--------+ +--------+
| ^ ^ |
| | test link | |
: +---------------------------+ :
| 10GbE |
| |
/--------------------------------------------=-----------\
| LAN |
\---------------------------------------------=----------/
2.2 Potential Packet Paths
──────────────────────────
For each experiment we need to measure the latency of 3 different
packet reflectors: a simple ping-pong running via one of:
• xdp_xmit - lowest latency, turned around at the driver
• xdp_redir - bypass the Linux networking stack to user-space
• xdp_pass - normal packet path to a conventional AF_INET socket
+--------------------------------------------------------------------------+
|cDED |
| /---------------\ /---------------\ |
| |cGRE | |cGRE | |
| | Test | | Test | |
| | Program | | Program | |
| | | | | |
| +---------------+ +---------------+ user-space |
| |cYEL | |cCEF | |
| | AF_INET | | AF_XDP | |
| | | | | |
| +---------------+ +---------------+ |
| ^ ^ |
| | | |
| : : |
+--------------------------------------------------------------------------+
: :
| |
+--------------------------------------------------------------------------+
|cBAD | | |
| +---------------+ : 2. xdp_redir |
| |cYEL | | kernel-space |
| | | 3. xdp_pass +---------+ |
| | Linux | <-----=------ | XDP | ---=----+ |
| | Networking | +----+---------+----+ | 1. xdp_xmit |
| | stack | |cRED | | |
| | | | Driver | <--+ |
| | | | | |
| \---------------/ \-------------------/ |
| |
+--------------------------------------------------------------------------+
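The three paths above map directly onto XDP program return codes. A minimal
sketch of such a reflector (the map name, the flow-match placeholder, and the
overall program are illustrative, not a working reflector; XDP_TX,
bpf_redirect_map() and BPF_MAP_TYPE_XSKMAP are the real kernel/libbpf
interfaces):

```
/* Sketch of the three reflector verdicts in one XDP program. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);	/* AF_XDP sockets, per queue */
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} xsks_map SEC(".maps");

SEC("xdp")
int xdp_reflect(struct xdp_md *ctx)
{
	__u32 queue = ctx->rx_queue_index;

	/* 1. xdp_xmit: bounce the frame straight back out of the driver. */
	if (0 /* placeholder: frame matches the ping-pong flow */)
		return XDP_TX;

	/* 2. xdp_redir: hand the frame to an AF_XDP socket in user-space. */
	if (bpf_map_lookup_elem(&xsks_map, &queue))
		return bpf_redirect_map(&xsks_map, queue, 0);

	/* 3. xdp_pass: fall through to the normal stack / AF_INET socket. */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```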
2.3 Host Networking
───────────────────
This is the default case with no virtualisation involved.
/---------------------\
|cGRE |
| Network Test | Host User Space
| Program |
| |
+---------------------+
+------------------------------------------------+
| +---------------------+ |
| |cYEL | |
| | Packet Routing | |
SW | | | |
| +---------------------+ Host Kernel Space |
| |cRED | |
| | Driver | |
| | | |
| \---------------------/ |
+------------------------------------------------+
----=-----------------------------------------------------------------------------------------
+---------------------+
|{mo} |
HW | Network |
| Card |
|c444 |
+---------------------+
2.4 KVM Guest with vhost networking
───────────────────────────────────
This is a KVM-only case, where the vhost device allows packets to be
delivered directly from the guest kernel's address space. It still
relies on the host kernel's networking stack, though.
|
| /------------\
| |cGRE |
| | | Guest User
| +------------+
| ----------------=------
| +------------+
| |cYEL |
: +------------+ Guest Kernel
| |cRED |
| | virtio-net |
| +------------+
+------------------------------------------------+
| +---------------------+---------------------+ |
| |cYEL |cPNK vhost-virtio | |
| | Packet Routing +---------------------/ |
SW | | | |
| +---------------------+ Host Kernel Space |
| |cRED | |
| | Driver | |
| | | |
| \---------------------/ |
+------------------------------------------------+
----=------------------------------------------------------
+---------------------+
|{mo} |
HW | Network |
| Card |
|c444 |
+---------------------+
2.5 Pass-through (SR-IOV or virtualised HW)
───────────────────────────────────────────
Either using direct pass-through of a discrete Ethernet device or a
virtualised function. Control of the packet starts and ends in the
guest's kernel.
Host System/Dom0 Guest VM/DomU
| /---------\
| |cGRE |
/---------------------\ | /-----------+---------+
|cBAD Kernel | : |cBAD |cYEL |
\---------------------/ | \-----------+---------+
+-------------------------------------+ |cRED |
|cB9C Hypervisor | | Driver |
+-------------------------------------+ +---------/
----=------------------------------------------------------
+---------------+ +---------------+
|{mo} | |{mo} |
HW | NIC | | vNIC |
|c444 | |c544 |
+---------------+ +---------------+
2.6 Open vSwitch routing (Xen)
──────────────────────────────
Here the packets are switched into paravirtualised Xen interfaces by
the Dom0 kernel. I'm a little unsure what Open vSwitch uses to route
packets, and whether it's the same as the existing eBPF paths.
|
|
Dom0/Host : DomU Guest
|
|
| /-----------\
| |cGRE | Guest
| | | User
+------------------------------------------------+ | +-+-----------+---------------+
| +-----------------------------------------+ | | | +-------+ |
SW | |cYEL Open vSwitch | | | | |cYEL | |
| +---------------------+-----------+-------+ | | | +-------+ Guest |
| |cRED | |cPNK | | | | |cRED | Kernel |
| | Driver | | XenPV |<-=|-=|=-|=->| XenPV | |
| | | | BE | | | | | FE | |
| \---------------------/ \-------+ | | | +-------/ |
+------------------------------------------------+ | +-----------------------------+
+------------------------------------------------------------------------------------+
|cB9C Hypervisor |
+------------------------------------------------------------------------------------+
-----------------------------------------------------------------------------------------------=------------
+---------------------+
|{mo} |
HW | Network |
| Card |
|c444 |
+---------------------+
--
Alex Bennée
Hi,
Any topics to be added to the agenda for tomorrow's sync-up call? I
should have the AF_XDP cards done, and hopefully we will have another
topic to discuss as well.
--
Alex Bennée
Hi Bartosz/Miguel,
Here is another attempt to add Rust wrappers for libgpiod; hopefully it
will look much better this time.
V1->V2
- Propagate proper errors returned by the kernel to the user; improved error
  handling overall.
- Improved enum support, with dedicated helpers to reduce redundant code.
- Improved names for methods of various structures; they now match the
  underlying libgpiod helpers.
- SAFETY comments added to methods that return strings.
- Based off next/post-libgpiod-2.0 branch.
--
Viresh
Viresh Kumar (2):
libgpiod: Generate rust FFI bindings
libgpiod: Add rust wrappers
.gitignore | 7 +
Cargo.toml | 14 +
build.rs | 60 ++
src/bindings.h | 3 +
src/bindings.rs | 16 +
src/lib.rs | 1474 +++++++++++++++++++++++++++++++++++++++++++++++
6 files changed, 1574 insertions(+)
create mode 100644 Cargo.toml
create mode 100644 build.rs
create mode 100644 src/bindings.h
create mode 100644 src/bindings.rs
create mode 100644 src/lib.rs
--
2.31.1.272.g89b43f80a514
Hi,
Any topics to cover? I know there are some bits that should be ready
for the next sync-up, and we have some holiday clashes tomorrow.
--
Alex Bennée
Hi,
I've posted the meeting minutes to the Stratos project pages:
https://linaro.atlassian.net/wiki/spaces/STR/pages/28655222830/2021-10-28+P…
As I was quite involved in the conversation and didn't have my usual
background note-taker, could I ask participants to give it a quick once-over
while the conversation is fresh, to make sure I didn't miss anything out.
Thanks,
--
Alex Bennée
KVM/QEMU Hacker for Linaro