Hi,
In light of the holiday season, we are not expecting many joiners on Dec
23. Hence, let's cancel the LOC (Linaro OP-TEE Contribution) monthly
meeting scheduled for next week.
We wish you all a great holiday and a happy new year. The next scheduled
meeting will be on 27th January 2022.
Regards,
Ruchika
(On behalf of OP-TEE team)
Hi all,
This email intends to ask for potential interest in extending the
support for the SCMI perf-domain protocol in the Linux kernel and in
the firmware. Please feel free to ignore this; a short answer
like "sounds interesting" would be sufficient information at this
point. Some more details below.
Today, the Linux kernel's support for the SCMI perf-domain protocol is
limited to CPU frequency scaling, through the cpufreq subsystem.
However, the SCMI perf-domain protocol
isn't really limited to CPUs, but fits well for other generic
peripheral devices too. By extending the support, we can avoid the
need for SoC specific protocols/interfaces, while scaling performance
for non-CPU devices.
From the Linux kernel point of view, we already have generic support
for DVFS (Dynamic Voltage and Frequency Scaling), through a mixture of
subsystems/libraries. Without going into details, we should be able to
tap into these existing infrastructures to add support for the SCMI
perf-domain protocol. In this way, the support would be generic and
nicely abstracted from lower-layer drivers/subsystems.
Thoughts?
Kind regards
Ulf Hansson, Linaro Kernel Team
This is the follow-up work to support the cluster scheduler. Previously
we added a cluster level to the scheduler[1] to spread tasks
between clusters, bringing more memory bandwidth and decreasing
cache contention. But it may hurt some workloads that are sensitive
to communication latency, as related tasks will be placed across clusters.
We modified select_idle_cpu() on the wake-affine path in this
series, so that a wake-affined task is more likely to be woken on
the same cluster as the waker. Latency will decrease because
the waker and wakee in the same cluster may benefit from the hot
L3 cache tag.
[1]
https://lore.kernel.org/lkml/20210924085104.44806-1-21cnbao@gmail.com/
Hi Tim and Barry,
This is the modified patch series for the packing path of the cluster
scheduler. Tests have been done on a Kunpeng 920 2-socket 4-NUMA
128-core platform, with 8 clusters on each NUMA node. Patches are based
on 5.16-rc1.
Compared to the previous version[2], we gave up scanning only the first
CPU of each cluster, as CPU ids may not be contiguous. Instead we picked
the approach of scanning the cluster first, before the rest of the LLC.
The results from tbench and pgbench are rather positive.
[2]
https://op-lists.linaro.org/pipermail/linaro-open-discussions/2021-October/…
Barry Song (2):
sched: Add per_cpu cluster domain info
sched/fair: Scan cluster before scanning LLC in wake-up path
include/linux/sched/sd_flags.h | 9 ++++++++
include/linux/sched/topology.h | 2 +-
kernel/sched/fair.c | 41 +++++++++++++++++++++++++++++++++-
kernel/sched/sched.h | 1 +
kernel/sched/topology.c | 5 +++++
5 files changed, 56 insertions(+), 2 deletions(-)
--
2.33.0
Hi,
OP-TEE Contributions (LOC) monthly meeting is planned for Thursday Nov 25
@17.00 (UTC+2).
The following topic is on the agenda:
- Walkthrough of proposal on sharing of hardware resources between multiple
secure entities - Jens Wiklander.
If you have any other topics you'd like to discuss, please let us know and
we can schedule them.
Meeting details:
---------------
Date/time: Nov 25 @ 16.00 (UTC)
https://everytimezone.com/s/a43c71b2
Connection details: https://www.trustedfirmware.org/meetings/
Meeting notes: http://bit.ly/loc-notes
Regards,
Ruchika on behalf of the Linaro OP-TEE team