Hi Jonathan,
I was about to test your doe-spdm-v1 branch, but it does not compile;
I was wondering if you forgot to push some files.
Have you managed to rebase it on top of the latest DOE patches?
Please let me know, thank you very much.
Lorenzo
This is the follow-up work to support the cluster scheduler. Previously
we added a cluster level to the scheduler [1] to spread tasks between
clusters, which brings more memory bandwidth and decreases cache
contention, but it may hurt workloads that are sensitive to
communication latency, as related tasks get placed across clusters.
This series modifies select_idle_cpu() on the wake-affine path so that
a wake-affined task is more likely to be woken on the same cluster as
the waker. Latency decreases because a waker and wakee in the same
cluster may benefit from the hot L3 cache tag.
[1] https://lore.kernel.org/lkml/20210924085104.44806-1-21cnbao@gmail.com/
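As a rough user-space sketch of the idea (not the actual kernel implementation; `cluster_of`, the fixed 8-CPU topology, and the function names below are illustrative), the scan prefers the waker's cluster before falling back to the rest of the LLC domain:

```c
#include <stdbool.h>

#define NR_CPUS 8
#define CLUSTER_SIZE 4   /* illustrative: 4 CPUs share a cluster/L3 tag */

/* Illustrative topology: cluster id is simply cpu / CLUSTER_SIZE. */
static int cluster_of(int cpu)
{
    return cpu / CLUSTER_SIZE;
}

/*
 * Two-pass scan: first restrict the search to CPUs in the waker's
 * cluster; if none of them is idle, fall back to any idle CPU in
 * the (modelled) LLC domain. Returns -1 if nothing is idle.
 */
static int select_idle_cpu_cluster_first(const bool *idle, int waker)
{
    for (int pass = 0; pass < 2; pass++) {
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
            if (pass == 0 && cluster_of(cpu) != cluster_of(waker))
                continue;   /* first pass: same cluster only */
            if (idle[cpu])
                return cpu;
        }
    }
    return -1;
}
```

With a waker on CPU 0 and CPUs 2 and 5 idle, this picks CPU 2, since it shares the waker's cluster and should have a warm L3 tag.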
Hi Tim and Barry,
This is the modified patchset for the packing path of the cluster
scheduler. Tests have been done on a Kunpeng 920 2-socket 4-NUMA
128-core platform, with 8 clusters on each NUMA node. Patches are
based on 5.15-rc1.
Barry Song (2):
sched: Add per_cpu cluster domain info
sched/fair: Scan from the first cpu of cluster if presents in
select_idle_cpu
include/linux/sched/sd_flags.h | 9 +++++++++
include/linux/sched/topology.h | 2 +-
kernel/sched/fair.c | 10 +++++++---
kernel/sched/sched.h | 1 +
kernel/sched/topology.c | 5 +++++
5 files changed, 23 insertions(+), 4 deletions(-)
--
2.33.0
Hi all,
I have recently been getting questions about hooking the interconnect
framework up to SCMI, so I am starting a discussion on this problem to
see who might be interested in it.
The SCMI spec contains various protocols like the "Performance domain
management protocol". But none of the protocols mentioned in the current
spec (3.0) seem to fit well into the concept we are using to scale
interconnect bandwidth in Linux. I see that people are working in this
area and there is already some support for clocks, resets, etc., so I
am wondering what the right approach would be to also support
interconnect bus scaling via SCMI.
The interconnect framework is part of the Linux kernel and its goal
is to manage the hardware and tune it to an optimal power-performance
profile according to the aggregated bandwidth demand between the
various endpoints in the system (SoC). This is based on the requests
coming from consumer drivers.
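As a minimal sketch of the aggregation step described above (illustrative only; the struct and function names here are not the real framework API): each consumer request carries an average and a peak bandwidth, and the framework aggregates them by summing the averages and taking the maximum peak:

```c
struct icc_req {              /* one consumer's bandwidth request, in kBps */
    unsigned int avg_bw;
    unsigned int peak_bw;
};

/*
 * Aggregate consumer requests for one path: the total average
 * bandwidth is the sum of all averages; the required peak is the
 * largest individual peak. The result is what would drive the
 * choice of a bus power-performance level.
 */
static void aggregate_bw(const struct icc_req *reqs, int n,
                         unsigned int *agg_avg, unsigned int *agg_peak)
{
    *agg_avg = 0;
    *agg_peak = 0;
    for (int i = 0; i < n; i++) {
        *agg_avg += reqs[i].avg_bw;
        if (reqs[i].peak_bw > *agg_peak)
            *agg_peak = reqs[i].peak_bw;
    }
}
```

For example, two consumers requesting 100/200 and 300/150 kBps aggregate to 400 kBps average and 200 kBps peak.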
As interconnect scaling does not map directly onto any of the currently
available protocols in the SCMI spec, I am curious whether there is
work in progress on some other protocol that could support managing
resources based on path endpoints (instead of a single ID). The
interconnect framework doesn't populate every possible path, but
it exposes endpoints to client drivers and the path lookup is dynamic,
based on what the clients request. Maybe the SCMI host could also expose
all possible endpoints and let the guest request a path from the host,
based on those endpoints.
There are already suggestions to create vendor-specific SCMI protocols
for that, but I fear that we may end up with more than one protocol for
the same thing, so that's why it might be best to discuss it in public
and have a common solution that works for everyone.
Thanks,
Georgi
Hi,
The Linaro OP-TEE Contributions (LOC) monthly meeting is planned for Thursday Oct 28
@17.00 (UTC+2).
The following topics are on the agenda:
- Generic Clock Framework and Peripheral security - Clément Léger
- Discussion on device driver initialization/probing - Etienne Carriere
If you have any other topics you'd like to discuss, please let us know and
we can schedule them.
Meeting details:
---------------
Date/time: October 28 @17.00 (UTC+2)
https://everytimezone.com/s/3f83a9ab
Connection details: https://www.trustedfirmware.org/meetings/
Meeting notes: http://bit.ly/loc-notes
Regards,
Ruchika on behalf of the Linaro OP-TEE team
Hi All,
I think most of you may be busy with the Linux Plumbers Conference this
week. Please let me know if there is something to discuss or follow up
on next Monday.
Regards,
Jammy
Hi,
The Linaro OP-TEE Contributions (LOC) monthly meeting is planned to take place
on Thursday Sep 23 @17.00 (UTC+2).
The following topics are on the agenda:
- OP-TEE Linaro Contribution - Current status and Roadmap - Ruchika
- FF-A based mediator in XEN - Jens
If you have any other topics you'd like to discuss, please let us know.
Meeting details:
---------------
Date/time: Thursday Sep 23 @17.00 (UTC+2)
https://everytimezone.com/s/35c9885e
Connection details: https://www.trustedfirmware.org/meetings/
Meeting notes: http://bit.ly/loc-notes
Regards,
Ruchika on behalf of the Linaro OP-TEE team
On Thu, 26 Aug 2021 14:16:02 +0000
Jonathan Cameron via Linaro-open-discussions <linaro-open-discussions(a)op-lists.linaro.org> wrote:
> On Fri, 20 Aug 2021 11:48:41 +0100
> Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
>
> > On Wed, Aug 18, 2021 at 08:05:18PM +0800, Jammy Zhou wrote:
> > > If we move it to 1st or 2nd, is there any topic to discuss? Otherwise,
> > > maybe we can cancel it for this month.
> >
> > I would be _extremely_ grateful if Jonathan could run a session on
> > his series:
> >
> > https://lore.kernel.org/linux-pci/20210804161839.3492053-1-Jonathan.Cameron…
> >
> > in preparation for LPC.
>
> Sure - would be good to get my thoughts in order on this and doing it for next
> week will stop me leaving it all to the last minute.
>
> > Actually, I wanted to ask if there is a kernel
> > tree/branch I can pull in order to review those patches, I am struggling
> > to find a commit base to apply them.
>
> My bad. I got lazy in a build up to vacation and didn't put one up anywhere.
> The various trees involved are rather too dynamic for just pointing at them
> and saying apply these series (which have merge conflicts to resolve).
> Naughty me....
>
> Currently wading through the messages backlog, but will aim to have a branch up sometime
> tomorrow + ideally some more detailed instructions on getting it up and running.
Hi Lorenzo / All,
https://github.com/hisilicon/kernel-dev/tree/doe-spdm-v1 rebased to 5.14-rc7
https://github.com/hisilicon/qemu/tree/cxl-hacks rebased to qemu/master as of earlier today.
For the qemu side of things you need to be running spdm_responder_emu --trans PCI_DOE
from https://github.com/DMTF/spdm-emu first (it will act as a server to qemu acting
as a client). Various parameters let you change the advertised algorithms, and the
kernel code should work for all the ones CMA mandates (but nothing beyond that for now).
For the CXL device the snippet of qemu command line needed is:
-device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true
Otherwise much the same as https://people.kernel.org/jic23/
Build at least the cxl_pci driver as a module, as we need to poke the certificate into the keyring
before the driver loads (find the cert in the spdm-emu tree).
Instructions for doing that with keyctl and evmctl are in the cover letter of the patch series.
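Put together, an untested outline of the steps above might look like the following; the binary path depends on your spdm-emu build, and only the SPDM-relevant qemu options are shown:

```shell
# 1. Start the SPDM responder emulator first; it acts as the server
#    to qemu, which connects as a client over PCI DOE.
cd spdm-emu/build/bin
./spdm_responder_emu --trans PCI_DOE &

# 2. Launch qemu with the CXL type 3 device and SPDM enabled
#    (remaining options as in the usual CXL setup).
qemu-system-aarch64 ... \
    -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,id=cxl-pmem0,size=2G,spdm=true

# 3. In the guest, load the certificate into the keyring (see the
#    cover letter for the exact keyctl/evmctl invocations), then
#    load the driver.
modprobe cxl_pci
```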
Hopefully I'll find some time next week to put together some proper instructions and email them
in reply to the original posting.
Jonathan
>
> Jonathan
>
> >
> > Thanks,
> > Lorenzo
> >
> > > On Tue, 17 Aug 2021 at 21:42, Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
> > > wrote:
> > >
> > > On Tue, 17 Aug 2021 12:28:45 +0100
> > > Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
> > >
> > > > On Mon, Aug 16, 2021 at 08:41:29AM +0000, Jonathan Cameron via
> > > Linaro-open-discussions wrote:
> > > > >
> > > > >
> > > > > Hi Jammy,
> > > > >
> > > > > I'll be away until the 26th, but obviously that should be no barrier
> > > > > to topics I am less involved with as I can easily catch up later.
> > > >
> > > > Maybe it is best to reschedule it early September since it is holiday
> > > > season. I could do September 1st and 2nd and the following week.
> > >
> > > 1st and 2nd work for me. Early the following week also fine.
> > > Note Linaro connect is 8-10th Sept so we should avoid any clashes.
> > >
> > > Jonathan
> > >
> > >
> > > >
> > > > Please let me know what you think.
> > > >
> > > > Thanks,
> > > > Lorenzo
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > Jonathan
> > > > >
> > > > >
> > > > > ________________________________
> > > > >
> > > > > Jonathan Cameron
> > > > > Mobile: +44-7870588074
> > > > > Email: jonathan.cameron(a)huawei.com<mailto:jonathan.cameron@huawei.com>
> > > > > From:Jammy Zhou via Linaro-open-discussions <
> > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > To:Lorenzo Pieralisi via Linaro-open-discussions <
> > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > Date:2021-08-16 06:01:50
> > > > > Subject:[Linaro-open-discussions] LOD Meeting Agenda for August 23
> > > > >
> > > > > Hi all,
> > > > >
> > > > > We're going to have the next monthly call for Linaro Open Discussions
> > > in
> > > > > one week. Please let me know if you have any topic for discussion.
> > > > >
> > > > > Thanks,
> > > > > Jammy
> > > > > --
> > > > > Linaro-open-discussions mailing list
> > > > > https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > > > > https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
> > > > > --
> > > > > Linaro-open-discussions mailing list
> > > > > https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > > > > https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
> > >
> > >
>
Hi,
We are carrying out some NUMA-related test activities (some of which we
discussed on this forum) on the Taishan 2280 v2 server.
For those tests we need memory attached to all 4 NUMA nodes equally.
That's why we added 2 extra 32GB memory modules to the existing 2.
The manual says that they should be placed into slots
000 (CPU A), 100 (CPU A), 020 (CPU B) and 120 (CPU B).
BMC console as well as startup logs show that the memory modules get
properly detected in these slots.
But after Linux boots we don't see the expected 32GB per NUMA node:
nodes 1 and 3 have 64GB each, while nodes 0 and 2 have no memory.
We didn't find any BIOS options which could explain this memory
distribution.
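In case it helps others reproduce this, the per-node distribution we see can be checked directly from sysfs, or with numactl if it is installed:

```shell
# Memory reported by each NUMA node
grep MemTotal /sys/devices/system/node/node*/meminfo

# Or the full topology view
numactl --hardware
```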
Do you have any hints why this is happening? Any help with this issue is
highly appreciated.
Thanks!
-- Dietmar
Hi all,
The slides and video for today's meeting have been published.
https://linaro.atlassian.net/wiki/spaces/LOD/pages/28585361507/2021-09-01+M…
Regards,
Jammy
On Tue, 31 Aug 2021 at 20:46, Jonathan Cameron via Linaro-open-discussions <
linaro-open-discussions(a)op-lists.linaro.org> wrote:
> On Thu, 26 Aug 2021 14:16:02 +0000
> Jonathan Cameron via Linaro-open-discussions <
> linaro-open-discussions(a)op-lists.linaro.org> wrote:
>
> > On Fri, 20 Aug 2021 11:48:41 +0100
> > Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
> >
> > > On Wed, Aug 18, 2021 at 08:05:18PM +0800, Jammy Zhou wrote:
> > > > If we move it to 1st or 2nd, is there any topic to discuss?
> Otherwise,
> > > > maybe we can cancel it for this month.
> > >
> > > I would be _extremely_ grateful if Jonathan could run a session on
> > > his series:
> > >
> > >
> https://lore.kernel.org/linux-pci/20210804161839.3492053-1-Jonathan.Cameron…
> > >
> > > in preparation for LPC.
> >
> > Sure - would be good to get my thoughts in order on this and doing it
> for next
> > week will stop me leaving it all to the last minute.
> >
> > > Actually, I wanted to ask if there is a kernel
> > > tree/branch I can pull in order to review those patches, I am
> struggling
> > > to find a commit base to apply them.
> >
> > My bad. I got lazy in a build up to vacation and didn't put one up
> anywhere.
> > The various trees involved are rather too dynamic for just pointing at
> them
> > and saying apply these series (which have merge conflicts to resolve).
> > Naughty me....
> >
> > Currently wading through the messages backlog, but will aim to have a
> branch up sometime
> > tomorrow + ideally some more detailed instructions on getting it up and
> running.
>
> Hi Lorenzo / All,
>
> https://github.com/hisilicon/kernel-dev/tree/doe-spdm-v1 rebased to
> 5.14-rc7
> https://github.com/hisilicon/qemu/tree/cxl-hacks rebased to qemu/master
> as of earlier today.
>
> For qemu side of things you need to be running spdm_responder_emu --trans
> PCI_DOE
> from https://github.com/DMTF/spdm-emu first (that will act as server to
> qemu acting
> as a client). Various parameters allow you to change the algorithms
> advertised and the
> kernel code should work for all the ones CMA mandates (but nothing beyond
> that for now).
>
> For the cxl device the snippet of qemu commandline needed is:
> -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-mem1,
> id=cxl-pmem0,size=2G,spdm=true
>
> Otherwise much the same as https://people.kernel.org/jic23/
>
> Build at least the cxl_pci driver as a module as we need to poke the
> certificate into the keychain
> before that (find the cert in spdm_emu tree).
> Instructions to do that with keyctl and evmctl are in the cover letter of
> the patch series.
>
> Hopefully I'll find some time next week to put together some proper
> instructions and email them
> in reply to the original posting.
>
> Jonathan
>
>
>
> >
> > Jonathan
> >
> > >
> > > Thanks,
> > > Lorenzo
> > >
> > > > On Tue, 17 Aug 2021 at 21:42, Jonathan Cameron <
> Jonathan.Cameron(a)huawei.com>
> > > > wrote:
> > > >
> > > > On Tue, 17 Aug 2021 12:28:45 +0100
> > > > Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com> wrote:
> > > >
> > > > > On Mon, Aug 16, 2021 at 08:41:29AM +0000, Jonathan Cameron
> via
> > > > Linaro-open-discussions wrote:
> > > > > >
> > > > > >
> > > > > > Hi Jammy,
> > > > > >
> > > > > > I'll be away until the 26th, but obviously that should be no
> barrier
> > > > > > to topics I am less involved with as I can easily catch up
> later.
> > > > >
> > > > > Maybe it is best to reschedule it early September since it is
> holiday
> > > > > season. I could do September 1st and 2nd and the following
> week.
> > > >
> > > > 1st and 2nd work for me. Early the following week also fine.
> > > > Note Linaro connect is 8-10th Sept so we should avoid any
> clashes.
> > > >
> > > > Jonathan
> > > >
> > > >
> > > > >
> > > > > Please let me know what you think.
> > > > >
> > > > > Thanks,
> > > > > Lorenzo
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Jonathan
> > > > > >
> > > > > >
> > > > > > ________________________________
> > > > > >
> > > > > > Jonathan Cameron
> > > > > > Mobile: +44-7870588074
> > > > > > Email: jonathan.cameron(a)huawei.com<mailto:
> jonathan.cameron(a)huawei.com>
> > > > > > From:Jammy Zhou via Linaro-open-discussions <
> > > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > > To:Lorenzo Pieralisi via Linaro-open-discussions <
> > > > linaro-open-discussions(a)op-lists.linaro.org>
> > > > > > Date:2021-08-16 06:01:50
> > > > > > Subject:[Linaro-open-discussions] LOD Meeting Agenda for
> August 23
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > We're going to have the next monthly call for Linaro Open
> Discussions
> > > > in
> > > > > > one week. Please let me know if you have any topic for
> discussion.
> > > > > >
> > > > > > Thanks,
> > > > > > Jammy
> > > > > > --
> > > > > > Linaro-open-discussions mailing list
> > > > > >
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > > > > >
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
> > > > > > --
> > > > > > Linaro-open-discussions mailing list
> > > > > >
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> > > > > >
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
> > > >
> > > >
> >
>
> --
> Linaro-open-discussions mailing list
> https://collaborate.linaro.org/display/LOD/Linaro+Open+Discussions+Home
> https://op-lists.linaro.org/mailman/listinfo/linaro-open-discussions
>