Hi all,
Are there any topics that need to be discussed next Tuesday?
Thanks :)
Joyce
On 17 Jan 2023, at 08:00, linaro-open-discussions-request@op-lists.linaro.org wrote:
Today's Topics:
- Re: [RFC PATCH 2/2] MPAM/resctrl: allocate a domain per component (Hesham Almatary)
Message: 1
Date: Mon, 16 Jan 2023 11:31:06 +0000
From: Hesham Almatary <hesham.almatary@huawei.com>
Subject: [Linaro-open-discussions] Re: [RFC PATCH 2/2] MPAM/resctrl: allocate a domain per component
To: James Morse <james.morse@arm.com>
Cc: linaro-open-discussions@op-lists.linaro.org, linuxarm@huawei.com
Message-ID: <20230116113106.00001a2d@huawei.com>
Hello James,
Many thanks for clearing things up; that definitely helps my understanding.
On Thu, 12 Jan 2023 13:38:17 +0000, James Morse <james.morse@arm.com> wrote:
Hi Hesham,
On 12/01/2023 10:34, Hesham Almatary wrote:
Thanks for getting back to me on this. I have made some changes to my ACPI tables and got your code working fine without this patch. In particular, I matched the proximity domain field with the NUMA node ID for each memory controller. If they differ, the code won't work (as it assumes the proximity domain is the same as the NUMA ID,
Right, if there is an extra level of indirection in there, it's something I wasn't aware of. I'll need to dig into it. I agree this explains what you were seeing.
from which the affinity/accessibility is set). This leaves me with a few questions about the design and implementation of the driver. I'd appreciate your input on them.
- What does a memory MSC correspond to? A class (with a unique ID) or a component? From the code, it seems to me like it maps to a component.
An MSC is the device: it has registers and generates interrupts. If it's part of your memory controller, it gets described like that in the ACPI tables, which lets Linux guess that this MSC (or the RIS within it) controls some policy in the memory controller.
Components exist to group devices that should be configured the same. This happens where designs are sliced up, but the slicing makes no sense to the software. Classes are a group of components that do the same thing, but not to the same resource; e.g. they all control memory controllers.
The ACPI tables should describe the MSC; it's up to the driver to build the class and component structures from what it can infer from the other ACPI tables.
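For concreteness, the hierarchy could be sketched like this (a sketch with illustrative struct and field names only, not the driver's actual definitions):

#include <linux/list.h>
#include <linux/types.h>

/* Illustrative sketch: a class groups the components that control the
 * same kind of resource; a component groups the MSCs that must be
 * configured identically. */
struct example_msc {                    /* one MSC: registers, interrupts */
        void __iomem *base;             /* MMIO registers */
        struct list_head comp_list;     /* siblings within the component */
};

struct example_component {              /* devices configured the same */
        u32 comp_id;                    /* e.g. cache-id or NUMA node ID */
        struct list_head msc_list;      /* the MSCs in this component */
        struct list_head class_list;    /* siblings within the class */
};

struct example_class {                  /* components doing the same job */
        u8 class_id;                    /* e.g. the cache level */
        struct list_head components;    /* the components in this class */
};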
- Could we have a use case in which we have different class IDs with the same class type? If yes, could you please give an example?
Your L2 and L3 are both caches, but they use the level number as the ID. I doubt anyone builds a system with MSCs on both, but it's permitted by the architecture, and we could expose both via resctrl.
- What should a component ID for a memory MSC be/represent? The code assumes it's a (NUMA?) node ID.
The component IDs are numbers that make sense to Linux and match something in the ACPI tables. These are exposed to user-space via the resctrl schemata file. For the caches, it's the cache-id property from the PPTT table, which is exposed to user-space via /sys/devices/system/cpu/cpu0/cache/index3/id or equivalent.
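For reference, user-space can pick that ID up with a plain file read; a minimal sketch, assuming the L3 shows up at index3 as it commonly does:

#include <stdio.h>

/* Minimal sketch: read the L3 cache-id that the schemata domain IDs
 * line up with. */
int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cache/index3/id", "r");
    int id;

    if (!f)
        return 1;
    if (fscanf(f, "%d", &id) != 1) {
        fclose(f);
        return 1;
    }
    printf("cpu0 L3 cache-id: %d\n", id);
    fclose(f);
    return 0;
}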
It's important that user-space can work out which CPUs share a component/domain in the schema. Using a sensible ID is the prerequisite for that.
Intel's memory bandwidth control appears to be implemented on the L3, so they reuse the ID of the L3 cache. These seem to correspond to NUMA nodes already.
For MPAM, we have no idea if the memory controllers map 1:1 with any level in the cache. Instead, the driver expects to use the NUMA node number directly.
(I'll put this on the list of KNOWN_ISSUES; the Intel side of this ought to be cleaned up so it doesn't break if they build a SoC where the L3 doesn't map 1:1 with NUMA nodes. It looks like they are getting away with it because Atom doesn't support L3 or memory bandwidth.)
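If the proximity domain really can differ from the NUMA node ID, the driver presumably needs to translate rather than assume. A sketch of that kind of fix, using the kernel's existing pxm_to_node() mapping (the helper name here is made up):

#include <acpi/acpi_numa.h>     /* pxm_to_node() */
#include <linux/numa.h>         /* NUMA_NO_NODE */
#include <linux/types.h>
#include <linux/errno.h>

/* Sketch: derive a memory component ID from the MSC's proximity
 * domain via the kernel's SRAT-derived mapping, rather than assuming
 * proximity domain == NUMA node ID. */
static int example_memory_component_id(u32 proximity_domain)
{
        int nid = pxm_to_node(proximity_domain);

        if (nid == NUMA_NO_NODE)
                return -EINVAL;
        return nid;
}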
That's very useful and informative. Some form of documentation (in KNOWN_ISSUES, comments, or a README) would be quite useful for such assumptions, as the ACPI/MPAM spec doesn't mention them (i.e., the logical ID assignments from the OS point of view).
- What should a class ID represent for a memory MSC, as distinct from the class type itself?
The class ID is private to the driver; for the caches it needs to be the cache level. Because of that, memory is shoved at the end, on the assumption that no one has an L255 cache, and 'unknown' devices are shoved at the beginning... L0 caches probably do exist, but I doubt anyone would add an MSC to them.
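Roughly, the numbering scheme looks like this (a sketch with made-up names):

/* Sketch of the class-id space described above; the IDs are private
 * to the driver and never exposed to user-space. */
enum example_class_id {
        EXAMPLE_CLASS_UNKNOWN = 0,      /* 'unknown' devices at the start */
        /* 1..254: caches, using the cache level as the class ID */
        EXAMPLE_CLASS_MEMORY  = 255,    /* memory at the end: no L255 cache */
};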
Classes can't be created arbitrarily, as the resctrl picking code needs to know how they map to resctrl schemas, and we can't invent new schemas without messing up user-space.
- How would 4 memory MSCs (with different proximity domains) map to classes and components?
Each MSC would be a device. There would be one device per component, because each proximity domain is different. They would all be the same class, as you'd have described them all with a memory type in the ACPI tables.
If you see a problem with this, let me know! The folk who write the ACPI specs didn't find any systems where this would lead to problems... but that doesn't mean you haven't built something that looks quite different.
- How would 2 memory MSCs with(in) the same proximity domain and/or the same NUMA node work, if at all?
If you build this, I bet your hardware people say those two MSCs must be programmed the same for the regulation to work. (If not, how is software expected to understand the hashing scheme used to map physical addresses to memory controllers?!)
Each MSC would be a device. They would both be part of the same component as they have the same proximity domain.
Configuration is applied to the component, so each device/MSC within the component is always configured the same.
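In other words, applying a configuration is just a walk over the component's devices. A sketch, reusing the illustrative structs from earlier (the config type and write helper are made up):

/* Made-up per-MSC write helper; stands in for the MMIO programming. */
void example_msc_write_config(struct example_msc *msc,
                              const struct example_config *cfg);

/* Sketch: one configuration per component, written out to every MSC
 * grouped under it, so the devices can never disagree. */
static void example_component_apply(struct example_component *comp,
                                    const struct example_config *cfg)
{
        struct example_msc *msc;

        list_for_each_entry(msc, &comp->msc_list, comp_list)
                example_msc_write_config(msc, cfg);
}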
- Should the ACPI/MPAM MSC's "identifier" field be mapped to class IDs or component IDs at all?
Classes, no - those are just for the driver to keep track of the groups. Components, probably... but another number may make more sense. This should line up with something that is already exposed to user-space via sysfs.
Thanks,
James
End of Linaro-open-discussions Digest, Vol 28, Issue 3
Hi all,
(CC: +Russell, who commented he isn't subscribed to this list)
I don't have any topics I think need discussing.
On virtual CPU hotplug I'm chasing down an issue with x86; I hope to post this as an RFC once I reach the bottom of my testing list.
I've seen a couple of issues reported for the Qemu side of things. It would be good to have those in one place so we know when they're all sorted. Top of my list is that Qemu isn't responding with PSCI_DENIED when CPUs are forbidden. ('SUCCESS' means you hit a 5-second timeout in the guest, on each CPU.)
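Roughly, the guest-visible behaviour should be something like the sketch below; cpu_is_enabled() is a stand-in for whatever state Qemu actually tracks, and the return codes are the PSCI spec's values rather than Qemu's internal names:

#include <stdint.h>
#include <stdbool.h>

/* PSCI return codes, per the Arm PSCI specification. */
#define PSCI_RET_SUCCESS         0
#define PSCI_RET_DENIED         -3

bool cpu_is_enabled(uint64_t mpidr);    /* made-up hotplug-state check */

/* Sketch: a CPU_ON targeting a forbidden (not-yet-hot-plugged) vCPU
 * should fail fast with DENIED instead of returning SUCCESS and
 * leaving the guest to hit its 5-second timeout on each CPU. */
static int64_t example_psci_cpu_on(uint64_t target_mpidr)
{
    if (!cpu_is_enabled(target_mpidr))
        return PSCI_RET_DENIED;
    /* ...power on and release the target vCPU... */
    return PSCI_RET_SUCCESS;
}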
For MPAM I still need to look into Hesham's report that proximity-id != numa-id. It's very likely to be a bug in the driver.
I can't make next Tuesday's 11am (UK time) meeting. I can do 12pm or 1pm on the same day.
Thanks,
James
Hi James,
> On virtual CPU hotplug I'm chasing down an issue with x86; I hope to post this as an RFC once I reach the bottom of my testing list.
> I've seen a couple of issues reported for the Qemu side of things. It would be good to have those in one place so we know when they're all sorted. Top of my list is that Qemu isn't responding with PSCI_DENIED when CPUs are forbidden. ('SUCCESS' means you hit a 5-second timeout in the guest, on each CPU.)
I would suggest discussing these on the LOD mailing list first, before the LOD meeting, so that the time in the meeting can be used more productively.
Would it be possible to list all the issues beforehand for everyone?
Thanks Salil
Hi James, Salil,
OK, thanks for the update! I will postpone the next meeting to Feb 3rd to allow more time to discuss over email beforehand.
Will any US colleagues attend the meeting next time? I can book 10:00 PM China time if needed.
@Russell, I remember you signed up to linaro-open-discussions@op-lists.linaro.org and I approved the request. Does it work now?
Thanks :)
Joyce
Hi Joyce,
I'm hoping my teammate in Bangalore will be able to join the meeting, as he's been working on the virtual CPU hotplug stuff recently. I may join as well, if otherwise feasible, but that remains to be seen. If it turns out that I'd be the only one joining from the US West coast, there is probably no point in adjusting the meeting time...
Thanks, Ilkka
On Mon, 23 Jan 2023 at 14:14, Joyce Qi <joyce.qi@linaro.org> wrote:
> Will any US colleagues attend the meeting next time? I can book 10:00 PM China time if needed.
This works for me as well - please let me know if you need any help.
Thanks, Lorenzo
Hi Joyce, I won't be available on 31st Jan, 1st Feb, or 2nd Feb as I will be traveling.
Would it be possible to either bring the meeting forward or postpone it to Friday the 3rd or so?
Many thanks Salil