CXL – Industry Enablement
Willie Nelson
Technology Enabling Architect - Intel
August 2022
Introducing the CXL Consortium
CXL Board of Directors
200+ Member Companies
Industry Open Standard for High-Speed Communications
Growing Industry Momentum
• CXL Consortium showcased the first public demonstrations of CXL technology at SC’21
• View virtual and live demos from CXL Consortium members here: https://www.computeexpresslink.org/videos
• Demos showcase CXL usages, including memory development, memory expansion and memory disaggregation
Industry Focal Point
CXL is emerging as the industry focal point for coherent IO
• CXL Consortium and OpenCAPI Consortium sign letter of intent to transfer the OpenCAPI specification and assets to the CXL Consortium
• In February 2022, the CXL Consortium and Gen-Z Consortium signed an agreement to transfer the Gen-Z specification and assets to the CXL Consortium
Press release, August 1, 2022, Flash Memory Summit: “CXL Consortium and OpenCAPI Consortium Sign Letter of Intent to Transfer OpenCAPI Assets to CXL”
CXL Specification Release Timeline
• March 2019 – CXL 1.0 specification released
• September 2019 – CXL Consortium officially incorporates; CXL 1.1 specification released
• November 2020 – CXL 2.0 specification released
• August 2022 – CXL 3.0 specification released
Consortium membership grew from 15+ to 130+ to 200+ member companies over this period.
Press release, August 2, 2022, Flash Memory Summit: “CXL Consortium releases Compute Express Link 3.0 specification to expand fabric capabilities and management”
New Technology Enabling – Key Contributors
Successful new technology enabling requires ALL contributors to be viable for industry adoption. Taking a revolutionary new technology to industry adoption involves:
• Si IP providers (incl. pre-Si simulation)
• HW silicon/controller vendors
• Hardware production product vendors
• HW development tools (analyzers, etc.)
• SW development tools (testing, debug, performance, etc.)
• Device/use-case OS drivers
• Operating system support
• Use-case applications (tangible benefits with new tech)
• Standards/consortiums/etc.
Intel CXL Memory Enablement & Validation
Comparing CXL memory validation to DDR/PCIe efforts; the approach for CXL memory is expected to evolve over generations to become PCIe-like.

POR Platform Configurations
• DDR: Large matrix of POR configurations
• PCIe: “Open socket” (extensive variety of technologies and use cases)
• CXL Memory: Plans to validate specific POR configurations of CXL memory per platform, with several vendors and modules – not exhaustive

Engagement Model
• DDR: Direct engagement and collaboration with Tier 1 suppliers
• PCIe: SIG-based engagement with PCIe IHVs
• CXL Memory: Targeted engagement with numerous CXL memory device & module IHVs, as well as key customers, plus multiple Consortium-based compliance workshops and various interactions

Validation Model
• DDR: Early and exhaustive host-based validation spanning electrical, protocol and functional testing
• PCIe: SIG-led compliance workshops & plugfests; host PCIe validation focused on the PCIe channel and protocol features/functions; limited platform validation with PCIe products
• CXL Memory: Host validation focused on the CXL channel and on the features & function of CXL memory as part of the platform’s memory subsystem; CXL memory device & module IHV validation focused on the device+media channel and functions/features; long-term plan: Consortium-led compliance testing
Industry CXL Memory HW Enabling & Validation
CPU Vendor focus:
• Work with device & module vendors to enable key features
• Provided CXL vendors an open bridge-architecture reference document as an initial guide, covering bridge/module operation and feature recommendations
• Device/module platform integration (focused configs)
• For initial AIC CEM modules – focused validation of the media interface
• Validation:
  • Host-side CXL functions
  • Memory features – RAS, etc.
  • CXL channel
  • Specific configs and vendors (# of ports, capacity, etc.)
• SW Enabling:
  • Intel providing reference system FW/BIOS
  • Part of the industry effort to develop an open-source driver
  • SW guide for Type 3 devices (a minimal enumeration sketch follows below)
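The slide refers to an open-source driver and an SW guide for Type 3 devices; in the upstream Linux kernel, the CXL driver exposes enumerated Type 3 memory devices through sysfs. The sketch below is illustrative only and is not from this deck: the /sys/bus/cxl/devices path and the memN device naming are assumptions based on recent upstream kernels and may differ by kernel and driver version.

```c
/* Minimal sketch: list CXL Type 3 memory devices the Linux CXL driver has
 * enumerated. Assumes the sysfs layout of recent upstream kernels, where
 * memdevs appear as mem0, mem1, ... under /sys/bus/cxl/devices. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *base = "/sys/bus/cxl/devices";   /* assumed sysfs root */
    DIR *dir = opendir(base);
    if (!dir) {
        perror("opendir /sys/bus/cxl/devices");  /* driver absent or older kernel */
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* Type 3 memory expanders are registered as memN devices. */
        if (strncmp(entry->d_name, "mem", 3) == 0)
            printf("CXL memdev: %s/%s\n", base, entry->d_name);
    }
    closedir(dir);
    return 0;
}
```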
[Diagram: the host CPU’s CXL IP connects over the host CXL channel to a CXL Memory Bridge (aka controller, buffer) containing CXL IP and media IP; the bridge drives multiple DRAM devices over the bridge-media channel, and bridge plus DRAM together form a CXL Memory Module*.]
Controller/Module Vendor focus (bridge or module):
• Memory media interface, channel electricals, media training/MRC
• CXL compliance and interoperability testing
*Standardization of CXL memory module form factors – EDSFF E3.S & E1.S, PCI CEM and mezzanine – is in process
OEM/System provider focus:
• Device/module platform integration
• Configuration testing (Config-1 … Config-N)
• In-rack level testing
• Usage models testing/debug
• System Validation:
  • SW integration including system FW/BIOS, OS, generic driver
  • Generate integrator list
A Massive Coordinated Industry Effort
Q & A
Willie Nelson
Technology Enabling Architect - Intel
August 2022
CXL Delivers the Right Features & Architecture
CXL: an open, industry-supported cache-coherent interconnect for processors, memory expansion and accelerators
• Coherent Interface – leverages PCIe with 3 mix-and-match protocols
• Low Latency – .Cache and .Memory targeted at near-CPU cache-coherent latency
• Asymmetric Complexity – eases burdens of cache-coherent interface designs
Challenges addressed:
• Industry trends driving demand for faster data processing and next-gen data center performance
• Increasing demand for heterogeneous computing and server disaggregation
• Need for increased memory capacity and bandwidth
• Lack of an open industry standard to address next-gen interconnect challenges
https://www.computeexpresslink.org/resource-library
Representative CXL Usages
Type 1 – Caching Devices / Accelerators (e.g., an accelerator NIC with a cache, attached to a processor with direct-attach DDR)
• Protocols: CXL.io, CXL.cache
• Usages: PGAS NIC, NIC atomics

Type 2 – Accelerators with Memory (e.g., an accelerator with a cache and HBM, attached to a processor with direct-attach DDR)
• Protocols: CXL.io, CXL.cache, CXL.memory
• Usages: GP GPU, dense computation

Type 3 – Memory Buffer (a memory expander attached to a processor with direct-attach DDR)
• Protocols: CXL.io, CXL.memory
• Usages: Memory BW expansion, memory capacity expansion, storage class memory
CXL Memory Overview

Usage categories: Local Bandwidth or Capacity Expansion (covering Main Memory Expansion and Two-Tier Memory) and Memory Pooling.

Main Memory Expansion (local bandwidth or capacity expansion)
• Value prop: Scale performance or enable use of higher core counts via added bandwidth and/or capacity
• CXL memory attributes: Bandwidth and features similar to direct-attach DDR
• Software considerations: OS version must support CXL memory; CXL memory is visible either in the same region as direct-attach DDR5 or as a separate region (a minimal NUMA-placement sketch follows after this overview)

Two-Tier Memory (local bandwidth or capacity expansion)
• Value prop: Scale performance or enable use of higher core counts via added bandwidth and/or capacity
• CXL memory attributes: Lower bandwidth, higher latency vs. direct-attach DDR
• Software considerations: OS version must support CXL memory; SW-visible as persistent next-tier memory

Memory Pooling
• Value prop: Flexible memory assignment, enabling lower total memory cost plus platform SKU reduction & OpEx efficiency
• CXL memory attributes: Bandwidth and features similar to direct-attach DDR; latency similar to remote-socket access
• Software considerations: OS version must support CXL memory; additional software layer for orchestration of pooled memory and the multi-port controller

[Diagram: a CPU with direct-attach DDR5 plus CXL memory in EDSFF E3 or E1 and PCI CEM/custom-board form factors; for pooling, CPUs access a memory pool behind a pooled memory controller.]
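To make the software considerations above concrete: when a platform onlines CXL expansion memory as its own (CPU-less) NUMA node, ordinary NUMA APIs can steer allocations onto it. The following is a minimal sketch, assuming libnuma is available and that the CXL memory shows up as NUMA node 2 (a placeholder id; the real node number is platform-specific); it is not an Intel-provided example.

```c
/* Minimal sketch: place a buffer on a CXL-attached memory NUMA node.
 * Assumes the CXL memory is onlined as NUMA node 2 (placeholder) and
 * libnuma is installed. Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = 2;                    /* hypothetical node id for CXL memory */
    size_t len = (size_t)1 << 30;        /* 1 GiB */

    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);                 /* touch pages so they are actually placed */
    printf("Placed 1 GiB on NUMA node %d (CXL expansion memory)\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```

The same mechanism is what tiering software builds on: hot data stays on the direct-attach DDR nodes while colder data can be demoted to the CXL-backed node.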
Editor's Notes

  • #5: The CXL Consortium will archive the Gen-Z specification for five years; the Gen-Z specification can be found on the CXL Consortium website.
  • #7: On this slide I will talk about what we anticipate the growth path of CXL memory to be. We expect a crawl, walk, run approach. Early products will allow CXL-attached DDR memory to be added to a system, essentially providing main memory expansion with bandwidth and features similar to natively attached DDR; we expect these products in the PCIe CEM-based form factor. Next, we anticipate products that offer two-tier memory solutions: configurations where you still have CXL-attached memory, but with different performance characteristics compared to DDR. Two-tier memory is lower performing (lower bandwidth and higher latency than direct-attached DDR) but offers much higher capacity; capacity is the key value add, and this is where we expect persistent memory to come into play. Then we expect to reach the "run" phase with memory pooling, where memory is no longer local: it is attached via a switch solution or multi-port controllers, and unlike local memory it is not contained within the node itself but can span multiple nodes. This is when we expect optimal benefits: flexible memory allocation, lower total memory cost, a reduction in platform SKUs and improved operating expenses. By this phase the bandwidth and features are expected to be similar to direct-attach DDR, the latency similar to remote-socket access, and products are expected to be in EDSFF form factors.
    Working notes: Initial systems use PCIe CEM-based form factors, eventually moving to EDSFF. Early systems provide bandwidth or capacity expansion; later, in the walk phase, the industry adopts memory-pooling usages. CXL-attached memory boosts both main memory capacity and bandwidth on top of natively attached DDR. "Local" means the CXL link is attached to the CPU; in a memory-pooling scenario, memory sits behind a switch or multi-port controller, possibly as a separate composed-memory module in a rack-based server, and it can span multiple nodes: that is the difference that makes pooling require switches or multi-port controllers. Open question on switch availability: will we validate memory-pooling configurations with GNR? Pooling is possible from the CPU perspective in GNR, but switch availability may not be there. Memory pooling, where various nodes share or access the same memory behind the CXL buffer, is where we want the industry to head; it needs switches or multi-port controllers, and right now we have single-port-controller POCs. In short: crawl, walk, run; CEM now, EDSFF in GNR; early systems provide bandwidth/capacity expansion, later moving to memory pooling; two-tier is Optane/storage-class memory.
  • #8: As for what Intel is doing to enable and validate CXL memory: our desire is to evolve our approach over generations to be more like what we do for PCIe. Today our engagement model for DDR is closed and for PCIe it is open; for CXL we cannot start open, but that is the direction we will evolve towards. We are starting out with plans to validate focused POR configurations of CXL memory per platform, with several vendors and modules. It is by no means an exhaustive approach, with the hope of getting closer in the future to the "open socket" approach we have with PCIe. In terms of our industry engagement model, we are starting out engaging with numerous CXL memory device and module IHVs as well as key customers, again with the hope that long term we can get closer to the PCIe model, where there will be CXL Consortium-based engagements with CXL vendors. Finally, for CXL memory validation our current plan is to treat it as part of the platform's memory subsystem, with the long-term plan to participate in Consortium-led compliance testing.
  • #9: With this slide I want to emphasize that Intel has a role to play in CXL memory enabling and validation, but to be successful as an industry this must be a coordinated effort with CXL vendors, OEMs and system providers. We (Intel) are only the CPU. We work closely with device and module vendors to enable key features, and earlier on we provided CXL vendors an open bridge-architecture reference document as an initial guide. On the HW side, we have plans to validate focused configurations and to validate available CXL memory modules as part of the platform's memory subsystem, and for initial AIC CEM modules Intel will do limited validation of the media interface. On the SW side, we provide reference system FW and BIOS, are actively participating in the industry effort to develop an open-source driver, and have developed and shared a software guide for Type 3 devices for OS and module vendors to use. Once devices are connected on the CXL bus, we rely on CXL vendors to test those downstream devices. We support standardization of the overall CXL memory module form factors, including EDSFF and PCIe CEM, but we look to CXL vendors to work down at the buffer and module level: to define and validate the buffer as well as the memory media interface, channel electricals and media training, and to perform CXL compliance and interoperability testing on their devices. The OEMs and system providers are in the best position to integrate the CXL memory devices, platforms and SW and to test a larger, more diverse set of configurations. They can do in-rack-level testing, focus on testing and debugging a variety of usage models, and, as a result, generate personalized integrator lists. There are a lot of pieces to developing a healthy CXL memory ecosystem, and it is a coordinated industry effort that we are happy to be part of!
    Working notes: Bridge/module operation/features are also part of Intel platform validation. Intel is supportive of the EDSFF form factor as the standard for CXL memory; standardization at the memory module level is what Intel believes in. Going down to the memory buffer there are too many variables; it could hinder innovation, and it is too late anyway, since we already missed rev 1 of all this. There is a plethora of devices and configs; the things we can control we will test, but we are not making the device; we will help enable the ecosystem. We want the form factor standardized (EDSFF), not the memory controller; other consortiums are looking at other form factors. Possible topic on CXL memory and industry expectations: some way to educate on closed slot vs. open slot, that Intel is not going to do everything, and what we expect memory vendors to do. We expect the industry to test things; Intel is only the CPU, and once devices are connected on the PCIe/CXL bus we do not test those downstream devices. This is expectation setting; we can only do so much, and vendors need to do their piece. Add a block: OEMs/system providers validate the full config.