I/O Scalability in Xen

Kevin Tian kevin.tian@intel.com
Eddie Dong eddie.dong@intel.com
Yang Zhang yang.zhang@intel.com


Sponsored by:

Agenda

Overview of I/O Scalability Issues
• Excessive Interrupts Hurt
• I/O NUMA Challenge


Proposals
• Soft interrupt throttling in Xen
• Interrupt-Less NAPI (ILNAPI)
• Host I/O NUMA
• Guest I/O NUMA




Retrospect…


2009 Xen Summit (Eddie Dong, …):
“Extending I/O Scalability in Xen”


Covered topics
• VNIF: multiple TX/RX tasklets, notification frequency
• VT-d: vEOI optimization, vIntr delivery
• SR-IOV: adaptive interrupt coalescing (AIC)



                 Interrupt is the hotspot!


New Challenges Always Exist

Interrupt overhead is increasingly high
• One 10G Niantic NIC may incur 512k intr/s
 •   64 (VFs + PF) x 8000 intr/s
 •   Similar for dom0 when multiple queues are used

• 40G NIC is coming


Prevalent NUMA architecture (even on 2-node low-end servers)
• The DMA distance to the memory node matters (I/O NUMA)
• w/o I/O NUMA awareness, DMA accesses may be suboptimal

      A breakthrough in software architecture is needed


Excessive Interrupts Hurt! (SR-IOV Rx Netperf)
[Chart: Netperf Rx over SR-IOV, 1–7 VMs. Left axis: CPU% (0–350%); right axis: bandwidth (0–10000 Mb/s). Series: CPU% and BW at ITR = 8000, 4000, 2000 and 1000 intr/s.]
Excessive Interrupts Hurt!
[Chart: same Netperf Rx data, 1–7 VMs, CPU% and bandwidth at ITR = 8000/4000/2000/1000, with a linear trend line for CPU% (ITR=8000). Callouts: bandwidth is not saturated with a low interrupt rate; CPU% increases fast with a high interrupt rate!]
Excessive Interrupts Hurt! (Cont.)

Excessive VM-exits (7vm as example)
• External Interrupts: 35k/s
• APIC Access: 49k/s
• Interrupt Window: 7k/s

Excessive context switches
• “Tackling the Management Challenges of Server Consolidation on Multi-core System”, Hui Lv, Xen Summit 2011 SC

Excessive ISR/softirq overhead in both Xen and the guest

Similar impact for dom0 using a multi-queue NIC


NUMA Status in Xen
[Diagram: host topology — processor nodes with integrated PCI-e devices, IOH/PCH, memory and memory buffers, and I/O devices]

Host CPU/Memory NUMA
• Administrable based on capacity plan

Guest CPU/Memory NUMA
• Not supported
• But extensively discussed

Lack of manageability for
• Host I/O NUMA
• Guest I/O NUMA
NUMA Related Structures

ACPI defines an integral combo for CPU, memory and I/O devices
• System Resource Affinity Table (SRAT)
 •   Associates CPUs and memory ranges with proximity domains

• System Locality Distance Information Table (SLIT)
 •   Distances among proximity domains

• _PXM (Proximity) object
 •   Standard way to describe proximity info for I/O devices


Acquiring the _PXM info of I/O devices alone is not enough to
construct I/O NUMA knowledge!




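To make the _PXM dependency concrete, here is a minimal sketch of how a dom0 kernel could look up a PCI device's proximity domain, walking up the ACPI hierarchy since child devices usually inherit _PXM from a parent. It uses the standard Linux ACPI helpers; everything else is illustrative.

    #include <linux/acpi.h>
    #include <linux/pci.h>

    /* Sketch: find the proximity domain of a PCI device by evaluating
     * _PXM on the device or, failing that, on its ACPI ancestors. */
    static int pci_dev_proximity(struct pci_dev *pdev, unsigned long long *pxm)
    {
            acpi_handle handle = ACPI_HANDLE(&pdev->dev);

            while (handle) {
                    if (ACPI_SUCCESS(acpi_evaluate_integer(handle, "_PXM",
                                                           NULL, pxm)))
                            return 0;
                    if (ACPI_FAILURE(acpi_get_parent(handle, &handle)))
                            break;  /* reached the namespace root */
            }
            return -ENODEV; /* no proximity info in the hierarchy */
    }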
Host I/O NUMA Issues

No host I/O NUMA awareness in Dom0
•   Dom0 owns the majority of I/O devices
•   Dom0 memory is first allocated by skipping the DMA zone
•   DMA memory is reallocated later for contiguity
•   The above allocations round-robin within the node_affinity mask
    •   No consideration of the actual I/O NUMA topology


Complex and confusing if dom0 handles host I/O NUMA itself
•   Implies physical CPU/memory awareness in dom0 too
    •   Virtual NUMA vs. host NUMA?

Xen, however, has no knowledge of _PXM


Guest I/O NUMA Issues

Guest needs I/O NUMA awareness to handle assigned devices
• Guest NUMA support is the prerequisite
Guest NUMA is not upstream yet!
• Extensive talks at previous Xen summits
 •   “VM Memory Allocation Schemes and PV NUMA Guests”, Dulloor Rao
 •   “Xen Guest NUMA: General Enabling Part”, Jun Nakajima

• Already extensive discussions and work…
• Now is the time to push it upstream!
No I/O NUMA information is exposed to the guest
Lack of I/O NUMA awareness in the device assignment process



Proposals




Per-interrupt overhead has been studied extensively!


   Now we want to reduce the number of interrupts!




The Effect of Dynamic Interrupt Rate
   A manual tweak on ITR based on VM number (8000 / vm_num)
[Chart: CPU% (left axis, 0–350%) and bandwidth (right axis, 0–10000 Mb/s) for 1–7 VMs comparing ITR=8000, ITR=1000 and dynamic ITR, with linear trend lines for each CPU% series.]
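The tweak behind the dynamic-ITR curves is simply dividing a fixed interrupt budget across VMs. A sketch, with the 8000 intr/s budget taken from the slide and the helper name invented for illustration:

    #define NIC_INTR_BUDGET 8000  /* per-NIC interrupt budget, intr/s */

    /* Sketch: split the budget evenly across active VMs, so the total
     * interrupt rate stays roughly constant as VMs are added. */
    static unsigned int dynamic_itr(unsigned int vm_num)
    {
            return NIC_INTR_BUDGET / (vm_num ? vm_num : 1); /* 7 VMs -> ~1142 */
    }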
Software Interrupt Throttling in Xen


Throttle virtual interrupts based on administrative policies
• Based on shared resources (e.g. bandwidth/VM_number)
• Based on priority and SLAs
• Applies to both PV and HVM guests


Fewer virtual interrupts reduce guest ISR/softirq overhead
It may further throttle physical interrupts too!
• If the device doesn’t trigger a new interrupt when an earlier
  request is still pending




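A hedged sketch of what such throttling could look like on Xen's virtual-interrupt injection path; the struct, the 1 ms window and the policy-derived cap are all assumptions, not existing Xen code.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-vCPU throttle state; the cap would come from an
     * administrative policy (bandwidth share, VM count, SLA, ...). */
    struct vintr_throttle {
            uint64_t window_start;  /* start of the current 1ms window */
            unsigned int injected;  /* interrupts injected this window */
            unsigned int cap;       /* policy-derived per-window cap */
    };

    /* Returns true if the virtual interrupt may be injected now;
     * false means it is coalesced into the next window. */
    static bool vintr_allow(struct vintr_throttle *t, uint64_t now_ns)
    {
            if (now_ns - t->window_start > 1000000ULL) {
                    t->window_start = now_ns;       /* roll the window */
                    t->injected = 0;
            }
            if (t->injected >= t->cap)
                    return false;   /* deliver when the window rolls over */
            t->injected++;
            return true;
    }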
Interrupt-Less NAPI (ILNAPI)

NAPI itself doesn’t eliminate interrupts
• NAPI logic is scheduled by the rx interrupt handler
 •   Mask the interrupt when NAPI is scheduled
 •   Unmask the interrupt when NAPI completes the current poll
What about scheduling NAPI w/o interrupts?
• If we can piggyback the NAPI schedule on other events…
 •   System calls, other interrupts, scheduling, …
• Internal NAPI schedule overhead is much lower than the heavy
  device->Xen->VM interrupt path
Yes, that’s … “Interrupt-Less NAPI (ILNAPI)”



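As a sketch of the piggybacking idea: a small hook, called from points the guest already passes through (syscall exit, other ISRs, the scheduler), that schedules NAPI with the regular Linux NAPI primitives. The ring check is a hypothetical placeholder; napi_schedule_prep()/__napi_schedule() are the real kernel calls.

    #include <linux/netdevice.h>

    /* Hypothetical: true if the VF rx ring has completed descriptors. */
    extern bool ilnapi_rx_work_pending(struct napi_struct *napi);

    /* Sketch: invoked from hook points instead of the device IRQ. */
    static void ilnapi_poke(struct napi_struct *napi)
    {
            if (ilnapi_rx_work_pending(napi) && napi_schedule_prep(napi))
                    __napi_schedule(napi);  /* no device interrupt involved */
    }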
Interrupt-Less NAPI (Cont.)

          Net Core            Syscall   ILNAPI_HIGH watermark:
                               ISR      •   When there’re too many
                                            notifications within the guest
  NAPI          Event Pool
                             Schedule
                                        •   Serve as the high watermark for
                                …           NAPI schedule frequency



   Poll              ISR                ILNAPI_LOW watermark:
                                        •   Activated when there’re insufficient
    IXGBEVF driver
                                            notifications
                                        •   Serve as the low water mark to
                 IRQ                        ensure a reasonable traffic
                                        •   May move back to interrupt-driven
                                            manner
   IXGBE NIC




                                                                               16
Interrupt-Less NAPI (Cont.)
[Chart: CPU% (left axis, 0–350%) and bandwidth (right axis, 0–10000 Mb/s) for 1–7 VMs comparing ITR=8000, ITR=1000 and ILNAPI, with linear trend lines for each CPU% series.]
Interrupt-Less NAPI (Cont.)

Watermarks can be chosen adaptively by the driver
• Based on bandwidth/buffer estimation


Or an enlightened scheme:
• Xen may provide guidance through a shared buffer
 •   Resource utilization (e.g. VM number)
 •   Administrative policies
 •   SLA requirements

• ILNAPI can be turned on/off dynamically under Xen’s control
 •   E.g. when latency is a major concern




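One way the two watermarks could gate the scheme, as a sketch with invented state; the actual estimation and the Xen-guided variant are open design points in the talk.

    #include <stdbool.h>

    struct ilnapi_state {
            unsigned int pokes_per_sec; /* observed NAPI schedule rate */
            unsigned int high_wm;       /* ILNAPI_HIGH: cap on the rate */
            unsigned int low_wm;        /* ILNAPI_LOW: minimum useful rate */
            bool irq_mode;              /* fallen back to interrupts */
    };

    /* Sketch: decide whether a piggybacked poke should schedule NAPI. */
    static bool ilnapi_should_schedule(struct ilnapi_state *s)
    {
            if (s->irq_mode || s->pokes_per_sec >= s->high_wm)
                    return false;        /* IRQ-driven again, or over cap */
            if (s->pokes_per_sec < s->low_wm)
                    s->irq_mode = true;  /* too few events to piggyback on */
            return !s->irq_mode;
    }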
Proposals




   We need to close the Xen architecture gaps for
    both host I/O NUMA and guest I/O NUMA!




Host I/O NUMA


Give Xen full NUMA information:
• Xen already sees SRAT/SLIT
• New hypercall to convey I/O proximity info (_PXM) from
  Dom0
 •   Xen needs to extend _PXM to all child devices

• Extend the DMA reallocation hypercall to carry a device ID
 •   May need a Xen version of set_dev_node

• Xen reallocates DMA memory based on proximity info
CPU access in dom0 remains NUMA-unaware…
• E.g. the communication between backend/frontend drivers



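The proposed hypercall might carry something like the payload below; the name and the fields are hypothetical, shown only to make the dom0-to-Xen flow concrete.

    #include <stdint.h>

    /* Hypothetical payload: dom0 reports a device's _PXM value so Xen
     * can map it to a node and reallocate that device's DMA memory
     * node-locally. */
    struct xen_set_device_pxm {
            uint32_t seg;    /* PCI segment */
            uint8_t  bus;    /* PCI bus */
            uint8_t  devfn;  /* PCI slot/function */
            uint32_t pxm;    /* proximity domain from the ACPI _PXM object */
    };

    /* dom0 would wrap this in a platform op, e.g.
     *     HYPERVISOR_platform_op(&op);
     * and Xen would record the pxm->node mapping, extending it to all
     * child devices that lack their own _PXM. */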
Guest I/O NUMA


Okay, let’s help guest NUMA support in Xen!


IOMMUs may also span nodes
• ACPI defines Remapping Hardware Static Affinity (RHSA)
 •   The association between an IOMMU and a proximity domain

• Allocate the remapping table based on RHSA and proximity
  domain info




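A sketch of what RHSA-aware allocation could look like in Xen's VT-d code; every helper named here is a hypothetical stand-in.

    /* Hypothetical helpers: RHSA lookup, proximity->node mapping, and a
     * node-aware page allocator. */
    extern unsigned int rhsa_proximity_of(unsigned int iommu_id);
    extern unsigned int pxm_to_node(unsigned int pxm);
    extern void *alloc_pages_on_node(unsigned int node, unsigned int order);

    /* Sketch: place an IOMMU's remapping table on the node the ACPI
     * DMAR/RHSA entry associates with that remapping unit, so the
     * hardware's table walks stay node-local. */
    static void *alloc_remap_table(unsigned int iommu_id, unsigned int order)
    {
            unsigned int node = pxm_to_node(rhsa_proximity_of(iommu_id));

            return alloc_pages_on_node(node, order);
    }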
Guest I/O NUMA (Cont.)


Make up for the missing guest I/O NUMA awareness
• Construct a _PXM method for assigned devices in the DM
 •   Based on guest NUMA info (SRAT/SLIT)

• Extend the control panel to favor I/O NUMA
 •   Assign devices that are in the same proximity domain as the specified
     nodes of the guest
 •   Or, affine the guest to the node the assigned device is affined to
 •   The policy for SR-IOV may be more constrained
     •   E.g. all guests sharing the same SR-IOV device run on the same node

 •   Warn the user when optimal placement can’t be assured (see the sketch below)




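The warning bullet could boil down to a check like the following in the control panel; the types and message are invented for illustration.

    #include <stdio.h>

    /* Sketch: warn when the assigned device's node is not among the
     * guest's nodes, i.e. DMA will cross the interconnect. */
    static void warn_if_remote(unsigned int dev_node,
                               const unsigned int *guest_nodes,
                               unsigned int nr_nodes)
    {
            for (unsigned int i = 0; i < nr_nodes; i++)
                    if (guest_nodes[i] == dev_node)
                            return; /* node-local placement */

            fprintf(stderr, "warning: device is on node %u but the guest "
                    "runs on other node(s); placement is suboptimal\n",
                    dev_node);
    }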
Summary


I/O scalability proves challenging every time we re-examine it!


Excessive interrupts hurt I/O scalability, but there are means in
both Xen and the guest to mitigate them!


CPU/Memory NUMA has been well managed in Xen, but I/O
NUMA awareness is still not in place!




Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION
WITH INTEL® PRODUCTS. EXCEPT AS PROVIDED IN INTEL'S TERMS
AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES
NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR
IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL
PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO
FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR
INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER
INTELLECTUAL PROPERTY RIGHT.
Intel may make changes to specifications, product descriptions, and
plans at any time, without notice.
All dates provided are subject to change without notice.
Intel is a trademark of Intel Corporation in the U.S. and other
countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2007, Intel Corporation. All rights are protected.




