Experiences in providing secure multi-tenant
Lustre access to OpenStack
Peter Clapham <pc7@sanger.ac.uk>
Wellcome Trust Sanger Institute
Sanger Science
Scientific Research Programmes
Core Facilities
HPC and Cloud computing are
complementary
Traditional HPC
● Highest possible performance
● A mature and centrally managed
compute platform
● High performance Lustre
filesystems for data intensive
analysis
Flexible Compute
● Full segregation of projects ensures
data security
● Developers no longer tied to a single
stack
● Reproducibility through containers /
images and infrastructure-as-code
But there’s a catch or two...
• Large number of traditional/legacy pipelines
• They require a performant shared POSIX filesystem, whereas cloud-native workloads typically use object stores
• We do not always have the source code or expertise to migrate
• We need multi-gigabyte per second performance
• The tenant will have root
• and could impersonate any user; Lustre, like NFSv3, trusts the client’s identity assertions
• The solution must be simple for the tenant and administrator
Lustre hardware
6+ year old hardware
• 4x Lustre object storage servers
• Dual Intel E5620 @ 2.40GHz
• 256GB RAM
• Dual 10G network
• Lustre: 2.9.0.ddnsec2
• https://jira.hpdd.intel.com/browse/LU-9289 (landed in 2.10)
• OSTs from DDN SFA-10k
• 300x SATA 7,200 rpm, 1 TB spindles
We have seen this system reach 6 GByte/second in production
Lustre 2.9 features
• Each tenant’s I/O can be squashed to their own unique UID/GID
• Each tenant is restricted to their own subdirectory of the Lustre filesystem
It might be possible to treat general access outside of OpenStack as a
separate tenant with:
• a UID space reserved for a number of OpenStack tenants
• only a subdirectory exported for standard usage
Logical layout
[Diagram: each tenant network connects over its own provider network (tcp32769, tcp32770) to a shared Lustre router, which reaches the Lustre servers on tcp0; the public network is separate.]
Lustre server:
Per-tenant UID mapping
Allows UIDs from a set of NIDs to be mapped to another set of UIDs
These commands are run on the MGS:
lctl nodemap_add ${TENANT_NAME}
lctl nodemap_modify --name ${TENANT_NAME} --property trusted --value 0
lctl nodemap_modify --name ${TENANT_NAME} --property admin --value 0
lctl nodemap_modify --name ${TENANT_NAME} --property squash_uid --value ${TENANT_UID}
lctl nodemap_modify --name ${TENANT_NAME} --property squash_gid --value ${TENANT_UID}
lctl nodemap_add_idmap --name ${TENANT_NAME} --idtype uid --idmap 1000:${TENANT_UID}
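Taken together, the MGS-side commands above can be wrapped into one provisioning script. This is a minimal sketch: `lctl` is stubbed to echo its arguments so the script can be dry-run off-cluster, and the tenant name and UID are hypothetical.

```shell
#!/bin/sh
# Dry-run stub: echo each lctl invocation instead of executing it.
# Remove this function to run the real commands on the MGS.
lctl() { echo "lctl $*"; }

provision_tenant() {
    TENANT_NAME=$1
    TENANT_UID=$2
    # Create the nodemap and lock it down: clients are untrusted,
    # have no admin rights, and all I/O squashes to the tenant UID/GID.
    lctl nodemap_add "${TENANT_NAME}"
    lctl nodemap_modify --name "${TENANT_NAME}" --property trusted --value 0
    lctl nodemap_modify --name "${TENANT_NAME}" --property admin --value 0
    lctl nodemap_modify --name "${TENANT_NAME}" --property squash_uid --value "${TENANT_UID}"
    lctl nodemap_modify --name "${TENANT_NAME}" --property squash_gid --value "${TENANT_UID}"
    # Map the default cloud-image UID 1000 to the tenant's own UID.
    lctl nodemap_add_idmap --name "${TENANT_NAME}" --idtype uid --idmap 1000:"${TENANT_UID}"
}

provision_tenant demo-tenant 20001   # hypothetical values
```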
Lustre server:
Per-tenant subtree restriction
Constrains client access to a subdirectory of a filesystem.
mkdir /lustre/secure/${TENANT_NAME}
chown ${TENANT_NAME} /lustre/secure/${TENANT_NAME}
Set the subtree root directory for the tenant:
lctl set_param -P nodemap.${TENANT_NAME}.fileset=/${TENANT_NAME}
Lustre server:
Map nodemap to network
Add the tenant network range to the Lustre nodemap
lctl nodemap_add_range --name ${TENANT_NAME} \
  --range [0-255].[0-255].[0-255].[0-255]@tcp${TENANT_UID}
The following command adds a route via a Lustre network router. It is run on all
MDS and OSS nodes (or the route is added to /etc/modprobe.d/lustre.conf):
lnetctl route add --net tcp${TENANT_UID} --gateway ${LUSTRE_ROUTER_IP}@tcp
A similar command is needed on each client using TCP.
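The persistent form of the server-side route can be expressed as an lnet module option. A sketch with illustrative values only (tcp1001 as the tenant's LNet, 172.27.202.1 as the router; the networks string depends on the host's own interfaces):

```
# /etc/modprobe.d/lustre.conf
options lnet networks="tcp0(eth0)" routes="tcp1001 172.27.202.1@tcp"
```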
OpenStack:
Network configuration
neutron net-create LNet-1 --shared --provider:network_type vlan \
  --provider:physical_network datacentre \
  --provider:segmentation_id ${TENANT_PROVIDER_VLAN_ID}
neutron subnet-create --enable-dhcp --dns-nameserver 172.18.255.1 --no-gateway \
  --name LNet-subnet-1 --allocation-pool start=172.27.202.17,end=172.27.203.240 \
  172.27.202.0/23 ${NETWORK_UUID}
openstack role create LNet-1_ok
For each tenant user that needs to create instances attached to this Lustre network:
openstack role add --project ${TENANT_UUID} --user ${USER_UUID} ${ROLE_ID}
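Granting the role to each user can be scripted. A sketch with hypothetical project, user, and role IDs; `openstack` is stubbed to echo its arguments so the loop can be dry-run:

```shell
#!/bin/sh
# Dry-run stub: echo each openstack invocation instead of executing it.
openstack() { echo "openstack $*"; }

TENANT_UUID=project-uuid-1           # hypothetical project ID
ROLE_ID=LNet-1_ok
for USER_UUID in alice-uuid bob-uuid; do   # hypothetical user IDs
    openstack role add --project "$TENANT_UUID" --user "$USER_UUID" "$ROLE_ID"
done
```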
OpenStack policy
Simplify automation with a minimal change to /etc/neutron/policy.json:
"get_network": "rule:get_network_local"
/etc/neutron/policy.d/get_networks_local.json then defines the new
rule:
{
"get_network_local": "rule:admin_or_owner or rule:external or
rule:context_is_advsvc or rule:show_providers or ( not
rule:provider_networks and rule:shared )"
}
OpenStack policy
/etc/neutron/policy.d/provider.json defines the networks, their mapping to
roles, and access to the provider networks.
{
"net_LNet-1": "field:networks:id=d18f2aca-163b-4fc7-a493-237e383c1aa9",
"show_LNet-1": "rule:net_LNet-1 and role:LNet-1_ok",
"net_LNet-2": "field:networks:id=169b54c9-4292-478b-ac72-272725a26263",
"show_LNet-2": "rule:net_LNet-2 and role:LNet-2_ok",
"provider_networks": "rule:net_LNet-1 or rule:net_LNet-2",
"show_providers": "rule:show_LNet-1 or rule:show_LNet-2"
}
Restart Neutron - can be disruptive!
Physical router configuration
• Repurposed Nova compute node
• Red Hat 7.3
• Lustre 2.9.0.ddnsec2
• Mellanox ConnectX-4 (2×25 GbE)
• Dual Intel E5-2690 v4 @ 2.60GHz
• 512 GB RAM
Connected in a single rack, so packets from other racks have to
traverse the spine. No changes from default settings.
Client virtual machines
• 2 CPU
• 4 GB RAM
• CentOS Linux release 7.3.1611 (Core)
• Lustre: 2.9.0.ddnsec2
• Two NICs
• Tenant network
• Tenant-specific Lustre provider network
Filesets and UID mapping have no effect
Instance size has little effect
Single client read performance
Single client write performance
Multiple VMs, aggregate write
performance, metal LNet routers
Multiple VMs, aggregate read
performance, metal LNet routers
Virtualised Lustre routers
• We could see that bare metal Lustre routers gave acceptable
performance
• We wanted to know if we could virtualise these routers
• Each tenant could have their own set of virtual routers
• Fault isolation
• Ease of provisioning routers
• No additional cost
• Increases east-west traffic, but that’s OK.
Logical layout
[Diagram: each tenant network now has its own virtual Lustre router, reached over a tenant-specific provider network (tcp32769, tcp32770); both routers connect to the Lustre servers on tcp0, separate from the public network.]
Improved security
As each tenant has its own set of Lustre routers:
• One tenant’s traffic does not pass through a router shared with other tenants
• A Lustre router could be compromised without directly compromising
another tenant’s data: the filesystem servers will not route data for
@tcp1 to the router at @tcp2
• Either a second Lustre router or the Lustre servers would need to be
compromised to intercept or reroute the data
Port security...
The routed Lustre provider networks (tcp32769 etc.) required port
security to be disabled on the virtual Lustre router ports.
neutron port-list | grep 172.27.70.36 | awk '{print $2}'
08a1808a-fe4a-463c-b755-397aedd0b36c
neutron port-update --no-security-groups 08a1808a-fe4a-463c-b755-397aedd0b36c
neutron port-update 08a1808a-fe4a-463c-b755-397aedd0b36c \
  --port-security-enabled=False
http://kimizhang.com/neutron-ml2-port-security/
With port security disabled, iptables must run inside the instance rather than
in OVS on the hypervisor. The tests do not include this.
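The per-port commands can be applied to every virtual router port on the provider network in one pass. A sketch only: the subnet prefix is illustrative, and `neutron` is stubbed with fake port-list output so the loop can be dry-run.

```shell
#!/bin/sh
# Dry-run stub; remove this function to issue real neutron calls.
neutron() {
    case $1 in
        port-list)
            # Fake two router ports on the (illustrative) provider subnet.
            printf '| 08a1808a-fe4a | lrouter-1 | fa:16:3e:00:00:01 | 172.27.70.36 |\n'
            printf '| 11b2909b-af5b | lrouter-2 | fa:16:3e:00:00:02 | 172.27.70.37 |\n'
            ;;
        *) echo "neutron $*" ;;
    esac
}

# Disable security groups and port security on each router port
# whose fixed IP falls in the illustrative 172.27.70.0/24 range.
neutron port-list | grep ' 172\.27\.70\.' | awk '{print $2}' |
while read -r PORT; do
    neutron port-update --no-security-groups "$PORT"
    neutron port-update "$PORT" --port-security-enabled=False
done
```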
Virtual LNet routers:
Sequential performance
Virtual LNet routers:
Random Performance
Asymmetric routing?
http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.kernel.rpf.html
[Diagram: a hostile system (e.g. a laptop) on the public network presents itself as being on tcp0, while a legitimate tenant reaches the Lustre servers (tcp0) via its tcp32770 provider network and the Lustre routers.]
Conclusions
• Follow our activities on http://hpc-news.sanger.ac.uk
• Isolated POSIX islands can be deployed to OpenStack with Lustre 2.9+
• Performance is acceptable
• Lustre routers require little CPU and memory
• Physical routers work and can give good locality for network usage
• Virtual routers work, can scale and give additional security benefits
• Next steps:
• Improve configuration automation
• Understand the neutron port security issue
• Improve network performance (MTU, Open vSwitch etc.)
Acknowledgements
DDN: Sébastien Buisson, Thomas Favre-Bulle, Richard Mansfield, James Coomer
Sanger Informatics Systems Group: Pete Clapham, James Beal, John Constable,
Helen Cousins, Brett Hartley, Dave Holland, Jon Nicholson, Matthew Vernon