4. What Do We Hear From Our Customers?
Low Risk – Nobody ever got fired for buying EMC Nutanix
Freedom to Choose – HW, Hypervisor, Tools
Runs All of My Workloads (point solutions / silos)
Future Proof Design
Resilient/Self Healing - Because I’m not always in my DC
5. Simple to Learn / Easy to Deploy and Manage
Quick Time to Resolution
Delivers Natively Integrated Platform Capabilities
Seamless Extension to Clouds and to the Edge
Accelerates My Digital Transformation Journey
7. Nutanix: Undisputed Market Leadership
Gartner Magic Quadrant for Hyperconverged Infrastructure Software, October 2021: Nutanix has maintained its Leaders position since Gartner began tracking HCI.
The Forrester Wave™: Hyperconverged Infrastructure, Q3 2020 (Forrester Research, Inc.): Nutanix is top ranked in both current offering & market presence and product strategy.
8. Happy Customers – Gartner Peer Reviews
“Nutanix – fantastic company providing world-class support.”
“Best decision we’ve made, would do it again!”
“Great product, most importantly, great support.”
“The best technology investment we have made in a long time.”
“Migrating to hyperconverged saved us more than 30% in opex.”
“HCI so easy a caveman can do it... Nutanix just works.”
“Innovation. Nutanix presents the most robust and scalable hyperconverged infrastructure.”
12. Supporting All of Your Applications
A single platform eliminating the need for point solutions and removing silos
13. Nutanix Powers All Workloads and Use Cases
• VDI
• Enterprise Applications
• Collaboration, Messaging, & UC
• Remote and Branch Office
• Dev/Test
• Mission-Critical Workloads
• Big Data
14. Unified Platform For All App Needs
Nutanix Cloud Platform
VM Services, File Services, Object Services, Block Services, Backup Services
Distributed Cloud: Public Cloud and Private Cloud
18. Industry Leading Support
• Over 130 Countries
• 24x7x365 “Follow the Sun” Support
• Proactive Support with Pulse
• <30 min Mission-Critical Support Response Time
• 9 Worldwide Support Centers
• 97% Customer Satisfaction
• 90+ Net Promoter Score for more than 7 Years
19. Proactive Support Tools
What if your platform automatically created a support ticket for you?
• ALERTS: Event driven; automatically creates cases for critical issues
• INSIGHTS: Real-time discoveries & recommendations to optimize and prioritize focus – 30% faster resolution, automated troubleshooting, and a proactive early warning system
• PULSE / PULSE HD: Works in the background; collects key environmental data to improve supportability
• NCC: Cluster health check tool that validates setup – 300+ checks built in, with additions every 2 months
20. What if the support person you called was the same person that solved your issue on the 1st call?
21. Nutanix NPS as compared to our peers
Nutanix: 90
VMware: 39
Microsoft: 34
Cisco: 33
EMC: 31
HPE: 29
Dell: 24
NetApp: 13
• Nutanix average NPS over 90 for the past 7 years
• Received CEM Pro certification for all our Global Support Centers: all our SREs complete the program to gain the critical soft skills they need to deliver an outstanding customer experience
• The average tech vendor NPS is below 30
NPS Source: Temkin Group Tech Vendor NPS Benchmarks 2020
Awards:
• Best Practices in Knowledge Management (x3)
• Outstanding Customer Services and Support, 2013-2020
• Best Support Website: 2016, 2018, 2020, 2021
22. What if you loved your support experience?
24. Web-Scale Engineering… A Different Approach
Design Goals
• Fractional consumption and predictable scale
• No single point of failure
• Distributed everything
• Always-on systems
• Extensive automation and rich analytics
Fundamental Assumptions
• Unbranded x86 servers: fail-fast systems
• No special purpose appliances
• All intelligence and services in software
• Linear, predictable scale-out
26. Nutanix Web-Scale Architecture
Eliminates SAN and NAS arrays
• Tier 1 workloads run on all nodes
• A Nutanix Controller VM (one per node) runs alongside the hypervisor on each x86 node (Node 1 … Node N)
• The Distributed Storage Fabric pools local + remote storage (flash + HDD): intelligent tiering, VM-centric management and more – ✔ Snapshots ✔ Clones ✔ Compression ✔ Deduplication
• The Acropolis App Mobility Fabric delivers workload mobility – ✔ Locality ✔ Tiering ✔ DR ✔ Resilience
• A distributed & scalable cluster where all resources are actively used and benefit all workloads
Performance, Resiliency, Scalability
27. vSAN Architecture Fundamentals
• Runs on any standard x86 server
• Integrated into hypervisor
• Pools HDDs/SSDs into a single cluster-wide shared datastore
• Easily scalable
• Managed through VM storage policies
Enterprise Storage in a Native vSphere Architecture
28. vSAN Architecture – Host
Caching tier (hybrid and all-flash):
• 1 SSD for caching per disk group
• Does not contribute to capacity
Capacity tier:
• 1-7 SSDs or HDDs per disk group
• All-flash or hybrid
Disk groups: min (1) per host, max (5) per host
Client cache: up to 1GB per host
Hosts (Host 1 … Host 64) connected via a 10GbE network
29. vSAN Architecture – Objects / Components
vSAN objects/components:
• VM Home
• VM Swap
• VMDK
• Snapshot delta disk
• Snapshot memory delta
Note: The max size of a component is 255GB; larger objects are broken into multiple components.
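As a rough illustration of the 255GB limit above, the minimum component count is simple ceiling division (a sketch only; the real split also depends on stripe width, FTT replicas, and witnesses, which this ignores):

```python
import math

def vsan_component_count(object_size_gb, max_component_gb=255):
    """Minimum number of components a vSAN object of this size splits into.
    Illustrative only: ignores stripe width, FTT replicas, and witnesses."""
    return max(1, math.ceil(object_size_gb / max_component_gb))

# A 2TB (2048GB) VMDK splits into at least 9 components per replica.
print(vsan_component_count(2048))
```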
33. Ever Increasing Flash Capacity
High-Capacity Flash
• SSDs exceeding the capacity of HDDs
• $/TB trending down
34. vSAN’s Legacy Approach To Flash
• vSAN uses flash for:
– Write buffer
– Read cache
– Even all-flash maintains this architecture
• No intelligent data tiering
– Same approach as traditional storage
• Cache flash does not contribute to capacity
• 600GB write buffer limitation
– Regardless of SSD size
• Architecture is not optimized for all-flash
– No data locality
– Cannot pin a VM to flash
– Not future proof (NVMe / 3D XPoint)
VMware Virtual SAN 6.0 architectures (flash devices: SSD, PCIe, UltraDIMM):
• Hybrid: flash serves as read and write cache; capacity tier on SAS/NL-SAS/SATA direct-attached JBOD; ~40k IOPS per host
• All-flash: writes are cached first, reads go directly to the flash capacity tier; ~90k IOPS per host with sub-millisecond latency
35. Nutanix Intelligent Data Tiering
Automatic Performance Optimization
Hot data -> SSD: random data, persistent tier, maximum performance
Cold data -> HDD: sequential data, highest capacity, most economical
• Leverages multiple tiers of storage
• Continuously monitors data access patterns
• Optimally places data for best performance
• No user intervention required
• VM Flash Mode – pin a VM/VMDK to the flash tier
37. New Flash Technologies Driving Performance
NVMe
• Replaces SATA and SAS with a PCIe-based standard
• Fabric topologies replace networks
3D XPoint (Optane)
• New nonvolatile storage memory from Intel and Micron
• Much greater performance than today’s flash (1,000X)
• Ultra Low Latency (<10us)
• Network latency becomes very important
38. All-Flash SAN: Long I/O Data Path
SAN I/O data path: App -> Kernel -> PCI -> NIC -> Network -> NIC -> PCI -> Kernel -> PCI -> Controller -> SSD
HCI I/O data path: App -> Kernel -> PCI -> Controller -> SSD
HCI Has a Shorter Data Path … But Not All HCI Are Created Equal
39. Legacy vSAN Has Distributed Reads Across Network
• Data is written to any 2/3 nodes (FTT=1/2)
• No guarantee that the VM resides on a node that contains a data copy – no concept of data locality
• Read I/O round-robins between the primary and replica(s)
• Reads traverse the network, negating any perceived value of the in-kernel design
• Inconsistent performance, particularly at scale
• This design does not effectively utilize ultra-low-latency flash (3D XPoint) = not a future-proof architecture
40. Nutanix Has Data Locality by Design
• Keeps the primary data copy on the same node as the VM
• 2nd/3rd (RF2/RF3) copies are distributed throughout the cluster
• All read operations are localized on the same node
• If the VM moves, all new data is written locally
• Reads of remote data trigger ILM to transparently re-localize the data
• Reduces network chattiness significantly
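The read-path behavior above can be sketched as a toy model (not Nutanix code; the `replica_map` and `relocalize` callback are hypothetical names used only for illustration):

```python
def read_block(block_id, local_node, replica_map, relocalize):
    """Toy model of a locality-aware read path.
    replica_map: block_id -> set of nodes holding a copy (hypothetical)."""
    if local_node in replica_map[block_id]:
        # Primary copy is local: serve the read without touching the network.
        return f"read {block_id} locally on {local_node}"
    # Remote read: serve it, then let ILM migrate the block to the local node.
    relocalize(block_id, local_node)
    return f"read {block_id} over the network, then re-localized to {local_node}"

migrated = []
replica_map = {"b1": {"node1", "node2"}, "b2": {"node2", "node3"}}
print(read_block("b1", "node1", replica_map, lambda b, n: migrated.append((b, n))))
print(read_block("b2", "node1", replica_map, lambda b, n: migrated.append((b, n))))
```

After the second call, the toy ILM has queued `b2` for migration to `node1`, so subsequent reads of it would be local.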
42. Nutanix Modern Self-Healing Systems
Self-healing system
Fault isolation with distributed recovery
• SSD or HDD failure = drive taken offline
• Parity is rebuilt throughout the cluster
• The larger the cluster, the faster the recovery
– Typically minutes rather than hours/days
• No need to replace the drive until capacity is needed = no fire drill
• SSD/HDD replacement adds capacity back to the cluster
43. With vSAN Disk Failures Are Still A Fire Drill Event
Hybrid systems:
• An SSD failure takes the entire disk group down with it
• An HDD failure puts the disk group in a degraded state
– The disk is rebuilt from a single source (the other disk group) = slow
– The rebuild hits the SSD cache drive and impacts performance
– A larger cluster size doesn’t increase rebuild performance
All-flash systems:
• Any SSD failure takes the entire disk group down with it
• With a single disk group per node, an SSD failure = node failure
2/3-node clusters:
• Have no self-healing capabilities
– Only a 2-node cluster with at least 3 disk groups per node can rebuild resiliency on the remaining host
• The SSD/HDD must be replaced to initiate recovery
45. VMware Strategy: Mitigate Rebuild… # Disk Groups
Mitigate long rebuild times
Mitigate max 600GB write cache size
46. Increasing Disk Group Count Kills Data Efficiency
When dedupe is applied at the disk-group level…
⇒ Potential for the same redundant blocks to exist 4 times on the same node (once per disk group)
⇒ An 8-node cluster could have 32 copies of the same redundant data even though dedupe is turned on
⇒ A 20-node cluster could have 80 copies of the same redundant data even though dedupe is turned on
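The copy counts above are simple multiplication; a minimal sketch, assuming 4 disk groups per node as the slide implies:

```python
def worst_case_duplicate_copies(nodes, disk_groups_per_node=4):
    """With per-disk-group dedupe, a block common to every disk group can
    survive once per disk group, i.e. nodes * disk_groups_per_node times."""
    return nodes * disk_groups_per_node

print(worst_case_duplicate_copies(8))   # 32 copies in an 8-node cluster
print(worst_case_duplicate_copies(20))  # 80 copies in a 20-node cluster
```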
47. VSAN Disk Groups and Memory
Per https://kb.vmware.com/s/article/2113954 (CMD: esxcli vsan debug memory list), reserved host RAM grows with disk-group count:
• 1 disk group: 17GB reserved RAM
• 2 disk groups: 27GB reserved RAM
• 3 disk groups: 36GB reserved RAM
The max is higher and varies by workload.
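As a rough rule of thumb, a least-squares line through the three data points above suggests about 7.7GB of base overhead plus ~9.5GB per disk group. This is a fit to the slide's numbers, not VMware's official formula (KB 2113954 documents the real one):

```python
# Fit reserved_ram ≈ base + per_dg * disk_groups to the slide's data points.
points = {1: 17, 2: 27, 3: 36}  # disk groups -> reserved RAM (GB)

n = len(points)
mean_x = sum(points) / n
mean_y = sum(points.values()) / n
per_dg = sum((x - mean_x) * (y - mean_y) for x, y in points.items()) / \
         sum((x - mean_x) ** 2 for x in points)
base = mean_y - per_dg * mean_x

print(f"reserved RAM ≈ {base:.1f}GB base + {per_dg:.1f}GB per disk group")
```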
49. Host downtime – the good
ESX / vSAN:
• If a host fails or disconnects, it is marked “absent” and a timer begins.
• During that period, data is not redundant; no recovery actions are taken until the host returns or 60 minutes elapse.
• When the VM writes new data, vSAN uses a “Durability Component” to store the 2nd copy; the replica contains only the delta change from the original component. Only new data is redundant, so all data is not fully protected.
• If the host comes back within 1 hour, only the “Durability Component” needs to be merged into the original replica component.
• Synchronization is limited to a 1-to-1 disk relationship for each component and uses the cache.

Hypervisor / AOS:
• If a host fails or disconnects, the cluster advises the remaining nodes of the failure.
• There is no timeout; the rebuild begins immediately.
• Recovery is not single-source or single-target; it uses the power of the entire cluster.
• When the VM writes new data, AOS natively stores a redundant copy on the remaining hosts & disks, as in any situation. Data stays fully protected.
• If the host comes back, Cassandra cleans up the over-protected data, which becomes an opportunity for native data rebalancing.
50. Host downtime – the bad
ESX / vSAN:
• If a host fails or disconnects, it is marked “absent” and a timer begins.
• During that period, data is not redundant; no recovery actions are taken until the host returns or 60 minutes elapse.
• When the VM writes new data, vSAN uses a “Durability Component” to store the 2nd copy; the replica contains only the delta change from the original component. Only new data is redundant, so all data is not fully protected.
• After 1 hour, a rebuild starts to recreate the replica component.
• Synchronization is limited to a 1-to-1 disk relationship for each component and burdens the cache. It has higher bandwidth usage & performance impact compared to a delta resync.

Hypervisor / AOS:
• If a host fails or disconnects, the cluster advises the remaining nodes of the failure.
• There is no timeout; the rebuild begins immediately.
• Recovery is not single-source or single-target; it uses the power of the entire cluster = faster rebuild.
• When the VM writes new data, AOS natively stores a redundant copy on the remaining hosts & disks, as in any situation. Data stays fully protected.
• The key is no waiting for a rebuild, and all data is treated equally: re-protected regardless of whether it has been modified or not.
51. Host downtime – the ugly
ESX / vSAN:
• If a host fails or disconnects, it is marked “absent” and a timer begins.
• During that period, data is not redundant; no recovery actions are taken until the host returns or 60 minutes elapse.
• When the VM writes new data, vSAN uses a “Durability Component” to store the 2nd copy; the replica contains only the delta change from the original component. Only new data is redundant, so all data is not fully protected.
• If another failure occurs (e.g. a disk failure) before the rebuild timeout (1h), data & VM become unavailable.

Hypervisor / AOS:
• If a host fails or disconnects, the cluster advises the remaining nodes of the failure.
• There is no timeout; the rebuild begins immediately.
• Recovery is not single-source or single-target; it uses the power of the entire cluster.
• When the VM writes new data, AOS natively stores a redundant copy on the remaining hosts & disks, as in any situation. Data stays fully protected.
• If another failure occurs (e.g. a disk failure), it has no impact on the VM because the data was already redundant. Another rebuild process starts immediately to restore resiliency.
53. vSAN 3-Node Has No Self Healing
• vSAN maintains a copy of the storage object on 2 nodes in the cluster and a witness on the 3rd node
• In the event of a failure (disk or node), the 2nd copy cannot be rebuilt on a node that contains a witness for that storage object
• So the data cannot automatically recover
• An admin must replace/fix the failure to initiate recovery
• The cluster is exposed that entire time with only a single copy of the data
56. vSAN Requires 20-30% Slack Space
• Larger clusters sized at this reserve capacity will be in constant rebalance beyond 80%
• Node-failure recovery will also be impacted for clusters beyond 80%
Note: vSAN will start an I/O-intensive rebalance when cluster or individual disk capacity exceeds 80%
58. Real World Example
How much less capacity does vSAN provide in a 4-node all-flash cluster?
• 4 nodes of all-flash, 6 SSDs per node
• vSAN configured with 2 disk groups per node; each disk group requires a cache drive
• The cache drive does not contribute to capacity
• vSAN 10% operations reserve – maximum 80% usable space

                                     Nutanix    vSAN/VxRail
# of nodes                           4          4
# of drives per node                 6          6
Capacity per drive (TB)              3.84       3.84
Raw drive capacity (TB)              92.16      92.16
# of disk groups                     0          2
Cache drives per node                0          2
Total cache drives                   0          8
Raw cache drive capacity (TB)        0          30.72
Available raw capacity (TB)          92.16      61.44
Advertised capacity (RF2/FTT=1, TB)  46.08      30.72
Oplog/operations reserve required    5%         10%
Usable capacity (TB)                 43.78      27.65

Result: roughly 44TB usable (Nutanix) vs 28TB usable (vSAN).
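The table's usable-capacity figures reproduce from the stated inputs with simple arithmetic. A minimal sketch (illustrative only; real sizing involves additional factors such as CVM overhead and metadata that the slide also omits):

```python
def usable_tb(nodes, drives_per_node, drive_tb, cache_drives_per_node, reserve_pct):
    """Usable TB after removing cache drives, applying RF2/FTT=1 mirroring,
    and subtracting the oplog/operations reserve. Illustrative arithmetic only."""
    capacity_drives = drives_per_node - cache_drives_per_node
    raw = nodes * capacity_drives * drive_tb
    advertised = raw / 2  # RF2 / FTT=1 keeps two copies of everything
    return advertised * (1 - reserve_pct / 100)

nutanix = usable_tb(4, 6, 3.84, cache_drives_per_node=0, reserve_pct=5)
vsan = usable_tb(4, 6, 3.84, cache_drives_per_node=2, reserve_pct=10)
print(f"Nutanix ≈ {nutanix:.2f}TB, vSAN ≈ {vsan:.2f}TB")  # ≈ 43.78TB vs 27.65TB
```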
60. Space Efficiency Mechanisms

Compression
• vSAN: configured at the cluster level; available only in all-flash configs; enabling/disabling requires recreating disk groups => data migration; uses a fast but low-efficiency algorithm
• AOS: configured at the storage-container level; enabling/disabling applies live to new data, with existing data post-processed to preserve performance; a fast algorithm runs inline for performance, and a better algorithm is applied post-process to increase efficiency with less performance impact

Deduplication
• vSAN: configured at the cluster level and requires compression enabled; all-flash only; enabling/disabling requires recreating disk groups => data migration; the deduplication table is spread across all disks of a disk group => any disk becomes a SPOF for the disk group; dedupe is local to a disk group, so redundant data can exist across disk groups => lower efficiency
• AOS: configured at the storage-container level, independent of but compatible with compression; enabling/disabling applies live to new data, with existing data post-processed; dedupe is global to the whole cluster => better efficiency; also reduces bandwidth usage during resyncs between hosts and replication between clusters

Erasure Coding
• vSAN: configured at the VM level (storage policy); all-flash only; fixed stripe size; applies to all data of a VM => better efficiency but a bigger performance impact
• AOS: configured at the storage-container level; stripe size is adjusted to cluster size to protect against host failure without recalculating the full stripe => faster re-protection; applies only to cold data => lower efficiency but less performance impact
64. Has The Approach To Management Evolved?
Traditional Virtualized Infrastructure vSAN/VxRAIL
Sizing
Provisioning
HA?
Day 2 Ops
Upgrades
Scalability
Does Legacy Off-Cluster Management Make Sense Going Forward?
65. Nutanix Delivers Natively Integrated Management
Purpose-built for HCI
Single pane of glass
Single vendor support
Always-on management during planned maintenance or unplanned disruption
Automated lifecycle management
Leaner stack
GUI, CLI, and API access
Distributed control plane with web-scale attributes: simple, available, easy to use
67. Have Management Tools Changed?
Traditional Virtualized Infrastructure vSAN/VxRAIL
Does Bolting-on Legacy Tools to a New Architecture Change the Experience?
68. Introducing Prism: The Answer to Your Frustrations
Server, Storage, Virtualization, Security, Data Protection, Self Service, Apps
69. Nutanix Advantage | Confidential
Infrastructure Management and IT Operations in One UI (API / GUI)
PRISM:
• Multi-hypervisor full-stack management: cluster management, storage & network management, VM management, self-service (RBAC), centralized upgrades
• Personalized insights: comprehensive search, customizable dashboards & reports
• Predictive optimization & remediation: capacity forecasting, capacity optimization, anomaly detection, just-in-time planning
• Operations automation: codeless task automation, action gallery
• Application insights: application discovery, application & non-AOS VM monitoring
• Financial resource insights: VM cost metering, chargeback, budgeting
71. Has The Upgrade Experience Changed?
Traditional virtualized infrastructure and vSAN/VxRail follow the same manual process:
1. vMotion VMs to other cluster nodes
2. Place the node in maintenance mode
3. Download patches
4. Apply patches
5. Restart the host
6. Take the node out of maintenance mode
7. vMotion VMs back
8. Repeat…
Moving VMs and restarting hosts does not eliminate the risk or the babysitting
72. Challenges with Maintenance Mode
• Have to choose between long data-evacuation time and the risk of DU/DL (data unavailability/data loss)
• Typically used for node eviction
73. Nutanix Delivers One-click Upgrades
What is it
• Automatically upgrades Nutanix software, hypervisors, and firmware non-disruptively with no manual intervention
• Nodes are upgraded in parallel, with automatic sequencing of serial reboots
Benefits
• Done in minutes with zero touch
• No downtime while the upgrade happens (CVM autopathing)
• As easy as upgrading iOS
74. LCM: Non-Disruptive Upgrades
Software & firmware full-stack upgrades: AOS, AHV, Prism, Files, Objects, Calm, and Foundation, plus firmware for HDD/SSD, BMC, expander, NIC, boot drive, HBA, and BIOS
• Automatic dependency management
• Unified upgrade process
• Easy to use, removes complexity, scalable
75. What if you could update all of your infrastructure across multiple clusters in different locations in a single click?
81. What is Cloud Foundation?
According to VMware…
• A bundling of products, not natively integrated
• Break down silos? Workload domains create more silos
82. VCF still maintains complexity…
• Environment-sizing activities
• SQL Server for vRA
• Passwords…14 of them
• 6 different consoles
• Lengthy upgrades
• A giant pile of DNS entries
• Can’t mix VCF for vSAN and VCF on VxRail
• Likely requires Professional Services for each component
83. Consider a Full-Stack Comparison: Nutanix vs. VMware

Category         VMware                     Nutanix
HCI              vSAN                       Acropolis OS (AOS)
Management       vCenter                    Prism Central
Analytics        vR Operations              Prism Pro
Orchestration    vR Automation              Calm
Automation       Various updaters           LCM
Virtualization   ESXi                       AHV
Networking       NSX + vRNI                 Flow

Also in each stack: Tanzu, SRM + vSphere Replication, Log Insight, and 3rd-party file services on the VMware side; Karbon, Leap, Files, Objects, and DPp + 3rd-party data protection on the Nutanix side.
84. Lifecycle Management Is Still Painful
Infrastructure admins juggle SDDC Manager, VxRail Manager, vCenter, NSX Manager, vRLCM, and VUM/vLCM in every workload domain.
• Layers of tools
• Tool silos that have to exist
• Complexity in alignment
• Upgrades are sequential: 4.0 -> 4.0.0.1 -> 4.0.1 -> 4.0.1.1 …
• Lacks cloud-like agility
• All-or-nothing upgrades
85. Nutanix Simplicity
Single Tool…Single UI…Unified Process
• Nutanix-inspired cloud: one control plane spanning compute, storage, networking, LCM, and automation – AHV/ESXi, firmware, AOS, Files, Objects, Karbon, Calm, Flow, NCC, and Foundation, extending to the hybrid cloud and NAS/object storage
• Legacy-inspired cloud: vCenter, VUM/vLCM, vRLCM, SDDC Manager, NSX Manager, …
86. LCM: Worry-Free Upgrades for the Whole Stack
• From hardware firmware and software components to full-stack solution upgrades
• Updates can be applied all at once or granularly for each component
• All dependencies are automatically handled
• A single source of truth for Nutanix software and system upgrades
87. Top 10 things customers should know
1. Legacy innovation. Digital transformation helps achieve measurable improvements in the business. vSAN maintains legacy principles through its caching design, emphasis on hardware, and disparate management tools. It only delivers more of the same instead of innovation.

2. Data reliability concerns. Modern, innovative platforms should not put customers in situations where their data is left in a semi-protected state waiting to restore resiliency. vSAN can wait up to an hour before beginning recovery. How acceptable is being exposed for an hour?

3. Increased operational risk. Modern systems should improve on existing processes and help reduce impacts to the business. vSAN can impact data resiliency when experiencing unplanned environmental events. It doesn’t fully protect all your data despite recent updates to the product.

4. Lacks vision. A truly visionary product improves or replaces existing technology. Moving SAN functions into a hypervisor isn’t as transformational as Nutanix’s highly distributed platform. The coarse vSAN architecture does not scale efficiently, confining workloads to a limited number of cluster resources.

5. Platform design weakness. A platform should take full advantage of new disruptive technology. vSAN dedicates expensive storage media to singular functions within the cluster. This is how legacy storage vendors solved problems years ago.

6. Hides resource consumption. VMware claims that vSAN is more resource-efficient than our CVM. The vSAN service isn’t free; it may come close to or exceed the resources of the Nutanix CVM, depending on the chosen architecture. All this to provide a single service: storage.

7. Management overhead. Modern platforms should give customers a vision of operating like a cloud. This comes through reducing panes of glass, tooling, and operational activities. Adopting vSAN keeps the legacy innovator’s dilemma in place, lacking a path to true simplicity.

8. Professional Services. Products should transform customers through their simplicity. VMware products are historically complex, requiring expensive professional services engagements to implement. Adopting vSAN maintains platform complexity.

9. Small cluster challenges. 2- and 3-node clusters cannot rebuild resilience in the event of a host loss, which puts data at risk. Moreover, for remote sites, that requires an urgent on-site intervention.

10. Storage availability concerns. Modern data centers should allow the focus to be on applications. vSAN keeps customers focused on hardware: a single failed disk can potentially remove part or all of the storage on a single host.
#8:Outstanding support is a hallmark of Nutanix. We are extremely proud of the services that we provide to our customers, and encourage any organization interested in Nutanix to inquire about our support services and learn how we can be your strategic IT partner for your most critical projects.
Highlights
An audited Net Promoter Score (NPS) of 90+ for five consecutive years. NPS measures the likelihood of a customer recommending Nutanix to a peer. The average software company has an NPS of 30. A score of 90+ is virtually unheard of.
What drives Nutanix support excellence:
All Nutanix support engineers are full-time employees. You will never talk to a contract support person
Nutanix does not employ a tiered support model, with different levels of support engineers. The site reliability engineer (SRE) that takes the initial support request will handle the case until resolution. Customers are never waiting for handoffs within the support team.
Only Nutanix customers decide when a case is resolved to their satisfaction. Nutanix does not close a support ticket on our own.
Nutanix SRE teams leverage Nutanix Pulse to proactively address support issues; sometimes without the customer calling or submitting a ticket
#12:AHV is key to our overall strategy for competing against VMware-based HCI (vSAN & VxRail). Customers running VMware on Nutanix will only get further entrenched in the VMware ecosystem, making it more likely for them to move off Nutanix to a VMware-based HCI solution. AHV should be positioned not just as an option but as the best option for customers, as it delivers operational simplicity and cost reduction without compromising performance or functionality.
AHV should always be positioned as part of the full Nutanix offering vs being compared to only a competing hypervisor platform. E.g. Make the distinction between vSphere vs Nutanix - Only when comparing the full stack of hypervisor+supporting management software can the value of going “all in” with Nutanix be communicated.
Leverage Flow and Security as part of the play - Flow ONLY works with AHV and VMW uses their network security product NSX as a differentiator. We can counter with Flow and our focus on the primary use case of application security via microsegmentation.
AHV FAQ: https://docs.google.com/document/d/1PV8gwzFSA4Pc7K38wsRdV__0-azd2sv1mUKsHLQMhKM/preview
#15:We report use cases each earnings release and Enterprise Apps is consistently about half.
Not just SAP, but other enterprise applications and use cases.
Highlight a couple, including other business-critical databases, VDI, extends to remote sites, and more.
#16:-------VIDEO SCRIPT------
We have a unified platform to deliver these different storage services.
We have historically had storage for VMs as well as for containers.
In just a few clicks, we can enable file services to users, object services to applications, block services to physical servers, and backup services to integrate with backup solutions.
And all these storage services are delivered and managed in the same way across multiple clusters, sites, and across clouds.
----------------------------------
Eliminate storage silos
#18:A complicated portfolio means competition within.
Way too many solutions means that DellEMC isn’t really looking to consolidate and simplify so much as to sell you something to make a profit.
Consolidation has been promised but has yet to be seen.
Dell is currently looking to sell off its VMware stake; what will that do for interop between the companies?
#23:CEM Pro (Customer Experience Management Professional) from CRMI
TSIA star award 2017
#27:Webscale engineering
Competitors designed by legacy storage engineers, using same principles of legacy storage approach, legacy design mentality applied to new technology. Not going to get a modern experience
Nutanix was designed by former Oracle/Google employees
Massively scalable systems with no SPOF
Extensive automation & self healing makes Nutanix systems easier to manage
Same principles as modern cloud providers utilized by Nutanix
Purpose: Talk about where and how web-scale IT originated, and what some of the commonalities are between different web-scale data centers
Key Points:
Web companies like Google and Facebook started pushing the limits of existing infrastructure systems and processes in ways that traditional businesses did not. They needed infrastructure that could support their business requirements (rapid application development cycles, scale on demand, cost containment). They tried using existing infrastructure solutions, but quickly realized that legacy infra was a poor fit for their needs.
Over time, these companies developed an alternate approach to IT that enabled them to get past limitations in infrastructure. Some common traits of web-scale IT:
Infrastructure built from commodity server hardware pooled together using intelligent software. This allows customers to start small and scale one server at a time – true scale-out
The software in the system is distributed across all the nodes. You don’t have central metadata servers or name nodes. You don’t see controller bottlenecks
Embarrassingly parallel operations – everything in the system, including storage functions like deduplication and metadata management and system cleanup, is distributed across all nodes. There are no hotspots or bottlenecks, allowing for massive scale
Compute and storage sit very close to each other. Data does not have to go back and forth between storage and compute over a network. Data has gravity, so co-locating storage and compute eliminates network bottlenecks and system slowdown
Heavy automation eliminates the need for expensive, error-prone manual operations. You don’t
#50:Despite VMware claims that CVMs have a disadvantage vs vSAN’s in-kernel architecture, they somewhat have to eat their own words, as disk groups have similar memory requirements to our CVMs.
Memory consumption of disk groups:
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-07EFD36A-F844-4E7D-830D-3863E4AA617C.html
https://kb.vmware.com/s/article/2113954
Identical blocks that span disk groups will NOT be deduplicated; deduplication is restricted to blocks within a given disk group. This means more raw capacity will be required with multiple disk groups.
Enabling of dedupe/compression has a documented metadata overhead of 5% of raw capacity.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-2285B446-46BF-429C-A1E7-BEE276ED40F7.html
All hosts in the cluster must be configured with the same type of disk groups (Node types can be mixed in Nutanix clusters)
https://kb.vmware.com/s/article/2113954
#72:Main Point:
Prism, like every component in Nutanix, is distributed and highly available. It is not an afterthought; it is built into the product. You don’t have to manage the management solution.
Beautiful and elegant, built on HTML5
Easy to use and provides end to end datacenter management
In addition every functionality in Prism is exposed through CLI and REST APIs
This helps integrate with 3rd party management solutions like Windows Azure Pack and vROPS
#75:We can argue that vCenter + Prism is not a single pane of glass. Yes, but:
it helps remove the dependency between vSphere and AOS,
the UI & management are hypervisor agnostic => the same experience across different hypervisors,
and you can leverage your technical skills even when migrating hypervisors.
#76:-------VIDEO SCRIPT------
Prism provides a true single UI for infrastructure management and IT operations.
Natively it offers full stack management unified across multiple hypervisors, multiple clusters, and multiple sites.
For IT operations, many other solutions have capacity planning, dashboards, and reports capabilities. But they are generic views and analyses based on global usage from other customers.
We provide a more accurate view of IT operations, with customizable dashboards and reports, understandable advanced searches, but also analyses based on the behavior of your infrastructure, such as anomaly detection, workload resizing recommendations, and capacity forecasts with just-in-time planning.
And to go even further, we can also provide operations automation, visibility up to applications and financial management.
These smart operations capabilities dramatically reduce management time, improve responsiveness, and bring more value to the business.
----------------------------------
Prism Pro + Insight providing more value
No customizable dashboard & advanced search, what if.
Insights has recently become available to customers but has been used internally by support for many years
#80:Operational simplicity is a pillar of web-scale IT
Nearly impossible to get zero-downtime upgrades with minimal manual process on traditional infrastructure
#87:This summarized view of Nutanix Enterprise Cloud highlights both the integrated nature of the solution, as well as the deliberate design to protect a customer’s freedom to choose the right technology for their business needs – including the best hardware platform, the most suitable hypervisor, and the right cloud environment.
All managed from a single console, Nutanix Prism.
#88:This summarized view of Nutanix Enterprise Cloud highlights both the integrated nature of the solution, as well as the deliberate design to protect a customer’s freedom to choose the right technology for their business needs – including the best hardware platform, the most suitable hypervisor, and the right cloud environment.
All managed from a single console, Nutanix Prism.
#89:vSAN Ready Node support is limited to Fujitsu only, per the FAQ for VCF
#91:HCI is fast and easy to install due to a thoughtfully packaged architecture with an integrated full-stack installer.
Not only does this allow you to deploy new HCI clusters quickly, it also automates the deployment at multiple geographically distributed sites simultaneously. This is particularly helpful for large datacenter deployments and for ROBO deployments.
Integrated full stack deployments, including third-party hypervisors, dramatically decreases the amount of time and effort needed for deploying new infrastructure resources.
#95:This summarized view of Nutanix Enterprise Cloud highlights both the integrated nature of the solution, as well as the deliberate design to protect a customer’s freedom to choose the right technology for their business needs – including the best hardware platform, the most suitable hypervisor, and the right cloud environment.
All managed from a single console, Nutanix Prism.