NPP Training series – How does it work – CVM – Software Defined

To continue the NPP training series, here is my next topic: How does it work – CVM – Software Defined

If you missed the other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V
I/O Path Overview
Drive Breakdown

To give credit where it’s due: most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then gave it a Hyper-V lean. Also, he just rocks…other than being a Seahawks fan :).

Software-Defined
Nutanix CVM

As mentioned before (likely numerous times), the Nutanix platform is a software-based solution that ships as a bundled software + hardware appliance. The Controller VM, or what we call the Nutanix CVM, is where the vast majority of the Nutanix software and logic sits, and it was designed from the beginning to be an extensible and pluggable architecture. A key benefit of being software-defined, and of not relying upon any hardware offloads or constructs, is extensibility. As with any product life cycle, advancements and new features will always be introduced.

By not relying on any custom ASIC/FPGA or hardware capabilities, Nutanix can develop and deploy new features through a simple software update. This means that a new feature (e.g., deduplication) can be delivered by upgrading the current version of the Nutanix software. It also allows newer-generation features to be deployed on legacy hardware models. For example, say you’re running a workload on an older version of the Nutanix software on a prior-generation hardware platform (e.g., the 2400), and that version doesn’t provide the deduplication capabilities your workload could benefit greatly from. To get those features, you perform a rolling upgrade of the Nutanix software while the workload is running, and you now have deduplication. It’s really that easy.

Similar to features, the ability to create new “adapters” or interfaces into the Distributed Storage Fabric is another key capability. When the product first shipped, it solely supported iSCSI for I/O from the hypervisor; this has since grown to include NFS and SMB for Hyper-V. In the future, new adapters can be created for various workloads and hypervisors (HDFS, etc.).

And again, all of this can be deployed via a software update. This is contrary to most legacy infrastructure, where a hardware upgrade or software purchase is normally required to get the “latest and greatest” features. With Nutanix, it’s different: since all features are deployed in software, they can run on any hardware platform, on any hypervisor, and be deployed through simple software upgrades.

The following figure shows a logical representation of what this software-defined controller framework (Nutanix CVM) looks like:
[Image: Nutanix CVM]
Next up, NPP Training Series – How does it all work – Disk Balancing

Until next time, Rob…

Nutanix NOS 4.5 Released…

Hi all…It’s been a few weeks since my last blog post. I’ve been busy with some travel to Microsoft Technology Centers and working on the Nutanix Ready Program. Yesterday, Nutanix released NOS 4.5. This exciting upgrade adds some great features. Sit back and get ready to enjoy the ride…release notes below.

[Image: NOS 4.5 logo]

Table 1. Terminology Updates

  • Acropolis base software – formerly: Nutanix operating system (NOS)
  • Acropolis hypervisor (AHV) – formerly: Nutanix KVM hypervisor
  • Acropolis API – formerly: Nutanix API and Acropolis API
  • Acropolis App Mobility Fabric – formerly: Acropolis virtualization management and administration
  • Acropolis Distributed Storage Fabric (DSF) – formerly: Nutanix Distributed Filesystem (NDFS)
  • Prism Element – formerly: web console (for cluster management); also known as the Prism web console; a cluster managed by Prism Central
  • Prism Central – unchanged (for multicluster management)
  • Block fault tolerance – formerly: block awareness

What’s New in Acropolis base software 4.5

Bandwidth Limit on Schedule

  • The bandwidth throttling policy provides an option to set a maximum limit on network bandwidth. You can schedule the policy depending on the usage of your network.

Note: You can configure bandwidth throttling only while updating a remote site. This option is not available during the initial configuration of a remote site.

Cloud Connect for Azure

  • The cloud connect feature for Azure enables you to back up and restore copies of virtual machines and files between an on-premises cluster and a Nutanix Controller VM located in the Microsoft Azure cloud. Once configured through the Prism web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured. This feature is currently supported for ESXi hypervisor environments only.

Common Access Card Authentication

  • You can configure two-factor authentication for web console users that have an assigned role and use a Common Access Card (CAC).

Default Container and Storage Pool Upon Cluster Creation

  • When you create a cluster, the Acropolis base software automatically creates a container and storage pool for you.

Erasure Coding

  • Complementary to deduplication and compression, erasure coding increases the effective or usable cluster storage capacity. [FEAT-1096]
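
As a rough illustration of why erasure coding raises usable capacity compared to plain replication, here is a minimal sketch. The RF2 comparison and the 4/1 strip shape are illustrative assumptions, not the actual strip sizes Nutanix chooses (those depend on cluster size and configuration):

    # Usable capacity under RF2 replication vs. an assumed 4/1 erasure-coded
    # strip (4 data blocks + 1 parity block). Numbers are illustrative only.
    raw_tib = 20.0

    rf2_usable = raw_tib / 2.0             # every write is stored twice
    ec_overhead = (4 + 1) / 4.0            # 1.25x raw consumed per usable byte
    ec_usable = raw_tib / ec_overhead

    print(f"RF2 usable:     {rf2_usable:.1f} TiB")   # 10.0 TiB
    print(f"EC(4/1) usable: {ec_usable:.1f} TiB")    # 16.0 TiB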

Hyper-V Configuration through Prism Web Console

  • After creating a Nutanix Hyper-V cluster environment, you can use the Prism web console to join the hosts to the domain, create the Hyper-V failover cluster, and also enable Kerberos.

Image Service Now Available in the Prism Web Console

  • The Prism web console Image Configuration workflow enables a user to upload ISO or disk images (in ESXi or Hyper-V format) to a Nutanix AHV cluster by specifying a remote repository URL or by uploading a file from a local machine.

MPIO Access to iSCSI Disks (Windows Guest VMs)

  • An Acropolis base software 4.5 feature that helps enforce access control to volume groups and exposes volume group disks as dual namespace disks.

Network Mapping

  • Network mapping allows you to control the network configuration for VMs when they are started on a remote site, by specifying mappings between networks on the source cluster and the destination cluster. The remote site wizard includes an option to create one or more network mappings and lets you select the source and destination networks from drop-down lists. You can also modify or remove network mappings as part of modifying a remote site.

Nutanix Cluster Check

  • Acropolis base software 4.5 includes Nutanix Cluster Check (NCC) 2.1, which includes many new checks and functionality.
  • NCC 2.1 Release Notes

NX-6035C Clusters Usable as a Target for Replication

  • You can use a Nutanix NX-6035C cluster as a target for Nutanix native replication and snapshots, created by source Nutanix clusters in your environment. You can configure the NX-6035C as a target for snapshots, set a longer retention policy than on the source cluster (for example), and restore snapshots to the source cluster as needed. The source cluster hypervisor environment can be AHV, Hyper-V, or ESXi. See Nutanix NX-6035C Replication Target in Notes and Cautions.

Note: You cannot use an NX-6035C cluster as a backup target with third-party backup software.

Prism Central Can Now Be Deployed on the Acropolis Hypervisor (AHV)

  • Nutanix has introduced a Prism Central OVA which can be deployed on an AHV cluster by leveraging Image Service features. See the Web Console Guide for installation details.
  • Prism Central 4.5 Release Notes

Prism Central Scalability

  • By increasing memory capacity to 16 GB and expanding its virtual disk to 260 GB, Prism Central can support a maximum of 100 clusters and 10,000 VMs (across all clusters, assuming each VM has an average of two virtual disks). Please contact Nutanix support if you decide to change the configuration of the Prism Central VM.
  • Prism Central 4.5 Release Notes
  • Prism Central Scalability, Compatibility and Deployment

Simplified Add Node Workflow

  • This release leverages Foundation 3.0 imaging capabilities and automates the manual steps previously required for expanding a cluster through the Prism web console.

SNMP

  • The Nutanix SNMP MIB database includes the following changes:
    • The database includes tables for monitoring hypervisor instances and virtual machines.
    • The service status table named serviceStatusTable is obsolete. Analogous information is available in a new table named controllerStatusTable. The new table has a smaller number of MIB fields for displaying the status of only essential services in the Acropolis base software.
    • The disk status table (diskStatusTable), storage pool table (storagePoolInformationTable), and cluster information table include one or more new MIB fields.
  • The SNMP feature also includes the following enhancements:
    • From the web console, you can trigger test alerts that are sent to all configured SNMP trap receivers.
    • SNMP service logs are now written to the following log file: /home/nutanix/data/logs/snmp_manager.out

Support for Minor Release Upgrades for ESXi Hosts

  • Acropolis base software 4.5 enables you to patch ESXi hosts with minor release versions of the ESXi host software through the Controller VM cluster command. Nutanix qualifies specific VMware updates and provides a related JSON metadata upgrade file for one-click upgrade; customers can now also patch hosts by using the offline bundle and md5sum checksum available from VMware, together with the Controller VM cluster command.

Note: Nutanix supports the ability to patch upgrade ESXi hosts with minor versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those minor releases. Please see the Nutanix hypervisor support statement in our Support FAQ.

VM High Availability in Acropolis

  • In case of a node failure, VM High Availability (VM-HA) ensures that VMs running on the node are automatically restarted on the remaining nodes within the cluster. VM-HA can optionally be configured to reserve spare failover capacity. This capacity reservation can be distributed across the nodes in chunks known as “segments” to provide better overall resource utilization.
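
To see why spreading the reservation in segments helps utilization, here is a rough sketch. The node count, memory size, and segment size are invented for illustration, and the real scheduler is more sophisticated:

    import math

    # Reserve enough spare capacity across surviving nodes to restart the
    # VMs of one failed node, in fixed-size segments rather than one idle host.
    node_mem_gb = 256     # memory per node (assumed)
    nodes = 4             # cluster size (assumed)
    segment_gb = 16       # reservation chunk size (assumed)

    survivors = nodes - 1
    per_node_segments = math.ceil(node_mem_gb / (segment_gb * survivors))

    print(f"Reserve {per_node_segments} x {segment_gb} GB segments on each of "
          f"{survivors} nodes = {per_node_segments * segment_gb * survivors} GB total")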

Windows Guest VM Failover Clustering

  • Acropolis base software 4.5 supports configuring Windows guest VMs as a failover cluster. This clustering type enables applications on a failed VM to fail over to and run on another guest VM on the same or different host. This release supports this feature on Hyper-V hosts with in-guest VM iSCSI and SCSI 3 Persistent Reservation (PR).

Tech Preview Features

Note: Do not use tech preview features on production systems, or on storage used by or data stored on production systems.

File Level Restore

  • The file level restore feature allows a virtual machine user to restore a file within a virtual machine from a Nutanix protected snapshot with minimal Nutanix administrator intervention.

Note: This feature should be used only after upgrading all nodes in the cluster to Acropolis base software 4.5.

What’s New in Prism Central

Prism Central for Acropolis Hypervisor (AHV)

Nutanix has introduced a Prism Central VM which is compatible with AHV to enable multicluster management in this environment. Prism Central now supports all three major hypervisors: AHV, Hyper-V, and ESXi.

Prism Central Scalability

The Prism Central VM requires the following resources to support the clusters and VMs listed below:

  • Prism Central vCPUs: 4
  • Prism Central memory (default): 8 GB
  • Total storage required for the Prism Central VM: 256 GB
  • Clusters supported: 50
  • VMs supported (across all clusters): 5,000
  • Virtual disks per VM: 2

Release Notes | NCC 2.1

Learn More About NCC Health Checks

You can learn more about the Nutanix Cluster Check (NCC) health checks on the Nutanix support portal. The portal includes a series of Knowledge Base articles describing most NCC health checks run by the ncc health_checks command.

What’s New in NCC 2.1

NCC 2.1 includes support for:

  • Acropolis base software 4.5 or later
  • NOS 4.1.3 or later only
  • All Nutanix NX Series models
  • Dell XC Series of Web-scale Converged Appliances

Tech Preview Features

The following features are available as a Tech Preview in NCC 2.1.

Run NCC health checks in parallel

  • You can specify the number of NCC health checks to run in parallel to reduce the amount of time it takes for all checks to complete. For example, the command ncc health_checks run_all --parallel=25 will run 25 health checks in parallel.

Use npyscreen to display NCC status

  • You can specify npyscreen as part of the ncc command to display status in the terminal window. Specify --npyscreen=true as part of the ncc health_checks command.

New Checks in This Release

  • check_disks – Checks whether disks are discoverable by the host. Passes if the disks are discovered. (KB 2712)
  • check_pending_reboot – Checks if the host has pending reboots. Passes if the host does not have pending reboots. (KB 2713)
  • check_storage_heavy_node – Verifies that storage-heavy nodes such as the NX-6025C are running a service VM and no guest VMs (KB 2726), and that such nodes are running the Acropolis hypervisor only (KB 2727).
  • check_utc_clock – Checks if the UTC clock is enabled. (KB 2711)
  • cluster_version_check – Verifies that the cluster is running a released version of NOS or the Acropolis base software. Returns an INFO status and the version if the cluster is running a pre-release version. (KB 2720)
  • compression_disabled_check – Verifies whether compression is enabled. (KB 2725)
  • data_locality_check – Checks if VMs that are part of a cluster with metro availability are in two different datastores (that is, fetching local data). (KB 2732)
  • dedup_and_compression_enabled_containers_check – Checks if any containers have deduplication and compression enabled together. (KB 2721)
  • dimm_same_speed_check – Checks that all DIMMs have the same speed. (KB 2723)
  • esxi_ivybridge_performance_degradation_check – Checks for the Ivy Bridge performance degradation scenario on ESXi clusters. (KB 2729)
  • gpu_driver_installed_check – Checks the version of the installed GPU driver. (KB 2714)
  • quad_nic_driver_version_check – Checks the version of the installed quad-port NIC driver. (KB 2715)
  • vmknics_subnet_check – Checks if any vmknics have the same subnet (different subnets are not supported). (KB 2722)

Foundation Release 3.0

This release includes the following enhancements and changes:

  • A major new implementation that allows for node imaging and cluster creation through the Controller VM for factory-prepared nodes on the same subnet. This process significantly reduces network complications and simplifies the workflow. (The existing workflow remains for imaging bare metal nodes.) The new implementation includes the following enhancements:
    • A Java applet that automatically discovers factory-prepared nodes on the subnet and allows you to select the first one to image.
    • A simplified GUI to select and configure the nodes, define the cluster, select the hypervisor and Acropolis base software versions to use, and monitor the imaging and cluster creation process.

Customers may create a cluster using the new Controller VM-based implementation in Foundation 3.0. Imaging bare metal nodes is still restricted to Nutanix sales engineers, support engineers, and partners.

  • The new implementation is incorporated in the Acropolis base software version 4.5 to allow for node imaging when adding nodes to an existing cluster through the Prism GUI.
  • The cluster creation workflow does not use IPMI, and for both cluster creation and bare-metal imaging, the host operating system install is done within an “installer VM” in Phoenix.
  • To see the progress of a host operating system installation, point a VNC console at the node’s Controller VM IP address on port 5901.
  • Foundation no longer offers the option to run diagnostics.py as a post-imaging test.  Should you wish to run this test, you can download it from the Tools & Firmware page on the Nutanix support portal.
  • There is no Foundation upgrade path to the new Controller VM implementation; you must download the Java applet from the Foundation 3.0 download page on the support portal. However, you can upgrade Foundation 2.1.x to 3.0 for the bare metal workflow as follows:
      • Copy the Foundation tarball (foundation-version#.tar.gz) from the support portal to /home/nutanix in your VM.
      • Navigate to /home/nutanix.
      • Enter the following five commands:
        • $ sudo service foundation_service stop
        • $ rm -rf foundation
        • $ tar xzf foundation-version#.tar.gz
        • $ sudo yum install python-scp
        • $ sudo service foundation_service restart
    • If the first command (foundation_service stop) is skipped or the commands are not run in order, you may get bizarre errors after upgrading. To fix this, enter the following two commands:
        • $ sudo pkill -9 foundation
        • $ sudo service foundation_service restart

The release notes for each of these products are located at:

  • Acropolis base software 4.5:  https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Acr_v4_5.html
  • Prism Central 4.5: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Prism_Central_v4_5.html
  • Nutanix Cluster Check (NCC) 2.1: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-NCC:rel_Release_Notes-NCC_v2_1.html
  • Foundation 3.0: https://portal.nutanix.com/#/page/docs/details?targetId=Field_Installation_Guide-v3_0:fie_release_notes_foundation_v3_0_r.html


Until next time, Rob…

NPP Training series – Drive Breakdown

To continue the NPP training series, here is my next topic: Drive Breakdown
If you missed the other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V
I/O Path Overview

To give credit where it’s due: most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then gave it a Hyper-V lean.

Drive Breakdown

In this section I’ll cover how the various storage devices (SSD/HDD) are broken down, partitioned, and utilized by the Nutanix platform. NOTE: All of the capacities used are in base-2 gibibytes (GiB) instead of base-10 gigabytes (GB). Formatting of the drives with a filesystem, and the associated overheads, has also been taken into account.

SSD Devices

SSD devices store a few key items which are explained in greater detail above:

  • Nutanix Home (CVM core)
  • Cassandra (metadata storage) – MORE
  • OpLog (persistent write buffer) – MORE
  • Extent Store (persistent storage) – MORE

Below we show an example of the storage breakdown for a Nutanix node’s SSD(s):
[Image: SSD storage breakdown]
NOTE: As of release 4.0.1, OpLog sizing is done dynamically, which allows the extent store portion to grow dynamically. The values used here assume a completely utilized OpLog. Graphics and proportions aren’t drawn to scale. When evaluating the remaining GiB capacities, do so from the top down; for example, the remaining GiB used for the OpLog calculation is what is left after Nutanix Home and Cassandra have been subtracted from the formatted SSD capacity.
Most models ship with 1 or 2 SSDs, but the same construct applies to models shipping with more SSD devices. For example, applying this to a 3060 or 6060 node with 2 x 400GB SSDs gives 100GiB of OpLog, 40GiB of Content Cache, and ~440GiB of Extent Store SSD capacity per node. Storage for Cassandra is a minimum reservation and may be larger depending on the quantity of data. (A worked sketch of this top-down math follows the figures below.)
[Image: 3060/6060 SSD breakdown]
For a 3061 node which has 2 x 800GB SSDs this would give us 100GiB of OpLog, 40GiB of Content Cache and ~1.1TiB of Extent Store SSD capacity per node.
[Image: 3061 SSD breakdown]
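
To make the top-down subtraction concrete, here is a minimal sketch. The Nutanix Home and Cassandra reservation sizes and the filesystem overhead factor are assumptions for illustration only; actual values vary by model and release, so the result will not exactly match the figures quoted above:

    GB_TO_GIB = 1000**3 / 1024**3             # base-10 GB -> base-2 GiB

    def remaining_after(formatted_gib, reservations):
        """Walk the reservations top-down; return what is left for the extent store."""
        remaining = formatted_gib
        for name, gib in reservations:
            remaining -= gib
            print(f"after {name:<13} {remaining:7.1f} GiB remaining")
        return remaining

    raw_gb = 2 * 400                          # two 400 GB SSDs (3060/6060 example)
    formatted = raw_gb * GB_TO_GIB * 0.97     # assumed ~3% filesystem overhead

    extent_store = remaining_after(formatted, [
        ("Nutanix Home", 60),                 # assumed CVM core reservation
        ("Cassandra", 30),                    # assumed minimum metadata reservation
        ("OpLog", 100),                       # per the post: 100 GiB
        ("Content Cache", 40),                # per the post: 40 GiB
    ])
    print(f"Extent Store SSD capacity: ~{extent_store:.0f} GiB")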

HDD Devices

Since HDD devices are primarily used for bulk storage, their breakdown is much simpler:

  • Curator Reservation (Curator storage) – MORE
  • Extent Store (persistent storage)

[Image: HDD storage breakdown]
For example, applying this to a 3060 node with 4 x 1TB HDDs gives 80GiB reserved for Curator and ~3.4TiB of Extent Store HDD capacity per node.
[Image: 3060 HDD breakdown]
NOTE: The above values are accurate as of 4.0.1 and may vary by release.
Next up, I figured we would look at some of the cool software technologies that run on our CVM (Controller Virtual Machine), starting with the Elastic Dedupe Engine.

Until next time, Rob

Nutanix Community Edition – Public Beta – Now Available – Build Your Own Nutanix Test Lab for Free

[Image: Nutanix Community Edition logo]

Nutanix Community Edition

Another very exciting announcement was Nutanix Community Edition (CE) on June 9th, 2015, at our inaugural .NEXT conference. So, what is it? Our website describes it best: “Community Edition is a 100% software solution enabling technology enthusiasts to easily evaluate the latest hyperconvergence technology at zero cost.” In other words, you can use your own hardware to test out Nutanix. Very cool. This is great for building a lab and gaining a hands-on understanding of hyperconvergence.
Nutanix is offering a hardware compatibility list (HCL) to users that includes the minimum requirements to run the software; essentially, any standard x86 server can be used.
And to quote our CEO and co-founder Dheeraj Pandey,
“From our very first software release in 2012, Nutanix has been dedicated to open architectures and technologies, offering unprecedented customer choice and flexibility. Community Edition is the next step in democratizing hyperconverged infrastructure technology, enabling anyone to experience the transformative benefits of our software. Only by eliminating the requirement for proprietary hardware and embracing off-the-shelf platforms can the next revolution of datacenter technologies be fully realized.”
As the name implies, support for CE will come from the community through Nutanix’s NEXT online portal. Users will be able to log in, ask questions, and get answers from the community.
CE also lets you check out our new hypervisor, based on KVM and Acropolis. Check out Josh Odgers’ blog to learn more about Acropolis.
Join the beta…and don’t forget my NPP training series, which helps you with all the concepts around hyperconvergence.
Currently, I am getting started with my Nutanix CE installation and will post my experiences in a later blog post on how I build my Nutanix Lab @ Home. 🙂

Until next time….Rob

NPP Training series – I/O Path Overview

To continue the NPP training series, here is my next topic: I/O Path Overview
If you missed the other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V

To give credit where it’s due: most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then gave it a Hyper-V lean.

IO Path Overview

The Nutanix IO path is composed of the following high-level components:
[Image: Nutanix I/O path]

OpLog

  • Key Role: Persistent write buffer
  • Description: The OpLog is similar to a filesystem journal and is built to handle bursty writes, coalesce them, and then sequentially drain the data to the extent store. Upon a write, the OpLog is synchronously replicated to the OpLogs of n other CVMs before the write is acknowledged, for data availability purposes. All CVM OpLogs partake in the replication, and peers are dynamically chosen based upon load. The OpLog is stored on the SSD tier of the CVM to provide extremely fast write I/O performance, especially for random I/O workloads. For sequential workloads, the OpLog is bypassed and writes go directly to the extent store. If data is currently sitting in the OpLog and has not been drained, all read requests for it are fulfilled directly from the OpLog until it has been drained, after which it is served by the extent store/content cache. For containers where fingerprinting (aka dedupe) has been enabled, all write I/Os are fingerprinted using a hashing scheme, allowing them to be deduped based upon fingerprint in the content cache. (A simplified sketch of the write path follows below.)
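
Below is a minimal sketch of the write-path decision just described, assuming RF2 and a precomputed sequential flag; the peer selection and sequential detection here are simplified stand-ins for the actual Stargate logic:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Write:
        offset: int
        data: bytes
        is_sequential: bool                   # in reality inferred from the I/O pattern

    @dataclass
    class Node:
        name: str
        oplog: list = field(default_factory=list)         # SSD-backed write buffer
        extent_store: list = field(default_factory=list)  # persistent bulk storage

    def handle_write(io, local, peers, rf=2):
        if io.is_sequential:
            local.extent_store.append(io)     # sequential writes bypass the OpLog
        else:
            local.oplog.append(io)            # buffer the random write locally...
            for peer in random.sample(peers, rf - 1):  # ...and replicate synchronously
                peer.oplog.append(io)         # (peers are chosen based on load in reality)
        return "ack"                          # acknowledged only after replication

    nodes = [Node(f"cvm{i}") for i in range(3)]
    handle_write(Write(0, b"x" * 4096, is_sequential=False), nodes[0], nodes[1:])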

Extent Store

  • Key Role: Persistent data storage
  • Description: The Extent Store is the persistent bulk storage of NDFS; it spans SSD and HDD and is extensible to facilitate additional devices/tiers. Data entering the extent store is either A) being drained from the OpLog or B) sequential in nature, having bypassed the OpLog directly. Nutanix ILM determines tier placement dynamically based upon I/O patterns and moves data between tiers.

Content Cache

  • Key Role: Dynamic read cache
  • Description: The Content Cache (aka the “Elastic Dedupe Engine”) is a deduplicated read cache which spans both the CVM’s memory and SSD. Upon a read request for data not in the cache (or based upon a particular fingerprint), the data is placed into the single-touch pool of the content cache, which sits completely in memory and uses LRU eviction. Any subsequent read request will “move” (no data is actually moved, just cache metadata) the data into the memory portion of the multi-touch pool, which consists of both memory and SSD. From here there are two LRU cycles: eviction from the in-memory piece moves the data to the SSD section of the multi-touch pool, where it is assigned a new LRU counter. Any read request for data in the multi-touch pool causes the data to go to the top of the multi-touch pool, where it is given a new LRU counter. Fingerprinting is configured at the container level and can be configured via the UI. By default, fingerprinting is disabled. (A toy model of the two pools follows the figure below.)
  • Below we show a high-level overview of the Content Cache:

[Image: Content Cache pools]
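
Here is a toy model of the single-touch/multi-touch behavior described above. Pool sizes are arbitrary, everything lives in plain dictionaries, and the real Content Cache additionally spans SSD and keys entries by fingerprint for dedupe:

    from collections import OrderedDict

    class ContentCache:
        """Toy two-pool read cache: first touch, then promotion on re-read."""

        def __init__(self, single_cap=4, multi_cap=4):
            self.single_cap, self.multi_cap = single_cap, multi_cap
            self.single = OrderedDict()   # first-touch entries (memory only)
            self.multi = OrderedDict()    # re-read entries (memory + SSD in reality)

        @staticmethod
        def _put(pool, key, value, cap):
            pool[key] = value
            pool.move_to_end(key)         # most-recently-used position
            if len(pool) > cap:
                pool.popitem(last=False)  # evict the least-recently-used entry

        def read(self, key, fetch):
            if key in self.multi:         # hot data: refresh its LRU position
                self.multi.move_to_end(key)
                return self.multi[key]
            if key in self.single:        # second touch: promote to multi-touch
                value = self.single.pop(key)
                self._put(self.multi, key, value, self.multi_cap)
                return value
            value = fetch(key)            # miss: fill the single-touch pool
            self._put(self.single, key, value, self.single_cap)
            return value

    cache = ContentCache()
    cache.read("extent-42", lambda k: f"<data for {k}>")  # miss -> single-touch
    cache.read("extent-42", lambda k: f"<data for {k}>")  # promoted to multi-touch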

Extent Cache

  • Key Role: In-memory read cache
  • Description: The Extent Cache is an in-memory read cache that sits completely in the CVM’s memory. It stores non-fingerprinted extents for containers where fingerprinting and dedupe are disabled.

Drive Breakdown

In this section I’ll cover how the various storage devices (SSD/HDD) are broken down, partitioned, and utilized by the Nutanix platform. NOTE: All of the capacities used are in base-2 gibibytes (GiB) instead of base-10 gigabytes (GB). Formatting of the drives with a filesystem, and the associated overheads, has also been taken into account.

SSD Devices

SSD devices store a few key items which are explained in greater detail above:

  • Nutanix Home (CVM core)
  • Cassandra (metadata storage) – MORE
  • OpLog (persistent write buffer)
  • Extent Store (persistent storage)

Below we show an example of the storage breakdown for a Nutanix node’s SSD(s):
[Image: SSD storage breakdown]
NOTE: As of release 4.0.1, OpLog sizing is done dynamically, which allows the extent store portion to grow dynamically. The values used here assume a completely utilized OpLog. Graphics and proportions aren’t drawn to scale. When evaluating the remaining GiB capacities, do so from the top down.

For example, the remaining GiB used for the OpLog calculation is what is left after Nutanix Home and Cassandra have been subtracted from the formatted SSD capacity. Most models ship with 1 or 2 SSDs, but the same construct applies to models shipping with more SSD devices. For example, applying this to a 3060 or 6060 node with 2 x 400GB SSDs gives 100GiB of OpLog, 40GiB of Content Cache, and ~440GiB of Extent Store SSD capacity per node. Storage for Cassandra is a minimum reservation and may be larger depending on the quantity of data.
[Image: 3060/6060 SSD breakdown]
For a 3061 node which has 2 x 800GB SSDs this would give us 100GiB of OpLog, 40GiB of Content Cache and ~1.1TiB of Extent Store SSD capacity per node.
[Image: 3061 SSD breakdown]

HDD Devices

Since HDD devices are primarily used for bulk storage, their breakdown is much simpler:

  • Curator Reservation (Curator storage) – MORE
  • Extent Store (persistent storage)

[Image: HDD storage breakdown]
For example, applying this to a 3060 node with 4 x 1TB HDDs gives 80GiB reserved for Curator and ~3.4TiB of Extent Store HDD capacity per node.
[Image: 3060 HDD breakdown]
For a 6060 node with 4 x 4TB HDDs, this gives 80GiB reserved for Curator and ~14TiB of Extent Store HDD capacity per node.
[Image: 6060 HDD breakdown]
NOTE: The above values are accurate as of 4.0.1 and may vary by release.
Next up, Drive Breakdown on Nutanix

Until next time, Rob….

NPP Training series – Data Structure on Nutanix with Hyper-V

To continue the NPP training series, here is my next topic: Data Structure on Nutanix with Hyper-V
If you missed the other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

To give credit where it’s due: most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate; I then gave it a Hyper-V lean and updated the graphics for Hyper-V.

Data Structure on Nutanix

The NDFS (Nutanix Distributed Filesystem) is composed of the following high-level structs:

Storage Pool

  • Key Role: Group of physical devices
  • Description: A storage pool is a group of physical storage devices including PCIe SSD (Solid State Drive), SSD, and HDD (Hard Disk Drive) devices for the cluster.  The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales.  In most configurations only a single storage pool is leveraged.

Container

  • Key Role: Group of VMs/files
  • Description: A container is a logical segmentation of the storage pool and contains a group of VMs (virtual machines) or files (vDisks). Some configuration options (e.g., Resiliency Factor (RF)) are configured at the container level but are applied at the individual VM/file level. Containers typically have a 1-to-1 mapping with a datastore (SMB share(s)).

vDisk

  • Key Role: vDisk
  • Description: A vDisk is any file over 512KB on NDFS, including VM hard disks. vDisks are composed of extents, which are grouped and stored on disk as an extent group.

Below we show how these map between NDFS and Hyper-V:
[Image: Storage pool, container, and vDisk structure]

Extent

  • Key Role: Logically contiguous data
  • Description: An extent is a 1MB piece of logically contiguous data consisting of n contiguous blocks (where n varies depending on the guest OS block size). Extents are written/read/modified on a sub-extent basis (aka a slice) for granularity and efficiency. An extent’s slice may be trimmed when moving into the cache, depending on the amount of data being read/cached.

Extent Group

  • Key Role: Physically contiguous stored data
  • Description: An extent group is a 1MB or 4MB piece of physically contiguous stored data. This data is stored as a file on the storage device owned by the CVM (Controller Virtual Machine). Extents are dynamically distributed among extent groups to provide data striping across nodes/disks and improve performance. (A toy mapping example follows below.)
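
As a toy illustration of how these units nest, the sketch below maps a vDisk byte offset to its extent and extent group using the 1MB extent and 4MB extent group sizes mentioned above. The static arithmetic is a simplification, since extents are actually placed dynamically across extent groups for striping:

    EXTENT_SIZE = 1 * 1024 * 1024                    # 1MB extents
    EXTENT_GROUP_SIZE = 4 * 1024 * 1024              # 4MB extent groups
    EXTENTS_PER_GROUP = EXTENT_GROUP_SIZE // EXTENT_SIZE

    def locate(vdisk_offset):
        """Map a vDisk byte offset to (extent, extent group, slot) indices."""
        extent = vdisk_offset // EXTENT_SIZE
        group = extent // EXTENTS_PER_GROUP          # simplification: real extent
        slot = extent % EXTENTS_PER_GROUP            # placement is dynamic/striped
        return extent, group, slot

    # Byte 5 MiB + 123 lands in extent 5, extent group 1, slot 1:
    print(locate(5 * 1024 * 1024 + 123))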

Below we show how these structs relate between the various filesystems:
[Image: NDFS data layout]
Here is another graphical representation of how these units are logically related:
[Image: NDFS data structure]
Next up, I/O Path Overview

Until next time, Rob…

NPP Training series – Cluster Components with Hyper-V

To continue the NPP training series, here is my next topic: Cluster Components

If you missed the other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V
I/O Path Overview

To give credit where it’s due: most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then gave it a Hyper-V lean.

Cluster Components

The Nutanix platform is composed of the following high-level components:

[Image: Nutanix cluster components]

Cassandra

  • Key Role: Distributed metadata store
  • Description: Cassandra stores and manages all of the cluster metadata in a distributed, ring-like manner, based upon a heavily modified Apache Cassandra. The Paxos algorithm is utilized to enforce strict consistency. This service runs on every node in the cluster. Cassandra is accessed via an interface called Medusa.

Medusa

  • Key Role: Abstraction layer
  • Description: Medusa is the Nutanix abstraction layer that sits in front of the cluster’s distributed metadata database, which is managed by Cassandra.

Zookeeper

  • Key Role: Cluster configuration manager
  • Description: Zookeeper stores all of the cluster configuration, including hosts, IPs, state, etc., and is based upon Apache Zookeeper. This service runs on three nodes in the cluster, one of which is elected as leader. The leader receives all requests and forwards them to its peers. If the leader fails to respond, a new leader is automatically elected. Zookeeper is accessed via an interface called Zeus. (A toy model of this arrangement follows below.)
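
Here is a toy model of the three-node leader/follower arrangement just described; real Zookeeper leader election is far more involved than simply picking the next live node:

    class Ensemble:
        """Toy three-node configuration service with leader failover."""

        def __init__(self, names):
            self.alive = {n: True for n in names}
            self.leader = names[0]            # assume an initial election happened

        def fail(self, name):
            self.alive[name] = False

        def handle(self, request):
            if not self.alive[self.leader]:   # leader unresponsive: re-elect
                self.leader = next(n for n, up in self.alive.items() if up)
            peers = [n for n, up in self.alive.items() if up and n != self.leader]
            return f"{self.leader} applies {request!r}, forwards to {peers}"

    zk = Ensemble(["cvm-a", "cvm-b", "cvm-c"])
    print(zk.handle("update cluster config"))
    zk.fail("cvm-a")
    print(zk.handle("read leadership lock"))  # cvm-b takes over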

Zeus

  • Key Role:  Library interface
  • Description: Zeus is the Nutanix library interface that all other components use to access the cluster configuration, such as IP addresses. Currently implemented using Zookeeper, Zeus is responsible for critical, cluster-wide data such as cluster configuration and leadership locks.

Stargate

  • Key Role: Data I/O manager
  • Description: Stargate is responsible for all data management and I/O operations and is the main interface from Hyper-V (via SMB 3.0).  This service runs on every node in the cluster in order to serve localized I/O.

Curator

  • Key Role: Map reduce cluster management and cleanup
  • Description: Curator is responsible for managing and distributing tasks throughout the cluster, including disk balancing, proactive scrubbing, and many more items. Curator runs on every node and is controlled by an elected Curator Master, who is responsible for task and job delegation. There are two scan types for Curator: a full scan, which occurs around every 6 hours, and a partial scan, which occurs every hour.

Prism

  • Key Role: UI and API
  • Description: Prism is the management gateway for components and administrators to configure and monitor the Nutanix cluster. This includes Ncli, the HTML5 UI, and the REST API. Prism runs on every node in the cluster and uses an elected leader, like all components in the cluster.

[Images: Prism UI screenshots]

Genesis

  • Key Role: Cluster component & service manager
  • Description: Genesis is a process which runs on each node and is responsible for service interactions (start/stop/etc.) as well as the initial configuration. Genesis runs independently of the cluster and does not require the cluster to be configured or running; its only requirement is that Zookeeper is up and running. The cluster_init and cluster_status pages are displayed by the Genesis process.

Chronos

  • Key Role: Job and Task scheduler
  • Description: Chronos is responsible for taking the jobs and tasks resulting from a Curator scan and scheduling/throttling those tasks among nodes. Chronos runs on every node and is controlled by an elected Chronos Master, who is responsible for task and job delegation and runs on the same node as the Curator Master.

Cerebro

  • Key Role: Replication/DR manager
  • Description: Cerebro is responsible for the replication and DR capabilities of the DSF (Distributed Storage Fabric). This includes the scheduling of snapshots, replication to remote sites, and site migration/failover. Cerebro runs on every node in the Nutanix cluster, and all nodes participate in replication to remote clusters/sites.

Pithos

  • Key Role: vDisk configuration manager
  • Description: Pithos is responsible for vDisk (DSF file) configuration data. Pithos runs on every node and is built on top of Cassandra.

Next up: Data Structure, which covers the high-level structs of the Nutanix Distributed Filesystem

Until next time, Rob….

NPP Training series – Cluster Architecture with Hyper-V

To continue the NPP training series, here is my next topic: Cluster Architecture

To give credit where it’s due: some of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then gave it a Hyper-V lean.

Cluster Architecture

The Nutanix solution is a converged storage + compute solution which leverages local components to create a distributed platform for virtualization, aka a virtual computing platform. The solution is a bundled hardware + software appliance which houses 2 nodes (6000/7000 series) or 4 nodes (1000/2000/3000/3050 series) in a 2U footprint. Each node runs an industry-standard hypervisor (currently ESXi, KVM, or Hyper-V) and the Nutanix Controller VM (CVM). The Nutanix CVM runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host. For Nutanix units running VMware vSphere, the SCSI controller, which manages the SSD and HDD devices, is passed directly to the CVM leveraging VM-Direct Path (Intel VT-d). In the case of Hyper-V, the storage devices are passed through to the CVM. Below is an example of what a typical node logically looks like:

[Image: Nutanix node detail]

Together, a group of Nutanix nodes forms a distributed platform called the Distributed Storage Fabric (DSF). DSF appears to Hyper-V like any centralized storage array, but all of the I/Os are handled locally to provide the highest performance. More detail on how these nodes form a distributed system can be found below. Below is an example of how these Nutanix nodes form the DSF, which is then presented up to Hyper-V via SMB 3.0 share(s):

[Image: DSF overview]

DSF uses a software-defined, shared-nothing, scale-out approach to storage that eliminates the need to deploy a separate SAN, along with its performance bottlenecks and scalability limitations. DSF leverages local SSDs for fast VM performance and consolidates high-capacity HDDs for cost-effective storage capacity.

Application data is intelligently placed in the appropriate storage tier, balancing storage performance and capacity needs. Noisy VMs on other hosts won’t impact the performance of any workload, fulfilling key performance requirements for hybrid deployments.
Here are the key points with Hyper-V on Nutanix:

  • Hypervisor sees the Distributed Storage Fabric (DFS) as one or more SMB 3.0 file shares
  • Supports features like snapshots, dedupe, compression, web-scale out, and disaster recovery
  • Locally shared storage is composed of both flash and spinning disks
  • Variety of models (compute heavy, storage heavy, etc.)
  • Mix and match models within the same cluster
  • Pay as you grow – Start small and linearly scale your Microsoft infrastructure in minutes without the scalability shortcomings of traditional servers and storage.

Next up in the NPP Training series – Cluster Components