Exchange Server 2016 RTM Released: Forged in the cloud. Built for Web-Scale

Exchange Server 2016 is here and available to download!!!

What sets this version of Exchange apart from the past is that it was forged in the cloud. This release brings the Exchange bits that already power millions of Office 365 mailboxes to your on-premises environment. And by deploying Exchange 2016 on Nutanix, you can truly create the ultimate web-scale email environment.

Email remains the backbone of business communication and the tool that workers consider most essential for getting things done. Because of this, it’s vital to have a modern messaging infrastructure that meets today’s business expectations of scale. With the volume of email and other communications continuing to grow, people need tools that help them focus on what’s most important in their inboxes, schedules and interactions with others at work. And as the quantity of email data grows, so do the demands on IT to manage, preserve and protect it. This is why web-scale is so important in an Exchange 2016 environment.

Web-Scale Fundamentals  
To help you meet these challenges with Exchange Server, Microsoft has deepened the integration between Exchange and other Office products, so your organization can be more productive and collaborate more effectively. They’ve made it easier to manage your email with new ways to focus on what’s important, work more efficiently, and accomplish more with your devices. Microsoft has also simplified the Exchange architecture and introduced additional recovery features.

Exchange 2016 builds on and improves features introduced in Exchange 2013, including Data Loss Prevention, Managed Availability, automatic recovery from storage failures, and the web-based Exchange admin center.

  • Better collaboration: Exchange 2016 includes a new approach to attachments that simplifies document sharing and eliminates version control headaches. In Outlook 2016 or Outlook on the web, you can now attach a document as a link to SharePoint 2016 (currently in preview) or OneDrive for Business instead of a traditional attachment, providing the benefits of coauthoring and version control.
  • Improved Outlook web experience: Continuing the effort to provide a first-class web experience across devices, Microsoft has made significant updates to Outlook on the web. New features include Sweep, Pin, Undo, inline reply, a new single-line inbox view, improved HTML rendering, new themes, emojis, and more.
  • Search: A lightning-fast search architecture delivers more accurate and complete results. Outlook 2016 is optimized to use the power of the Exchange 2016 back-end to help you find things faster, across old mail and new. Search also gets more intelligent with Search suggestions, People suggestions, search refiners, and the ability to search for events in your Calendar.
  • Greater extensibility:  An expanded Add-In model for Outlook desktop and Outlook on the web allows developers to build features right into the Outlook experience. Add-ins can now integrate with UI components in new ways: as highlighted text in the body of a message or meeting, in the right-hand task pane when composing or reading a message or meeting, and as a button or a dropdown option in the Outlook ribbon.
  • eDiscovery: Exchange 2016 has a revamped eDiscovery pipeline that is significantly faster and more scalable. Reliability is improved due to a new search architecture that is asynchronous and distributes the work across multiple servers with better fault tolerance. You also have the ability to search, hold and export content from public folders.
  • Simplified architecture: One Role…!  Exchange 2016’s architecture reflects the way Microsoft deploys Exchange in Office 365 and is an evolution and refinement of Exchange 2013. A combined mailbox and client access server role makes it easier to plan and scale your on-premises and hybrid deployments. Coexistence with Exchange 2013 is simplified, and namespace planning is easier.
  • High availability: Automated repair improvements such as database divergence detection make Exchange easier than ever to run in a highly available way. Stability and performance enhancements from Office 365, many of which were so useful that Microsoft shipped them in Exchange 2013 Cumulative Updates, are also baked into the product.

That’s just a quick list of highlights; I encourage you to get a full view of what’s new by reviewing the Exchange 2016 documentation on TechNet.
Or, if you are in the mood for something more bite-sized, check out these short demo videos in which a few members of the Exchange team show off their favorite features:

Exchange 2016 will follow the same servicing rhythm as Exchange 2013, with Cumulative Updates (CUs) released approximately every three months that contain bug fixes, product refinements, and selected new investments from Office 365. The first CU is expected to arrive in the first quarter of 2016.

Until next time, Rob….

Nutanix NOS 4.5 Released…

Hi all…It’s been a few weeks since my last blog post. I’ve been busy with some travel to Microsoft Technology Centers and working on the Nutanix Ready Program.  Yesterday, Nutanix released NOS 4.5. This exciting upgrade adds some great features. Sit back and get ready to enjoy the ride…release notes below.


Table 1. Terminology Updates

| New Terminology | Formerly Known As |
| --- | --- |
| Acropolis base software | Nutanix operating system, NOS |
| Acropolis hypervisor, AHV | Nutanix KVM hypervisor |
| Acropolis API | Nutanix API and Acropolis API |
| Acropolis App Mobility Fabric | Acropolis virtualization management and administration |
| Acropolis Distributed Storage Fabric, DSF | Nutanix Distributed Filesystem (NDFS) |
| Prism Element | Web console (for cluster management); also known as the Prism web console. A cluster managed by Prism Central |
| Prism Central | Prism Central (for multicluster management) |
| Block fault tolerance | Block awareness |

What’s New in Acropolis base software 4.5

Bandwidth Limit on Schedule

  • The bandwidth throttling policy provides an option to set a maximum limit on network bandwidth. You can specify the policy depending on the usage of your network.

Note: You can configure bandwidth throttling only while updating a remote site. This option is not available during the initial configuration of a remote site.

Cloud Connect for Azure

  • The cloud connect feature for Azure enables you to back up and restore copies of virtual machines and files to and from an on-premises cluster and a Nutanix Controller VM located in the Microsoft Azure cloud. Once configured through the Prism web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured. This feature is currently supported for ESXi hypervisor environments only.

Common Access Card Authentication

  • You can configure two-factor authentication for web console users that have an assigned role and use a Common Access Card (CAC).

Default Container and Storage Pool Upon Cluster Creation

  • When you create a cluster, the Acropolis base software automatically creates a container and storage pool for you.

Erasure Coding

  • Complementary to deduplication and compression, erasure coding increases the effective or usable cluster storage capacity. [FEAT-1096]
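
As a rough sketch of why erasure coding stretches usable capacity compared to plain replication, consider the comparison below. The 4 data + 1 parity strip is a generic illustration I've assumed for the arithmetic, not necessarily the strip geometry Acropolis chooses, which depends on cluster size and configuration.

```python
# Rough, illustrative comparison of replication vs. erasure-coding overhead.
# The 4 data + 1 parity strip is an assumed example; actual strip geometry
# in Acropolis depends on cluster size and configuration.

def usable_fraction_rf(replication_factor: int = 2) -> float:
    """With RF2, every byte is written twice, so only half the raw space holds unique data."""
    return 1 / replication_factor

def usable_fraction_ec(data_blocks: int = 4, parity_blocks: int = 1) -> float:
    """With a 4+1 strip, 4 of every 5 raw blocks hold data."""
    return data_blocks / (data_blocks + parity_blocks)

raw_tib = 20  # hypothetical raw capacity
print(f"RF2:        {raw_tib * usable_fraction_rf():.0f} TiB usable of {raw_tib} TiB raw")
print(f"EC-X (4+1): {raw_tib * usable_fraction_ec():.0f} TiB usable of {raw_tib} TiB raw")
```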

Hyper-V Configuration through Prism Web Console

  • After creating a Nutanix Hyper-V cluster environment, you can use the Prism web console to join the hosts to the domain, create the Hyper-V failover cluster, and also enable Kerberos.

Image Service Now Available in the Prism Web Console

  • The Prism web console Image Configuration workflow enables a user to upload ISO or disk images (in ESXi or Hyper-V format) to a Nutanix AHV cluster by specifying a remote repository URL or by uploading a file from a local machine.

MPIO Access to iSCSI Disks (Windows Guest VMs)

  • Acropolis base software 4.5 adds a feature to help enforce access control to volume groups and expose volume group disks as dual namespace disks.

Network Mapping

  • Network mapping allows you to control network configuration for the VMs when they are started on the remote site. This feature enables you to specify network mapping between the source cluster and the destination cluster. The remote site wizard includes an option to create one or more network mappings and allows you to select source and destination network from the drop-down list. You can also modify or remove network mappings as part of modifying the remote sites.

Nutanix Cluster Check

  • Acropolis base software 4.5 includes Nutanix Cluster Check (NCC) 2.1, which includes many new checks and functionality.
  • NCC 2.1 Release Notes

NX-6035C Clusters Usable as a Target for Replication

  • You can use a Nutanix NX-6035C cluster as a target for Nutanix native replication and snapshots, created by source Nutanix clusters in your environment. You can configure the NX-6035C as a target for snapshots, set a longer retention policy than on the source cluster (for example), and restore snapshots to the source cluster as needed. The source cluster hypervisor environment can be AHV, Hyper-V, or ESXi. See Nutanix NX-6035C Replication Target in Notes and Cautions.

Note: You cannot use an NX-6035C cluster as a backup target with third-party backup software.

Prism Central Can Now Be Deployed on the Acropolis Hypervisor (AHV)

  • Nutanix has introduced a Prism Central OVA which can be deployed on an AHV cluster by leveraging Image Service features. See the Web Console Guide for installation details.
  • Prism Central 4.5 Release Notes

Prism Central Scalability

  • By increasing memory capacity to 16GB and expanding its virtual disk to 260GB, Prism Central can support a maximum of 100 clusters and 10000 VMs (across all the clusters and assuming each VM has an average of two virtual disks). Please contact Nutanix support if you decide to change the configuration of the Prism Central VM.
  • Prism Central 4.5 Release Notes
  • Prism Central Scalability, Compatibility and Deployment

Simplified Add Node Workflow

  • This release leverages Foundation 3.0 imaging capabilities and automates the manual steps previously required for expanding a cluster through the Prism web console.

SNMP

  • The Nutanix SNMP MIB database includes the following changes:
    • The database includes tables for monitoring hypervisor instances and virtual machines.
    • The service status table named serviceStatusTable is obsolete. Analogous information is available in a new table named controllerStatusTable. The new table has a smaller number of MIB fields for displaying the status of only essential services in the Acropolis base software.
    • The disk status table (diskStatusTable), storage pool table (storagePoolInformationTable), and cluster information table include one or more new MIB fields.
  • The SNMP feature also includes the following enhancements:
    • From the web console, you can trigger test alerts that are sent to all configured SNMP trap receivers.
    • SNMP service logs are now written to the following log file: /home/nutanix/data/logs/snmp_manager.out

Support for Minor Release Upgrades for ESXi Hosts

  • Acropolis base software 4.5 enables you to patch ESXi hosts with minor release versions of ESXi host software through the Controller VM cluster command. Nutanix qualifies specific VMware updates and provides a related JSON metadata upgrade file for one-click upgrade, but customers can now also patch hosts by using the offline bundle and md5sum checksum available from VMware, together with the Controller VM cluster command.

Note: Nutanix supports the ability to patch ESXi hosts with minor versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those minor releases. Please see the Nutanix hypervisor support statement in our Support FAQ.

VM High Availability in Acropolis

  • In case of a node failure, VM High Availability (VM-HA) ensures that VMs running on the node are automatically restarted on the remaining nodes within the cluster. VM-HA can optionally be configured to reserve spare failover capacity. This capacity reservation can be distributed across the nodes in chunks known as “segments” to provide better overall resource utilization.

Windows Guest VM Failover Clustering

  • Acropolis base software 4.5 supports configuring Windows guest VMs as a failover cluster. This clustering type enables applications on a failed VM to fail over to and run on another guest VM on the same or different host. This release supports this feature on Hyper-V hosts with in-guest VM iSCSI and SCSI 3 Persistent Reservation (PR).

Tech Preview Features

Note: Do not use tech preview features on production systems, on storage used by production systems, or with data stored on production systems.

File Level Restore

  • The file level restore feature allows a virtual machine user to restore a file within a virtual machine from the Nutanix protected snapshot with minimal Nutanix administrator intervention.

Note: This feature should be used only after upgrading all nodes in the cluster to Acropolis base software 4.5.

What’s New in Prism Central

Prism Central for Acropolis Hypervisor (AHV)

Nutanix has introduced a Prism Central VM which is compatible with AHV to enable multicluster management in this environment. Prism Central now supports all three major hypervisors: AHV, Hyper-V, and ESXi.

Prism Central Scalability

The Prism Central VM requires these resources to support the clusters and VMs indicated in the table.

 
| Prism Central vCPU | Prism Central Memory (GB, default) | Total Storage Required for Prism Central VM (GB) | Clusters Supported | VMs Supported (across all clusters) | Virtual Disks per VM |
| --- | --- | --- | --- | --- | --- |
| 4 | 8 | 256 | 50 | 5000 | 2 |

Release Notes | NCC 2.1

Learn More About NCC Health Checks

You can learn more about the Nutanix Cluster Check (NCC) health checks on the Nutanix support portal. The portal includes a series of Knowledge Base articles describing most NCC health checks run by the ncc health_checks command.

What’s New in NCC 2.1

NCC 2.1 includes support for:

  • Acropolis base software 4.5 or later
  • NOS 4.1.3 or later only
  • All Nutanix NX Series models
  • Dell XC Series of Web-scale Converged Appliances

Tech Preview Features

The following features are available as a Tech Preview in NCC 2.1.

Run NCC health checks in parallel

  • You can specify the number of NCC health checks to run in parallel to reduce the amount of time it takes for all checks to complete. For example, the command ncc health_checks run_all --parallel=25 will run 25 of the health checks in parallel.

Use npyscreen to display NCC status

  • You can specify npyscreen as part of the ncc command to display status to the terminal window. Specify --npyscreen=true as part of the ncc health_checks command.

New Checks in This Release

| Check Name | Description | KB Article |
| --- | --- | --- |
| check_disks | Check whether disks are discoverable by the host. Pass if the disks are discovered. | KB 2712 |
| check_pending_reboot | Check if the host has pending reboots. Pass if the host does not have pending reboots. | KB 2713 |
| check_storage_heavy_node | Verify that nodes such as the storage-heavy NX-6025C are running a service VM and no guest VMs. | KB 2726 |
| check_storage_heavy_node | Verify that nodes such as the storage-heavy NX-6025C are running the Acropolis hypervisor only. | KB 2727 |
| check_utc_clock | Check if the UTC clock is enabled. | KB 2711 |
| cluster_version_check | Verify that the cluster is running a released version of NOS or the Acropolis base software. This check returns an INFO status and the version if the cluster is running a pre-release version. | KB 2720 |
| compression_disabled_check | Verify whether compression is enabled. | KB 2725 |
| data_locality_check | Check if VMs that are part of a cluster with metro availability are in two different datastores (that is, fetching local data). | KB 2732 |
| dedup_and_compression_enabled_containers_check | Check whether any container has deduplication and compression enabled together. | KB 2721 |
| dimm_same_speed_check | Check that all DIMMs have the same speed. | KB 2723 |
| esxi_ivybridge_performance_degradation_check | Check for the Ivy Bridge performance degradation scenario on ESXi clusters. | KB 2729 |
| gpu_driver_installed_check | Check the version of the installed GPU driver. | KB 2714 |
| quad_nic_driver_version_check | Check the version of the installed quad port NIC driver. | KB 2715 |
| vmknics_subnet_check | Check if any vmknics have the same subnet (different subnets are not supported). | KB 2722 |

Foundation Release 3.0

This release includes the following enhancements and changes:

  • A major new implementation that allows for node imaging and cluster creation through the Controller VM for factory-prepared nodes on the same subnet. This process significantly reduces network complications and simplifies the workflow. (The existing workflow remains for imaging bare metal nodes.) The new implementation includes the following enhancements:
    • A Java applet that automatically discovers factory-prepared nodes on the subnet and allows you to select the first one to image.
    • A simplified GUI to select and configure the nodes, define the cluster, select the hypervisor and Acropolis base software versions to use, and monitor the imaging and cluster creation process.

Customers may create a cluster using the new Controller VM-based implementation in Foundation 3.0. Imaging bare metal nodes is still restricted to Nutanix sales engineers, support engineers, and partners.

  • The new implementation is incorporated in the Acropolis base software version 4.5 to allow for node imaging when adding nodes to an existing cluster through the Prism GUI.
  • The cluster creation workflow does not use IPMI, and for both cluster creation and bare-metal imaging, the host operating system install is done within an “installer VM” in Phoenix.
  • To see the progress of a host operating system installation, point a VNC console at the node’s Controller VM IP address on port 5901.
  • Foundation no longer offers the option to run diagnostics.py as a post-imaging test.  Should you wish to run this test, you can download it from the Tools & Firmware page on the Nutanix support portal.
  • There is no Foundation upgrade path to the new Controller VM implementation; you must download the Java applet from the Foundation 3.0 download page on the support portal. However, you can upgrade Foundation 2.1.x to 3.0 for the bare metal workflow as follows:
      • Copy the Foundation tarball (foundation-version#.tar.gz) from the support portal to /home/nutanix in your VM.
      • Navigate to /home/nutanix.
      • Enter the following five commands:
        • $ sudo service foundation_service stop
        • $ rm -rf foundation
        • $ tar xzf foundation-version#.tar.gz
        • $ sudo yum install python-scp
        • $ sudo service foundation_service restart
      • If the first command (foundation_service stop) is skipped or the commands are not run in order, the user may get bizarre errors after upgrading. To fix this situation, enter the following two commands:
        • $ sudo pkill -9 foundation
        • $ sudo service foundation_service restart

Release notes for each of these products are located at:

  • Acropolis base software 4.5:  https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Acr_v4_5.html
  • Prism Central 4.5: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Prism_Central_v4_5.html
  • Nutanix Cluster Check (NCC) 2.1: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-NCC:rel_Release_Notes-NCC_v2_1.html
  • Foundation 3.0: https://portal.nutanix.com/#/page/docs/details?targetId=Field_Installation_Guide-v3_0:fie_release_notes_foundation_v3_0_r.html

Download URLs:

Until next time, Rob…

Nutanix SCVMM Fast Clones Plug-in

Hi Everyone…I love to show off the cool Microsoft integrations that Nutanix has, and most recently Nutanix released a Fast Clones plug-in for System Center Virtual Machine Manager (SCVMM) 2012 R2.

With NOS 4.1.3, Nutanix has released a Fast Clones plug-in for SCVMM.  The plug-in provides space-efficient, low-impact clones quickly from within SCVMM. The plug-in is a wrapper around the Nutanix PowerShell cmdlets for Fast Clones. The plug-in does need proper access rights to the Hyper-V hosts and SCVMM, which should already be set up for most environments that have Nutanix with Hyper-V deployed.  You will need to install the plug-in on the SCVMM host along with the Nutanix PowerShell cmdlets.

Once you have the SCVMM Fast Clones plug-in installed, you can start creating Fast Clones right away. Installation is quick and easy and creating clones is just as easy as shown below.

To create VM clones using the Nutanix Fast Clones wizard, follow the steps below:

  1. Start the SCVMM console.
  2. Navigate to the Nutanix hosts.
  3. Select a host and then select the VM to be cloned.
  4. To invoke the wizard, do one of the following: Click the “Nutanix Fast Clone” button on the top menu-bar. Right-click the target VM and select “Nutanix Fast Clone” from the pop-up context menu:
  5. In the Introduction screen, read the instructions and then click the “Next” button.  NOTE: When the wizard starts, it connects to VMM so that it can run SCVMM PowerShell cmdlets to gather information about the selected VM.
  6. The “Identity” screen is displayed. The “Source VM Name” and “Source VM Host Name” fields are prepopulated; enter the following information and then click the “Next” button:
    1. Clone Type: Click the “Clone One Virtual Machine” radio button and enter a name for the clone when creating a single clone or click the “Clone Multiple Virtual Machines” radio button and enter the following information:
      1. VM Prefix Name: This is the root part of the new VM name.
      2. Beginning Suffix: a number to start the numbering of the new VMs.
      3. Number of Clones: a number between 1 and 100.
  7. In the Authentication screen, enter the Prism and VMM Service Account user names and passwords in the appropriate fields, and then click the “Next” button.
  8. In the “Select Path” screen, select the destination path and then click the “Next” button. Leave the default path “as is” or change it to a new path as needed by clicking the “Change the default path” box. Click the Browse button to select a destination path for the clone VMs. This is the path where virtual machine configuration files will be stored. The path must be on the same Nutanix SMB share as the VM configuration file.
  9. In the “Add Properties” screen, click the appropriate radio button to either power on or not power on the VMs after cloning and then click the “Next” button.
  10. In the Summary screen, review and confirm that the settings are correct.
    Clicking the “View Script” button displays the script to be executed.
    Clicking the “Enable Verbose Messages” checkbox displays detailed log messages as the VMs are being created.
  11. When the settings are correct, click the “Create” button to create the cloned VM(s). An hourglass and progress messages are displayed.
  12. After the clones are created, click the Finish button to close the wizard. You just created VMs at lightning speed.

If you want to check out Fast Clones for your environment, you can download Fast Clones from the Nutanix Portal at https://portal.nutanix.com.

Below is a demo video that my buddy @mcghem created showing traditional cloning vs. Fast Clones.  It shows the awesome benefit of Fast Clones.

https://youtu.be/cWGEvjHZxdw

As always, if you have any questions please post a comment.

Until next time….Rob

Nutanix Next Community podcast – ScienceLogic

Again…at my job at Nutanix, I get to work with great partners, and one of the first was ScienceLogic  🙂 . The Solutions team was great to work with, and I really got a chance to see deep integration with Nutanix’s REST API and watch their solution grow.
Below is a podcast of an interview that my awesome buddy TommyG (@NutanixTommy) and I did with Jim Weingarten (@jweingarten) from ScienceLogic. Jim has a great perspective on the IT landscape and the current shift in Hybrid IT. It was great to work with Jim over the past few months and watch their solution evolve.

A little about ScienceLogic…

ScienceLogic is a software and service vendor. It produces information technology (IT) management and monitoring solutions for IT Operations and Cloud computing.
The company’s product is a monitoring and management system that performs discovery, dependency mapping, monitoring, alerting, ticketing, runbook automation, dashboarding and reporting for networks, compute, storage and applications.
The platform monitors both on-premises and cloud-based IT assets, enabling customers who use public cloud services, such as Azure and Amazon Web Services (AWS), to migrate workloads to the cloud.

Enjoy the show…..Rob

Nutanix SCOM Management Pack – Monitor Your Nutanix Infrastructure

As a Microsoft Evangelist at Nutanix, I am always asked, “How would you monitor your Nutanix infrastructure, and can I use the System Center suite?” And my answer always is, “YES, with SCOM”…What is SCOM, you ask?
System Center Operations Manager (SCOM) is designed to be a monitoring tool for the datacenter. Think of a datacenter with multiple vendors representing multiple software and hardware products. Consequently, SCOM was developed to be extensible using the concept of management packs. Vendors typically develop one or more management packs for every product they want plugged into SCOM.

To facilitate these management packs, SCOM supports standard discovery and data collection mechanisms like SNMP, but also affords vendors the flexibility of native API driven data collection.  Nutanix provides management packs that support using the Microsoft System Center Operations Manager (SCOM) to monitor a Nutanix cluster.

Nutanix SCOM Management Pack

The management packs collect information about software (cluster) elements through SNMP and hardware elements through ipmiutil (Intelligent Platform Management Interface utility) and REST API calls, and then package that information for SCOM to digest. Note: The Hardware Elements Management Pack leverages the ipmiutil program to gather information from the Nutanix block for fans, power supplies, and temperature.
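
To give a feel for the REST side of that collection, here is a minimal sketch of the kind of call involved, pulling host state from the Prism v1 REST API (the same /hosts “state” attribute referenced in the tables further down). The cluster address and credentials are placeholders, and the endpoint path and field names are based on the Prism v1 API as I understand it; verify them against your cluster’s REST API Explorer.

```python
# Minimal sketch: read host state from the Prism v1 REST API, the same kind of
# data the management packs gather alongside SNMP and ipmiutil. Endpoint path
# and field names follow the Prism v1 API as I understand it; verify against
# your cluster's API Explorer. Cluster IP and credentials below are placeholders.
import requests
from requests.auth import HTTPBasicAuth

CLUSTER_IP = "10.0.0.50"                    # Prism / cluster virtual IP (placeholder)
AUTH = HTTPBasicAuth("admin", "password")   # replace with real credentials

resp = requests.get(
    f"https://{CLUSTER_IP}:9440/PrismGateway/services/rest/v1/hosts",
    auth=AUTH,
    verify=False,   # many clusters run with self-signed certificates
    timeout=30,
)
resp.raise_for_status()

for host in resp.json().get("entities", []):
    # "state" is the attribute the Node and Disk health rows below refer to
    print(host.get("name"), host.get("state"), host.get("hypervisorAddress"))
```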

Nutanix provides two management packs:

  • Cluster Management Pack – This management pack collects information about software elements of a cluster including Controller VMs, storage pools, and containers.
  • Hardware Management Pack – This management pack collects information about hardware elements of a cluster including fans, power supplies, disks, and nodes.

Installing and configuring the management packs involves the following simple steps:

  1. Install and configure SCOM on the Windows server system, if not already installed (I will blog a post on this topic soon)
  2. Uninstall existing Nutanix management packs (if present)
  3. Open the IPMI-related ports (if not open). IPMI access is required for the hardware management pack
  4. Install the Nutanix management packs
  5. Configure the management packs using the SCOM discovery and template wizards

After the management packs have been installed and configured, you can use SCOM to monitor a variety of Nutanix objects, including cluster, alert, and performance views, as shown in the examples below. Also, check out this great video produced by my pal @mcghem. He shows a great demo of the SCOM management pack…Kudos, Mike…also, check out his blog.


Views and Objects Snapshots

Cluster Monitoring Snapshots

Cluster Performance Monitoring


Hardware Monitoring Snapshots

In the following diagram views, users can navigate to the components with failures.


Nutanix Objects Available for Monitoring via SCOM

The following provides a high-level overview of a Nutanix cluster with its components:


The following sections describe the Nutanix cluster objects monitored by this version of the management packs:

Cluster

| Monitored Element | Description |
| --- | --- |
| Version | Current cluster version. This is the nutanix-core package version expected on all the Controller VMs. |
| Status | Current status of the cluster. This will usually be one of started or stopped. |
| TotalStorageCapacity | Total storage capacity of the cluster |
| UsedStorageCapacity | Number of bytes of storage used on the cluster |
| Iops | For performance: cluster-wide average IO operations per second |
| Latency | For performance: cluster-wide average latency |

CVM Resource Monitoring

| Monitored Element | Description |
| --- | --- |
| ControllerVMId | Nutanix Controller VM Id |
| Memory | Total memory assigned to the CVM |
| NumCpus | Total number of CPUs allocated to a CVM |

Storage

Storage Pool

A storage pool is a group of physical disks from SSD and/or HDD tier.

| Monitored Element | Description |
| --- | --- |
| PoolId | Storage pool id |
| PoolName | Name of the storage pool |
| TotalCapacity | Total capacity of the storage pool. Note: An alert on a drop in capacity may indicate a bad disk. |
| UsedCapacity | Number of bytes used in the storage pool |

Performance parameters:

| Monitored Element | Description |
| --- | --- |
| IOPerSecond | Number of IO operations served per second from this storage pool |
| AvgLatencyUsecs | Average IO latency for this storage pool in microseconds |

Containers

A container is a subset of available storage within a storage pool. Containers hold the virtual disks (vDisks) used by virtual machines. Selecting a storage pool for a new container defines the physical disks where the vDisks will be stored.

| Monitored Element | Description |
| --- | --- |
| ContainerId | Container id |
| ContainerName | Name of the container |
| TotalCapacity | Total capacity of the container |
| UsedCapacity | Number of bytes used in the container |

Performance parameters:

| Monitored Element | Description |
| --- | --- |
| IOPerSecond | Number of IO operations served per second from this container |
| AvgLatencyUsecs | Average IO latency for this container in microseconds |

Hardware Objects

Cluster

| Monitored Element | Description |
| --- | --- |
| Discovery IP Address | IP address used for discovery of the cluster |
| Cluster Incarnation ID | Unique ID of the cluster |
| CPU Usage | CPU usage for all the nodes of the cluster |
| Memory Usage | Memory usage for all the nodes of the cluster |
| Node IP address | External IP address of the node |
| System Temperature | System temperature |

Disk

| Monitored Element | Description |
| --- | --- |
| Disk State/health | Node state as returned by Prism [REST /hosts "state" attribute] |
| Disk ID | ID assigned to the disk |
| Disk Name | Name of the disk (full path where metadata is stored) |
| Disk Serial Number | Serial number of the disk |
| Hypervisor IP | Host OS IP where the disk is installed |
| Tier Name | Disk tier |
| CVM IP | Controller VM IP which controls the disk |
| Total Capacity | Total disk capacity |
| Used Capacity | Total disk used |
| Online | Whether the disk is online or offline |
| Location | Disk location |
| Cluster Name | Disk cluster name |
| Discovery IP address | IP address through which the disk was discovered |
| Disk Status | Status of the disk |

Node

| Monitored Element | Description |
| --- | --- |
| Node State/health | Node state as returned by Prism [REST /hosts "state" attribute] |
| Node IP address | External IP address of the node |
| IPMI Address | IPMI IP address of the node |
| Block Model | Hardware model of the block |
| Block Serial Number | Serial number of the block |
| CPU Usage % | CPU usage for the node |
| Memory Usage % | Memory usage for the node |
| Fan Count | Total fans |
| Power Supply Count | Total power supplies |
| System Temperature | System temperature |

Fan

| Monitored Element | Description |
| --- | --- |
| Fan number | Fan number |
| Fan speed | Fan speed in RPM |

Power supply

| Element | Description |
| --- | --- |
| Power supply number | Power supply number |
| Power supply status | Power supply status, whether present or absent |

If you would like to check out the Nutanix management pack on your SCOM instance, please go to our portal to download the management pack and documentation.
This management pack was developed by our awesome engineering team @ Nutanix. Kudos to Yogi and team for a job well done!!! 😉  I hope I gave you a good feel for Nutanix monitoring using SCOM. As always, if you have any questions or comments, please leave them below…

Until next time….Rob

Understanding Windows Azure Pack – How to guide with Express Edition on Nutanix – Windows Azure Pack Install – Part 5

To continue Windows Azure Pack series here is my next topic:  Installing and Configuring Windows Azure Pack

If you missed other parts of the series, check links below:
Part 1 – Understanding Windows Azure Pack
Part 2 – Understanding Windows Azure Pack – Deployment Scenarios
Part 3 – Understanding Windows Azure Pack – How to guide with Express Edition on Nutanix – Environment Prep
Part 4 – Deploying Service Provider Framework on Nutanix

Again, to reiterate from my previous blog posts and set some context, Windows Azure Pack (WAP) includes the following capabilities:

NPP Training series – Drive Breakdown

To continue NPP training series here is my next topic:  Drive Breakdown
If you missed other parts of my series, check out links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V
I/O Path Overview

To give credit, most of the content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate, and I then put a Hyper-V lean on it.

Drive Breakdown

In this section I’ll cover how the various storage devices (SSD / HDD) are broken down, partitioned and utilized by the Nutanix platform. NOTE: All of the capacities used are in Base2 Gibibyte (GiB) instead of the Base10 Gigabyte (GB).  Formatting of the drives with a filesystem and associated overheads has also been taken into account.

SSD Devices

SSD devices store a few key items which are explained in greater detail above:

  • Nutanix Home (CVM core)
  • Cassandra (metadata storage) – MORE
  • OpLog (persistent write buffer) – MORE
  • Extent Store (persistent storage) – MORE

Below we show an example of the storage breakdown for a Nutanix node’s SSD(s):
NOTE: The sizing for the OpLog is done dynamically as of release 4.0.1, which allows the Extent Store portion to grow dynamically.  The values used assume a completely utilized OpLog.  Graphics and proportions aren’t drawn to scale.  When evaluating the Remaining GiB capacities, do so from the top down.  For example, the Remaining GiB to be used for the OpLog calculation would be after Nutanix Home and Cassandra have been subtracted from the formatted SSD capacity. Most models ship with 1 or 2 SSDs; however, the same construct applies for models shipping with more SSD devices. For example, if we apply this to an example 3060 or 6060 node which has 2 x 400GB SSDs, this would give us 100GiB of OpLog, 40GiB of Content Cache and ~440GiB of Extent Store SSD capacity per node.  Storage for Cassandra is a minimum reservation and may be larger depending on the quantity of data.
For a 3061 node which has 2 x 800GB SSDs this would give us 100GiB of OpLog, 40GiB of Content Cache and ~1.1TiB of Extent Store SSD capacity per node.

HDD Devices

Since HDD devices are primarily used for bulk storage, their breakdown is much simpler:

  • Curator Reservation (Curator storage) – MORE
  • Extent Store (persistent storage)

For example, if we apply this to an example 3060 node which has 4 x 1TB HDDs this would give us 80GiB reserved for Curator and ~3.4TiB of Extent Store HDD capacity per node.
NOTE: the above values are accurate as of 4.0.1 and may vary by release.
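
To make the arithmetic above concrete, here is a small sketch of the top-down subtraction for both examples. The OpLog (100GiB), Content Cache (40GiB), and Curator (80GiB) figures come from the text; the filesystem-formatting overhead and the combined Nutanix Home/Cassandra reservation are rough assumptions I have plugged in so the totals land near the quoted ~440GiB and ~3.4TiB, not official numbers.

```python
# Back-of-the-envelope version of the SSD/HDD breakdown above. Values marked
# "assumed" are illustrative guesses, not official Nutanix figures; the OpLog,
# Content Cache, and Curator numbers come from the post.
GIB = 2**30

def gb_to_gib(gb: float) -> float:
    """Convert a Base10 GB drive label to Base2 GiB."""
    return gb * 10**9 / GIB

def ssd_extent_store(ssd_count=2, ssd_size_gb=400,
                     fs_overhead=0.05,             # assumed formatting overhead
                     home_plus_cassandra_gib=120,  # assumed combined reservation
                     oplog_gib=100, content_cache_gib=40):
    formatted = gb_to_gib(ssd_count * ssd_size_gb) * (1 - fs_overhead)
    return formatted - home_plus_cassandra_gib - oplog_gib - content_cache_gib

def hdd_extent_store(hdd_count=4, hdd_size_gb=1000,
                     fs_overhead=0.05,   # assumed formatting overhead
                     curator_gib=80):    # 80GiB Curator reservation from the post
    formatted = gb_to_gib(hdd_count * hdd_size_gb) * (1 - fs_overhead)
    return formatted - curator_gib

print(f"2 x 400GB SSD -> ~{ssd_extent_store():.0f} GiB Extent Store (post quotes ~440GiB)")
print(f"4 x 1TB HDD   -> ~{hdd_extent_store() / 1024:.1f} TiB Extent Store (post quotes ~3.4TiB)")
```
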
Next up, I figured we would look at some of the cool software technologies that run on our CVM (Controller Virtual Machine), starting with the Elastic Dedupe Engine.

Until next time, Rob

Nutanix Community Edition – Public Beta – Now Available – Build Your Own Nutanix Test Lab for Free


Nutanix Community Edition

Another very exciting announcement was Nutanix Community Edition (CE) on June 9th, 2015 at our inaugural .NEXT conference. So, what is it?…Our website describes it best: “Community Edition is a 100% software solution enabling technology enthusiasts to easily evaluate the latest hyperconvergence technology at zero cost.”  In other words, you can use your own hardware to test out Nutanix.  Very cool.  This is great for building a lab and gaining a hands-on understanding of hyperconvergence.
Nutanix is offering a hardware compatibility list (HCL) to users that includes the minimum requirements to run the software; essentially, any standard x86 server can be used….
And to quote our CEO and co-founder Dheeraj Pandey,
“From our very first software release in 2012, Nutanix has been dedicated to open architectures and technologies, offering unprecedented customer choice and flexibility,” “Community Edition is the next step in democratizing hyperconverged infrastructure technology, enabling anyone to experience the transformative benefits of our software. Only by eliminating the requirement for proprietary hardware and embracing off-the-shelf platforms can the next revolution of datacenter technologies be fully realized.”
As the name implies, the support for the CE will come from the community through Nutanix’s NEXT online portal. Users will be able to log in, ask questions and get answers from the community.
CE also allows you to check out our new hypervisor, based on KVM and Acropolis. Check out Josh Odgers’ blog to learn more about Acropolis.
Join the beta…And don’t forget my NPP training series that helps you with all the concepts around hyperconvergence.
Currently, I am getting started with the Nutanix CE installation and will be posting my experiences in a later blog post on how I build my Nutanix lab @ home. 🙂

Until next time….Rob