Happy New Year 2024!
To continue the NPP training series, here is my next topic: Drive Breakdown
If you missed the other parts of my series, check out the links below:
- Part 1 – NPP Training series – Nutanix Terminology
- Part 2 – NPP Training series – Nutanix Terminology
- Cluster Architecture with Hyper-V
- Data Structure on Nutanix with Hyper-V
- I/O Path Overview
To give credit, most of this content is based on Steve Poitras's "Nutanix Bible" blog, as his content is the most accurate, and I've put a Hyper-V lean on it.
Drive Breakdown
In this section I'll cover how the various storage devices (SSD / HDD) are broken down, partitioned, and utilized by the Nutanix platform. NOTE: All of the capacities used are in Base2 gibibytes (GiB) instead of Base10 gigabytes (GB). Formatting of the drives with a filesystem and its associated overheads has also been taken into account.
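(As a quick example of the difference, a nominal 400GB drive works out to 400 × 10⁹ bytes ≈ 372.5 GiB before formatting overhead is subtracted.)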
SSD Devices
SSD devices store a few key items, each covered in greater detail in its own section:
- Nutanix Home (CVM core)
- Cassandra (metadata storage) – MORE
- OpLog (persistent write buffer) – MORE
- Extent Store (persistent storage) – MORE
Below we show an example of the storage breakdown for a Nutanix node’s SSD(s):
NOTE: The sizing for OpLog is done dynamically as of release 4.0.1, which allows the extent store portion to grow dynamically. The values used assume a completely utilized OpLog. Graphics and proportions aren't drawn to scale. When evaluating the Remaining GiB capacities, do so from the top down. For example, the Remaining GiB available for the OpLog calculation is what remains after Nutanix Home and Cassandra have been subtracted from the formatted SSD capacity. Storage for Cassandra is a minimum reservation and may be larger depending on the quantity of data.

Most models ship with 1 or 2 SSDs; however, the same construct applies to models shipping with more SSD devices. For example, if we apply this to a 3060 or 6060 node, which has 2 x 400GB SSDs, this would give us 100GiB of OpLog, 40GiB of Content Cache and ~440GiB of Extent Store SSD capacity per node.
For a 3061 node, which has 2 x 800GB SSDs, this would give us 100GiB of OpLog, 40GiB of Content Cache and ~1.1TiB of Extent Store SSD capacity per node.
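To make the top-down math concrete, here's a minimal Python sketch of the calculation described in the note above. The reservation sizes and formatting overhead are illustrative assumptions on my part (not official values), picked so the 2 x 400GB example lands near the ~440GiB figure quoted above; actual numbers vary by model and NOS release.

```python
# Minimal sketch of the top-down "Remaining GiB" calculation described above.
# All reservation sizes and the formatting overhead are illustrative assumptions,
# not official values; actual numbers vary by model and NOS release.

GIB_PER_GB = 1000**3 / 1024**3   # convert Base10 GB to Base2 GiB

def ssd_breakdown(num_ssds, ssd_size_gb,
                  format_overhead=0.10,           # assumed filesystem/format overhead
                  nutanix_home_gib=60,            # assumed Nutanix Home (CVM core) reservation
                  cassandra_gib=30,               # assumed minimum Cassandra reservation
                  oplog_gib=100,                  # a fully utilized OpLog
                  content_cache_gib_per_ssd=20):  # assumed per-SSD content cache
    remaining = num_ssds * ssd_size_gb * GIB_PER_GB * (1 - format_overhead)
    remaining -= nutanix_home_gib                       # subtract Nutanix Home first...
    remaining -= cassandra_gib                          # ...then Cassandra metadata...
    remaining -= oplog_gib                              # ...then the persistent write buffer...
    remaining -= content_cache_gib_per_ssd * num_ssds   # ...then the content cache
    return remaining                                    # what's left is Extent Store SSD capacity

print(round(ssd_breakdown(2, 400)))  # 3060/6060 node, 2 x 400GB SSDs -> roughly the ~440GiB above
print(round(ssd_breakdown(2, 800)))  # 3061 node, 2 x 800GB SSDs -> roughly the ~1.1TiB above
```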
HDD Devices
Since HDD devices are primarily used for bulk storage, their breakdown is much simpler:
- Curator Reservation (Curator storage) – MORE
- Extent Store (persistent storage)
For example, if we apply this to a 3060 node, which has 4 x 1TB HDDs, this would give us 80GiB reserved for Curator and ~3.4TiB of Extent Store HDD capacity per node.
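The same style of sketch works for the HDD tier. Again, the per-disk Curator reservation and formatting overhead below are my own illustrative assumptions, chosen to land near the figures above rather than taken from any official sizing guide.

```python
# Minimal sketch of the HDD breakdown described above. The Curator reservation and
# formatting overhead are illustrative assumptions, not official values.

GIB_PER_GB = 1000**3 / 1024**3   # convert Base10 GB to Base2 GiB

def hdd_breakdown(num_hdds, hdd_size_gb,
                  format_overhead=0.05,      # assumed filesystem/format overhead
                  curator_gib_per_hdd=20):   # assumed per-HDD Curator reservation
    formatted_gib = num_hdds * hdd_size_gb * GIB_PER_GB * (1 - format_overhead)
    curator_gib = curator_gib_per_hdd * num_hdds     # total Curator reservation
    extent_store_gib = formatted_gib - curator_gib   # remainder is Extent Store
    return curator_gib, extent_store_gib

# 3060 node with 4 x 1TB HDDs -> 80GiB for Curator and roughly the ~3.4TiB figure above
print(hdd_breakdown(4, 1000))
```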
NOTE: The values above are accurate as of release 4.0.1 and may vary by release.
Next up, I figured we would look at some of the cool software technologies that run on our CVM (Controller Virtual Machine), starting with the Elastic Dedupe Engine.
Until next time, Rob