

As mentioned in my previous post, S2D can be deployed either in a more traditional disaggregated (converged) compute model or in a hyper-converged model, as shown below:
Here are the basic components of the stack…
Failover Clustering – The built-in clustering feature of Windows Server is used to connect the servers.
Software Storage Bus – The Software Storage Bus is new in S2D. The bus spans the cluster and establishes a software-defined storage fabric where all the servers can see all of each other’s local drives.
Storage Bus Layer Cache – The Software Storage Bus dynamically binds the fastest drives present (typically SSDs) to slower HDDs to provide server-side read/write caching. The cache is independent of pools and vDisks, always-on, and requires no configuration.
Storage Pool – When an IT Admin enables Storage Spaces Direct, all of the eligible drives (excluding boot drives, etc.) discovered by the Software Storage Bus are grouped together to form a pool. The pool is created automatically on setup, and by default there is only one pool per cluster. IT Admins can configure additional pools, but Microsoft recommends against it. (There's a quick PowerShell sketch of this right after the list.)
Storage Spaces – From the pool, ‘storage spaces’, essentially virtual disks, are carved out. A vDisk can be defined as a simple space (no protection), a mirrored space (distributed two-way or three-way mirroring), or a parity space (distributed erasure coding). You can think of it as distributed, software-defined RAID using the drives in the pool. IT Admins can choose the new ReFS file system (more on this later) or traditional NTFS. (Volume creation is sketched after the list as well.)
Resilient File System (ReFS) – ReFS is the purpose-built file system for virtualization. This includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging. It also has built-in checksums to detect and correct bit errors. ReFS also introduces real-time tiers, which rotate data between so-called “hot” and “cold” storage tiers in real time based on usage.
Cluster Shared Volumes – Each vDisk becomes a Cluster Shared Volume within a single namespace (C:\ClusterStorage), so every volume appears to each host server as if it were mounted locally.
Scale-Out File Server – The Scale-Out File Server role only exists in converged (disaggregated) deployments and provides remote file access via SMB3. (Adding the role is sketched after the list too.)
Networking Hardware – Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. Microsoft strongly recommends 10+ GbE with remote direct memory access (RDMA); IT Admins can use either iWARP or RoCE (RDMA over Converged Ethernet). (A quick RDMA sanity check wraps up the sketches below.)
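To make the pieces above a bit more concrete, here's a minimal PowerShell sketch of standing up the cluster and enabling S2D. The node and cluster names (S2D-Node1 through S2D-Node4, S2D-Cluster) are placeholders I made up for illustration, so adjust for your environment.

# Validate the nodes, build the cluster with no shared storage, then enable S2D.
# Node and cluster names are hypothetical examples.
Test-Cluster -Node "S2D-Node1","S2D-Node2","S2D-Node3","S2D-Node4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2D-Cluster" -Node "S2D-Node1","S2D-Node2","S2D-Node3","S2D-Node4" -NoStorage

# Enabling S2D claims the eligible local drives, sets up the cache bindings,
# and creates the single storage pool automatically.
Enable-ClusterStorageSpacesDirect -CimSession "S2D-Cluster"

# The auto-created (non-primordial) pool should now be visible.
Get-StoragePool -IsPrimordial $false -CimSession "S2D-Cluster"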
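With the pool in place, carving out a space is essentially a one-liner with New-Volume. This is a sketch, not a full recipe; the "S2D*" pool wildcard, the volume name, and the 1 TB size are assumptions on my part.

# Create a mirrored, ReFS-formatted volume; CSVFS_ReFS makes it a Cluster Shared Volume.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -ResiliencySettingName Mirror -FileSystem CSVFS_ReFS -Size 1TB

# Confirm it shows up under C:\ClusterStorage on every node.
Get-ClusterSharedVolume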
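In a converged deployment you would then layer the Scale-Out File Server role on top and publish the volumes over SMB3. Another minimal sketch; the role name, folder, share name, and account below are hypothetical.

# Add the SOFS role and share a folder on the CSV for Hyper-V hosts to use.
Add-ClusterScaleOutFileServerRole -Name "S2D-SOFS" -Cluster "S2D-Cluster"
New-Item -Path "C:\ClusterStorage\Volume01\VMS" -ItemType Directory
New-SmbShare -Name "VMS01" -Path "C:\ClusterStorage\Volume01\VMS" -FullAccess "CONTOSO\Hyper-V-Admins"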
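Finally, a quick RDMA sanity check on the networking side. The adapter names in the last line are examples only.

# See which NICs report RDMA capability and whether SMB can actually use them.
Get-NetAdapterRdma
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# If RDMA is off on a NIC, it can be enabled per adapter (names are examples).
Enable-NetAdapterRdma -Name "SLOT 2 Port 1","SLOT 2 Port 2"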
In Windows Server 2016, Microsoft has also incorporated Storage Replica, Storage QoS, and a new Health Service. I'll cover each of these areas in a little more detail, as they relate to S2D, in a later post.
Storage Hardware
Microsoft supports hybrid or all-flash configurations. Each server must have at least 2 SSDs and 4 additional drives. NVMe is supported in the product today, and IT Admins can use a mixture of NVMe, SSD, and HDD devices in a variety of tiering models. SATA and SAS devices should sit behind a host-bus adapter (HBA) and SAS expander.
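Before enabling anything, it's worth a quick inventory of what S2D will see on each node. A minimal sketch; the output will obviously vary with your hardware.

# List the local drives and which ones are eligible for pooling,
# then group the eligible ones by media type and bus type.
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, Size, CanPool
Get-PhysicalDisk -CanPool $true | Group-Object MediaType, BusType | Select-Object Count, Name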
Now that we have covered the basics, next I will dive into how each of these components works. Next up: ReFS, Multi-Tier Volumes, Erasure Coding, and tigers, oh my… 🙂
Until next time, Rob…