Hyper-V Management: 9 Tips Every Hyper-V Admin Should Remember

Opinions are abundant and varied on how best to configure and manage Hyper-V. Much of the advice is confusing, and some of it downright contradictory. One reason for the confusion is that some articles are clearly written under the assumption that VMware best practices apply equally well to Hyper-V. The other, more common cause of this contradictory information is that best practices for Hyper-V vary considerably depending on whether you're managing a Hyper-V cluster. Continue reading

17 Tips for Hyper-V Disaster Recovery That Could Save Your Bacon

Disaster Recovery

The cyber world can be a perilous one. You may never know exactly when a disaster will occur, but you should always be ready for any scenario that may unfold across your datacenter. I have compiled a list of 17 tips to help you keep your datacenter ready for whatever type of disaster may occur. Continue reading

Windows User Profiles…The Mysteries Untold – Part 1

Happy New Year everyone… This is my first blog post of 2017. Woo hoo!! As always, I love to blog about questions from the field. This one came from a customer testing their new Virtual Desktop Infrastructure (VDI) on Nutanix who had 1 out of 50 user profiles turn up corrupt. He asked why this happened and how he could avoid it in the future. Now, I would say that 1 corrupt profile out of 50 is fine during a test, but let's understand why it happens. This topic is especially important to understand because it directly relates to VDI and your end-user experience in VDI.

Windows User Profiles

What is a Windows User Profile? It's not just your desktop 🙂

Continue reading

Storage Spaces Direct Explained – Applications & Performance

Applications

The Microsoft SQL Server product group announced that SQL Server, whether virtual or bare metal, is fully supported on Storage Spaces Direct. The Exchange team did not offer a clear endorsement of Exchange on S2D and still clearly prefers that Exchange be deployed on physical servers with local JBODs using Exchange Database Availability Groups, or that customers simply move to O365.

Continue reading

Storage Spaces Direct Explained – Management & Operations


Good day everyone. It's been a few weeks; I've been busy with work and such. Anyway, this post will go into how management and operations are done in S2D. Now, my biggest pet peeve is complex GUI management, and yet again Microsoft doesn't disappoint: it still takes a number of steps across different interfaces to bring up S2D. Check out Aidan Finn's blog post on disaggregated management from last year; it still rings true to this day with the release of Windows Server 2016. It shouldn't be this complex, IMO 🙁 To give you a feel for the flow, a rough PowerShell sketch follows. That being said, let's move on to the details.
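As a hedged illustration of the steps involved (node, cluster, and volume names below are made-up example values, and this is a sketch rather than a full deployment guide), the PowerShell path looks roughly like this:

```powershell
# Validate the nodes and create the cluster (node names are example values).
Test-Cluster -Node "Node1","Node2","Node3","Node4" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2DCluster" -Node "Node1","Node2","Node3","Node4" -NoStorage

# Enable Storage Spaces Direct; this claims the eligible local disks on each node.
Enable-ClusterStorageSpacesDirect

# Carve out a Cluster Shared Volume on ReFS.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 1TB
```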
Management & Operations

Continue reading

Storage Spaces Direct Explained – Storage QOS & Networking

Yo everyone… This is going to be a short blog post in this series. I am just covering networking and Storage QoS as they pertain to S2D. These are the technologies that bind S2D together.
Storage QoS

S2D uses the Storage Quality of Service (QoS) feature that ships with Windows Server 2016, which provides standard min/max IOPS and bandwidth control. A QoS policy can be applied at the VHD, VM, group-of-VMs, or tenant level (a short PowerShell sketch follows the list). Benefits include:

  • Mitigate noisy neighbor issues. By default, Storage QoS ensures that a single virtual machine cannot consume all storage resources and starve other virtual machines of storage bandwidth.
  • Monitor end-to-end storage performance. As soon as virtual machines stored on a Scale-Out File Server are started, their performance is monitored. Performance details of all running virtual machines and the configuration of the Scale-Out File Server cluster can be viewed from a single location.
  • Manage storage I/O per workload business needs. Storage QoS policies define performance minimums and maximums for virtual machines and ensure that they are met. This provides consistent performance to virtual machines, even in dense and overprovisioned environments. If policies cannot be met, alerts are available to track when VMs are out of policy or have invalid policies assigned.
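Here's a minimal sketch of what that looks like in PowerShell, assuming a Windows Server 2016 S2D or Scale-Out File Server cluster (the policy name, IOPS limits, and VM name are made-up example values):

```powershell
# Create a Dedicated policy: each assigned VHD gets its own 100 IOPS floor
# and 500 IOPS cap (example values).
$policy = New-StorageQosPolicy -Name "Silver" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# Apply the policy to every virtual hard disk on a VM.
Get-VM -Name "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# View end-to-end performance for all flows from one place.
Get-StorageQosFlow |
    Format-Table InitiatorName, Status, InitiatorIOPS, InitiatorLatency -AutoSize
```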

What’s New in Networking with S2D?
In Windows Server 2016, they added Remote Direct Memory Access (RDMA) support to the Hyper-V virtual switch.
For those who don’t know what RDMA is: it's a technology that allows direct memory access from one computer to another, bypassing the TCP layer, the CPU, the OS layer, and the driver layer, allowing for low-latency, high-throughput connections. This is done with hardware transport offloads on network adapters that support RDMA.
Back to Hyper-V virtual switch support for RDMA. This allows you to configure regular or RDMA-enabled vNICs on top of a pair of RDMA-capable physical NICs. They also added embedded NIC teaming, or Switch Embedded Teaming (SET).
SET makes NIC teaming and the Hyper-V switch a single entity that can now be used in conjunction with RDMA NICs, whereas in Windows Server 2012 R2 you needed separate NIC teams for RDMA and the Hyper-V switch.
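As a rough sketch of the configuration, assuming two RDMA-capable physical NICs (the adapter, switch, and vNIC names below are example values):

```powershell
# Create a SET team and a Hyper-V switch in a single step from two pNICs.
New-VMSwitch -Name "SETswitch" -NetAdapterName "pNIC1","pNIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host vNICs for storage (SMB) traffic on top of the SET switch.
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"

# Enable RDMA on the new host vNICs.
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```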
The images below illustrate the architecture changes between Windows Server 2012 R2 and Windows Server 2016.
Next up… Management and Operations…

Until next time, Rob

Storage Spaces Direct Explained – ReFS, Multi-Tier Volumes and Erasure Coding

Here’s where we dive in and get dirty… but I promise that by the end of my series, you will be smiling like my friend here. I am planning a surprise with special guest bloggers. Stay tuned. Now on to the show…

The NEW ReFS File System, Multi-Tier Volumes and Erasure Coding

Like S2D, the ReFS file system actually isn't new either; Microsoft has been working on it for several releases. In Windows Server 2016, it finally drops the tech preview label and is ready for production. And there are a lot of benefits. For example, volume creation doesn't have to zero out the volume for 10 minutes like NTFS; it's just a metadata operation that is effectively instantaneous now (see the quick sketch below). Here, I'm just going to focus on the couple of benefits that ReFS has for S2D.
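If you want to see that quick-format behavior for yourself, here's a minimal sketch (the drive letter and label are hypothetical; this assumes an initialized disk with a partition already assigned):

```powershell
# Formatting with ReFS returns almost immediately, since it is a
# metadata-only operation rather than a full zeroing pass.
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Data"
```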
For those not familiar with erasure coding (EC), and to prepare you for the next part: EC is a method of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces, and stored across a set of different locations.
The original goal of EC was to enable data that becomes corrupted at some point in the storage process to be reconstructed using information about the data that's stored elsewhere. Erasure codes are great because they reduce the time and overhead required to reconstruct data. The drawback is that erasure coding can be more CPU-intensive, and that can translate into increased latency.
Now, all that being said, classic erasure codes were designed and optimized for communication, not for storage. Naively applying classic erasure codes to storage works, but it misses enormous efficiencies. Microsoft has developed its own erasure codes optimized for storage, called Local Reconstruction Codes (LRC). I will cover this briefly further down in the post.
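For intuition on the efficiency trade-off, take a common example scheme (the 4+2 layout here is illustrative, not a claim about S2D's exact parameters): 3-way mirroring keeps three full copies of the data, so usable capacity is 1/3, about 33%. A 4+2 erasure code splits data into four fragments plus two parity fragments, so six units of raw storage hold four units of data (4/6, about 67% usable) while still tolerating the loss of any two fragments.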
Now back to S2D… For data protection, S2D uses either 3-way mirroring or distributed parity with EC. Mirroring gives you great write performance but only 33% data efficiency. EC gives you good data efficiency, but random write performance isn't great for hot data. ReFS supports combining different disk tiers using different parity schemes in the same vDisk. This allows S2D to do real-time data tiering: new data is written to the mirror tier, and cold data is automatically rotated out to the parity tier, with the erasure code applied on rotation. A sketch of creating such a multi-tier volume follows.
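Here's a hedged sketch of creating a multi-tier volume on an S2D cluster (the volume name and tier sizes are example values; Performance and Capacity are the default mirror and parity tier names that S2D creates):

```powershell
# One volume, two tiers: a mirrored tier for hot data and a
# parity (erasure-coded) tier for cold data.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 9TB
```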
It is important to note that ReFS does not currently support deduplication. There was a question on this in every session, and Microsoft says it's something the ReFS team is currently focused on, so we'll expect to see it land in ReFSv3. For now, customers can get dedupe with S2D by using NTFS. 🙁
Note: if you only have two types of storage, the highest-performing type is used for the cache while the other type is divided between the performance and capacity tiers, with the different resiliency options (mirror vs. parity) providing the performance/capacity difference between the tiers. If you only have one type of storage, the cache is disabled and the disks are divided between performance and capacity as in the previous case.
For non-Storage Spaces Direct deployments, only two tiers of storage are supported, as in Windows Server 2012 R2 (i.e., SSD and HDD), and there is no cache. If you had NVMe storage, that could be the “hot” tier while the rest of the storage (SSD, HDD) could be the “cold” tier (you can name the tiers whatever you want), but you cannot use three tiers.
During Ignite 2016, Microsoft took many shots at VMware. Microsoft said that there's a right way and a wrong way to do erasure coding: “When you do it the wrong way, performance sucks and you have to limit it to all-flash configurations.”
Microsoft Research is using a new technique called Local Reconstruction Codes. It uses smaller groups within the vDisk, which allows recovery from failures to happen much faster by not having to reconstruct data from across the entire pool. Combined with multi-tier volumes, this gives S2D good performance, even on hybrid systems. Sounds like a technology I've seen before. Hmmm… I wonder where… 😉
OK, that's all for now. Next up: Fault Tolerance and Multisite Replication with S2D…

Until Next time, Rob….

The Evolution of S2D

The intention of this blog post series is to give some history of how Microsoft Storage Spaces evolved into what is known today as Storage Spaces Direct (S2D). This first blog post will go into the history of Storage Spaces. Over my next few posts, I will delve further into the recent Storage Spaces Direct release in Windows Server 2016. I will conclude the series with where I think it's headed and how it compares to other HCI solutions in general. Now let's go for a ride down memory lane…

Continue reading