Hi All, it's Rob again, and I decided to write a series on Azure Cloud. Since Azure Stack is months away from GA, it's good to understand Azure Cloud for a few reasons: the API is consistent across Azure Cloud and Azure Stack, and building a hybrid environment is the future for IT, enabling features like DR, application portability, and backup.
Tag Archives: Microsoft Azure
Microsoft Azure Stack Technical Preview finally sees the light….:)
Change is in the air! I know that phrase is associated with spring, but I love the change of seasons, especially winter, when days get shorter and I get to spend time in the snow with my kids. Every winter, I think I can rely on the patterns from the seasons before, but I quickly find I have to adapt to a new reality. For example, I live near Boston, and just when I thought we would have a mild winter, mother nature strikes. One week it's in the 50s and the next we are in the middle of a blizzard. Changes and transformations are just another fact of life.
Understanding Windows Azure Pack – Reconfigure portal names, ports and deploy certificates – Part 6
Happy New Year Everyone!!! I know Azure Stack is just around the corner, but I still get lots of questions about configuring WAP and its portals. So, to follow up my Windows Azure Pack (WAP) series, I am going to talk about reconfiguring server names and ports as well as assigning trusted certificates to my WAP portals.
Nutanix NOS 4.5 Released…
Hi all… It's been a few weeks since my last blog post. I've been busy with some travel to Microsoft Technology Centers and working on the Nutanix Ready Program. Yesterday, Nutanix released NOS 4.5. This exciting upgrade adds some great features. Sit back and get ready to enjoy the ride… release notes below.
Table 1. Terminology Updates

| New Terminology | Formerly Known As |
| --- | --- |
| Acropolis base software | Nutanix operating system, NOS |
| Acropolis hypervisor, AHV | Nutanix KVM hypervisor |
| Acropolis API | Nutanix API and Acropolis API |
| Acropolis App Mobility Fabric | Acropolis virtualization management and administration |
| Acropolis Distributed Storage Fabric, DSF | Nutanix Distributed Filesystem (NDFS) |
| Prism Element | Web console (for cluster management); also known as the Prism web console; a cluster managed by Prism Central |
| Prism Central | Prism Central (for multicluster management) |
| Block fault tolerance | Block awareness |
What’s New in Acropolis base software 4.5
Bandwidth Limit on Schedule
- The bandwidth throttling policy provides an option to set a maximum limit on the network bandwidth used for replication to a remote site. You can schedule the limit based on the usage of your network.
Note: You can configure bandwidth throttling only while updating a remote site. This option is not available during the initial configuration of the remote site.
Cloud Connect for Azure
- The cloud connect feature for Azure enables you to back up and restore copies of virtual machines and files between an on-premises cluster and a Nutanix Controller VM located in the Microsoft Azure cloud. Once configured through the Prism web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured. This feature is currently supported for ESXi hypervisor environments only.
Common Access Card Authentication
- You can configure two-factor authentication for web console users that have an assigned role and use a Common Access Card (CAC).
Default Container and Storage Pool Upon Cluster Creation
- When you create a cluster, the Acropolis base software automatically creates a container and storage pool for you.
Erasure Coding
- Complementary to deduplication and compression, erasure coding increases the effective or usable cluster storage capacity. [FEAT-1096]
- This is very cool tech. Check out Josh Odgers' blog post for more details on erasure coding (EC).
Hyper-V Configuration through Prism Web Console
- After creating a Nutanix Hyper-V cluster environment, you can use the Prism web console to join the hosts to the domain, create the Hyper-V failover cluster, and also enable Kerberos.
Image Service Now Available in the Prism Web Console
- The Prism web console Image Configuration workflow enables a user to upload ISO or disk images (in ESXi or Hyper-V format) to a Nutanix AHV cluster by specifying a remote repository URL or by uploading a file from a local machine.
MPIO Access to iSCSI Disks (Windows Guest VMs)
- Acropolis base software 4.5 adds a feature that helps enforce access control to volume groups and exposes volume group disks as dual-namespace disks.
Network Mapping
- Network mapping allows you to control network configuration for the VMs when they are started on the remote site. This feature enables you to specify network mapping between the source cluster and the destination cluster. The remote site wizard includes an option to create one or more network mappings and allows you to select source and destination network from the drop-down list. You can also modify or remove network mappings as part of modifying the remote sites.
Nutanix Cluster Check
- Acropolis base software 4.5 includes Nutanix Cluster Check (NCC) 2.1, which includes many new checks and functionality.
- NCC 2.1 Release Notes
NX-6035C Clusters Usable as a Target for Replication
- You can use a Nutanix NX-6035C cluster as a target for Nutanix native replication and snapshots, created by source Nutanix clusters in your environment. You can configure the NX-6035C as a target for snapshots, set a longer retention policy than on the source cluster (for example), and restore snapshots to the source cluster as needed. The source cluster hypervisor environment can be AHV, Hyper-V, or ESXi. See Nutanix NX-6035C Replication Target in Notes and Cautions.
Note: You cannot use an NX-6035C cluster as a backup target with third-party backup software.
Prism Central Can Now Be Deployed on the Acropolis Hypervisor (AHV)
- Nutanix has introduced a Prism Central OVA which can be deployed on an AHV cluster by leveraging Image Service features. See the Web Console Guide for installation details.
- Prism Central 4.5 Release Notes
Prism Central Scalability
- By increasing memory capacity to 16GB and expanding its virtual disk to 260GB, Prism Central can support a maximum of 100 clusters and 10000 VMs (across all the clusters and assuming each VM has an average of two virtual disks). Please contact Nutanix support if you decide to change the configuration of the Prism Central VM.
- Prism Central 4.5 Release Notes
- Prism Central Scalability, Compatibility and Deployment
Simplified Add Node Workflow
- This release leverages Foundation 3.0 imaging capabilities and automates the manual steps previously required for expanding a cluster through the Prism web console.
SNMP
- The Nutanix SNMP MIB database includes the following changes:
- The database includes tables for monitoring hypervisor instances and virtual machines.
- The service status table named serviceStatusTable is obsolete. Analogous information is available in a new table named controllerStatusTable. The new table has a smaller number of MIB fields for displaying the status of only essential services in the Acropolis base software.
- The disk status table (diskStatusTable), storage pool table (storagePoolInformationTable), and cluster information table include one or more new MIB fields.
- The SNMP feature also includes the following enhancements:
- From the web console, you can trigger test alerts that are sent to all configured SNMP trap receivers.
- SNMP service logs are now written to the following log file: /home/nutanix/data/logs/snmp_manager.out
Support for Minor Release Upgrades for ESXi Hosts
- Acropolis base software 4.5 enables you to apply minor-release patch upgrades to ESXi hosts through the Controller VM cluster command. Nutanix qualifies specific VMware updates and provides a related JSON metadata upgrade file for one-click upgrade, but customers can now also patch hosts by using the offline bundle and md5sum checksum available from VMware together with the Controller VM cluster command.
Note: Nutanix supports the ability to patch upgrade ESXi hosts with minor versions that are greater than or released after the Nutanix-qualified version, but Nutanix might not have qualified those minor releases. Please see the Nutanix hypervisor support statement in our Support FAQ.
VM High Availability in Acropolis
- In case of a node failure, VM High Availability (VM-HA) ensures that VMs running on the node are automatically restarted on the remaining nodes within the cluster. VM-HA can optionally be configured to reserve spare failover capacity. This capacity reservation can be distributed across the nodes in chunks known as “segments” to provide better overall resource utilization.
Windows Guest VM Failover Clustering
- Acropolis base software 4.5 supports configuring Windows guest VMs as a failover cluster. This clustering type enables applications on a failed VM to fail over to and run on another guest VM on the same or a different host. This release supports this feature on Hyper-V hosts with in-guest VM iSCSI and SCSI-3 Persistent Reservation (PR); a minimal in-guest configuration sketch follows below.
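To make that concrete, here is a minimal, hedged sketch of the in-guest side of such a setup on Windows Server 2012 R2 guests. The iSCSI target portal address, guest VM names, and cluster IP below are hypothetical placeholders, and it assumes a single iSCSI target from a Nutanix volume group that has already been created and exposed to the guests (see the Nutanix documentation for that part).

```powershell
# Run inside each Windows guest VM (Windows Server 2012 R2 assumed).
# All names and addresses below are hypothetical placeholders.
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools

# Start the in-guest iSCSI initiator and connect to the volume group target
# (assumes the portal exposes a single target).
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.50"
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

# From one guest VM, validate and create the failover cluster across the guests.
Test-Cluster -Node "guest-vm1", "guest-vm2"
New-Cluster -Name "guest-cluster" -Node "guest-vm1", "guest-vm2" -StaticAddress "10.10.10.60"
```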
Tech Preview Features
Note: Do not use tech preview features on production systems, or on storage used by or data stored on production systems.
File Level Restore
- The file level restore feature allows a virtual machine user to restore a file within a virtual machine from the Nutanix protected snapshot with minimal Nutanix administrator intervention.
Note: This feature should be used only after upgrading all nodes in the cluster to Acropolis base software 4.5.
What’s New in Prism Central
Prism Central for Acropolis Hypervisor (AHV)
Nutanix has introduced a Prism Central VM which is compatible with AHV to enable multicluster management in this environment. Prism Central now supports all three major hypervisors: AHV, Hyper-V, and ESXi.
Prism Central Scalability
The Prism Central VM requires these resources to support the clusters and VMs indicated in the table.
| Prism Central vCPU | Prism Central Memory (GB, default) | Total Storage Required for Prism Central VM (GB) | Clusters Supported | VMs Supported (across all clusters) | Virtual Disks per VM |
| --- | --- | --- | --- | --- | --- |
| 4 | 8 | 256 | 50 | 5000 | 2 |
Release Notes | NCC 2.1
Learn More About NCC Health Checks
You can learn more about the Nutanix Cluster Check (NCC) health checks on the Nutanix support portal. The portal includes a series of Knowledge Base articles describing most NCC health checks run by the ncc health_checks command.
What’s New in NCC 2.1
NCC 2.1 includes support for:
- Acropolis base software 4.5 or later
- NOS 4.1.3 or later only
- All Nutanix NX Series models
- Dell XC Series of Web-scale Converged Appliances
Tech Preview Features
The following features are available as a Tech Preview in NCC 2.1.
Run NCC health checks in parallel
- You can specify the number of NCC health checks to run in parallel to reduce the amount of time it takes for all checks to complete. For example, the command ncc health_checks run_all --parallel=25 will run 25 of the health checks in parallel.
Use npyscreen to display NCC status
- You can specify npyscreen as part of the ncc command to display status in the terminal window. Specify --npyscreen=true as part of the ncc health_checks command.
New Checks in This Release
| Check Name | Description | KB Article |
| --- | --- | --- |
| check_disks | Check whether disks are discoverable by the host. Pass if the disks are discovered. | KB 2712 |
| check_pending_reboot | Check if the host has pending reboots. Pass if the host does not have pending reboots. | KB 2713 |
| check_storage_heavy_node | Verify that storage-heavy nodes such as the NX-6025C are running a service VM and no guest VMs, and that they are running the Acropolis hypervisor only. | KB 2726, KB 2727 |
| check_utc_clock | Check if the UTC clock is enabled. | KB 2711 |
| cluster_version_check | Verify that the cluster is running a released version of NOS or the Acropolis base software. This check returns an INFO status and the version if the cluster is running a pre-release version. | KB 2720 |
| compression_disabled_check | Verify whether compression is enabled. | KB 2725 |
| data_locality_check | Check if VMs that are part of a cluster with metro availability are in two different datastores (that is, fetching local data). | KB 2732 |
| dedup_and_compression_enabled_containers_check | Check if any container has deduplication and compression enabled together. | KB 2721 |
| dimm_same_speed_check | Check that all DIMMs have the same speed. | KB 2723 |
| esxi_ivybridge_performance_degradation_check | Check for the Ivy Bridge performance degradation scenario on ESXi clusters. | KB 2729 |
| gpu_driver_installed_check | Check the version of the installed GPU driver. | KB 2714 |
| quad_nic_driver_version_check | Check the version of the installed quad-port NIC driver. | KB 2715 |
| vmknics_subnet_check | Check if any vmknics are on the same subnet (different subnets are not supported). | KB 2722 |
Foundation Release 3.0
This release includes the following enhancements and changes:
- A major new implementation that allows for node imaging and cluster creation through the Controller VM for factory-prepared nodes on the same subnet. This process significantly reduces network complications and simplifies the workflow. (The existing workflow remains for imaging bare metal nodes.) The new implementation includes the following enhancements:
- A Java applet that automatically discovers factory-prepared nodes on the subnet and allows you to select the first one to image.
- A simplified GUI to select and configure the nodes, define the cluster, select the hypervisor and Acropolis base software versions to use, and monitor the imaging and cluster creation process.
Customers may create a cluster using the new Controller VM-based implementation in Foundation 3.0. Imaging bare metal nodes is still restricted to Nutanix sales engineers, support engineers, and partners.
- The new implementation is incorporated in the Acropolis base software version 4.5 to allow for node imaging when adding nodes to an existing cluster through the Prism GUI.
- The cluster creation workflow does not use IPMI, and for both cluster creation and bare-metal imaging, the host operating system install is done within an “installer VM” in Phoenix.
- To see the progress of a host operating system installation, point a VNC console at the node’s Controller VM IP address on port 5901.
- Foundation no longer offers the option to run diagnostics.py as a post-imaging test. Should you wish to run this test, you can download it from the Tools & Firmware page on the Nutanix support portal.
- There is no Foundation upgrade path to the new Controller VM implementation; you must download the Java applet from the Foundation 3.0 download page on the support portal. However, you can upgrade Foundation 2.1.x to 3.0 for the bare metal workflow as follows:
- Copy the Foundation tarball (foundation-version#.tar.gz) from the support portal to /home/nutanix in your VM.
- Navigate to /home/nutanix.
- Enter the following five commands:
- $ sudo service foundation_service stop
- $ rm -rf foundation
- $ tar xzf foundation-version#.tar.gz
- $ sudo yum install python-scp
- $ sudo service foundation_service restart
- If the first command (foundation_service stop) is skipped or the commands are not run in order, you may see unexpected errors after upgrading. To fix this situation, enter the following two commands:
- $ sudo pkill -9 foundation
- $ sudo service foundation_service restart
Release Notes for each of these products are located at:
- Acropolis base software 4.5: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Acr_v4_5.html
- Prism Central 4.5: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-Acr_v4_5:rel_Release_Notes-Prism_Central_v4_5.html
- Nutanix Cluster Check (NCC) 2.1: https://portal.nutanix.com/#/page/docs/details?targetId=Release_Notes-NCC:rel_Release_Notes-NCC_v2_1.html
- Foundation 3.0: https://portal.nutanix.com/#/page/docs/details?targetId=Field_Installation_Guide-v3_0:fie_release_notes_foundation_v3_0_r.html
Download URLs:
- Acropolis base software 4.5: https://portal.nutanix.com/#/page/releases/nosDetails?targetId=4.5&targetVal=EA
- Prism Central 4.5: https://portal.nutanix.com/#/page/releases/prismDetails
- Nutanix Cluster Check 2.1: https://portal.nutanix.com/#/page/static/supportTools
- Foundation 3.0: https://portal.nutanix.com/#/page/foundation/list
- Nutanix AHV Metadata (Acropolis base 4.5) AHV-20150921: http://download.nutanix.com/hypervisor/kvm/4.5/host-bundle-el6.nutanix.20150921-metadata.json
- Nutanix AHV (Acropolis base 4.5) AHV-20150921: http://download.nutanix.com/hypervisor/kvm/4.5/host-bundle-el6.nutanix.20150921.tar.gz
Until next time, Rob…
Understanding Windows Azure Pack – How to guide with Express Edition on Nutanix – Windows Azure Pack Install – Part 5
To continue Windows Azure Pack series here is my next topic: Installing and Configuring Windows Azure Pack
If you missed other parts of the series, check links below:
Part 1 – Understanding Windows Azure Pack
Part 2 – Understanding Windows Azure Pack – Deployment Scenarios
Part 3 – Understanding Windows Azure Pack – How to guide with Express Edition on Nutanix – Environment Prep
Part 4 – Deploying Service Provider Framework on Nutanix
Again, to reiterate from my previous blog posts and set some context, Windows Azure Pack (WAP) includes the following capabilities…
Understanding Windows Azure Pack – Deployment Scenarios on Nutanix – Part 2
To continue the Windows Azure Pack series, here is my next topic: Deployment Scenarios on Nutanix
If you missed part 1 – see link below
Part 1 – Understanding Windows Azure Pack
Windows Azure Pack – Deployment Scenarios
Terminology
OK, let's start with some terminology used when talking about WAP (Windows Azure Pack). Here are two key terms you need to know:
- Administrator – Someone who deploys, configures and manages WAP and makes cloud services available to tenants.
- Tenant – Someone who subscribes to and uses cloud services made available through WAP.
When WAP is deployed by a hoster (service provider), the administrator refers to IT staff at the hoster, while the tenants are the customers to whom the hoster is selling cloud services. When WAP is deployed in an enterprise datacenter, the administrator will be your own IT department; the tenants in this case will be the other departments, divisions, or business units within your organization that want to take advantage of the cloud services your IT department is offering.
Admin Portal
User Signin Portal
User Main Portal
Required components
WAP currently includes seven required components. Two of these components are portals:
- Management Portal for Administrators – A web-based portal that lets administrators configure and manage user accounts, resource clouds, tenant offers, and so on.
- Management Portal for Tenants – A web-based self-service portal that lets tenants provision, monitor and manage the following cloud services: Web Sites, Virtual Machines, and Service Bus.
The self-service capabilities of the Management Portal for Tenants enable tenants to deploy and manage the cloud services they need, when they need them, without having to go through the slow procurement processes of the traditional approach to enterprise IT.
Authentication is another key feature of WAP to ensure that only properly authenticated administrators have access to the Management Portal for Administrators and only properly authenticated users have access to the Management Portal for Tenants. By default, the Management Portal for Administrators uses Windows authentication (Kerberos or NTLM) but you can optionally use Active Directory Federation Services (ADFS) for authentication purposes. The Management Portal for Tenants on the other hand uses the ASP.NET Membership Provider for authentication purposes. WAP includes two authentication sites, an Admin Authentication Site and a Tenant Authentication Site, for these purposes.
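Because the admin portal's authentication provider can be swapped to ADFS, here is a hedged sketch of how that switch is typically scripted with the WAP configuration PowerShell module. The ADFS FQDN, SQL Server name, and credentials are placeholders, and exact parameter names can vary between WAP update rollups, so treat this as an illustration rather than a recipe.

```powershell
# Hedged sketch: point the WAP admin portal at an ADFS instance for authentication.
# "adfs.contoso.com", "sql01", and the credentials are hypothetical placeholders.
Import-Module MgmtSvcConfig   # WAP configuration module, installed with WAP

Set-MgmtSvcRelyingPartySettings -Target Admin `
    -MetadataEndpoint "https://adfs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml" `
    -ConnectionString "Data Source=sql01;Initial Catalog=Microsoft.MgmtSvc.Store;User ID=sa;Password=<password>" `
    -DisableCertificateValidation
```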
WAP also includes components that provide the following application programming interfaces (APIs):
- Windows Azure Pack Admin API – Enables administration tasks to be performed using the Management Portal for Administrators and Windows PowerShell.
- Windows Azure Pack Tenant API – Enables tenant-specific tasks to be performed using the Management Portal for Tenants and Windows PowerShell.
- Windows Azure Pack Tenant Public API – Provides additional tenant-specific functionality primarily for hosting provider environments.
All of the above components are required in any WAP deployment.
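As a quick illustration of the API components, the Admin API can also be driven from PowerShell. The sketch below assumes an express-style install on a host named wap01 using the default ports (30072 for the Windows admin authentication site, 30004 for the Admin API); adjust names and ports to your deployment.

```powershell
# Hedged sketch: query the WAP Admin API with the MgmtSvcAdmin module.
# "wap01" and the default ports are assumptions; change them for your environment.
Import-Module MgmtSvcAdmin

# Obtain a Windows-auth token from the admin authentication site.
$token = Get-MgmtSvcToken -Type Windows `
    -AuthenticationSite "https://wap01:30072" `
    -ClientRealm "http://azureservices/AdminSite" `
    -DisableCertificateValidation

# List the plans currently offered to tenants through the Admin API.
Get-MgmtSvcPlan -AdminUri "https://wap01:30004" -Token $token -DisableCertificateValidation
```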
Optional components
The following components of WAP may be deployed in order to offer additional forms of cloud services and other resources to tenants:
- Web Sites – Provides you with a managed web environment you can use to create new websites or migrate your existing business website into the cloud.
- Virtual Machines – Provides you with a general-purpose computing environment that lets you create, deploy, and manage virtual machines running in the Windows Azure cloud.
- Service Bus – Allows you to keep your apps connected across your private cloud environment and the Windows Azure public cloud.
- Automation and Extensibility – Allows you to automate and integrate custom services into your services framework using runbooks.
- SQL and MySQL – Allows you to provision Microsoft SQL and MySQL databases for tenants to use.
Windows Azure Pack – Deployment Scenario’s
There are two basic deployment scenarios for WAP:
- Express architecture – Recommended for proof of concept testing only.
- Distributed architecture – Recommended for production environments.
In addition, the distributed architecture can be implemented in various ways depending on the scale and degree of availability needed. Let’s briefly examine each of these scenarios.
Express architecture
In an express deployment of Windows Azure Pack, you install all of the required components on a single server and any optional components needed on one or more additional servers. This is the deployment I will be doing in the next part of the series. Specifically, the following required components must all be installed on your first server:
- Windows Azure Pack Admin API
- Windows Azure Pack Tenant API
- Windows Azure Pack Tenant Public API
- Admin Authentication Site
- Tenant Authentication Site
- Management Portal for Administrators
- Management Portal for Tenants
In addition, your first server must host the SQL Management Database used by the required components. This means you must install a required version of Microsoft SQL Server on the first server.
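Once the express install completes, a quick sanity check is to probe the default WAP endpoints from PowerShell to confirm the required sites and APIs are listening. The host name wap01 and the default port numbers below are assumptions; if you changed ports during installation (or later, as covered in Part 6 of this series), adjust accordingly.

```powershell
# Hedged sketch: probe the default WAP endpoints on an express install.
# "wap01" and these default ports are assumptions; adjust to your install.
$endpoints = @{
    "Admin API"         = 30004
    "Tenant API"        = 30005
    "Tenant Public API" = 30006
    "Admin Auth Site"   = 30072
    "Tenant Auth Site"  = 30071
    "Admin Portal"      = 30091
    "Tenant Portal"     = 30081
}

foreach ($name in $endpoints.Keys) {
    $result = Test-NetConnection -ComputerName "wap01" -Port $endpoints[$name] -WarningAction SilentlyContinue
    "{0,-18} port {1,-6} reachable: {2}" -f $name, $endpoints[$name], $result.TcpTestSucceeded
}
```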
Distributed architecture
In a distributed deployment of WAP, you spread the required components across multiple servers. There are many ways you can do this, but the following recommendations should generally be adhered to in order to ensure optimal performance and supportability for your deployment:
- Install a management portal and its corresponding authentication site on the same server. For example, install the Management Portal for Administrators and the Admin Authentication Site on the same server.
- If you will be providing cloud services to tenants over the public Internet, install the following components on the same public-facing server:
- Management Portal for Tenants
- Tenant Authentication Site
- Windows Azure Pack Tenant Public API
- If Active Directory Domain Services (ADDS) is to be used for authentication purposes, install it on a separate identity server.
- If Active Directory Federation Services (ADFS) is to be used for authentication purposes, install it on a separate identity server along with an ADFS proxy.
- For greater scalability and high availability in large deployments, install the SQL Management Database on a separate failover cluster. In addition, use failover clustering for your public-facing servers and for the servers hosting your other required components.
- For even higher scalability, install each required component on a separate failover cluster and the SQL Management Database on another failover cluster. In other words, use eight failover clusters to deploy the seven required components plus the SQL Management Database. Check out the Nutanix Best Practices guide for deploying SQL Server.
In the next blog post in this series, we will begin our walk-through of installing and configuring WAP. I will focus primarily on the express deployment scenario in this series, along with two types of cloud services: Virtual Machines and SQL Databases. Let's build a cloud…
Until next time, Rob…
Azure Stack…What is it?
The Ignite 2015 conference in Chicago is where Microsoft made the official announcement of Azure Stack, its private cloud infrastructure for data centers that want to be Azure in their own right. In other words, on-premises infrastructure will be in full parity with Azure Cloud.
Quotes from Brad Anderson from Keynote on Azure Stack
“If you think about Azure, there’s all the infrastructure that you’re aware of, in network, storage and compute. There’s a set of services like IaaS and PaaS that we deliver. And then there’s all your applications, and that, really, is what Azure is,” explained Brad Anderson, Microsoft’s corporate vice president for cloud and enterprise, during a keynote session Monday morning. “Two years ago, we announced we were going to bring portions of this to your data center, and we called it the Azure Pack.”
Portions of this Azure Pack had made their way onto partner vendors' hardware in the past, in the form of the Microsoft Private Cloud Fast Track Program and Dell's Cloud Platform System. My company, Nutanix, was one of the first Private Cloud Fast Track partners with a certified reference architecture. So we've seen private cloud platforms with third-party vendor brands, built around server software made by Microsoft but not called Windows.
What Azure Stack becomes, over and above Azure Pack, is not just a microcosm of Azure, but an extension of Azure itself. As several Microsoft officials confirmed at Ignite, Azure Stack extends the file and object system of Azure into the private space. (And Azure Stack won't be the only Microsoft technology that does this… Hint, hint… Hmm… under NDA at the moment.)
“You want to be able to take those cloud applications, and host them in your environment,” said Anderson. “You’ve told us you want Azure — all of Azure — in your data centers. Azure Stack … is literally us giving you all of Azure to run in your data centers.”
I saw early demonstrations of Azure Stack at Ignite, and what I saw was a user access policy management system that essentially duplicated the one currently used on the public Azure cloud, as shown below.
“The Microsoft Azure Stack gives application owners the ability to ‘write once, deploy anywhere,’ whether it be to your private cloud, a service provider’s cloud, or the public Azure cloud,” reads a post to Microsoft’s server and cloud blog Monday. “Developers will have the broadest access to application development platforms across Windows and Linux to build, deploy and operate cloud applications using consistent tools, processes and artifacts. One Azure ecosystem across public, private and hosted clouds will allow you to participate in a unified, robust partner network for Azure clouds.”
Microsoft’s idea is to make private cloud space and public space addressable and manageable using the same tool set, and by extension, to effectively make data centers into planks, if you will, for Azure. It’s one big reason why the words “Windows Server” are being spoken less and less often by people whom you would think were in charge of it.
Azure Stack Deeper Dive
Now let's start at the top. When we look at the image below, we see the browser experience. In the current version of Azure Pack we have two portals, one for the tenant and one for the admin. In Azure Stack we have one browser experience, and that experience is the same across Azure Stack and Azure. So admins as well as tenants go through the same portal site, leveraging the same portal APIs and extensions.
In the deployment of the portal site there is still an option to scale out to multiple website nodes, as we do with a distributed deployment of Windows Azure Pack today. When we go down the rabbit hole, we see the Azure Resource Manager and the Core Management Resource Providers. The Core Management Resource Providers integrate with Azure Resource Manager, and all components interact with it. Below in this post, I will focus on the Azure Resource Manager and the Core Resource Providers. Further down we see the Service Resource Providers. Each Service Resource Provider controls and manages the resources it is assigned to: the Compute Service Resource Provider manages the compute resources (nodes), the Storage Resource Provider manages the storage resources (nodes), and so on…
And that’s really in a nutshell the top to bottom service layout of the Azure Stack.
Let's look at the portal. The portal is completely redesigned and allows you to fully personalize it. It is highly scalable and has integration across services. When you install new resource providers today in WAP, you need to edit the core code of the Azure Pack portal and then restart the web service process to see the result of that change. With the new design, the portal runs continuously in a separate process, and when you extend the portal by adding extensions, a workflow distributes the extensions to all nodes running the portal site. As mentioned before, the admin and tenant sites are integrated in the same portal.
The portal UI is very nice, but it would be useless if we could not log in to the portal, right? Let me talk about the identity part of the new Azure Stack. In the old portal we had the option to use the SQL .NET Membership Provider, or we could integrate ADFS to use AD or other federated identity providers (IdPs). The new portal uses claims-based authentication, and there is native support for the following:
- Azure Active Directory
- Windows AD
- Active Directory Federation Services (ADFS)
From the Azure Resource Manager to the Core Management Resource Providers it will use Windows Authentication or Basic Authentication. The Core Management Resource Providers will use Windows Authentication or an authentication method defined by the Resource Provider.
Now on to the Azure Resource Manager. The Azure Resource Manager is the new Service Management API. It is, as Microsoft calls it, "a product" that allows the management of compute, storage, and network. When you, as a tenant, create a resource group, it allows you to put all the resources (VMs, networks, websites, etc.) into a resource group that can be managed as a whole (create/add/update/delete, aka life cycle management).
With role-based access control (RBAC) you, as a tenant, can also grant access to other users based on the permissions you assign to the resource group. Usage is also collected per resource group, so you can see how much the resources in a resource group will cost.
The Azure Resource Manager also allows you to place deployments in regions. Regions represent the datacenters of your service provider or your own datacenters. Furthermore, the Azure Resource Manager provides audit logging on your subscriptions and resources. To create resources using the Azure Resource Manager, you create or use an existing template. A template is a JSON file that can be edited to define the resources in your deployment.
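To show what that looks like from the management side, here is a hedged sketch using the Azure PowerShell ARM cmdlets of that era (0.9.x, before the 1.0 renames). The resource group name, location, template file, and user object ID are hypothetical placeholders; the point is that the same Resource Manager surface (resource groups, templates, RBAC) is what Azure Stack is meant to expose on-premises.

```powershell
# Hedged sketch using the 2015-era Azure PowerShell ARM mode (pre-1.0 cmdlet names).
# Names, location, template path, and object ID are hypothetical placeholders.
Add-AzureAccount
Switch-AzureMode -Name AzureResourceManager

# Create a resource group and deploy a JSON template into it.
New-AzureResourceGroup -Name "demo-rg" -Location "West US"
New-AzureResourceGroupDeployment -ResourceGroupName "demo-rg" -TemplateFile ".\azuredeploy.json"

# Grant another user Reader access to just this resource group (RBAC).
New-AzureRoleAssignment -ObjectId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Reader" -ResourceGroupName "demo-rg"
```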
The Azure Resource Manager will talk to the Core Management services. Let’s look at the components involved in that.
- The Authorization Service: using RBAC, it allows us to assign granular permissions to resource groups. Subscriptions are assigned to tenants that have a plan defined.
- The Subscription Management Service is responsible for managing the Service Plans, Offers and subscriptions. You can even use Azure Resource Manager templates to define new subscriptions based on a template you have defined.
- The Gallery Service is a core common service that will work across any of the connected services. Admins as well as tenants are allowed to put their own gallery items in it.
- The Events Service is a collector that gathers all events across all the services.
- The Monitoring Service collects metrics from all services.
- And last but not least we have the Usage Service which will collect the usage per service for each tenant / resource group.
So this is what I know so far from MS, but I will update this post as I learn more. MS is not giving a definite answer, but rumors are a beta in late fall and a Technical Preview in spring. I can't wait to get the early-bird bits to play around with, and when I do, I will follow up on this post to give you more technical information on Azure Stack!
Until next time, Rob…