EMC VSPEX PRIVATE CLOUD
Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines
Enabled by EMC XtremIO and EMC Data Protection

Proven Infrastructure Guide

Abstract

This Proven Infrastructure Guide describes the EMC® VSPEX® Proven Infrastructure solution for private cloud deployments with Microsoft Windows Server 2012 R2 with Hyper-V and EMC XtremIO™ all-flash array technology.

July 2015

Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA. Published July 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Enabled by EMC XtremIO and EMC Data Protection — Proven Infrastructure Guide

Part Number: H14157.1

Contents

Chapter 1  Executive Summary
  Introduction
  Target audience
  Document purpose
  Business benefits

Chapter 2  Solution Overview
  Introduction
  Virtualization
    Private cloud foundation
  Compute
  Network
  Storage
    Challenges
    Scalability
    Operational agility
    Deduplication
    Thin provisioning
    Data protection
    Microsoft ODX support
    EMC ViPR integration
    API support
    Benefits of using XtremIO

Chapter 3  Solution Technology Overview
  Overview
  VSPEX Proven Infrastructures
  Key components
  Virtualization layer
    Overview
    Microsoft Hyper-V
    Virtual Fibre Channel ports
    Microsoft System Center Virtual Machine Manager
    High availability with Hyper-V Failover Clustering
    Hyper-V Replica
    Cluster-Aware Updating
    EMC Storage Integrator for Windows Suite
  Compute layer
  Network layer
  Storage layer
    EMC XtremIO
  EMC Data Protection
    Overview
    EMC Avamar deduplication
    EMC Data Domain deduplication storage systems
    EMC RecoverPoint
  Other technologies
    Overview
    EMC PowerPath
    EMC ViPR Controller
    Public-key infrastructure

Chapter 4  Solution Architecture Overview
  Overview
  Solution architecture
    Overview
    Logical architecture
    Key components
    Hardware resources
    Software resources
  Server configuration guidelines
    Overview
    Intel Ivy Bridge updates
    Hyper-V memory virtualization
    Memory configuration guidelines
  Network configuration guidelines
    Overview
    VLANs
    Enable jumbo frames (for iSCSI)
  Storage configuration guidelines
    Overview
    XtremIO X-Brick scalability
    Hyper-V storage virtualization
    VSPEX storage building blocks
  High-availability and failover
    Overview
    Virtualization layer
    Compute layer
    Network layer
    Storage layer
    XtremIO Data Protection
  Backup and recovery configuration guidelines

Chapter 5  Environment Sizing
  Overview
  Reference workload
    Overview
    Defining the reference workload
  Scaling out
  Reference workload application
    Overview
    Example 1: Custom-built application
    Example 2: Point-of-sale system
    Example 3: Web server
    Example 4: Decision-support database
    Summary of examples
  Quick assessment
    Overview
    CPU requirements
    Memory requirements
    Storage performance requirements
    IOPS
    I/O size
    I/O latency
    Unique data
    Storage capacity requirements
    Determining equivalent reference virtual machines
    Fine-tuning hardware resources
    EMC VSPEX Sizing Tool

Chapter 6  VSPEX Solution Implementation
  Overview
  Pre-deployment tasks
    Deployment resources checklist
    Customer configuration data
  Network implementation
    Preparing the network switches
    Configuring the infrastructure network
    Configuring VLANs
    Configuring jumbo frames (iSCSI only)
    Completing network cabling
  Microsoft Hyper-V hosts installation and configuration
    Overview
    Installing the Windows hosts
    Installing Hyper-V and configuring failover clustering
    Configuring Windows host networking
    Installing and configuring Multipath software
    Planning virtual machine memory allocations
  Microsoft SQL Server database installation and configuration
    Overview
    Creating a virtual machine for SQL Server
    Installing Microsoft Windows on the virtual machine
    Installing SQL Server
    Configuring SQL Server for SCVMM
  System Center Virtual Machine Manager server deployment
    Overview
    Creating a SCVMM host virtual machine
    Installing the SCVMM guest OS
    Installing the SCVMM server
    Installing the SCVMM Admin Console
    Installing the SCVMM agent locally on a host
    Adding the Hyper-V cluster to SCVMM
  Storage array preparation and configuration
    Overview
    Configuring the XtremIO array
    Preparing the XtremIO array
    Setting up the initial XtremIO configuration
    Creating the CSV disk
    Creating a virtual machine in SCVMM
    Performing partition alignment
    Creating a template virtual machine
    Deploying virtual machines from the template

Chapter 7  Solution Verification
  Overview
  Post-installation checklist
  Deploying and testing a single virtual machine
  Verifying solution component redundancy

Chapter 8  System Monitoring
  Overview
  Key areas to monitor
    Performance baseline
    Servers
    Networking
    Storage
  XtremIO resource monitoring guidelines
    Monitoring the storage
    Monitoring the performance
    Monitoring the hardware elements
    Using advanced monitoring

Appendix A  Reference Documentation
  EMC documentation
  Other documentation

Appendix B  Customer Configuration Worksheet
  Customer configuration worksheet

Appendix C  Server Resource Component Worksheet
  Server resources component worksheet

Figures

Figure 1. I/O randomization brought by server virtualization
Figure 2. VSPEX Proven Infrastructures
Figure 3. Compute layer flexibility examples
Figure 4. Example of highly available network design
Figure 5. Logical architecture for the solution
Figure 6. Hypervisor memory consumption
Figure 7. Required networks for XtremIO storage
Figure 8. Single X-Brick XtremIO storage
Figure 9. Cluster configuration as single and multiple X-Brick clusters
Figure 10. Hyper-V virtual disk types
Figure 11. XtremIO Starter X-Brick building block for 300 virtual machines
Figure 12. XtremIO single X-Brick building block for 700 virtual machines
Figure 13. High availability at the virtualization layer
Figure 14. Redundant power supplies
Figure 15. Network layer high availability
Figure 16. XtremIO high availability
Figure 17. Resource pool flexibility
Figure 18. Required resources from the RVM pool
Figure 19. Aggregate resource requirements – Stage 2
Figure 20. Customizing server resources
Figure 21. Sample Ethernet network architecture
Figure 22. XtremIO initiator group
Figure 23. Adding volume
Figure 24. Volume summary
Figure 25. Volumes and initiator group
Figure 26. Mapping volumes
Figure 27. Monitoring the efficiency
Figure 28. Volume capacity
Figure 29. Physical capacity
Figure 30. Monitoring the performance (IOPS)
Figure 31. Data and management cable connectivity
Figure 32. X-Brick properties
Figure 33. Monitoring the SSDs

Tables

Table 1. Solution hardware
Table 2. Solution software
Table 3. Hardware resources for the compute layer
Table 4. XtremIO scalable scenarios with virtual machines
Table 5. VSPEX Private Cloud RVM workload
Table 6. Blank worksheet row
Table 7. Reference virtual machine resources
Table 8. Sample worksheet row
Table 9. Example applications – Stage 1
Table 10. Example applications – Stage 2
Table 11. Server resource component totals
Table 12. Deployment process overview
Table 13. Pre-deployment tasks
Table 14. Deployment resources checklist
Table 15. Tasks for switch and network configuration
Table 16. Tasks for server installation
Table 17. Tasks for SQL Server database setup
Table 18. Tasks for SCVMM configuration
Table 19. Tasks for XtremIO configuration
Table 20. Storage allocation for block data
Table 21. Testing the installation
Table 22. Advanced monitor parameters
Table 23. Common server information
Table 24. ESXi server information
Table 25. X-Brick information
Table 26. Network infrastructure information
Table 27. VLAN information
Table 28. Service accounts
Table 30. Blank worksheet for server resource totals

Chapter 1  Executive Summary

This chapter presents the following topics:
• Introduction
• Target audience
• Document purpose
• Business benefits

Introduction

Server virtualization has been a driving force behind data center efficiency gains for the past decade. However, mixing multiple virtual machine workloads randomizes the I/O presented to the storage array, which has stalled the virtualization of I/O-intensive workloads.

EMC® VSPEX® Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk.

The VSPEX Private Cloud architecture provides your customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer, backed by the highly available EMC XtremIO™ all-flash array family. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

XtremIO effectively addresses the effects of virtualization on I/O-intensive workloads with impressive random I/O performance and consistent, ultra-low latency. XtremIO also brings new levels of speed and provisioning agility to virtualized environments, with advanced data services that include space-efficient snapshots, inline data deduplication, and thin provisioning.

Target audience

You must have the necessary training and background to install and configure Microsoft Hyper-V, the EMC XtremIO storage systems, and the associated infrastructure as required by this implementation. External references are provided where applicable, and you should be familiar with these documents. You should also be familiar with the infrastructure and database security policies of the customer installation.

If you are a partner selling and sizing a Private Cloud for Microsoft Hyper-V infrastructure, pay particular attention to the first four chapters of this guide. After purchase, the implementers of the solution should focus on the configuration guidelines in Chapter 6, the solution verification in Chapter 7, and the appropriate references and appendices.

Document purpose

This guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific customer engagements, and instructions for effectively deploying and monitoring the system.
The EMC VSPEX Private Cloud for Microsoft Hyper-V solution for up to 700 virtual machines described in this guide is based on the XtremIO storage array and a defined reference workload. The guide describes the minimum server capacity required for CPU, memory, and network interfaces when sizing this solution. You can select server and networking hardware that meets or exceeds these minimum requirements.

A private cloud architecture is a complex system offering. This guide makes the solution setup easier by providing lists of prerequisite software and hardware materials, step-by-step sizing guidance and worksheets, and verified deployment steps. After all components have been installed and configured, verification tests and monitoring instructions ensure that the systems of your private cloud are operating properly. Follow the instructions in this guide for an efficient and painless journey to the cloud.

Business benefits

VSPEX solutions are built with proven technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, server, network, and storage environment. The VSPEX Private Cloud for Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model. The solution simplifies integration management while maintaining the application design and implementation options. It also provides unified administration while enabling adequate control and monitoring of process separation.

The business benefits of the VSPEX Private Cloud for Microsoft Hyper-V architecture include:
• An end-to-end virtualization solution that effectively uses the capabilities of the all-flash array infrastructure components
• Efficient virtualization of 700 reference virtual machines (RVMs) for varied customer use cases
• A reliable, flexible, and scalable reference design
• Secure multitenancy services for both intra- and inter-company departments and organizations
• Server consolidation from isolated resources to a shared, flexible resource model that further simplifies management
• A single environment to run mixed workloads and tiered applications
• An extendible platform that can provide complete self-service portal functionality to users
• Optional implementation of the Federation Enterprise Hybrid Cloud offering on this platform to provide full-service cloud functionality
• Optional integration with configuration management and orchestration tooling, such as Docker orchestration or DevOps toolchains, to simplify management and maintenance of the cloud platform

Chapter 2  Solution Overview

This chapter presents the following topics:
• Introduction
• Virtualization
• Compute
• Network
• Storage
Introduction

The VSPEX Private Cloud for Microsoft Hyper-V solution provides a complete, cloud-enabled system architecture capable of supporting up to 700 RVMs with a redundant server and network topology and highly available storage. The core components of this solution are virtualization, compute, network, and storage.

Virtualization

Microsoft Hyper-V is a key virtualization platform. It provides flexibility and cost savings by enabling you to consolidate large, inefficient, siloed server farms into nimble, reliable cloud infrastructures. Features such as Live Migration, which moves a virtual machine between servers with no disruption to the guest operating system, and Dynamic Optimization, which performs live migrations automatically to balance loads, make Hyper-V a solid business choice. With the release of Windows Server 2012 R2, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).

Private cloud foundation

Cloud computing is the next logical progression from virtualization and is becoming mainstream in the modern data center. It provides a hardware and software platform that is flexible in how users perceive and operate within the environment. This VSPEX reference architecture provides the methods to guarantee a private cloud environment with a known level of performance and availability.

In a private cloud environment, organizations manage their virtual machine environment internally. Virtual machines can be moved seamlessly throughout the private cloud platform. The platform can be extended to offer multitenancy by adding software components, and full self-service provisioning, complete with chargeback, cost control, and workflow automation, can also be layered on. The platform can be further extended to offer hybrid cloud services, which enable virtual machines to run locally in the private cloud or remotely in a service provider's public cloud environment. Virtual machines can be moved between the two physical platforms without interruption of services. The VSPEX reference architecture serves as the core pillar for all of these services.

Compute

VSPEX provides the flexibility to design and implement a customer's choice of server components. The infrastructure must have sufficient:
• CPU cores and memory to support the required number and types of virtual machines
• Network connections to enable redundant connectivity to the network switches
• Capacity to enable the environment to withstand a server failure and fail over within the environment

Network

VSPEX provides the flexibility to design and implement a customer's choice of network components. The infrastructure must provide:
• Redundant network links for the hosts, switches, and storage
• Traffic isolation based on industry-accepted best practices
• Support for link aggregation
• Network switches with a minimum non-blocking backplane capacity that is sufficient for the target number of virtual machines and their associated workloads
EMC recommends enterprise-class network switches with advanced features, such as quality of service.

Storage

Challenges

Virtualization

In highly virtualized environments, where a large number of virtual machines run on a cluster of servers that share a common storage pool, the I/O requests from all the disparate virtual machines reach the storage in effectively random order, as shown in Figure 1. Traditional storage architectures cannot handle this highly random I/O and introduce unacceptable application and virtual machine latency. This effect is known as the "I/O blender".

Figure 1. I/O randomization brought by server virtualization

Storage efficiency challenges

The challenge for all-flash arrays is that high I/O performance alone is often insufficient for virtualized environments; additional technologies that drive high storage efficiencies are also required. Storage efficiency is important because storage infrastructure acquisition and operations costs are among the top challenges of cloud-based virtual machine environments. To achieve storage efficiency, customers must maximize both available storage capacity and processing resources, which often turn out to be competing concerns. Storage efficiency is key to enabling the promise of elastic scalability, pay-as-you-grow efficiency, and a predictable cost structure, while increasing productivity and innovation.

Technologies such as data compression and deduplication are key enablers of efficiency from a capacity standpoint, while simple, insightful management tools reduce management complexity. Resiliency and availability features, especially if enabled by default, further increase efficiency.

While storage efficiency is important, a private cloud environment typically consolidates many disparate virtual machines with vastly different performance profiles and criticality. Customers need a storage platform that can fulfill the performance demands, enhance storage efficiency by reducing the data footprint, and enable agile provisioning and management of service delivery.

Scalability

An agile, virtualized infrastructure must also scale in the multiple dimensions of performance, capacity, and operations. It must be able to scale efficiently, without sacrificing performance and resiliency or requiring additional IT resources to manage the environment.

Operational agility

Agility is a major reason why organizations choose to virtualize their infrastructures. However, IT responsiveness often slows exponentially as virtual environments grow. Resources typically cannot be deployed or serviced quickly enough to meet rapidly changing business requirements. Bottlenecks occur because organizations do not have the right tools to quickly determine the capacity and health of their physical and virtual resources.

While enterprise users want responsive deployment of business applications to meet changing business requirements, the enterprise is often unable to rapidly deploy or update virtual machines and storage on a large scale. Standard virtual machine provisioning and cloning methods, as commonly implemented in flash arrays, can be expensive, because full copies of virtual machines can require 50 GB or more of storage for each copy. At the scale of this solution, for example, full clones of 700 reference virtual machines at 50 GB each would consume roughly 35 TB before any workload data is written.
In a large-scale cloud data center, where shared storage may be cloning up to hundreds of virtual machines each hour while concurrently delivering I/O to active virtual machines, cloning can become a major bottleneck for data center performance and operational efficiency.

Most storage arrays are designed to be statically installed and run, yet virtualized application environments are naturally dynamic and variable. Change and growth in virtualized workloads force organizations to actively redistribute workloads across storage array resources for load balancing, to avoid running out of space or degrading performance. This ongoing load balancing is usually a manual, iterative task that is often costly and time-consuming. As a result, storage arrays that support large-scale virtualization environments require optimal, inherent data placement to ensure maximum utilization of both capacity and performance without any planning demands.

Deduplication

Storage arrays can accumulate duplicate data over time, which increases management and other costs. In particular, large-scale virtual machine environments create large amounts of duplicate data when virtual machines are deployed by cloning existing virtual machines, or when the same OS and applications are installed repeatedly. Deduplication eliminates duplicate data by replacing it with pointers to unique instances of the data. The deduplication process can be implemented after I/O has been de-staged to disk, or it can be done in real time (inline), which actively reduces the amount of redundant data written to the array.

Thin provisioning

Thin provisioning is a popular technique for improving storage utilization: storage capacity is consumed only when data is written, instead of when storage volumes are provisioned. Thin provisioning removes the need to overprovision storage up front to meet anticipated future capacity demands, and enables you to allocate storage on demand from an available storage pool.

Data protection

While storage arrays have traditionally supported several RAID data protection levels, they required storage administrators to choose between data protection and performance for specific workloads. The challenge for large-scale virtual environments is that the shared storage system stores data for hundreds or thousands of virtual machines with different workloads. Optimal data protection for virtualized environments requires arrays to support data protection schemes that combine the best attributes of existing RAID levels while avoiding their drawbacks. Because flash endurance is a special consideration in an all-flash array, such a scheme should also maximize the service life of the array's solid-state drives (SSDs) while complementing the high I/O performance of flash media.

Microsoft ODX support

XtremIO 4.0, in beta at the time of publication of this guide, supports Microsoft Offloaded Data Transfer (ODX) technology, which offloads intra-array data movement requests to the array itself. This frees up compute and network resources and reduces response times for data transfer requests, which can drastically reduce virtual machine provisioning and snapshot creation times. For additional information on ODX, refer to the Microsoft Windows Dev Center Library topic Offloaded data transfers.
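ODX requires no per-VM configuration, but administrators sometimes need to confirm that it is active on the Hyper-V hosts. The PowerShell sketch below is illustrative only (it is not part of the validated VSPEX procedure) and relies on the FilterSupportedFeaturesMode registry value documented by Microsoft, where 0 means ODX is enabled.

# Check whether ODX is enabled on a Windows Server 2012 R2 Hyper-V host.
# FilterSupportedFeaturesMode: 0 = ODX enabled (the default), 1 = disabled.
$fsKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
$prop  = Get-ItemProperty -Path $fsKey -Name FilterSupportedFeaturesMode -ErrorAction SilentlyContinue

if ($null -eq $prop -or $prop.FilterSupportedFeaturesMode -eq 0) {
    Write-Output 'ODX is enabled on this host.'
} else {
    # Re-enable ODX so copy offload can be used where the array supports it.
    Set-ItemProperty -Path $fsKey -Name FilterSupportedFeaturesMode -Value 0
    Write-Output 'ODX was disabled; it has now been re-enabled.'
}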
EMC ViPR integration

EMC ViPR® integrates with Microsoft System Center Virtual Machine Manager (SCVMM) and Orchestrator APIs to simplify storage management and reduce the need for multiple management tools for common tasks. With ViPR, storage provisioning and management can be performed within SCVMM, and common tasks can be automated within Orchestrator.

API support

RESTful API support exposes advanced functionality of the XtremIO 4.0 storage resources for customized workflows and for self-service portal development and integration, without heavy coding efforts. This API support gives orchestration architects and developers access to a wide range of features without having to develop cumbersome wrappers or one-off drivers.

Benefits of using XtremIO

To meet the multiple demands of a large-scale virtualized data center, you need a storage solution that provides superb performance and capacity scale-out to accommodate infrastructure growth, along with:
• Built-in data reduction features
• Thin provisioning for capacity efficiency and cost mitigation
• Flash-optimized data protection techniques
• Near-instantaneous virtual machine provisioning and cloning
• Automated load balancing
• Integration with key monitoring and orchestration tools
• Consistent, predictable, highly random I/O performance

The XtremIO all-flash array is built to unlock the full performance potential of flash storage and to deliver array-based, inline data services that make it an optimal storage solution for large-scale, agile, and dynamic virtual environments.

Chapter 3  Solution Technology Overview

This chapter presents the following topics:
• Overview
• VSPEX Proven Infrastructures
• Key components
• Virtualization layer
• Compute layer
• Network layer
• Storage layer
• EMC Data Protection
• Other technologies

Overview

This solution uses the XtremIO all-flash array and Microsoft Hyper-V to provide storage and server virtualization in a private cloud.
The solution has been designed and proven by EMC to provide virtualization, server, network, and storage resources that enable customers to deploy up to 700 RVMs and the associated shared storage. This guide also explains how to scale the solution infrastructure for larger environments or as the environment grows. The following sections describe the components in more detail.

VSPEX Proven Infrastructures

EMC has joined forces with providers of IT infrastructure to create complete virtualization solutions that accelerate deployment of the private cloud. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk. VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure, while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity that is characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 2, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. These infrastructures include virtualization, server, network, and storage layers. Partners can choose the virtualization, server, and network technologies that best fit a customer's environment, while XtremIO storage systems and technologies provide the storage layer.

Figure 2. VSPEX Proven Infrastructures

Key components

This section describes the following key components of this solution:
• Virtualization layer: Decouples the physical implementation of resources from the applications that use them, so that the application's view of the available resources is no longer directly tied to the hardware. This enables many key features of the private cloud concept. This solution uses Microsoft Hyper-V for the virtualization layer.
• Compute layer: Provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and the solution can be implemented using any server hardware that meets these requirements.
• Network layer: Connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution using any network hardware that meets these requirements.
• Storage layer: Critical for the implementation of server virtualization; with multiple hosts accessing shared data, many use cases can be implemented. The XtremIO all-flash array used in this solution provides high performance, enables rapid service and virtual machine provisioning, and supports a number of capacity efficiency and data services capabilities.
• Data protection: Solution components that provide protection when the data in the primary system is deleted, damaged, or otherwise unusable. For more information, see EMC Data Protection.
• Security layer: An optional solution component that provides customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system. This solution uses RSA SecurID® to provide secure user authentication.

For more details about the reference architecture components, see Solution architecture.

Virtualization layer

Overview

The virtualization layer decouples application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical system to change without affecting the hosted applications. In a server virtualization or private cloud use case, the virtualization layer enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications like physical computers.

Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems, transparently and with minimal performance impact.

Virtual Fibre Channel ports

Windows Server 2012 R2 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N_Port ID Virtualization (NPIV) process to register the virtual machine WWNs with the Hyper-V host's physical host bus adapter (HBA). This gives virtual machines direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO).

Prerequisites for virtual FC include:
• One or more installations of Windows Server 2012 R2 with the Hyper-V role
• One or more FC HBAs installed on the server, each with an HBA driver that supports virtual FC
• An NPIV-enabled SAN

Virtual machines using the virtual FC adapter must run Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the guest operating system. (A hedged configuration sketch follows the SCVMM introduction below.)

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM enables administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds.
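As promised above, here is a minimal PowerShell sketch of the virtual FC setup using the Hyper-V module. The virtual SAN name, WWN values, and VM name are hypothetical examples; the physical HBA and fabric must already be NPIV-enabled, and the guest should be powered off when the adapter is added.

# Create a virtual SAN bound to a physical FC HBA port on the host.
# The WWNN/WWPN below identify the host HBA and are example values only.
New-VMSan -Name 'VSAN-A' `
          -WorldWideNodeName 'C003FF0000FFFF00' `
          -WorldWidePortName 'C003FF5778E50002'

# Attach a virtual FC adapter to a powered-off guest and connect it to
# the virtual SAN; Hyper-V assigns the guest its own NPIV WWPNs.
Add-VMFibreChannelHba -VMName 'SQLVM01' -SanName 'VSAN-A'

# Review the WWPNs generated for the guest so they can be zoned and
# registered in an XtremIO initiator group.
Get-VMFibreChannelHba -VMName 'SQLVM01' |
    Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB

Both WWPN sets (A and B) should be zoned, because Hyper-V alternates between them during live migration.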
SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

High availability with Hyper-V Failover Clustering

The Windows Server 2012 R2 Failover Clustering feature provides high availability for Microsoft Hyper-V. Availability is affected by both planned and unplanned downtime, and Failover Clustering significantly increases the availability of virtual machines in both cases.

Configure Windows Server 2012 R2 Failover Clustering on the Hyper-V hosts to monitor virtual machine health and migrate virtual machines between cluster nodes. The advantages of this configuration are:
• Virtual machines can be migrated to a different cluster node if the node where they reside must be updated, changed, or rebooted.
• Other members of the Windows failover cluster can take ownership of the virtual machines if the node where they reside suffers a failure or significant degradation.
• Downtime due to virtual machine failures is minimized. The Windows failover cluster detects virtual machine failures and automatically takes steps to recover the failed virtual machine, which can be restarted on the same host server or migrated to a different host server.

Hyper-V Replica

Hyper-V Replica, introduced in Windows Server 2012, provides asynchronous virtual machine replication over the network from a Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from the downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network using HTTP or HTTPS. The amount of network bandwidth required is based on the transfer schedule and the data change rate.

If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site, and you can perform a planned failback to manually revert the virtual machines to the Hyper-V host at the primary site.

Cluster-Aware Updating

Cluster-Aware Updating, introduced in Windows Server 2012, provides a way to update cluster nodes with little or no disruption. During the update process, Cluster-Aware Updating transparently performs the following tasks:
1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes)
2. Installs the updates
3. Performs a restart if necessary
4. Brings the node back online (the migrated virtual machines are moved back to the original node)
5. Updates the next node in the cluster

The node managing the update process is called the update coordinator, which works in one of two modes:
• Self-updating: Runs on the cluster node being updated
• Remote-updating: Runs on a standalone Windows operating system and remotely manages the cluster update

Cluster-Aware Updating is integrated with Windows Server Update Services, and PowerShell enables automation of the Cluster-Aware Updating process.
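That automation can be as simple as a one-off updating run. The sketch below uses the ClusterAwareUpdating PowerShell module; the cluster name is an example, and the thresholds are illustrative values to validate against your own patching policy.

# Load the Cluster-Aware Updating cmdlets.
Import-Module ClusterAwareUpdating

# One-off updating run against an example cluster named 'HVCluster01'.
# CAU drains each node (live-migrating its VMs), installs updates,
# reboots if required, brings the node back, then moves to the next one.
Invoke-CauRun -ClusterName 'HVCluster01' `
              -MaxFailedNodes 0 `
              -MaxRetriesPerNode 3 `
              -RequireAllNodesOnline `
              -Force

# Alternatively, register the self-updating role so the cluster patches
# itself on a schedule (example: the third Sunday of each month).
# Add-CauClusterRole -ClusterName 'HVCluster01' -DaysOfWeek Sunday -WeeksOfMonth 3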
EMC Storage Integrator for Windows Suite

EMC Storage Integrator (ESI) for Windows Suite is a software package that gives storage administrators the essential components to provision business applications in less time, monitor storage health with an in-depth storage topology view, and automate storage management with rich scripting libraries. Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using the wizards in ESI. ESI supports the following functions:

•	Provisioning, formatting, and presenting drives to Windows servers

•	Provisioning new cluster disks and automatically adding them to the cluster

•	Provisioning SharePoint storage, sites, and databases in a single wizard

Compute layer

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX solutions have minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two servers or twenty, and still be considered the same VSPEX solution.

In the example shown in Figure 3, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might implement this with white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might select a higher-end server with 20 processor cores and 144 GB of RAM.

Figure 3.	Compute layer flexibility examples

The first customer needs four of the selected servers, while the other customer needs two. The sketch after the following best practices reproduces this arithmetic.

Note: To enable high availability for the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.

Use the following best practices in the compute layer:

•	Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

•	If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

•	Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single-unit failures.
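The server-count arithmetic behind Figure 3 can be captured in a few lines. The following PowerShell sketch uses the illustrative workload and server profiles from the example above, with one spare server added per the high-availability note:

# Minimal sketch of the Figure 3 sizing arithmetic: the server count is
# driven by whichever resource (cores or RAM) runs out first, plus one
# spare server for high availability.
function Get-VspexServerCount {
    param([int]$RequiredCores, [int]$RequiredRamGB,
          [int]$CoresPerServer, [int]$RamGBPerServer)
    $byCpu = [math]::Ceiling($RequiredCores / $CoresPerServer)
    $byRam = [math]::Ceiling($RequiredRamGB / $RamGBPerServer)
    # One additional server ensures operations continue if a server fails
    return ([math]::Max($byCpu, $byRam) + 1)
}

# 25 cores and 200 GB RAM on 16-core/64 GB servers: 4 servers + 1 spare
Get-VspexServerCount -RequiredCores 25 -RequiredRamGB 200 `
                     -CoresPerServer 16 -RamGBPerServer 64    # returns 5

# The same workload on 20-core/144 GB servers: 2 servers + 1 spare
Get-VspexServerCount -RequiredCores 25 -RequiredRamGB 200 `
                     -CoresPerServer 20 -RamGBPerServer 144   # returns 3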
Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be sufficiently flexible to meet your specific needs. Ensure that there are sufficient processor cores and enough RAM per core for the target environment.

Network layer

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 4 shows an example of this highly available network topology.

Figure 4.	Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

XtremIO is a block-only storage platform, and it provides network high availability and redundancy by using two ports per storage controller. If a link on a storage controller I/O port is lost, the link fails over to another port. All network traffic is distributed across the active links.
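On the host side, the redundant links for management and application traffic are commonly implemented with native Windows Server NIC Teaming. The following is a minimal sketch assuming the built-in teaming feature; the adapter names, VLAN IDs, and teaming mode are illustrative and should follow your switch vendor's guidance:

# Minimal sketch: team two 10 GbE adapters for link redundancy, then add
# VLAN-tagged team interfaces for management and Live Migration traffic.
# Adapter names and VLAN IDs are illustrative placeholders.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Tagged interfaces on top of the team, one per VLAN
Add-NetLbfoTeamNic -Team "ConvergedTeam" -VlanID 10   # management
Add-NetLbfoTeamNic -Team "ConvergedTeam" -VlanID 20   # Live Migration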
Storage layer

The storage layer is a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in a data center storage processing system. This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, enhances operational agility, and reduces total cost of ownership.

EMC XtremIO

The EMC XtremIO all-flash array is a clean-sheet design with a revolutionary architecture. It brings together all the requirements necessary to enable the agile data center: linear scale-out; inline, all-the-time data services; and rich data center services for the workloads. The basic hardware building block for these scale-out arrays is the EMC XtremIO X-Brick. Each X-Brick has two active-active controller nodes and a disk array enclosure packaged together with no single point of failure. The EMC XtremIO Starter X-Brick with 13 SSDs can be non-disruptively expanded to a full X-Brick with 25 SSDs. The scale-out cluster can support up to six X-Bricks.

The XtremIO platform is designed to optimize the use of flash storage media. Key attributes of this platform are:

•	High levels of I/O performance, particularly for the random I/O workloads that are typical in virtualized environments

•	Consistently low (sub-millisecond) latency

•	Inline data services that include thin provisioning, deduplication, data compression, and copy data management

•	A scale-out architecture that scales capacity and I/O performance in tandem while ensuring consistently low sub-millisecond latency

•	A full suite of enterprise array capabilities, such as N-way active controllers, high availability, strong data protection, and thin provisioning

•	Integration with EMC solutions for data center services, including business continuity, backup and data protection, and converged infrastructure deployments

Because the XtremIO array has a scale-out design, you can add performance and capacity in a building-block approach, with all building blocks forming a single clustered system. XtremIO storage includes the following components:

•	Host adapter ports: Provide host connectivity through the fabric into the array.

•	Storage controllers: The compute component of the storage array. Storage controllers handle all aspects of data moving into, out of, and between arrays.

•	Disk drives: SSDs that contain the host/application data, and their enclosures.

•	InfiniBand switches: A switched, high-throughput, low-latency, scalable network link used in multi-X-Brick configurations, providing quality of service and failover capability. It is used for intra-cluster communication and high-speed data movement.

EMC XtremIO Operating System

The XtremIO storage cluster is managed by the EMC XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without any administrator intervention. XIOS:

•	Ensures that data is evenly distributed across all SSD and controller resources, providing the highest possible performance and endurance that stands up to demanding workloads for the entire life of the array.

•	Eliminates the complex configuration and performance-optimization steps required for traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, and so on.

•	Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with larger cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.

Standards-based enterprise storage system

The XtremIO system interfaces with Hyper-V hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native Microsoft Multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure, and hot-swappable components.
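This solution uses EMC PowerPath for path management, but where the native Microsoft MPIO stack is used instead, host-side setup is brief. A minimal sketch follows; the vendor and product strings are the SCSI inquiry values commonly reported for XtremIO volumes, so treat them as assumptions and verify them against the array host-connectivity documentation before use:

# Minimal sketch: enable native MPIO on a Hyper-V host and claim XtremIO
# LUNs for multipathing. Vendor/product IDs shown are assumptions; confirm
# them with your XtremIO documentation.
Install-WindowsFeature -Name Multipath-IO

# Register the array's device identity with the Microsoft DSM
New-MSDSMSupportedHW -VendorId "XtremIO" -ProductId "XtremApp"

# Round robin is a common default policy for active/active arrays
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR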
Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses incoming data in real time, enabling a massive number of virtual machines and a large amount of application data to reside in a small and economical amount of flash capacity. Because this functionality is inline, there is no post-processing of data, which helps to extend the endurance of the SSDs. Furthermore, data reduction on the XtremIO array does not adversely affect I/O operations per second (IOPS) or latency; instead, it enhances the performance of the virtualized environment.

Scale-out architecture

Using a Starter X-Brick, Microsoft Hyper-V deployments can start small and grow to nearly any required scale by upgrading the Starter X-Brick to a full X-Brick, and then configuring a larger XtremIO cluster if required. The system expands capacity and performance linearly as building blocks are added, making virtualized environments simple to size and manage as demands grow over time.

Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read-and-write I/O, which is typical in virtual environments. It does so with consistent, predictable sub-millisecond latency.

Fast provisioning

XtremIO arrays deliver writable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can rapidly clone virtual machine environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps that can be completed in minutes, with no tuning or ongoing administration needed to achieve and maintain high performance levels. The XtremIO system can be deployment-ready in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO securely encrypts all data stored on the all-flash array, delivering protection for regulated use cases in sensitive industries, such as healthcare, finance, and government.

Data center economics

XtremIO provides breakthrough total cost of ownership in the virtualized workload environment through its exceptional performance, the capacity savings of its unique data reduction capabilities, linear and predictable scaling with its scale-out architecture, and ease of use.

EMC Data Protection

Overview

EMC Data Protection protects data by backing up data files or volumes on a defined schedule, and then restoring data from backup for recovery after a disaster. EMC Data Protection is a smart approach to backup: it consists of optimally integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC protection storage, deep data source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that enables you to scale resources while lowering cost and minimizing complexity.

EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, NAS servers, and desktops/laptops.
EMC Data Domain deduplication storage systems

EMC Data Domain® deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads.

EMC RecoverPoint

EMC RecoverPoint® is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance and combines continuous data protection technology with bandwidth-efficient, no-data-loss replication technology. This technology enables the appliances to protect data locally (continuous data protection, or CDP), remotely (continuous remote replication, or CRR), or both (continuous local and remote replication, or CLR), offering the following advantages:

•	EMC RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and transfers the data over FC.

•	EMC RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site, using techniques that preserve write order.

•	EMC RecoverPoint CLR replicates to both a local and a remote site simultaneously.

EMC RecoverPoint uses lightweight splitting technology to mirror application writes to the EMC RecoverPoint cluster, and supports the following write splitter types:

•	Array-based

•	Intelligent fabric-based

•	Host-based

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

EMC PowerPath

EMC PowerPath® is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage resources deployed in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure:

•	Standardized data management across physical and virtual environments

•	Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments

•	Improved service-level agreements, by eliminating application impact from I/O failures

Note: In this solution, we used PowerPath 6.0 for the management of I/O traffic.

EMC ViPR Controller

EMC ViPR Controller is storage automation software that centralizes, automates, and transforms storage into a simple and extensible platform. It abstracts and pools resources into a single storage platform to deliver automated, policy-driven storage services on demand via a self-service catalog. With vendor-neutral centralized storage management, your team can reduce costs, provide choice, and deliver a path to the cloud.

Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today's enterprise IT environment. This is particularly true in regulated sectors such as healthcare, finance, and government. VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI). VSPEX solutions can be engineered with a PKI designed to meet the security criteria of your organization.
The solution can be implemented via a modular process in which layers of security are added as needed. The general process involves first implementing a PKI by replacing generic self-signed certificates with trusted certificates from a third-party certificate authority. Services that support PKI are then enabled using the trusted certificates to ensure a high degree of authentication and encryption, where supported. Depending on the scope of PKI services needed, you may need to implement a PKI dedicated to those needs. Many third-party tools offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment.

Chapter 4	Solution Architecture Overview

This chapter presents the following topics:

Overview
Solution architecture
Server configuration guidelines
Network configuration guidelines
Storage configuration guidelines
High-availability and failover
Backup and recovery configuration guidelines

Overview

This chapter is a comprehensive guide to the architecture and configuration of this solution. Server capacity is presented in generic terms for the required minimum CPU, memory, and network resources. Your server and networking hardware must meet the minimum requirements stated in this chapter. EMC has validated the storage architecture to ensure that it delivers a high-performance, highly available architecture.

Each Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

This section details the VSPEX Private Cloud solution for Microsoft Hyper-V with XtremIO, configured for up to 700 reference virtual machines (RVMs).

Note: VSPEX uses a reference workload to describe and define a virtual machine. Therefore, one physical or virtual machine in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This process is described in Reference workload application.
Logical architecture

Figure 5 shows the validated XtremIO infrastructure, where an 8 Gb/s FC or 10 Gb/s iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.

Figure 5.	Logical architecture for the solution

Key components

This solution architecture includes the following key components:

•	Microsoft Hyper-V: Provides a common virtualization layer to host the server environment. Hyper-V provides a highly available infrastructure through features such as:

	–	Live Migration: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption

	–	Live Storage Migration: Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption

	–	Failover Clustering High Availability: Detects and provides rapid recovery for a failed virtual machine in a cluster

	–	Dynamic Optimization: Provides load balancing of computing capacity in a cluster, with the support of SCVMM

•	Microsoft System Center Virtual Machine Manager: SCVMM is technically not required for this VSPEX solution, because the Hyper-V Management Tools in Windows Server 2012 R2 can be used to manage the Hyper-V environment. However, considering the large number of virtual machines the solution is capable of hosting, EMC recommends using SCVMM.

•	Microsoft SQL Server: Stores configuration and monitoring details for SCVMM, which requires a database service. This solution uses a Microsoft SQL Server 2012 database.

•	DNS server: Provides name resolution for the various solution components. This solution uses the Microsoft DNS Service running on Windows Server 2012 R2.

•	Active Directory server: Provides functionality to the various solution components that require the Active Directory service. The Active Directory service runs on a Windows Server 2012 R2 system.

•	Shared infrastructure: Adds DNS and authentication/authorization services with existing infrastructure, or sets them up as part of the new virtual infrastructure.

•	IP network: Carries user and management traffic. A standard Ethernet network carries all network traffic with redundant cabling and switching.

Storage network

The storage network is isolated to provide hosts with access to the array, with the following options:

•	Fibre Channel: Performs high-speed serial data transfer with a set of standard protocols. Fibre Channel (FC) provides a standard data transport frame among servers and shared storage devices.

•	10 Gb Ethernet (iSCSI): Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.

XtremIO all-flash array

The XtremIO all-flash array includes the following components:

•	X-Brick: The fundamental scaling unit of the array; a physical chassis that contains two active/active storage controllers and a disk array enclosure (DAE) of eMLC SSDs. When the XtremIO cluster scales, the array clusters together multiple X-Bricks with an InfiniBand back-end switch.
•	Storage controller: A physical computer (1U in size) in the cluster that acts as a storage controller, serving block data and supporting the FC and iSCSI protocols. Storage controllers can access all SSDs in the same X-Brick.

•	Processor D: One of the two CPU sockets in each storage controller. Processor D is responsible for disk access.

•	Processor RC: The other CPU socket, responsible for the router (hash writes and lookup) and controller (metadata) functions.

•	Battery backup unit: Provides enough power to each storage controller to ensure that any data in flight is destaged to disk in the event of a power failure. The first X-Brick has two battery backup units for redundancy. As clusters grow with additional X-Bricks, only a single battery backup unit (1U in size) is necessary for each additional X-Brick.

•	DAE: Houses the flash drives that the array uses; 2U in size.

•	InfiniBand switches: Connect multiple X-Bricks together; 1U in size. Two separate switches are needed to ensure that the fabric tying the controllers together is highly available.

Hardware resources

Table 1 lists the hardware used in this solution.

Table 1.	Solution hardware

Hyper-V servers
	CPU:
	•	1 vCPU per virtual machine
	•	4 vCPUs per physical core
	Note: For Intel Ivy Bridge or later processors, use six vCPUs per physical core.
	For 700 virtual machines:
	•	700 vCPUs
	•	Minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)
	Memory:
	•	2 GB RAM per virtual machine
	•	2 GB RAM reservation per Hyper-V host
	For 700 virtual machines:
	•	Minimum of 1,400 GB RAM
	•	Add 2 GB for each physical server
	Network:
	•	2 x 10 GbE network interface cards (NICs) per server
	•	2 HBAs per server, or 2 x 10 GbE NICs per server, for data traffic
	Note: You must add at least one additional server to the infrastructure beyond the minimum requirements to implement the Microsoft Hyper-V high-availability functionality and to meet the listed minimums.

Network infrastructure
	Minimum switching capacity:
	•	2 physical Ethernet switches
	•	2 physical SAN switches, if implementing FC
	•	2 x 10 GbE ports per Hyper-V server for management, user/application traffic, and Live Migration
	•	2 ports per Hyper-V server for the storage network (FC or iSCSI)
	•	2 ports per storage controller for storage data (FC or iSCSI)

EMC XtremIO all-flash array
	One X-Brick with 25 x 400 GB SSDs

Shared infrastructure
	In most cases, the customer environment already has infrastructure services configured, such as Active Directory, DNS, and so on. The setup of these services is beyond the scope of this guide. If implemented without existing infrastructure, the minimum requirements are:
	•	2 physical servers
	•	16 GB RAM per server
	•	4 processor cores per server
	•	2 x 1 GbE ports per server
	Note: You can migrate these services into the solution post-deployment. However, the services must exist before the solution is deployed.

Note: EMC recommends using a 10 GbE network, or an equivalent 1 GbE network infrastructure, as long as the underlying requirements for bandwidth and redundancy are fulfilled.
Software resources

Table 2 lists the software used in this solution.

Table 2.	Solution software

Microsoft Windows Server with Hyper-V
	Microsoft Windows Server version 2012 R2 Datacenter Edition
	Note: Datacenter Edition is necessary to support the number of operating system environments (servers and virtual machines) used in this solution.

Microsoft System Center Virtual Machine Manager
	Version 2012 R2

Microsoft SQL Server
	Version 2012 Standard Edition
	Note: Any version of Microsoft SQL Server supported by SCVMM is acceptable.

EMC PowerPath
	Use the latest version.

EMC XtremIO (for Hyper-V datastores)
	EMC XtremIO Operating System Release 3.0

EMC Data Protection
	EMC Avamar: Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.
	EMC Data Domain Operating System: Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Virtual machines (used for validation, but not required for deployment)
	Base operating system: Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute layer of this VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and the amount of memory purchased.

Intel Ivy Bridge updates

Testing on the Intel Ivy Bridge processor series has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, EMC recommends increasing the vCPU/physical CPU (pCPU) ratio from 4:1 to 6:1. This reduces the number of server cores required to host the RVMs.

Current VSPEX sizing guidelines require a maximum vCPU-to-pCPU core ratio of 4:1, with a maximum 6:1 ratio for Ivy Bridge or later processors. This ratio is based on an average sampling of the CPU technologies available at the time of testing. As CPU technologies advance, original equipment manufacturer (OEM) server vendors that are VSPEX partners may suggest different (normally higher) ratios. Follow the updated guidance supplied by the OEM server vendor.

Table 3 lists the hardware resources used for the compute layer.

Table 3.	Hardware resources for the compute layer

Microsoft Hyper-V servers
	CPU:
	•	1 vCPU per virtual machine
	•	4 vCPUs per physical core
	Note: For Intel Ivy Bridge or later processors, use six vCPUs per physical core.
	For 700 virtual machines:
	•	700 vCPUs
	•	Minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)
	Memory:
	•	2 GB RAM per virtual machine
	•	2 GB RAM reservation per Hyper-V host
	For 700 virtual machines:
	•	Minimum of 1,400 GB RAM
	•	Add 2 GB for each physical server
	Network (block):
	•	2 x 10 GbE NICs per server
	•	2 HBAs per server, or 2 x 10 GbE NICs per server for iSCSI connections

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement the Microsoft Hyper-V high-availability functionality and to meet the listed minimums.

Note: EMC recommends using a 10 GbE network, or an equivalent 1 GbE network infrastructure, as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Hyper-V memory virtualization

Microsoft Hyper-V has several advanced features that help maximize performance and overall resource use. The most important features relate to memory management. This section describes some of these features, and what to consider when using them in a VSPEX environment.

Figure 6 shows how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory and Smart Paging can reduce total memory usage and increase consolidation ratios in the hypervisor.

Figure 6.	Hypervisor memory consumption

Understanding the technologies in this section makes it easier to understand this basic concept.

Dynamic Memory

Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012 R2, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines.

Smart Paging

Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement: it swaps less-used memory out to disk storage and swaps it back in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multinode computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 R2 employs processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host. Windows Server 2012 R2 extends this functionality to the virtual machines, which provides improved performance in symmetrical multiprocessor (SMP) environments.
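These memory features are configured per virtual machine with the Hyper-V PowerShell module. The following is a minimal sketch with illustrative names and sizes; note that the validated solution itself assigns fixed memory, as described in the guidelines that follow:

# Minimal sketch: enable Dynamic Memory on a virtual machine, with
# illustrative minimum, startup, and maximum values.
Set-VMMemory -VMName "App-VM01" -DynamicMemoryEnabled $true `
             -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB `
             -Priority 80 -Buffer 20

# The validated solution instead assigns 2 GB of fixed (static) memory:
Set-VMMemory -VMName "App-VM01" -DynamicMemoryEnabled $false `
             -StartupBytes 2GB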
Memory configuration guidelines

The memory configuration guidelines take into account the Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead
Virtualized memory has some associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. In this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Virtual machine memory
In this solution, configure each virtual machine with 2 GB of memory in fixed mode.

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider VLANs and FC/iSCSI connections on XtremIO storage. For detailed network resource requirements, refer to Table 1.

VLANs

Isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and the management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons; in many cases, however, logical isolation with VLANs is sufficient. As a best practice, EMC recommends a minimum of three or four VLANs:

•	Customer data

•	Storage for iSCSI, if implemented

•	Live Migration or Storage Migration

•	Management

Figure 7 shows the VLANs and the network connectivity requirements for the XtremIO array.

Figure 7.	Required networks for XtremIO storage

The customer data network is for system users (or clients) to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (for iSCSI)

EMC recommends setting the maximum transmission unit (MTU) to 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor's guidelines to enable jumbo frames on the switch ports used for storage and host traffic.
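On the host side, the MTU is typically set per adapter. The following is a minimal sketch; the adapter names are illustrative, and the *JumboPacket keyword and 9014-byte value are typical for common 10 GbE NICs but vary by vendor:

# Minimal sketch: enable jumbo frames on the iSCSI-facing host adapters.
# Adapter names are illustrative; the registry keyword and value are
# typical but vendor-specific, so confirm them with your NIC documentation.
Set-NetAdapterAdvancedProperty -Name "iSCSI-A","iSCSI-B" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify the setting
Get-NetAdapterAdvancedProperty -Name "iSCSI-*" `
    -RegistryKeyword "*JumboPacket"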
Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer to provide high availability and the expected level of performance. Microsoft Hyper-V allows more than one storage option for hosting virtual machines. The tested solution uses different block protocols (FC/iSCSI), and the storage layout described in this section adheres to all current best practices. If required, you can modify this layout based on your system usage and load requirements.

XtremIO X-Brick scalability

XtremIO storage clusters support a fully distributed, scale-out design that enables linear increases in both capacity and performance in order to provide infrastructure agility. XtremIO uses a building-block approach in which the array is scaled by adding X-Bricks.

With clusters of two or more X-Bricks, XtremIO uses a redundant 40 Gb/s quad data rate (QDR) InfiniBand network for back-end connectivity among the storage controllers, ensuring a highly available, ultra-low-latency network. Host access is provided through N-way active controllers for linear scaling of performance and capacity, simplifying support of growing virtual environments. As a result, as capacity in the array grows, performance also grows as more storage controllers are added.

As shown in Figure 8, the single X-Brick is the basic building block of an XtremIO array.

Figure 8.	Single X-Brick XtremIO storage

Each X-Brick comprises:

•	One 2U DAE, containing:

	–	25 eMLC SSDs (10 TB X-Brick) or 13 eMLC SSDs (5 TB Starter X-Brick)

	–	Two redundant power supply units

	–	Two redundant SAS interconnect modules

•	One battery backup unit

•	Two 1U storage controllers (redundant storage processors). Each storage controller includes:

	–	Two redundant power supply units

	–	Two 8 Gb/s FC ports

	–	Two 10 GbE iSCSI ports

	–	Two 40 Gb/s InfiniBand ports

	–	One 1 Gb/s management/IPMI port

Note: For details on X-Brick racking and cabinet requirements, refer to the EMC XtremIO Storage Array Site Preparation Guide.

Figure 9 shows the different cluster configurations as you scale out. You can start from a single X-Brick and then, as you scale, add a second X-Brick, then a third, and so on. Performance scales linearly as additional X-Bricks are added.

Figure 9.	Cluster configuration as single and multiple X-Brick clusters

Note: A Starter X-Brick is physically similar to a single X-Brick cluster, except for the number of SSDs in the DAE (13 SSDs in a Starter X-Brick instead of 25 SSDs in a standard single X-Brick).

Hyper-V storage virtualization

Windows Server 2012 R2 Hyper-V and Failover Clustering use CSVs and the VHDX format to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 10, the storage array presents block-based LUNs (as CSVs) to the Windows hosts that run the virtual machines.

Figure 10.	Hyper-V virtual disk types

CSV
A CSV is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks
Windows Server 2012 R2 also supports pass-through disks, which enable a virtual machine to access a physical disk mapped to a host that does not have a volume configured on it.

VHDX
Hyper-V in Windows Server 2012 R2 contains an update to the VHD format, VHDX, which has much greater capacity and built-in resiliency.
The main features of the VHDX format are:

•	Support for virtual hard disk storage capacity of up to 64 TB

•	Additional protection against data corruption during power failures, by logging updates to the VHDX metadata structures

•	Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also has the following features:

•	A larger block size for dynamic and differential disks, which enables the disks to better meet the needs of the workload

•	A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors

•	The ability to store custom file metadata that the user might want to record, such as the operating system version or applied updates

•	Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)
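As a sketch, creating a VHDX that uses these capabilities takes one cmdlet per step; the CSV path, size, and virtual machine name are illustrative placeholders:

# Minimal sketch: create a dynamically expanding VHDX with a 4 KB logical
# sector size on a Cluster Shared Volume, then attach it to a VM.
New-VHD -Path "C:\ClusterStorage\Volume1\App-VM01\data01.vhdx" `
        -SizeBytes 100GB -Dynamic -LogicalSectorSizeBytes 4096

# Attach the new disk to the virtual machine
Add-VMHardDiskDrive -VMName "App-VM01" `
        -Path "C:\ClusterStorage\Volume1\App-VM01\data01.vhdx"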
As a result, its logical capacity presented to users is greater than the physical capacity available in the system. When managing the system, monitor the current physical usage independently of the logical allocation so that out-of-space conditions can be avoided. EMC recommends keeping the physical allocation of the unit below 90 percent as a best practice. High-availability and failover Overview This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When you implement the solution by following the instructions in this guide, business operations survive with little or no impact from single-unit failures. Virtualization layer Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 13 shows the hypervisor layer responding to a failure in the compute layer. Figure 13. High availability at the virtualization layer By implementing high availability at the virtualization layer, even in a hardware failure, the infrastructure attempts to keep as many services running as possible. Compute layer 48 While the choice of servers to implement in the compute layer is flexible, we recommend that you use enterprise-class servers designed for the data center. This type of server has increased component redundancy, for example, redundant power EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 4: Solution Architecture Overview supplies, as shown in Figure 14. Connect these servers to separate power distribution units (PDUs) following your server vendor’s best practices. Figure 14. Redundant power supplies To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment, even with a server failure, as demonstrated in Figure 13. Network layer The XtremIO advanced networking features provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 15. Spread these connections across multiple Ethernet switches to guard against component failure in the network. Figure 15. Network layer high availability EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 49 Chapter 4: Solution Architecture Overview Storage layer The XtremIO families are designed for five-9s (99.999%) availability by using redundant components throughout the array, as shown in Figure 16. All of the array components are capable of continued operation in case of hardware failure. XtremIO Data Protection (XDP) delivers the superior protection of RAID 6, while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, ensuring against data loss due to drive failures. Figure 16. XtremIO high availability EMC storage arrays are designed to be highly available by default. Use the installation guides to ensure that there are no single unit failures that result in data loss or unavailability. XtremIO Data Protection Every other flash array in the market uses standard disk-based RAID algorithms, which do not perform, waste a lot of expensive flash capacity, and hurt the lifespan of flash. 
XtremIO developed a data protection scheme, XtremIO Data Protection (XDP), that uses both the random access nature of flash and the unique XtremIO dual-stage metadata engine. The result is flash-native data protection that delivers much lower capacity overhead, superior data protection, and much better flash endurance and performance than any RAID algorithm. XDP delivers superior RAID 6 performance, while exceeding RAID 1 performance and RAID 5 capacity utilization. More importantly, XDP is optimized for long-term enterprise operating conditions, where overwriting existing data becomes dominant in the array. Unlike other flash arrays, XDP enables XtremIO to maintain its performance until completely full, giving you the most economical use of flash. Backup and recovery configuration guidelines For details regarding backup and recovery configuration for this VSPEX Private Cloud solution, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide. 50 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 5: Environment Sizing Chapter 5 Environment Sizing This chapter presents the following topics: Overview ..................................................................................................................52 Reference workload..................................................................................................52 Scaling out ...............................................................................................................53 Reference workload application ...............................................................................53 Quick assessment ....................................................................................................55 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 51 Chapter 5: Environment Sizing Overview The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. The sections include instructions on how to correlate those reference workloads to customer workloads, and descriptions of how that may change the end delivery from the server and network perspective. Modify the storage definition by adding drives for greater capacity and performance and by adding X-Bricks to improve the cluster performance. The cluster layouts provide support for the appropriate number of virtual machines at the defined performance level. Reference workload Overview When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system, and by improving resource utilization of the underlying hardware. Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. Each virtual machine has its own unique requirements. In any discussion about virtual infrastructures, you need to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics. To simplify this discussion, this section presents a representative customer reference Defining the reference workload workload. By comparing the actual customer usage to this reference workload, you can determine how to size the solution. 
VSPEX Private Cloud solutions define an RVM workload, which represents a common point of comparison. Since XtremIO has an in-line deduplication feature, it is critical to determine the unique data percentage, as this parameter will impact XtremIO physical capacity usage. In the validated solution, we set the unique data ratio to 15 percent. The parameters are described in Table 5. Table 5. VSPEX Private Cloud RVM workload Parameter Value Virtual machine OS Windows Server 2012 R2 vCPUs 1 vCPUs per physical core (maximum) 41 Memory per virtual machine 2 GB IOPS per virtual machine 25 1 Based on testing with Intel Sandy Bridge processors. Newer processors can support six vCPU/core or greater. Follow the recommendations of your VSPEX server vendor. 52 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 5: Environment Sizing Parameter Value I/O size 8 KB I/O pattern Fully random; skew = 0.5 I/O read percentage 67% I/O write percentage 33% Virtual machine storage capacity 100 GB Unique data 15% This specification for a virtual machine represents a single common point of reference by which to measure other virtual machines. Scaling out XtremIO is designed to scale from a Starter X-Brick or single X-Brick to a cluster of multiple X-Bricks (up to six X-Bricks based on the current code release). Unlike most traditional storage systems, as the number of X-Bricks grows, so do capacity, throughputs, and IOPS. The scalability of performance is linear with the growth of the deployment. Whenever additional storage and compute resources (such as servers and drives) are needed, you can add them modularly. Storage and compute resources grow together so that the balance between them is maintained. Reference workload application Overview The solution creates storage resources that are sufficient to host a target number of RVMs with the characteristics shown in Table 5. Virtual machines may not exactly match the specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of RVMs together, and assume these virtual machines are in use in the pool. Continue to provision virtual machines from the pool until no resources remain. Example 1: Custom-built application A small custom-built application server must move into a virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges is between four IOPS at idle time to a peak of 15 IOPS when busy. The entire application consumes about 30 GB on direct-attached storage (DAS). Based on these numbers, the application needs the following resources:  CPU of one RVM  Memory of two RVMs  Storage of one RVM  I/Os of one RVM EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 53 Chapter 5: Environment Sizing In this example, a corresponding virtual machine uses the resources for two of the RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 698 RVMs remain. Example 2: Pointof-sale system The database server for a customer’s point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. 
It uses 200 GB of storage and generates 200 IOPS during an average busy cycle. The requirements to virtualize this application are:  CPUs of four RVMs  Memory of eight RVMs  Storage of two RVMs  I/Os of eight RVMs In this case, the corresponding virtual machine uses the resources of eight RVMS. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 692 RVMs remain. Example 3: Web server The customer’s web server must move into a virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The requirements to virtualize this application are:  CPUs of two RVMs  Memory of four RVMs  Storage of one RVM  I/Os of two RVMs In this case, the corresponding virtual machine uses the resources of four RVMs. If implemented on a single 10TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 696 RVMs remain. Example 4: Decision-support database The database server for a customer’s decision-support system must move into a virtual infrastructure. It is currently running on a physical system with ten CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The requirements to virtualize this application are: 54  CPUs of 10 RVMs  Memory of 32 RVMs  Storage of 52 RVMs  I/Os of 28 RVMs EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 5: Environment Sizing In this case, the corresponding virtual machine uses the resources of 52 RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 648 RVMs remain. Summary of examples These examples demonstrate the flexibility of the resource pool model. In all four examples, the workloads reduce the amount of available resources in the pool. With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point-of-sale system, two web servers, and ten decision-support databases. Using the same strategy, calculate the number of equivalent RVMs to get a total of 538 RVMs. All these RVMs can be implemented on the same virtual infrastructure with an initial capacity of 700 RVMs that is supported with a single 10 TB X-Brick. The resources for 162 RVMs remain in the resource pool, as shown in Figure 17. Figure 17. Resource pool flexibility In this case, you must examine the change in resource balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples. In more advanced cases, tradeoffs might be necessary between memory and I/O or other relationships in which increasing the amount of one resource, decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this guide. Quick assessment Overview Performing a quick assessment of the customer's environment helps you determine the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment. First, summarize the applications planned for migration into the VSPEX private cloud. 
Quick assessment
Overview
Performing a quick assessment of the customer's environment helps you determine the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of vCPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of RVMs required from the resource pool. The Reference workload section provides examples of this process. Complete the worksheet for each application listed in Table 6. Each row requires inputs on the following resources: CPU, memory, IOPS, and capacity.

Table 6. Blank worksheet row
Application: Example application
 Resource requirements: CPU (virtual CPUs) __, Memory (GB) __, IOPS __, Capacity (GB) __; Equivalent RVMs: NA
 Equivalent reference virtual machines: CPU __, Memory __, IOPS __, Capacity __; Equivalent RVMs: __

CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as Perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If all CPUs show similar utilization, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements
Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory with a performance-monitoring tool, such as Perfmon, to determine memory efficiency.

Storage performance requirements
Several components become important when discussing the I/O performance of a system:
 The number of requests coming in, or IOPS
 The size of each request, or I/O size (for example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data)
 The average I/O response time, or I/O latency

IOPS
The RVM calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Perfmon, which provides several useful counters. The most common are:
 Logical disk or disk transfers/sec
 Logical disk or disk reads/sec
 Logical disk or disk writes/sec

The RVM assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.
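One way to capture these counters is with PowerShell's Get-Counter cmdlet. In the sketch below, the volume instance (E:) and the hour-long, 15-second sampling window are placeholders; adjust them to the application volume and its busy period.

# Sample the logical-disk counters for the application volume.
$counters = '\LogicalDisk(E:)\Disk Transfers/sec',
            '\LogicalDisk(E:)\Disk Reads/sec',
            '\LogicalDisk(E:)\Disk Writes/sec'
$samples = Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240

# Report the 95th percentile of each counter, per the planning guidance above.
foreach ($group in $samples.CounterSamples | Group-Object Path) {
    $sorted = @($group.Group.CookedValue | Sort-Object)
    $p95 = $sorted[[int][math]::Ceiling($sorted.Count * 0.95) - 1]
    '{0}: 95th percentile = {1:N0}' -f $group.Name, $p95
}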
I/O size
The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The RVM assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of 2: 4 KB, 8 KB, 16 KB, 32 KB, and so on. However, the performance counter reports a simple average, so it is common to see values such as 11 KB or 15 KB instead.

If the average customer I/O size is 8 KB or less, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the larger I/Os. A safe estimate is to divide the I/O size by 8 KB and use the quotient as the factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application generates 100 IOPS at 32 KB, plan for 400 IOPS, because the RVM assumes 8 KB I/O sizes.

I/O latency
You can use the average I/O response time, or I/O latency, to measure how quickly the storage system processes I/O requests. VSPEX solutions must meet a target average I/O latency of 20 ms; the XtremIO array easily met this target with average sub-millisecond response times. The recommendations in this guide enable the system to continue meeting the 20 ms target; even so, monitor the system and reevaluate the resource pool utilization when needed.

To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows Perfmon. If the I/O latency is continuously over the target, reevaluate the virtual machines in the environment to ensure that they do not use more resources than intended.

Unique data
XtremIO automatically and globally deduplicates data as it enters the system. Deduplication is performed in real time, not as a post-processing operation, which makes XtremIO an ideal capacity-saving storage array. The consumed capacity is based on the deduplication ratio of the dataset. Virtualization platforms typically contain a high proportion of duplicate data; for example, the use of common OS builds and versions for virtual machines results in a relatively low percentage of truly unique data. The scaling numbers for this solution are based on a data uniqueness value of 15 percent. This translates into a deduplication ratio of approximately 7:1, which was validated by monitoring the XtremIO deduplication and compression metrics during testing.

If your datasets have a higher percentage of unique data, the capacity consumed on the XtremIO array increases and the storage resources available for RVMs decrease accordingly. This may lower the number of RVMs the configuration can hold unless additional capacity is added.

XtremIO offers tools to assess how well the data in your present environment will deduplicate. Use the tool to determine a likely deduplication ratio, and compare it to the ratio used in this testing to assess the impact on available capacity and on the number of RVMs the configuration can support. For information about the XtremIO Data Reduction Estimator tool, read the "Everything Oracle at EMC" blog post EMC XtremIO Data Reduction Estimator.

Storage capacity requirements
Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
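A short sketch of this arithmetic, combining the growth allowance with the unique-data conversion from the previous section (the 40 GB figure mirrors the example above):

# 40 GB in use, 20% annual growth, 15% of the data assumed unique.
$provisionedGB = 40 * (1 + 0.20)          # 48 GB to provision
$physicalGB    = $provisionedGB * 0.15    # ~7.2 GB consumed on the array
"Provision $provisionedGB GB; expect roughly $physicalGB GB of physical use"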
Determining equivalent reference virtual machines
With all of the resources defined, determine an appropriate value for the equivalent RVMs line by using the relationships in Table 7. Round all values up to the closest whole number.

Table 7. Reference virtual machine resources
 CPU: 1 vCPU per RVM; equivalent RVMs = resource requirements
 Memory: 2 GB per RVM; equivalent RVMs = (resource requirements) / 2
 IOPS: 25 IOPS per RVM; equivalent RVMs = (resource requirements) / 25
 Capacity: 100 GB per RVM; equivalent RVMs = (resource requirements) × 0.15 / 100

For example, the point-of-sale system database used in Example 2: Point-of-sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 30 GB of physical capacity (at 15 percent unique data, 200 GB × 0.15 = 30 GB of physical consumption). This translates to four RVMs of CPU, eight RVMs of memory, eight RVMs of IOPS, and one RVM of capacity. Table 8 shows how that fits into the worksheet row.

Table 8. Sample worksheet row
Application: Sample application
 Resource requirements: CPU 4, Memory 16 GB, IOPS 200, Capacity 30 GB; Equivalent RVMs: N/A
 Equivalent reference virtual machines: CPU 4, Memory 8, IOPS 8, Capacity 1; Equivalent RVMs: 8

Use the highest value in the row to fill in the Equivalent RVMs column. As shown in Figure 18, the sample requires eight RVMs.

Figure 18. Required resources from the RVM pool
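As a worked illustration of Table 7, the PowerShell sketch below converts an application's measured requirements into equivalent RVMs. The function name is ours, the 0.15 unique-data factor matches this solution's validated assumption, and the sample input reproduces the point-of-sale figures above.

# Table 7 relationships: 1 vCPU, 2 GB memory, 25 IOPS, and 100 GB per RVM,
# with capacity scaled by the 15% unique-data assumption.
function Get-EquivalentRvm {
    param(
        [int]$vCpus,
        [int]$MemoryGB,
        [int]$Iops,
        [int]$CapacityGB   # logical (provisioned) capacity, before deduplication
    )
    $rvm = @{
        CPU      = $vCpus                                      # 1 vCPU per RVM
        Memory   = [math]::Ceiling($MemoryGB / 2)              # 2 GB per RVM
        IOPS     = [math]::Ceiling($Iops / 25)                 # 25 IOPS per RVM
        Capacity = [math]::Ceiling($CapacityGB * 0.15 / 100)   # 15% unique, 100 GB per RVM
    }
    # The application consumes the largest of the four equivalents.
    $rvm.Equivalent = ($rvm.CPU, $rvm.Memory, $rvm.IOPS, $rvm.Capacity |
        Measure-Object -Maximum).Maximum
    $rvm
}

# Point-of-sale example: CPU 4, Memory 8, IOPS 8, Capacity 1 -> 8 equivalent RVMs.
Get-EquivalentRvm -vCpus 4 -MemoryGB 16 -Iops 200 -CapacityGB 200

If the application's average I/O size is significantly larger than 8 KB, scale the IOPS input by the factor described in the previous section before calling the function.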
Implementation example – Stage 1
A customer wants to build a virtual infrastructure to support one custom-built application, one point-of-sale system, and one web server. The customer computes the sum of the Equivalent RVMs column, as shown in Table 9, to calculate the total number of RVMs required. The table shows the result of the calculation, rounded up to the nearest whole number.

Table 9. Example applications – Stage 1
Example application 1: Custom-built application
 Resource requirements: CPU 1, Memory 3 GB, IOPS 15, Capacity 5 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 1, Memory 2, IOPS 1, Capacity 1; RVMs: 2
Example application 2: Point-of-sale system
 Resource requirements: CPU 4, Memory 16 GB, IOPS 200, Capacity 30 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 4, Memory 8, IOPS 8, Capacity 1; RVMs: 8
Example application 3: Web server
 Resource requirements: CPU 2, Memory 8 GB, IOPS 50, Capacity 4 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 2, Memory 4, IOPS 2, Capacity 1; RVMs: 4
Total equivalent reference virtual machines: 14

This example requires 14 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs provides sufficient resources for current needs and room for growth, because it supports up to 300 RVMs.

Implementation example – Stage 2
The customer must add a decision-support database to the virtual infrastructure. Using the same strategy, calculate the number of RVMs required, as shown in Table 10.

Table 10. Example applications – Stage 2
Example application 1: Custom-built application
 Resource requirements: CPU 1, Memory 3 GB, IOPS 15, Capacity 5 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 1, Memory 2, IOPS 1, Capacity 1; RVMs: 2
Example application 2: Point-of-sale system
 Resource requirements: CPU 4, Memory 16 GB, IOPS 200, Capacity 30 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 4, Memory 8, IOPS 8, Capacity 1; RVMs: 8
Example application 3: Web server
 Resource requirements: CPU 2, Memory 8 GB, IOPS 50, Capacity 4 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 2, Memory 4, IOPS 2, Capacity 1; RVMs: 4
Example application 4: Decision-support database
 Resource requirements: CPU 20, Memory 128 GB, IOPS 1,400, Capacity 1,500 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 20, Memory 64, IOPS 56, Capacity 15; RVMs: 64
Total equivalent reference virtual machines: 78

This example requires 78 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs, which supports up to 300 virtual machines, provides sufficient resources for current needs and room for growth. Figure 19 shows that 222 RVMs remain available after implementing one Starter X-Brick.

Figure 19. Aggregate resource requirements – Stage 2

Fine-tuning hardware resources
This process usually determines the recommended hardware size for servers and storage. In some cases, however, you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide, but additional customization can be done at this point.

Server resources
For some workloads, the relationship between server needs and storage needs does not match what is outlined in the RVM. In this scenario, size the server and storage layers separately, as shown in Figure 20.

Figure 20. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 11. In the Server resource component totals row, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Server and storage resource component totals row in Table 11 describes the required amount of storage.
Table 11. Server resource component totals
Example application 1: Custom-built application
 Resource requirements: CPU 1, Memory 3 GB, IOPS 15, Capacity 5 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 1, Memory 2, IOPS 1, Capacity 1; RVMs: 2
Example application 2: Point-of-sale system
 Resource requirements: CPU 4, Memory 16 GB, IOPS 200, Capacity 30 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 4, Memory 8, IOPS 8, Capacity 1; RVMs: 8
Example application 3: Web server
 Resource requirements: CPU 2, Memory 8 GB, IOPS 50, Capacity 4 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 2, Memory 4, IOPS 2, Capacity 1; RVMs: 4
Example application 4: Decision-support database
 Resource requirements: CPU 10, Memory 64 GB, IOPS 700, Capacity 768 GB; RVMs: N/A
 Equivalent reference virtual machines: CPU 10, Memory 32, IOPS 28, Capacity 8; RVMs: 32
Total equivalent reference virtual machines: 46
Server and storage resource component totals: CPU 17, Memory 91 GB, IOPS 965, Capacity 807 GB

Note: To get the Server and storage resource component totals, calculate the sum of the Resource requirements rows for each application, not the Equivalent reference virtual machines rows.

In this example, the target architecture requires 17 vCPUs and 91 GB of memory. If four vCPUs are allocated per physical processor core, and memory over-provisioning is not necessary, the architecture requires five physical processor cores and 91 GB of memory. With these numbers, the solution can be implemented effectively with fewer server resources.

Note: Keep high-availability requirements in mind when customizing the hardware resources.

EMC VSPEX Sizing Tool
To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in this chapter, and also incorporates sizing for other VSPEX solutions. The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that let you validate your sizing assumptions, and provides platform configuration information that meets those requirements. You can access this tool at: EMC VSPEX Sizing Tool.

Chapter 6 VSPEX Solution Implementation

This chapter presents the following topics:
Overview ..................................................................................................................65
Pre-deployment tasks ..............................................................................................65
Network implementation ..........................................................................................67
Microsoft Hyper-V hosts installation and configuration ...........................................69
Microsoft SQL Server database installation and configuration .................................71
System Center Virtual Machine Manager server deployment ...................................73
Storage array preparation and configuration ...........................................................74

Overview
The deployment process consists of the stages listed in Table 12.
After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 12. Deployment process overview
1. Verify prerequisites — Pre-deployment tasks
2. Obtain the deployment tools — Deployment resources
3. Gather customer configuration data — Customer configuration data
4. Rack and cable the components — Refer to the vendor documentation
5. Configure the switches and networks, and connect to the customer network — Network implementation
6. Install and configure the XtremIO array — Storage array preparation and configuration
7. Configure virtual machine storage — Storage array preparation and configuration
8. Install and configure the servers — Microsoft Hyper-V hosts installation and configuration
9. Set up Microsoft SQL Server (used by SCVMM) — Microsoft SQL Server database installation and configuration
10. Install and configure SCVMM and virtual machine networking — System Center Virtual Machine Manager server deployment

Pre-deployment tasks
The pre-deployment tasks, shown in Table 13, include procedures that are not directly related to the environment installation and configuration but whose results are needed at installation time, such as gathering hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required on site.

Table 13. Pre-deployment tasks
 Gathering documents: Gather the related documents listed in Appendix A. These documents provide setup procedures and deployment best practices for the various components of the solution.
 Gathering tools: Gather the required and optional tools for the deployment. Use Table 14 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process.
 Gathering data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration worksheet for reference during the deployment process.

Table 14 lists the hardware, software, and licenses required to configure the solution. For more information, refer to Table 1 and Table 2 on pages 37 and 38.

Table 14. Deployment resources checklist
Hardware:
 Physical servers to host virtual machines: sufficient physical server capacity, as determined by sizing for the deployment (see Chapter 5)
 Microsoft Hyper-V servers to host virtual infrastructure servers (the existing infrastructure may already meet this requirement)
 Switch port capacity and capabilities, as required by the virtual machine infrastructure
 EMC XtremIO X-Bricks in the type and quantity determined by sizing for the deployment (see Chapter 5)
Software:
 Microsoft Windows Server 2012 R2 (or later) Datacenter Edition installation media
 Microsoft System Center Virtual Machine Manager 2012 R2 installation media
 Microsoft SQL Server 2012 or newer installation media (this requirement may be covered by the existing infrastructure)
Licenses:
 Microsoft System Center Virtual Machine Manager 2012 R2 license keys
 Microsoft Windows Server 2012 R2 Datacenter Edition license keys (an existing Microsoft Key Management Server (KMS) may cover this requirement)
 Microsoft SQL Server Standard Edition license key (the existing infrastructure may already meet this requirement)
Customer configuration data
Gather information such as IP addresses and hostnames as part of the planning process to reduce time onsite. The Customer configuration worksheet provides a set of tables to maintain a record of relevant customer information. Add, record, and modify information as needed during the deployment process.

Network implementation
This section describes the network infrastructure requirements needed to support this architecture. Table 15 summarizes the network configuration tasks and provides references for further information.

Table 15. Tasks for switch and network configuration
 Configuring the infrastructure network: Configure storage array and Hyper-V host infrastructure networking. (References: Storage array preparation and configuration; Microsoft SQL Server database installation and configuration)
 Configuring VLANs: Configure private and public VLANs as required. (Reference: vendor switch configuration guide)
 Completing network cabling: 1. Connect the switch interconnect ports. 2. Connect the XtremIO front-end ports. 3. Connect the Microsoft Hyper-V server ports.

Preparing the network switches
For validated performance levels and high availability, this solution requires the switching capacity listed in Table 1 on page 37. You do not need new hardware if the existing infrastructure meets the requirements.

Configuring the infrastructure network
To provide both redundancy and additional network bandwidth, the infrastructure network requires redundant network links for:
 Each Hyper-V host
 The storage array
 The switch interconnect ports
 The switch uplink ports
This configuration is required whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. Figure 21 shows a sample redundant infrastructure for this solution, with redundant switches and links that ensure there are no single points of failure.

Converged switches give customers different protocol options (FC or iSCSI) for the block storage network. While existing 8 Gb FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.

Figure 21. Sample Ethernet network architecture

Configuring VLANs
Ensure that there are adequate network switch ports for the storage array and the Windows hosts. EMC recommends configuring the Windows hosts with three VLANs:
 Customer data network: virtual machine networking (these are customer-facing networks, which can be separated if needed)
 Storage network: XtremIO data networking (private network)
 Management network: Live Migration and Storage Migration networking (private network)
These networks can also reside on separate VLANs for additional traffic isolation.

Configuring jumbo frames (iSCSI only)
Use jumbo frames for the iSCSI protocol. Set the maximum transmission unit (MTU) to 9,000 on the switch ports for the iSCSI storage network.
To enable jumbo frames on the switch ports for storage and host connections, refer to the switch vendor guidelines.

Completing network cabling
Ensure that all solution servers, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.

Note: The new equipment is connected to the existing customer network. Ensure that unexpected interactions do not cause service issues on the customer network.

Microsoft Hyper-V hosts installation and configuration
Overview
This section provides the requirements for installing and configuring the Windows hosts and infrastructure servers required to support the architecture. Table 16 describes the tasks that must be completed.

Table 16. Tasks for server installation
 Installing the Windows hosts: Install Windows Server 2012 R2 on the physical servers for the solution. (Reference: Installing Windows Server 2012 R2)
 Installing Hyper-V and configuring Failover Clustering: 1. Add the Hyper-V server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster. (Reference: Installing Windows Server 2012 R2)
 Configuring Microsoft Hyper-V networking: Configure Windows host networking, including NIC teaming and the virtual switch network. (Reference: Installing Windows Server 2012 R2)
 Installing PowerPath on Windows servers: Install and configure PowerPath to manage multipathing for XtremIO LUNs. (Reference: PowerPath and PowerPath/VE for Windows Installation and Administration Guide)
 Planning virtual machine memory allocations: Ensure that Microsoft Hyper-V guest memory-management features are configured properly for the environment. (Reference: Installing Windows Server 2012 R2)

Installing the Windows hosts
Follow Microsoft best practices to install Windows Server 2012 R2 on the physical servers for the solution. Windows requires hostnames, IP addresses, and an administrator password for installation. The Customer configuration worksheet provides appropriate values.

Installing Hyper-V and configuring failover clustering
To install Hyper-V and configure Failover Clustering:
1. Install and patch Windows Server 2012 R2 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.

Configuring Windows host networking
To ensure performance and availability, the following adapters are required:
 At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
 At least two 10 GbE NICs for the storage network (iSCSI)
 At least two 8 Gb FC HBAs for the storage network (FC)
 At least one NIC for Live Migration
Note: Enable jumbo frames for NICs that transfer iSCSI data. Set the MTU to 9,000. For instructions, refer to the NIC configuration guide.

Installing and configuring multipath software
To improve and enhance the performance and capabilities of the XtremIO storage array, you can either use the Windows native multipathing feature or install PowerPath for Windows on the Microsoft Hyper-V hosts. For detailed information and the configuration steps to install EMC PowerPath, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.

Note: This solution uses PowerPath as the multipathing solution to manage XtremIO LUNs.
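A condensed PowerShell sketch of these host-side steps follows. The host, cluster, and NIC names and the cluster IP address are placeholders, and PowerPath itself is still installed from the EMC installer described above.

# On each host: add the Hyper-V role and the Failover Clustering feature (reboots).
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# On each host: enable jumbo frames on the dedicated iSCSI NICs.
# The '*JumboPacket' value is NIC-vendor specific; 9014 is a common setting.
Set-NetAdapterAdvancedProperty -Name 'iSCSI-A','iSCSI-B' -RegistryKeyword '*JumboPacket' -RegistryValue 9014

# From one host only: validate the configuration, then form the cluster.
Test-Cluster -Node 'hv-host1','hv-host2'
New-Cluster -Name 'vspexclus01' -Node 'hv-host1','hv-host2' -StaticAddress 192.168.10.50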
Planning virtual machine memory allocations Server capacity Server capacity in the solution is required for two purposes:  To support the new virtualized server infrastructure  To support required infrastructure services such as authentication and authorization, DNS, and databases For information on the minimum infrastructure requirements, refer to Table 3 on page 40. There is no need for new hardware if existing infrastructure meets the requirements. Memory configuration Take care to properly size and configure the server memory for this solution. Memory virtualization techniques, such as Dynamic Memory, enable the hypervisor to abstract physical host resources to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors, such as Intel processors with Extended Page Table support, abstraction takes place within the CPU. Otherwise, abstraction takes place within the hypervisor itself. Microsoft Hyper-V includes multiple techniques for maximizing the use of system resources such as memory. Do not substantially overcommit resources as this can 70 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict. Performance degradation due to resource exhaustion increases with the amount of memory overcommitted. Microsoft SQL Server database installation and configuration Overview Most customers use a management tool to provision and manage their server virtualization solution even though this is not required. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform. Note: Do not use Microsoft SQL Server Express Edition for this solution. Table 17 lists the tasks for installing and configuring a SQL Server database for the solution. The subsequent sections describe these tasks. Table 17. Tasks for SQL Server database setup Task Description Reference Creating a virtual machine for SQL Server Create a virtual machine to host SQL Server. Verify that the virtual machine meets the hardware and software requirements. msdn.microsoft.com Installing Microsoft Windows on the virtual machine Install Microsoft Windows Server 2012 R2 on the virtual machine created to host SQL Server. technet.microsoft.com Installing Microsoft SQL Server Install Microsoft SQL Server on the designated virtual machine. technet.microsoft.com Configuring SQL Server for SCVMM Configure a remote SQL Server instance for SCVMM. technet.microsoft.com EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 71 Chapter 6: VSPEX Solution Implementation Creating a virtual machine for SQL Server On one of the Windows servers designated for infrastructure virtual machines, create a virtual machine with sufficient computing resources for SQL Server. Use the datastore designated for the shared infrastructure. Note: EMC recommends CPU and memory values of 2 vCPU and 6 GB respectively for the SQL virtual machine. If the customer environment already contains a SQL Server instance, refer to Configuring SQL Server for SCVMM. The SQL Server service must run on Microsoft Windows. Install the required Windows Installing Microsoft Windows version on the virtual machine, and select the appropriate network, time, and authentication settings. 
on the virtual machine Installing SQL Server Install SQL Server on the virtual machine from the SQL Server installation media. Microsoft SQL Server Management Studio is one of the components in the SQL Server installer. Install this component on the SQL Server instance directly, and on an administrator console. In many implementations, you may want to store data files in locations other than the default path. To change the default path for storing data files: 1. Right-click the server object in SQL Server Management Studio and select Database Properties. 2. In the Properties window, change the default data and log directories for new databases created on the server. Note: For high availability, install SQL Server on a Microsoft failover cluster. Configuring SQL Server for SCVMM To use SCVMM in this solution, configure the SQL Server instance for remote connections. Create individual login accounts for each service that accesses a database on the SQL Server instance. For detailed requirements and instructions, refer to the Microsoft TechNet Library topic Configuring a Remote Instance of SQL Server for VMM. For further information, refer to the list of documents in Reference Documentation. 72 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation System Center Virtual Machine Manager server deployment This section provides information about configuring SCVMM for the solution. Table 18 outlines the tasks to be completed. Overview Table 18. Tasks for SCVMM configuration Task Description Reference Creating the SCVMM host virtual machine Create a virtual machine for the SCVMM server. Create a virtual machine Installing the SCVMM guest OS Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine. Install the guest operating system Installing the SCVMM server Install an SCVMM server.  How to Install a VMM Management Server  Installing the VMM Server Installing the SCVMM Admin Console Install an SCVMM Admin Console.  How to Install the VMM Console  Installing the VMM Administrator Console Installing the SCVMM agent locally on the hosts Install an SCVMM agent locally on the hosts that SCVMM manages. Installing a VMM Agent Locally on a Host Adding the Hyper-V cluster to SCVMM Add the Hyper-V cluster to SCVMM. How to Add a Host Cluster to VMM Creating a virtual machine in SCVMM Create a virtual machine in SCVMM.  Creating and Deploying Virtual Machines in VMM  How to Create a Virtual Machine with a Blank Virtual Hard Disk Performing partition alignment Use diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine’s disk drive. Disk Partition Alignment Best Practices for SQL Server Creating a template virtual machine Create a template virtual machine from the existing virtual machine.  How to Create a Virtual Machine Create the hardware profile and Guest OS profile at this time.  How to Create a Template from Deploy the virtual machines from the template virtual machine. 
 How to Create and Deploy a Deploying virtual machines from the template virtual machine Template a Virtual Machine Virtual Machine from a Template  How to Deploy a Virtual Machine EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 73 Chapter 6: VSPEX Solution Implementation Creating a SCVMM host virtual machine To deploy a SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using the Hyper-V manager. Create a virtual machine on the Hyper-V server with the customer guest OS configuration by using infrastructure server storage presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage. Installing the SCVMM guest OS Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select appropriate network, time, and authentication settings. Installing the SCVMM server Set up the SCVMM database and the default library server; then install the SCVMM server. To install the SCVMM server, refer to the Microsoft TechNet Library topic Installing the VMM Server. Installing the SCVMM Admin Console The SCVMM Admin Console is a client tool to manage the SCVMM server. Install the SCVMM Admin Console on the same computer as the VMM server. Installing the SCVMM agent locally on a host If the hosts must be managed on a perimeter network, install an SCVMM agent locally on the host before adding the host to SCVMM. Optionally, install an SCVMM agent locally on a host in a domain before adding the host to SCVMM. In all other cases, agents are installed automatically. To install the SCVMM Admin console, refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console. To install a VMM agent locally on a host, refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally on a Host . Adding the Hyper-V cluster to SCVMM SCVMM manages the Hyper-V cluster. Add the deployed Hyper-V cluster to SCVMM. To add the Hyper-V cluster, refer to the Microsoft TechNet Library topic How to Add a Host Cluster to VMM. Storage array preparation and configuration Overview This section provides information about creating volume in XtremIO and mapping XtremIO volumes to SCVMM environment. Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Follow high-level these steps in each case: 74 1. Configure the XtremIO array, including the register host initiator group. 2. Provision storage and LUN masking to the Hyper-V hosts. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation The following sections explain the options for each step separately, depending on whether the FC or iSCSI protocol is selected. Configuring the XtremIO array This section describes how to configure the XtremIO storage array for host access using a block-only protocol such as FC or iSCSI. In this solution, XtremIO provides data storage for Hyper-V hosts. Table 19 describes the XtremIO configuration tasks. Table 19. 
Tasks for XtremIO configuration Task Description Reference Preparing the XtremIO array Physically install the XtremIO hardware following the procedures in the product documentation.  XtremIO Storage Array Setting up the initial XtremIO configuration Configure the IP addresses and other key parameters on the XtremIO.  XtremIO Storage Array User Provisioning storage for Microsoft Hyper-V hosts Create the storage areas required for the solution. Operation Guide  XtremIO Storage Array Site Preparation Guide version 3.0 Guide version 3.0  Vendor switch configuration guide Preparing the XtremIO array The XtremIO Storage Array Operation Guide provides instructions to assemble, rack, cable, and power up the XtremIO. There are no specific setup steps for this solution. Setting up the initial XtremIO configuration After completing the initial XtremIO array setup, configure key information about the existing environment so that the storage array can communicate with other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:  DNS  NTP  Storage network interfaces For data connections using the FC protocol Ensure that one or more servers are connected to the XtremIO storage system through qualified FC switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows. For data connections using the iSCSI protocol 1. Connect one or more servers to the XtremIO storage system through qualified IP switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows. 2. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information: a. Set up a storage network IP address. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 75 Chapter 6: VSPEX Solution Implementation Logically isolate the other networks in the solution as described, in Chapter 3. This ensures that other network traffic does not impact traffic between hosts and storage. b. Enable jumbo frames on the XtremIO front-end iSCSI ports. Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment. To enable the jumbo frames option: i. From the menu bar, click the Administration icon to display the Administration workspace. ii. Click the Cluster tab and select iSCSI Ports Configuration from the left pane. The iSCSI Ports Configuration screen appears. iii. In the Port Properties Configuration section, select the Enable Jumbo Frames option. iv. Set the MTU value by using the up and down arrows. v. Click Apply. The reference documents listed in Appendix A provide more information on how to configure the XtremIO platform. The Storage configuration guidelines section provides more information on the disk layout. Managing the initiator group The XtremIO storage array uses "initiators" to refer to ports that can access a volume. Initiators can be managed by the XtremIO storage array by assigning them to an initiator group. You can do this by either editing an initiator group in the GUI as shown in Figure 22 and adding the initiator's properties or using the relevant CLI command. Figure 22. 
XtremIO initiator group 76 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation The initiators within an initiator group share access to one or more of the cluster's volumes. You can define which initiator groups have access to which volumes using LUN mapping. For detailed instructions, refer to the EMC XtremIO User Guide. Managing the volumes This section describes provisioning XtremIO volumes for Microsoft Hyper-V hosts. You can define various quantities of disk space as volumes in an active cluster. Volumes are defined as:  Volume size: The quantity of disk space reserved for the volume.  LB size: The logical block size in bytes.  Alignment-offset: A value for preventing unaligned access performance problems. Note: In the GUI, selecting a predefined volume type defines the alignment-offset and LB size values. In the CLI, you can define the alignment-offset and LB size values separately. This section explains how to manage volumes using the XtremIO storage array GUI. Complete the steps in the XtremIO GUI to configure LUNs to store virtual machines. When XtremIO initializes during the installation process, the data protection domain is created automatically. Provision the LUNs based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4. 1. Log in to the XtremIO GUI. 2. From the menu bar, click Configuration. 3. From the Volumes pane, click Add, as shown in Figure 23. Figure 23. Adding volume 4. In the Add New Volumes window, as shown in Figure 24, define the following: a. Name: The name of the volume. b. Size: The amount of disk space allocated for this volume. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 77 Chapter 6: VSPEX Solution Implementation c. Volume Type: Select one of the following types that define the LB size and alignment-offset: i. Normal (512 LBs) ii. 4 KB LBs iii. Legacy Windows (offset:63) d. Small I/O Alerts: Enable if you want an alert to be sent when small I/Os (less than 4 KB) are detected. e. Unaligned I/O Alerts: Enable if you want an alert to be sent when unaligned I/Os are detected. f. VAAI TP Alerts: Enable if you want an alert to be sent when the storage capacity reaches the set limit. Figure 24. Volume summary 5. 78 For volumes: a. If you do not want to add the new volumes to a folder, click Finish. The new volumes are created and appear in the root under Volumes in the Configuration window. b. If you want to add the new volumes to a folder: i. Click Next. ii. Select the existing folder (or click New Folder to create a new one). EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation iii. Click Finish. The new volumes are created and appear in the selected folder under Volumes in the Configuration window. Table 20 lists a single X-Brick storage allocation layout for 700 virtual machines in the solution. Table 20. Configuration 700 virtual servers Storage allocation for block data Availability physical capacity (TB) Number of SSDs (400 GB) for single X-Brick Number of LUNs for single X-Brick Volume capacity (TB) 7.2 25 1 50 Note: In this solution, each virtual machine occupies 102 GB, with 100 GB for the OS and user space and a 2 GB swap file. 
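Although this guide provisions volumes through the XtremIO GUI, the same layout can be scripted from the XMCLI, the console used later in this guide for controller failover testing. The sketch below is illustrative only: the volume and initiator-group names are placeholders, the 50 TB size follows the Table 20 layout, and the exact add-volume and map-lun syntax should be verified against the XtremIO Storage Array User Guide for your code level. The mapping step mirrors the GUI procedure described in the next section.

add-volume vol-name="VSPEX-CSV-01" vol-size="50T"
map-lun vol-id="VSPEX-CSV-01" ig-id="HyperV-Hosts"

Because XtremIO volumes are thin provisioned, only the unique data the hosts actually write consumes physical capacity on the array.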
Mapping volumes to an initiator group This section describes how to map XtremIO volumes to an initiator group. To enable initiators within an initiator group to access a volume's disk space, you can map the volume to the initiator group. A LUN is automatically assigned when this is done. This number appears under Selected Volumes in the Configuration window. To map a volume to an initiator: 1. From the menu bar, click Configuration. 2. Under Volumes, select the volumes you want to map. To select multiple volumes, hold Shift and select the volumes. The volumes appear under Volumes in the Configuration window, as shown in Figure 25. Figure 25. Volumes and initiator group 3. Under Initiator Groups, select the initiator group to which you want to map the volume. The initiator appears under Initiator Groups in the Configuration window. 4. Once you have selected the volumes and initiator groups you want to map, under LUN Mapping Configuration, click Map All. 5. Click Apply, as shown in Figure 26. The selected volumes are mapped to the initiator group. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 79 Chapter 6: VSPEX Solution Implementation Figure 26. Mapping volumes XtremIO volumes have been created and mapped to an initiator group. You can see the disks in the Windows hosts. Creating the CSV disk To create the CSV disk for the Failover Cluster: 1. On each Microsoft Hyper-V host, open Disk Management, click Action and Rescan disks. After the rescan, all the XtremIO volumes appear under Disk Management on each Hyper-V host. 2. Initialize and format each XtremIO volume with NTFS file systems on one of the Hyper-V hosts. 3. Under Failover Cluster Manager, expand the name of the cluster, and then expand Storage. Right-click Disks, and then click Add Disk. Select the disks and click OK. 4. To add the disks to the CSV, select all the cluster disks and right-click Add to Cluster Shared Volumes. Note: EMC recommends that you format the Windows C drive and CSV volumes with the Allocation Unit Size set to 8,192(8 KB). To format the boot volume to 8,192, refer to EMC best practices. To create the CSV disks, refer to the Microsoft TechNet Library topic Use Cluster Shared Volumes in a Failover Cluster. Creating a virtual machine in SCVMM Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, install the software, and then change the Windows and application settings. To create a virtual machine, refer to the Microsoft TechNet Library topic How to Create and Deploy a Virtual Machine from a Blank Virtual Hard Disk. Performing partition alignment 80 Perform disk partition alignment only for virtual machines running Windows Server 2003 R2 or earlier. EMC recommends implementing disk partition alignment with an offset of 1,024 KB, and formatting the disk drive with a file allocation unit (cluster) size of 8 KB. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 6: VSPEX Solution Implementation To perform partition alignment, assign drive letters, and assign file allocation unit size using diskpart.exe, refer to the Microsoft TechNet topic Disk Partition Alignment Best Practices for SQL Server. Creating a template virtual machine Create a template virtual machine from the existing virtual machine in SCVMM. Create a hardware profile and a guest OS profile when creating the template. 
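For a Windows Server 2003 R2 or earlier guest, a minimal diskpart.exe script implementing these recommendations might look like the following; the disk number, drive letter, and label are hypothetical.

rem Align the data partition at 1,024 KB and format with an 8 KB allocation unit.
select disk 1
create partition primary align=1024
assign letter=E
format fs=ntfs label="VMData" unit=8192 quick

Run the script inside the guest with diskpart /s <scriptfile>.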
Use the profiler to deploy the virtual machines. Converting a virtual machine into a template destroys the source virtual machine. Consequently, you should back up the virtual machine before converting it. To create a template from a virtual machine, refer to the Microsoft TechNet topic How to Create a Template from a Virtual Machine. Deploying virtual machines from the template The virtual machine deployment wizard in the SCVMM Admin Console enables you to save the PowerShell scripts that perform the conversion and reuse them to deploy other virtual machines with the same configuration. To deploy a virtual machine from a template, refer to the Microsoft TechNet topic How to Deploy a Virtual Machine. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 81 Chapter 7: Solution Verification Chapter 7 Solution Verification This chapter presents the following topics: Overview ..................................................................................................................83 Post-installation checklist .......................................................................................84 Deploying and testing a single virtual machine........................................................84 Verifying solution component redundancy ...............................................................84 82 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 7: Solution Verification Overview This chapter provides a list of items to review and tasks to perform after configuring the solution. To verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets the customer’s core availability requirements, complete the tasks listed in Table 21. Table 21. Testing the installation Task Description Reference Postinstallation checklist Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. Hyper-V: How many network cards do I need? Verify that the VLAN for virtual machine networking is configured correctly on each Hyper-V host. Network Recommendations for a Hyper-V Cluster in Windows Server 2012 R2 Verify that each Hyper-V host has access to the required Cluster Shared Volumes. Hyper-V: Using Hyper-V and Failover Clustering Verify that the live migration interfaces are configured correctly on all Hyper-V hosts. Virtual Machine Live Migration Overview Deploying and testing a single virtual machine Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager Verifying solution component redundancy Perform a reboot for each storage processor in turn, and ensure that the storage connectivity is maintained. Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. Vendor documentation On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. 
Creating a Hyper-V Host Cluster in VMM Overview EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 83 Chapter 7: Solution Verification Post-installation checklist Before moving to production, on each Windows Server, verify the following critical items:  The VLAN for virtual machine networking is configured correctly.  The storage networking is configured correctly.  Each server can access the required CSVs.  A network interface is configured correctly for Live Migration. Deploying and testing a single virtual machine Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it. Verifying solution component redundancy To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures. Complete the following steps to restart each XtremIO storage controller in turn and verify that connectivity to Microsoft Hyper-V CSV file system is maintained throughout each restart: 1. Log in to XtremIO XMS CLI console with administrator credentials. 2. Power off storage controller 1 using the following command: deactivate-storage-controller sc-id=1 power-off sc-id=1 3. Activate storage controller 1 using the following command: power-on sc-id=1 activate-storage-controller sc-id=1 84 4. When the cycle completes, change the sc-id=2 to verify another storage controller using the same command as in the previous steps. 5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 8: System Monitoring Chapter 8 System Monitoring This chapter presents the following topics: Overview ..................................................................................................................86 Key areas to monitor ................................................................................................86 XtremIO resource monitoring guidelines ..................................................................88 EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide 85 Chapter 8: System Monitoring Overview Monitoring a VSPEX environment is no different from monitoring any core IT system, and is a relevant and essential component of administration. Monitoring a highly virtualized infrastructure, such as a VSPEX environment, is more complex than in a purely physical infrastructure, because the interaction and interrelationships between various components can be subtle and nuanced. If you are experienced in administering virtualized environments, you should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows. 
Several business needs require proactive, consistent monitoring of the environment:  Stable, predictable performance  Sizing and capacity needs  Availability and accessibility  Elasticity—the dynamic addition, subtraction, and modification of workloads  Data protection If self-service provisioning is enabled in the environment, the ability to monitor the system is more critical because clients can generate virtual machines and workloads dynamically. This can adversely affect the entire system. This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter. Key areas to monitor VSPEX Proven Infrastructures provide end-to-end solutions and require system monitoring of three discrete, but highly interrelated areas:  Servers, both virtual machines, and clusters  Networking  Storage This chapter focuses primarily on monitoring key components of the storage infrastructure, the XtremIO array, but also briefly describes other components. Performance baseline 86 When a workload is added to a VSPEX deployment, server and networking resources are consumed. As more workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which affects all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components before deploying them on a VSPEX platform; this is a requirement to correctly size resource utilization against the defined RVM. EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Proven Infrastructure Guide Chapter 8: System Monitoring Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and ensures initial assumptions were valid. As more workloads are deployed, reevaluate resource consumption and performance levels to determine cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription is not negatively impacting overall system performance. Run these assessments consistently to ensure the platform as a whole, and the virtual machines themselves, operate as expected. The following components comprise the critical areas that affect overall system performance. Servers  Servers  Networking  Storage The key server resources to monitor include:  Processors  Memory  Disk (local and SAN)  Networking Monitor these areas both from a physical host level (the hypervisor host level) and from a virtual level (from within the guest virtual machine). For a VSPEX deployment with Microsoft Hyper-V, you can use Windows Perfmon to monitor and log the metrics. Follow your vendors’ guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application. For detailed information about Perfmon, refer to the Microsoft TechNet Library topic Using Performance Monitor. Each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of RVMs deployed and their defined workload. Networking Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, the fabric (switch) level, and the storage level. 
Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, the fabric (switch) level, and the storage level.

At the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latency, and I/O size. Capture additional data from network card or HBA utilities.

From the fabric perspective, the tools that monitor the switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networking storage protocols are discussed in the following section.

Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The tools provided with the XtremIO series of storage arrays offer an easy yet powerful way to gain insight into how the underlying storage components are operating. The key areas to focus on include:
 Capacity
 Hardware elements:
   X-Brick
   Storage controllers
   SSDs
 Cluster elements:
   Clusters
   Volumes
   Initiator groups

Additional considerations, primarily from a tuning perspective, include:
 I/O size
 Workload characteristics

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject in the EMC XtremIO Storage Array User Guide.

XtremIO resource monitoring guidelines

To monitor XtremIO, use the XMS GUI console, which you can access by opening an HTTPS session to the XMS IP address. The XtremIO series is an all-flash array storage platform that provides block storage access through a single entity.

Monitoring the storage

This section explains how to use the XtremIO GUI to monitor block storage resource usage for the elements listed above. Performance counters can be displayed from the Dashboard.

Efficiency

You can monitor the cluster efficiency status under Storage > Overall Efficiency in the Dashboard, as shown in Figure 27.

Figure 27. Monitoring the efficiency

The Overall Efficiency section displays the following data (a worked example follows the list):
 Overall Efficiency: The disk space saved by the XtremIO storage array, calculated as:
   Total provisioned capacity / Unique data on SSD
 Data Reduction Ratio: The inline data deduplication and compression ratio, calculated as:
   Data written to the array / Physical capacity used
 Deduplication Ratio: The real-time inline data deduplication ratio, calculated as:
   Data written to the array / Unique data on SSD
 Compression Ratio: The real-time inline compression ratio, calculated as:
   Unique data on SSD / Physical capacity used
 Thin Provisioning Savings: Used disk space compared to allocated disk space.
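To make these ratios concrete, consider a purely hypothetical example, with figures chosen for easy arithmetic rather than taken from any measured system. Assume 100 TB of provisioned volumes, 40 TB of data written by hosts, 20 TB of unique data after inline deduplication, and 10 TB of physical SSD capacity used after inline compression:

   Deduplication Ratio  = 40 TB / 20 TB  = 2:1
   Compression Ratio    = 20 TB / 10 TB  = 2:1
   Data Reduction Ratio = 40 TB / 10 TB  = 4:1
   Overall Efficiency   = 100 TB / 20 TB = 5:1

Note that the data reduction ratio is the product of the deduplication and compression ratios, while overall efficiency also reflects thin provisioning savings because it starts from provisioned rather than written capacity.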
Volume capacity

You can monitor the volume capacity status under Storage > Volume Capacity in the Dashboard, as shown in Figure 28.

Figure 28. Volume capacity

Volume Capacity displays the following data:
 Total disk space defined by the volumes
 Physical space used
 Logical space used

Physical capacity

You can monitor the physical capacity status under Storage > Physical Capacity in the Dashboard, as shown in Figure 29.

Figure 29. Physical capacity

Physical Capacity displays the following data:
 Total physical capacity
 Used physical capacity

Monitoring the performance

To monitor the cluster performance from the GUI:
1. From the menu bar, click the Dashboard icon to display the Dashboard.
2. Under Performance, select the desired parameters:
   a. Select the measurement unit of the display by clicking one of the following:
       Bandwidth: MB/s
       IOPS
       Latency: microseconds (μs); applies only to the activity history graph
   b. Select the item to be monitored from the Item Selector:
       Block Size
       Initiator Groups
       Volumes
   c. Set the Activity History timeframe by selecting one of the following periods from the Time Period Selector:
       Last Hour
       Last 6 Hours
       Last 24 Hours
       Last 3 Days
       Last Week

Figure 30 shows the Performance GUI.

Figure 30. Monitoring the performance (IOPS)

Note: You can also monitor the performance through the CLI. For more information, refer to the XtremIO Storage Array User Guide.
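Performance and capacity counters are also exposed through the XtremIO RESTful API (see the EMC XtremIO Storage Array RESTful API Guide listed in Appendix A). The following PowerShell sketch is illustrative only: the endpoint path and property names follow version 1 API conventions but should be verified against the guide for your XMS release, and the XMS address and credentials are placeholders. It also assumes the XMS certificate is trusted by the client:

   # Prompt for XMS credentials; 10.0.0.10 is a placeholder XMS address
   $cred = Get-Credential
   $xms  = 'https://10.0.0.10'

   # Query the first cluster object; the /api/json/types/clusters path and
   # the property names below are assumptions based on the v1 REST API
   $cluster = (Invoke-RestMethod -Uri "$xms/api/json/types/clusters/1" `
       -Credential $cred -Method Get).content

   # Report headline performance and efficiency counters
   $cluster | Select-Object name, iops, 'rd-bw', 'wr-bw', 'avg-latency',
       'data-reduction-ratio', 'ud-ssd-space-in-use'

Polling these counters on a schedule and appending them to a log provides the same baselining data as the GUI graphs, in a form that is easy to trend over longer periods.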
Monitoring the hardware elements

Monitoring X-Bricks

You can quickly view the X-Brick name and any associated alerts by hovering the mouse pointer over the X-Brick in the Hardware pane of the Dashboard workspace. To view the displayed X-Brick's details in the Hardware workspace, hover the mouse pointer over different parts of the component to view that component's parameters and associated alerts:
1. Click Show Front to view the X-Brick's front end.
2. Click Show Back to view the X-Brick's back end.
3. Click Show Cable Connectivity to view the X-Brick's cable connections. Figure 31 shows the data and management cable connectivity.

Figure 31. Data and management cable connectivity

4. Click X-Brick Properties to display the properties dialog box, as shown in Figure 32.

Figure 32. X-Brick properties

Monitoring storage controllers

To view the storage controller information from the GUI:
1. From the menu bar, click the Hardware icon to display the Hardware workspace.
2. Select the X-Brick for the storage controller to be monitored.
3. Click X-Brick Properties to open the X-Brick Properties dialog box.
4. View the details of the selected X-Brick's two storage controllers.

Monitoring SSDs

To view the SSD information from the GUI:
1. From the menu bar, click the Hardware icon to display the Hardware workspace.
2. Select the X-Brick for the SSDs to be monitored.
3. Click X-Brick Properties to open the X-Brick Properties dialog box.
4. View the details of the selected X-Brick's SSDs, as shown in Figure 33.

Figure 33. Monitoring the SSDs

Using advanced monitoring

In addition to the monitoring services provided by the XtremIO storage array, you can define monitors tailored to your cluster's needs. Table 22 lists the parameters that can be monitored (depending on the selected monitor type). Each I/O parameter can be broken down by block size: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, and greater than 1 MB.

Table 22. Advanced monitor parameters

Parameter                  Description
Read-IOPS                  Read IOPS, total and by block size
Write-IOPS                 Write IOPS, total and by block size
IOPS                       Combined read and write IOPS, total and by block size
Read-BW (MB/s)             Read bandwidth, total and by block size
Write-BW (MB/s)            Write bandwidth, total and by block size
BW (MB/s)                  Combined read and write bandwidth, total and by block size
Read-Latency (μsec)        Read latency, by block size
Write-Latency (μsec)       Write latency, by block size
Average-Latency (μsec)     Average of read and write latency, by block size
SSD-Space-In-Use           SSD space in use
Endurance-Remaining-%      Percentage of SSD endurance remaining
Memory-Usage-%             Percentage of memory in use
Memory-In-Use (MB)         Amount of memory in use, in MB
CPU (%)                    Percentage of CPU in use

For detailed information on using the advanced monitoring feature, refer to the EMC XtremIO Storage Array User Guide.
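As a complement to monitors defined in the GUI, counters such as SSD endurance can also be polled externally. The following PowerShell sketch is a hypothetical illustration that reuses the $xms and $cred variables from the earlier REST example; the /api/json/types/ssds endpoint follows the same v1 REST conventions, and the percent-endurance-remaining property name is an assumption to verify against the RESTful API Guide:

   # List the SSD objects known to the XMS
   $ssdList = (Invoke-RestMethod -Uri "$xms/api/json/types/ssds" `
       -Credential $cred -Method Get).ssds

   foreach ($ssd in $ssdList) {
       # Each list entry carries an href to the full SSD object
       $detail = (Invoke-RestMethod -Uri $ssd.href -Credential $cred).content

       # Warn when remaining endurance drops below an illustrative threshold
       if ([int]$detail.'percent-endurance-remaining' -lt 10) {
           Write-Warning "$($detail.name): endurance at $($detail.'percent-endurance-remaining')%"
       }
   }

A sketch like this could run as a scheduled task alongside the GUI-defined monitors, turning the Endurance-Remaining-% parameter from Table 22 into a proactive alert.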
Appendix A: Reference Documentation

This appendix presents the following topics:
EMC documentation ............................................... 96
Other documentation ............................................. 96

EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
 EMC XtremIO Storage Array User Guide
 EMC XtremIO Storage Array Operations Guide
 EMC XtremIO Storage Array Site Preparation Guide
 EMC XtremIO Storage Array Security Configuration Guide
 EMC XtremIO Storage Array RESTful API Guide
 EMC XtremIO Storage Array Release Notes
 EMC XtremIO Simple Support Matrix
 EMC Host Connectivity with Q-Logic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Linux Environment
 EMC Host Connectivity with Emulex Fibre Channel and iSCSI HBAs and Converged Network Adapters (CNAs) for the Linux Environment
 EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment
 EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment
 EMC Host Connectivity with Q-Logic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Solaris Environment
 EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) for the Solaris Environment

Other documentation

The following documents, located on the Microsoft website, provide additional and relevant information:
 Adding Hyper-V Hosts and Host Clusters, and Scale-Out File Servers to VMM
 Configuring a Remote Instance of SQL Server for VMM
 Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager (video)
 Hardware and Software Requirements for Installing SQL Server 2012
 Hyper-V: How many network cards do I need?
 How to Add a Host Cluster to VMM
 How to Create a Virtual Machine Template
 How to Create a Virtual Machine with a Blank Virtual Hard Disk
 How to Deploy a Virtual Machine
 How to Install a VMM Management Server
 Hyper-V: Using Hyper-V and Failover Clustering
 Install SQL Server 2012
 Installing a VMM Agent Locally on a Host
 Installing the VMM Administrator Console
 Installing the VMM Server
 Installing Virtual Machine Manager
 Install and Deploy Windows Server 2012 R2 and Windows Server 2012
 Use Cluster Shared Volumes in a Failover Cluster
 Virtual Machine Live Migration Overview

Appendix B: Customer Configuration Worksheet

This appendix presents the following topic:
Customer configuration worksheet ................................ 99

Customer configuration worksheet

Before you start the configuration, gather customer-specific network and host configuration information. The following tables list the essential numbering, naming, and host address information required for assembling the network. This worksheet can also serve as a "leave behind" document for future reference.
Table 23. Common server information

Server name     Purpose                                    Primary IP address
_____________   Domain Controller                          _____________
_____________   DNS Primary                                _____________
_____________   DNS Secondary                              _____________
_____________   DHCP                                       _____________
_____________   NTP                                        _____________
_____________   SMTP                                       _____________
_____________   SNMP                                       _____________
_____________   System Center Virtual Machine Manager      _____________
_____________   SQL Server                                 _____________

Table 24. Hyper-V server information

Server name     Purpose          Primary IP address    Private net (storage) addresses    Live Migration IP address
_____________   Hyper-V Host 1   _____________         _____________                      _____________
_____________   Hyper-V Host 2   _____________         _____________                      _____________
…

Table 25. X-Brick information

Array name                              _____________
Admin account                           _____________
XtremIO Management Server IP            _____________
Storage Controller 1 management IP      _____________
Storage Controller 2 management IP      _____________
SC1 IPMI IP                             _____________
SC2 IPMI IP                             _____________
CSV (volume) name                       _____________
Block:
  FC WWPN                               _____________
  iSCSI IQN                             _____________
  iSCSI Server IP                       _____________

Table 26. Network infrastructure information

Name                Purpose         IP address      Subnet mask     Default gateway    VLAN ID    Allowed subnets
Ethernet Switch 1   ____________    ____________    ____________    ____________       _______    ____________
Ethernet Switch 2   ____________    ____________    ____________    ____________       _______    ____________
…

Table 27. VLAN information

Name            Network purpose
_____________   Virtual machine networking
_____________   Windows Management
_____________   iSCSI storage network
_____________   Live Migration
_____________   Storage Migration

Table 28. Service accounts

Account         Purpose                         Password (optional, secure appropriately)
_____________   Windows Server administrator    _____________
_____________   Array administrator             _____________
_____________   SCVMM administrator             _____________
_____________   SQL Server administrator        _____________

Appendix C: Server Resource Component Worksheet

This appendix presents the following topic:
Server resources component worksheet ........................... 102

Server resources component worksheet

Table 30 provides a blank worksheet to record the server resource totals.

Table 30. Blank worksheet for server resource totals

                                                 Server resources                 Storage resources
Application                                      CPU (virtual CPUs)  Memory (GB)  IOPS      Capacity (GB)   Reference virtual machines
Application 1: Resource requirements             ________            ________     ________  ________        N/A
Application 1: Equivalent reference VMs                                                                     ________
(Repeat the two rows above for each application.)
Total equivalent reference virtual machines                                                                 ________
Server customization: server component totals    ________            ________                               N/A
Storage customization: storage component totals                                   ________  ________        N/A
Storage component equivalent reference VMs                                                                  ________
Total equivalent reference virtual machines - storage                                                       ________
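The following hypothetical calculation shows how the worksheet rows combine. It assumes the reference virtual machine profile defined in the sizing chapters of this guide; the per-RVM figures used here (1 vCPU, 2 GB of memory, 25 IOPS, and 100 GB of capacity) are illustrative placeholders to be replaced with the values from those chapters. For an application that requires 8 vCPUs, 24 GB of memory, 150 IOPS, and 400 GB of capacity:

   CPU:       8 / 1    = 8 equivalent reference virtual machines
   Memory:    24 / 2   = 12 equivalent reference virtual machines
   IOPS:      150 / 25 = 6 equivalent reference virtual machines
   Capacity:  400 / 100 = 4 equivalent reference virtual machines

The equivalent reference virtual machines entry for the application is the largest of the four values, rounded up: 12 in this example. Repeat the calculation for each application and sum the results to obtain the total equivalent reference virtual machines for the deployment.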