Windows Server 2016 Hyper-Converged Solution - Virtual Machines and Software Defined Storage on the Same Cluster

Windows Server 2016 Technical Preview introduces Storage Spaces Direct, which enables building highly available (HA) storage systems with local storage. This is a significant step forward in Microsoft Windows Server software-defined storage (SDS), as it simplifies the deployment and management of SDS systems and also unlocks the use of new classes of disk devices, such as SATA and NVMe disk devices, that were previously not possible with clustered Storage Spaces with shared disks.

Windows Server 2016 provides a hyper-converged solution by allowing the same set of servers to provide SDS, through Storage Spaces Direct (S2D), and serve as the hosts for virtual machines using Hyper-V.

How to Use this Guide

This document provides both an introductory overview and specific standalone examples of how to deploy a Hyper-Converged Solution with Storage Spaces Direct. Before taking any action, it is recommended that you do a quick read-through of this document to familiarize yourself with the overall approach, get a sense for the important Notes associated with some steps, and acquaint yourself with the additional supporting resources and documentation.

Hyper-converged Solution with Software Defined Storage Overview

In the hyper-converged configuration described in this guide, Storage Spaces Direct seamlessly integrates with the features you know today that make up the Windows Server software defined storage stack, including the Cluster Shared Volume File System (CSVFS), Storage Spaces, and Failover Clustering.

The hyper-converged deployment scenario has the Hyper-V (compute) and Storage Spaces Direct (storage) components on the same cluster. Virtual machine files are stored on local CSVs. This allows the Hyper-V compute cluster to be scaled together with the storage it is using. Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V is the same process and uses the same tools that you would use with any other Hyper-V deployment on a failover cluster. Figure 5 illustrates the hyper-converged deployment scenario.

FIGURE 5: Hyper-converged – the same cluster is configured for Storage Spaces Direct and the hosting of virtual machines

Hardware requirements

We are working with our hardware partners to define and validate specific hardware configurations, including SAS HBAs, SATA SSDs and HDDs, RDMA-enabled network adapters, and so on, to ensure a good user experience. You should contact your hardware vendors for the solutions that they have verified are compatible for use with Storage Spaces Direct.

If you would like to evaluate Storage Spaces Direct in Windows Server 2016 Technical Preview without investing in hardware, you can use Hyper-V virtual machines; see Testing Storage Spaces Direct using Windows Server 2016 virtual machines. For more information about hardware options, see Hardware options for evaluating Storage Spaces Direct in Technical Preview 4.

Note: Storage Spaces Direct does not support disks connected via multiple paths or the Microsoft Multipath I/O (MPIO) software stack.
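As a quick sanity check before going further, you can list the physical disks on each node and confirm that they are locally attached (SAS, SATA, or NVMe bus types) and eligible for pooling. A minimal check sketch; the output will depend on your hardware:

# Hedged sketch: list physical disks with their bus type and whether they can be pooled by Storage Spaces Direct
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, BusType, MediaType, CanPool, Size -AutoSize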
Example Hardware for this Guide

For simplicity, this guide references a specific set of hardware that we were able to test. This is for example purposes only, and most of the steps are not specific to hardware; where something is specific to hardware, it will be noted. There are many hardware vendors with solutions that are compatible with the hyper-converged system described in this guide, and this hardware example does not indicate a preference over other systems or hardware vendors. Due to limited resources and the time constraints of TP5, we can offer detailed guidance only for a specific subset of tested hardware configurations at this time.

Server: Dell 730XD
- BIOS: 1.5.54

HBA: Dell HBA330
- Firmware: 9.17.20.07 A00

Network interfaces: Mellanox ConnectX-3 Pro (dual port 10 Gb, SFP+) for RoCEv2 networks
- Firmware: 2.34.50.60 or newer

Top of Rack (TOR) switch: Cisco Nexus 3132
- BIOS version: 1.7.0

Information Gathering

The following information is needed as input to configure, provision, and manage the hyper-converged system, so having it on hand when you start will speed up the process:

- Server names – you should be familiar with your organization's naming policies for computers, files, paths, and other resources, as you will be provisioning several servers with Nano installations and each will need a unique server name.
- Domain name – you will be joining computers to your domain, and you will need to specify the domain name. It would be good to familiarize yourself with your internal domain naming and domain-joining policies.
- Administrator password for the new servers – when the Nano images are created, the command to create the images will prompt you to input the password for the local administrator account.
- For RDMA configurations:
  o Top of Rack switch make/model
  o Network adapter make/model. There are two types of RDMA protocols; note which type your RDMA adapter uses (RoCEv2 or iWARP).
  o VLAN IDs to be used for the two network interfaces used by the management OS on the hyper-converged hosts. You should be able to obtain these from your network administrator.

Nano or Full/Core Install Options

Hyper-converged deployments can be done using either Nano Server or Full/Core installations of Windows Server 2016 Technical Preview. Nano is a new install type for Windows Server 2016; see this link for more information on the advantages of using Nano and on deploying and managing Nano Server.

This guide focuses on deploying hyper-converged systems using Nano Server, and the "Deploy the operating system" section is a step-by-step method of deploying Nano Server. However, the steps in the "Configure the Network" and "Configure Storage Spaces Direct" sections are identical whether you are using Nano, Full, or Core installations.

For Full and Core installations, instead of following the "Deploy the operating system" section in this guide, you can deploy Windows Server 2016 Datacenter as you would for any other failover cluster deployment. This includes joining the servers to an Active Directory domain and installing the Hyper-V role and the Failover Clustering feature, and, if you are using RoCE RDMA devices, the "Data Center Bridging" feature (see the sketch below). Nano Server installations require all management to be done remotely, except for what can be done through the Nano Recovery Console. On Full and Core installations you can use the remote management steps in this guide, or in some cases you can log into the servers and run the commands and management tools locally.
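For Full/Core installations, the roles and features above can be added with a single command. A minimal sketch; Data Center Bridging is only needed when, as with the example hardware in this guide, RoCE RDMA adapters are used:

# Hedged sketch (Full/Core installs only): install the roles and features this guide relies on
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging -IncludeManagementTools -Restart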
Nano: Installing and configuring a Hyper-Converged System

This section includes instructions to install and configure the components of a hyper-converged system using Windows Server 2016 Technical Preview with a Nano Server configuration of the operating system.

The act of deploying a hyper-converged system can be divided into three high-level phases:
1. Deploy the operating system
2. Configure the network
3. Configure Storage Spaces Direct

Figure 6 illustrates the process for building a hyper-converged solution using Windows Server 2016 Technical Preview.

Figure 6: Process for building a hyper-converged solution using Windows Server 2016 Technical Preview.

You can tackle these steps a few at a time or all at once, but they do need to be completed in the order shown in Figure 6. After describing some prerequisites and terminology, we will describe each of the three phases in more detail and provide examples.

Important: This preview release should not be used in production environments.

Prerequisites and Terminology

The provisioning and deployment process for a Windows Server Nano server involves specific steps that include:
- Creating a bootable .VHDX file for each Nano server
- Copying the bootable .VHDX files to a physical host and configuring the host to boot from the .VHDX files
- Remotely managing the newly deployed host machines running Nano Server

Note: The Image creation machine and the Management machine (defined below) can be the same machine. The critical factor is that the machine from which you are managing must be the same version (or higher) as the Nano servers being managed. For the Windows Server 2016 Technical Preview 5 evaluation, we recommend that your Management machine run WS2016 TP5 so you will be able to efficiently manage the Nano Servers (which are also running TP5).

1. Image creation machine. The instructions in this guide include creating bootable Nano .VHDX files for each server. It's a simple process, but you will need a system (Windows 10 or Windows Server 2012 R2 or later) where you can use PowerShell to create and temporarily store the .VHDX files that will be copied to the servers. The cmdlet modules used to create the images are imported from the Windows Server 2016 preview ISO; the instructions below have details on this process.

2. Management machine. For the purposes of this document, the machine that has the management tools to remotely manage the Nano servers will be referred to as the Management system. The Management system machine has the following requirements:
a. Running Windows Server 2016 Technical Preview 5, domain joined to the same domain as the Nano systems, or to a fully trusted domain.
b. Remote Server Administration Tools (RSAT) and PowerShell modules for Hyper-V and Failover Clustering. RSAT tools and PowerShell modules are available on Windows Server 2016 and can be installed without installing other features. They are also available by installing the RSAT package for Windows clients. (A scripted sketch follows this list.)
c. The Management system can run inside a virtual machine or on a physical machine.
d. Requires network connectivity to the Nano servers.

3. Host machines. In the example below, the expectation is that you start with physical machines that are booted into a Windows Server operating system (Full or Core). We will copy the VHDX files to the Host machines and then reboot into the Nano operating system contained in the VHDX files. Booting from a VHDX file is the method of deployment outlined in this guide; other methods of deploying VHDX boot files can also be used.
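A minimal sketch of installing the management tools mentioned in item 2.b on a Windows Server 2016 Management system (these are the in-box RSAT feature names; on a Windows client you would install the RSAT package instead):

# Hedged sketch: install the Failover Clustering and Hyper-V management tools on the Management system
Install-WindowsFeature -Name RSAT-Clustering, RSAT-Hyper-V-Tools -IncludeAllSubFeature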
Deploy the operating system

Deploying the operating system is composed of the following tasks:
1. Acquire an ISO image of Windows Server 2016 TP5
2. Use the ISO and PowerShell to create the new Nano Server images
3. Copy the new Nano Server images to the Host machines
4. Reboot into the new Nano Server image
5. Connect to and manage the Nano Servers from the Management system machine

Complete the steps below to create and deploy Nano Server as the operating system on your Host machines in a hyper-converged system.

Note: The "Getting Started with Nano Server" guide has many more examples and detailed explanations of how to deploy and manage a Nano server. The instructions below are solely intended to illustrate one of many possible deployments; you need to find an approach that fits your organization's needs and situation.

Acquire an ISO image of Windows Server 2016 TP5 Datacenter

Download a copy of the Datacenter ISO from <link to Technet> to your Image creation machine and note the path.

Use the ISO and PowerShell to Create the new Nano Server Images

There are other methods to deploy Nano, but in the case of this example we'll provide the set of steps below. If you want to learn more about creating and managing different kinds of Nano deployments or images, see the "Getting Started with Nano Server" guide, starting in the section "To quickly deploy Nano Server on a physical server".

Note: If your deployment isn't using a RoCEv2 RDMA adapter, you can remove the "-Packages Microsoft-NanoServer-DCB-Package" parameter from the PowerShell cmdlet string below. The example hardware for this guide does use RoCEv2 RDMA adapters and Data Center Bridging, so the DCB package is included in the example.

Note: If you are going to manage the servers with System Center, add the following items in the "-Packages" section of the "New-NanoServerImage" command:
Microsoft-NanoServer-SCVMM-Package
Microsoft-NanoServer-SCVMM-Compute-Package

Note: If you have drivers that are recommended by your hardware vendor, it is simplest to inject the network drivers into the image during the "New-NanoServerImage" step below. If you don't, you may be able to use the in-box drivers via the -OEMDrivers parameter in the "New-NanoServerImage" command, and then update the drivers using Windows Update after deployment. It is important to have the drivers that your hardware vendor recommends, so that the networks provide the best reliability and performance possible.

1. On the Image creation machine, mount the Windows Server Technical Preview .ISO. To mount the ISO, in File Explorer select and right-click the ISO, then choose Mount. Once the mounted drive is opened, navigate to the \NanoServer\NanoServerImageGenerator directory and copy the contents to your desired working folder on your Image creation machine, where you want to create and store your new Nano Server images. In this example, the NanoServerImageGenerator directory will be copied to:
C:\NanoBuild\NanoBuildScripts

2. Start Windows PowerShell as an administrator, change directory to the working folder where you copied the "NanoServerImageGenerator" contents, and run the following command to import the Nano Server Image Generator PowerShell module. This module will enable you to create the new Nano Server images.
Import-Module .\NanoServerImageGenerator -Verbose

3. Copy network drivers to a directory and note the path. The example in the next step will use C:\WS2016TP5_Drivers.

4. Before using the following PowerShell commands to create the new Nano Server images, please read the rest of this section to get an overview of the entire task.
Some features need specific packages to be specified in the "New-NanoServerImage" command below. In this step, you will create a unique image for each Host machine. We need four images, one for each physical host in the hyper-converged setup.

Creating each Nano Server image can take several minutes depending on the size of the drivers and other packages being included. It is not unusual for a large image to take 30 minutes to complete the creation process.

- Create the images one at a time. Because of possible file collisions, we recommend creating the images one at a time.
- You will be prompted to input a password for the Administrator account of each new Nano Server. Type carefully and note your password for later use; you will use these passwords later to log into the new Nano Servers.
- You will need the following information (at a minimum):
  o MediaPath: the path to the mounted Windows Server Preview ISO. It will usually be something like D:\
  o TargetPath: the path where the resulting .VHDX file will be located. NOTE: this path needs to exist before running the New-NanoServerImage cmdlet.
  o ComputerName: the name that the Nano server will use and be accessed by.
  o DomainName: the fully qualified name of the domain that your server will join.
  o DriversPath: the folder location where the expanded drivers that you want to inject into the image are kept.
  o Other options: if you want a richer understanding of all the input parameters associated with New-NanoServerImage, you can learn more from the "Getting Started with Nano Server" guide.

New-NanoServerImage -MediaPath <MediaPath> -TargetPath <TargetPath> -ComputerName <ComputerName> -Compute -Storage -Clustering -DomainName <DomainName> -OEMDrivers -DeploymentType Host -Edition Datacenter -EnableRemoteManagementPort -ReuseDomainNode -DriversPath <DriversPath> -Packages Microsoft-NanoServer-DCB-Package

The following is an example of how you can execute the same thing in a script:

# Example definition of variable names and values
$myNanoServerName = "myComputer-1"
$myNanoImagePath = ".\Nano\NanoServerPhysical"
$myNanoServerVHDXname = "myComputer-1.VHDX"
$myDomainFQDN = ""   # set to your domain's fully qualified name
$MediaPath = "d:\"
$myDriversPath = "C:\WS2016TP5_Drivers"

New-NanoServerImage -MediaPath d:\ -TargetPath "$myNanoImagePath\$myNanoServerVHDXname" -ComputerName $myNanoServerName -Compute -Storage -Clustering -DomainName $myDomainFQDN -OEMDrivers -DeploymentType Host -Edition Datacenter -EnableRemoteManagementPort -ReuseDomainNode -DriversPath $myDriversPath -Packages Microsoft-NanoServer-DCB-Package

When you complete this task, you should have one VHDX file for each of the four hyper-converged systems that you are provisioning. (A sketch that loops over all four hosts follows the package list below.)

Other packages that you may want to include:

Desired State Configuration (an example feature that requires this is the Software Defined Network feature):
Microsoft-NanoServer-DSC-Package

Shielded VM:
Microsoft-NanoServer-SecureStartup-Package
Microsoft-NanoServer-ShieldedVM-Package

Managing Nano with System Center Virtual Machine Manager or Operations Manager:
Microsoft-NanoServer-SCVMM-Package
Microsoft-NanoServer-SCVMM-Compute-Package
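The image creation steps above can also be scripted end to end. The following is a minimal sketch, assuming the ISO path, working folder, image path, host names, and domain name are placeholders you replace with your own values; it simply wraps the New-NanoServerImage command from this section in a loop, one image per host:

# Hedged sketch: create one Nano Server image per hyper-converged host (all names and paths are examples)
$isoPath     = 'C:\ISOs\WindowsServer2016TP5.iso'
$workDir     = 'C:\NanoBuild\NanoBuildScripts'
$imagePath   = '.\Nano\NanoServerPhysical'    # must exist before running New-NanoServerImage
$driversPath = 'C:\WS2016TP5_Drivers'
$domainFQDN  = ''                             # set to your domain's fully qualified name
$hostNames   = 'myComputer-1','myComputer-2','myComputer-3','myComputer-4'

# Mount the ISO and import the image generator module copied from \NanoServer\NanoServerImageGenerator
$drive = (Mount-DiskImage -ImagePath $isoPath -PassThru | Get-Volume).DriveLetter
Import-Module "$workDir\NanoServerImageGenerator" -Verbose

# Create the images one at a time; each run prompts for the local Administrator password
foreach ($name in $hostNames) {
    New-NanoServerImage -MediaPath "${drive}:\" -TargetPath "$imagePath\$name.VHDX" -ComputerName $name `
        -Compute -Storage -Clustering -DomainName $domainFQDN -OEMDrivers -DeploymentType Host `
        -Edition Datacenter -EnableRemoteManagementPort -ReuseDomainNode -DriversPath $driversPath `
        -Packages Microsoft-NanoServer-DCB-Package
}
Dismount-DiskImage -ImagePath $isoPath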
Copy the new Nano Server images to the Host machines

The tasks in this section assume that the servers that will be used for the hyper-converged system (Host machines) are booted into a Windows Server operating system and are accessible on the network.

1. Log in as an Administrator on the Host machines that will be the nodes of the hyper-converged system.

2. Copy the VHDX files that you created earlier to each respective Host machine, and configure each Host machine to boot from the new VHDX using the following steps (a scripted sketch follows this list):
- Mount the VHDX. If you are using Windows Explorer, the mount is accomplished by right-clicking on the VHDX file and selecting "Mount". Note: in this example it is mounted under D:\
- Open a PowerShell console with Administrator privileges.
- Change the prompt to the "Windows" directory of the mounted VHD. In this example the command would be:
cd d:\windows
- Enable booting to the VHDX:
Bcdboot.exe d:\windows
- Unmount the VHD. If you are using Windows Explorer, the unmount is accomplished by right-clicking on the drive letter in the left-hand navigation pane and selecting "Eject". THIS STEP IS IMPORTANT: THE SYSTEM MAY HAVE ISSUES BOOTING IF YOU DON'T UNMOUNT THE VHDX.
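The mount/bcdboot/unmount sequence can also be done entirely from PowerShell. A minimal sketch, assuming the VHDX path shown is an example and that the mounted image exposes a single lettered volume:

# Hedged sketch: configure a Host machine to boot from the copied Nano Server VHDX (path is an example)
$vhdx = 'C:\NanoServer\myComputer-1.VHDX'
Mount-DiskImage -ImagePath $vhdx
$diskNumber = (Get-DiskImage -ImagePath $vhdx).Number
$drive = (Get-Partition -DiskNumber $diskNumber | Get-Volume | Where-Object DriveLetter).DriveLetter
bcdboot.exe "${drive}:\Windows"             # add a boot entry pointing into the mounted VHDX
Dismount-DiskImage -ImagePath $vhdx         # important: dismount before rebooting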
Reboot into the new Nano Server image

1. Reboot the Host machines. They will automatically boot into the new Nano Server VHDX images.

2. Log into the Nano Recovery Console: after the Host machines have booted, they will show a logon screen for the Nano Server Recovery Console (see the "Nano Server Recovery Console" section in the Nano guide). Enter "Administrator" for the User Name and enter the password you specified earlier when creating the new Nano Server images. For the Domain field, you can leave this blank or enter the computer name of your Nano server.

3. Acquire the IP address of the Nano Server: you will use these IP addresses to connect to the Nano Servers in the next section, so it's suggested that you write them down or note them somewhere.
Steps to acquire the IP address in the Nano Recovery Console:
i. Select Networking, then press Enter.
ii. Identify from the network adapter list the one that is being used to connect to the system to manage it. If you aren't sure which one, look at each of them and identify the addresses.
iii. Select your Ethernet adapter, then press Enter.
iv. Note your IPv4 address for later use.
Note: While you are in the Nano Recovery Console, you may also specify static IP addresses at this time for networks if DHCP is not available.

Connecting to and managing the Nano Servers from the Management system machine

You will need a Management system machine that has the same build of Windows Server 2016 to manage and configure your Nano deployment. Remote Server Administration Tools (RSAT) for Windows Server 2016 on Windows 10 clients is not suggested for this scenario, since some of the Windows 10 storage APIs may not yet be updated to be fully compatible at the time of this preview release.

1. On the Management system, install the Failover Cluster and Hyper-V management tools. This can be done through Server Manager using the "Add Roles and Features" wizard. On the "Features" page, select "Remote Server Administration Tools" and then select the tools to install.

2. On the Management system machine, configure TrustedHosts; this is a one-time configuration on the Management system machine. Open a PowerShell console with Administrator privileges and configure the trusted hosts to include all hosts, for example:
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*"
After the one-time configuration above, you will not need to repeat Set-Item. However, each time you close and reopen the PowerShell console, you should establish a new remote PS session to the Nano Server by running the commands below.

3. Enter the PS session, using either the Nano Server name or the IP address that you acquired from the Recovery Console earlier in this document. You will be prompted for a password after you execute this command; enter the administrator password you specified when creating the Nano VHDX.
Enter-PSSession -ComputerName <myComputerName> -Credential LocalHost\Administrator
Examples of doing the same thing in a way that is more useful in scripts, in case you need to do this more than once:
Example 1: using an IP address:
$ip = "10.100.0.1"
$user = "$ip\Administrator"
Enter-PSSession -ComputerName $ip -Credential $user
Example 2: you can do something similar with the computer name instead of the IP address:
$myNanoServer1 = "myNanoServer-1"
$user = "$myNanoServer1\Administrator"
Enter-PSSession -ComputerName $myNanoServer1 -Credential $user

Adding domain accounts

So far this guide has had you deploying and configuring individual nodes with the local administrator account, <ComputerName>\Administrator. Managing a hyper-converged system, including the cluster, storage, and virtualization components, often requires using a domain account that is in the Administrators group on each node.

The following steps are done from the Management system. For each server of the hyper-converged system, use a PowerShell console that was opened with Administrator privileges and, in a PSSession, issue the following command to add your domain account(s) to the Administrators local security group. See the section above for information about how to connect to the Nano systems using PSSession.
net localgroup Administrators <Domain\Account> /add
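Because the same command must be run on every node, it can be convenient to script it from the Management system. A minimal sketch, assuming the node names and the domain group (CONTOSO\HCIAdmins) are placeholders for your own values:

# Hedged sketch: add a domain account or group to the local Administrators group on each node
$nodes = 'myComputer-1','myComputer-2','myComputer-3','myComputer-4'   # example node names
$cred  = Get-Credential                                                # local Administrator of the Nano servers
foreach ($node in $nodes) {
    Invoke-Command -ComputerName $node -Credential $cred -ScriptBlock {
        net localgroup Administrators 'CONTOSO\HCIAdmins' /add         # example domain group
    }
}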
Network Configuration

The following assumes two RDMA NIC ports (one dual-port adapter, or two single-port adapters). In order to deploy Storage Spaces Direct, the Hyper-V switch must be deployed with RDMA-enabled host virtual NICs. Complete the following steps to configure the network on each server.

Note: Skip this Network Configuration section if you are testing Storage Spaces Direct inside of virtual machines. RDMA is not available for networking inside a virtual machine.

Configure the Top of Rack (TOR) Switch

Our example configuration uses a network adapter that implements RDMA using RoCEv2. Network QoS for this type of RDMA requires that the TOR switch have specific capabilities set on the network ports that the NICs are connected to.

Enable Network Quality of Service (Network QoS)

Network QoS is used in this hyper-converged configuration to ensure that the software defined storage system has enough bandwidth to communicate between the nodes, ensuring resiliency and performance. Do the following steps from a management system, using Enter-PSSession to connect to each of the servers.

Note: For Windows Server 2016 Technical Preview, there are multiple vendors supporting these RDMA network capabilities. Check with your network interface card vendor to verify which of their products support hyper-converged RDMA networking in Technical Preview 5.

1. Set a network QoS policy for SMB Direct, which is the protocol that the software defined storage system uses.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
The output should look something like this:
Name : SMB
Owner : Group Policy (Machine)
NetworkProfile : All
Precedence : 127
JobObject :
NetDirectPort : 445
PriorityValue : 3

2. Turn on flow control for SMB:
Enable-NetQosFlowControl -Priority 3

3. Disable flow control for other traffic:
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

4. Get a list of the network adapters to identify the target adapters (the RDMA adapters):
Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed
The output should look something like the following. The Mellanox ConnectX-3 Pro adapters are the RDMA network adapters and are the only ones connected to a switch in this example configuration.
[MachineName]: PS C:\Users\User\Documents> Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed
Name InterfaceDescription Status LinkSpeed
---- -------------------- ------ ---------
NIC3 QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #46 Disconnected 0 bps
Ethernet 2 Mellanox ConnectX-3 Pro Ethernet Adapter #2 Up 10 Gbps
SLOT # Mellanox ConnectX-3 Pro Ethernet Adapter Up 10 Gbps
NIC4 QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #47 Disconnected 0 bps
NIC1 QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #44 Disconnected 0 bps
NIC2 QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #45 Disconnected 0 bps

5. Apply the network QoS policy to the target adapters (the RDMA adapters). Use the "Name" of the target adapters for -InterfaceAlias in the following example:
Enable-NetAdapterQos -InterfaceAlias "<adapter1>","<adapter2>"
Using the example above, the command would look like this:
Enable-NetAdapterQos -InterfaceAlias "Ethernet 2","SLOT #"

6. Create a traffic class and give SMB Direct a minimum of 30% of the bandwidth. The name of the class will be "SMB":
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS

Create a Hyper-V Virtual Switch with SET and RDMA vNIC

The Hyper-V virtual switch allows the physical NIC ports to be used for both the host and virtual machines, and enables RDMA from the host, which allows for more throughput, lower latency, and less system (CPU) impact. The physical network interfaces are teamed using the Switch Embedded Teaming (SET) feature that is new in Windows Server 2016.

Do the following steps from a management system, using Enter-PSSession to connect to each of the servers.

1. Identify the network adapters (you will use this information in step 2):
Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed
The output is the same adapter list shown in the previous section; the Mellanox ConnectX-3 Pro adapters ("Ethernet 2" and "SLOT #") are the RDMA adapters to use.

2. Create the virtual switch connected to both of the physical network adapters, and enable Switch Embedded Teaming (SET). You may notice a message that your PSSession lost connection; this is expected, and your session will reconnect.
New-VMSwitch -Name SETswitch -NetAdapterName "<adapter1>","<adapter2>" -EnableEmbeddedTeaming $true
Using the Get-NetAdapter example above, the command would look like this:
New-VMSwitch -Name SETswitch -NetAdapterName "Ethernet 2","SLOT #" -EnableEmbeddedTeaming $true
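The Network Configuration introduction above notes that the Hyper-V switch must be deployed with RDMA-enabled host virtual NICs. A hedged sketch of that step, assuming the vNIC names (SMB_1, SMB_2) and the VLAN ID are examples to be replaced with the values you gathered from your network administrator:

# Hedged sketch: add host vNICs to the SET switch and enable RDMA on them (names and VLAN ID are examples)
Add-VMNetworkAdapter -ManagementOS -Name SMB_1 -SwitchName SETswitch
Add-VMNetworkAdapter -ManagementOS -Name SMB_2 -SwitchName SETswitch
# Tag each storage vNIC with the VLAN supplied by your network administrator (example VLAN ID: 5)
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB_1 -ManagementOS -Access -VlanId 5
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB_2 -ManagementOS -Access -VlanId 5
# Enable RDMA on the host vNICs (their adapter names follow the "vEthernet (<vNIC name>)" pattern)
Enable-NetAdapterRdma -Name 'vEthernet (SMB_1)','vEthernet (SMB_2)'
# Verify that RDMA is enabled on the host vNICs
Get-NetAdapterRdma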
