Step by Step Configuration of a 2-Node Hyper-V Cluster in Windows Server 2012 R2

Material taken from my own testing as well as http://alexappleton. Although the features presented in Hyper-V Replica give you a great setup, there are many reasons to still want a failover cluster. This won't be a comparison between the benefits of Hyper-V Replica vs failover clustering; this will be a guide on configuring a Hyper-V cluster in Windows Server 2012 R2. Part one will cover the initial configuration and setup of the servers and storage appliance. The scope: a 2-node Hyper-V failover cluster with iSCSI shared storage for a small, scalable, highly available network.
Equipment: 2 x HP ProLiant DL360p Gen8 servers - 64GB RAM - 8 x 1Gb Ethernet NICs (4-port 331FLR adapter, 4-port 331T adapter) - 2 x 146GB 15K SAS drives. 1 x HP StorageWorks P2000
MSA - 17TB RAW storage.

Background: When sizing your environment you need to take into consideration how many VMs you are going to need. This specific environment only required four virtual machines to start with, so it didn't make sense to go with Datacenter. With versions of Windows Server prior to 2012 you needed Enterprise-level licensing or above; Standard didn't give you the option to add the failover clustering feature (even though you could go with the free Hyper-V Server edition, which did support failover clustering). This has changed in 2012.
No longer do you have to buy specific editions to get roles or features; all editions include the same feature set. However, when purchasing your server license you need to cost out your VM requirements. Server 2012 R2 Standard includes two virtual use licenses, while Datacenter includes unlimited; the free Hyper-V Server doesn't include any. Virtual use licenses are only allowed so long as the host server is not running any role other than Hyper-V. Because there is no difference in feature set, you can start off with Standard and look to move to Datacenter if you happen to scale out in the future. Although I see no purpose in changing editions, you can convert a Standard edition installation to Datacenter by entering the following command at the command prompt: dism /online /Set-Edition:ServerDatacenter /ProductKey:48HP8-DN98B-MYWDG-T2DCC-8W83P /AcceptEula
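
Before converting, it can be worth checking what DISM reports as the current edition and which editions it will accept as a target. A minimal sketch, run from an elevated prompt, using standard DISM servicing options:

```powershell
# Show the edition currently installed and the editions this install can be upgraded to,
# before running the /Set-Edition command shown above.
dism /online /Get-CurrentEdition
dism /online /Get-TargetEditions
```
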
I have found issues when trying to use a volume license key with the above dism command. The key above is a well-documented key, which always works for me. After the upgrade is completed I enter my MAK or KMS key to activate the server, since the key above will only give you a trial.

The next thing you are going to need to determine is whether or not you want to go with GUI or non-GUI (Core). Again, thankfully Microsoft has given us the option to switch between both versions with a PowerShell entry, so you don't need to stress over which one:

To go Core: Get-WindowsFeature *Gui* | Uninstall-WindowsFeature -Restart

To go GUI: Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Install-WindowsFeature -Restart

Get Started: Install your Windows operating system on each of the nodes, but don't add any features or roles just yet. We will do that at a later stage.
Each server has a total of 8 NICs, and they will be used as follows:

1 – dedicated for management of the nodes, and heartbeat
1 – dedicated for Hyper-V live migration
2 – to connect to the shared storage appliance directly
4 – for virtual machine network connections

We are going to use multipath I/O to connect to the shared storage appliance. Of the NICs dedicated to the VMs we will create a team for redundancy. Always keep redundancy in mind: we have two 4-port adapters, so we will use one NIC from each for SAN connectivity, and when creating a team we will also use one NIC from each of the adapters.
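
With eight ports per host it is easy to lose track of which NIC does what, so before wiring anything up it helps to rename the adapters by role. A minimal PowerShell sketch; the -Name values are hypothetical examples, so match them against what Get-NetAdapter actually reports on your hosts:

```powershell
# List the physical adapters so you can map each port to a role.
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed

# Rename by role so the later iSCSI, live migration and teaming steps are less confusing.
# The original names ("Ethernet", "Ethernet 2", ...) are placeholders for your own ports.
Rename-NetAdapter -Name "Ethernet"   -NewName "Management"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMigration"
Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI-A"   # one port on the 331FLR
Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-B"   # one port on the 331T
```
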
The P2000 MSA has two controller cards, with four 1Gb Ethernet ports on each controller. We will connect the controllers as follows: two iSCSI host ports will connect to the dedicated NICs on each of the Hyper-V hosts. Use CAT6 cables for this, since they are certified for 1Gbps network traffic. Try to keep redundancy in mind here as well, so connect one port from one controller card to a single NIC port on the 331FLR, and a port from the second controller card to a single NIC port on the 331T. On our Hyper-V nodes we are going to have to configure the connecting Ethernet adapters with the subnet that corresponds to the SAN; I tend to use 1… When configuring your server adapters, be sure to uncheck the option to register this connection's addresses in DNS, so you don't end up populating your DNS database with errant entries for your host servers. From each server, ping the host interfaces to ensure connectivity.
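
As a rough sketch of that adapter configuration in PowerShell: the interface names follow the placeholder names used earlier, and the 10.10.x.x addresses are stand-ins, since the article's own SAN subnet isn't spelled out; substitute whatever range your MSA host ports use.

```powershell
# Static addresses on the iSCSI subnets for the two SAN-facing adapters (example values only).
New-NetIPAddress -InterfaceAlias "iSCSI-A" -IPAddress 10.10.10.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-B" -IPAddress 10.10.11.11 -PrefixLength 24

# Keep these interfaces out of DNS so the SAN addresses don't pollute the zone.
Set-DnsClient -InterfaceAlias "iSCSI-A","iSCSI-B" -RegisterThisConnectionsAddress $false

# Basic connectivity check against the MSA controller host ports (replace with your port IPs).
Test-Connection 10.10.10.1, 10.10.11.1 -Count 2
```
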
HP used to ship a network configuration utility with their Windows Servers; this is not yet supported in Windows Server 2012 R2. The NICs I am using are all Broadcom, and a quick look at Broadcom's website led me to their Windows management application, BACS. This utility allows you to fine-tune the network adapter settings; what we need it for is to hard-set the MTU on the adapters connecting to the SAN to 9000. There is a netsh command that will do it as well, but I found it to be unreliable when testing and it rarely stuck. Download and install the Broadcom Management Applications Installer on each of your Hyper-V nodes. Once installed, there should be a management application called Broadcom Advanced Control Suite. This is where we want to set the jumbo frame MTU to 9000.
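
If you prefer to script it, or want a cross-check on what BACS configured, the jumbo frame setting can usually also be applied through the adapter's advanced properties in PowerShell. This is only a sketch: the registry keyword and accepted value (9000 vs 9014 including headers) vary by driver, and the adapter names are the placeholders used earlier.

```powershell
# Set the jumbo packet size on the SAN-facing adapters (keyword/value depend on the driver).
Set-NetAdapterAdvancedProperty -Name "iSCSI-A","iSCSI-B" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify end to end by sending a large packet with "don't fragment" set to a SAN port IP
# (the address below is a placeholder).
ping 10.10.10.1 -f -l 8500
```
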
This management application does run on the non-GUI version of Windows Server, and you can also use it to connect to remote hosts. You need to make sure you have the right adapter here, and if you are dealing with 8 NICs like I am this can get confusing, so take your time. Luckily, you can see the configuration of the NIC in the application's window. Verify connectivity to the SAN after you set the MTU by sending a large packet when pinging the associated IP addresses of the SAN ports, as in the ping example above. If you don't get a successful reply here, revisit your settings until you get it right.

Network Teaming. You could create a network team in the Broadcom utility as well; however, in testing I ran into issues with it. The team created fine, but didn't initialize on one server, and removing the errant team proved to be a major hassle. Windows Server 2012 R2 includes a native NIC teaming function, so I prefer to configure the team on the server directly using the Windows configuration. Again, since I am dealing with two different network cards, I typically create a team using one NIC port from each card on the server.
The new NIC teaming management interface can be invoked through Server Manager, or by running lbfoadmin.exe. To create a new team, highlight the NICs involved by holding Ctrl while clicking on each. Once highlighted, right-click the group and choose the option "Add to New Team". This will bring up the new team dialog. Enter a name that will be used for the team, and try to stay consistent across your nodes here, so remember the name you use; I typically go with "Hyper-V External#".

We have three additional options under "Additional properties". Teaming mode is typically set to Switch Independent; using this mode you don't have to worry about configuring your network switches. As the name implies, the NICs can be plugged into different switches, and so long as they have a link light they will work in the team. Static Teaming requires you to configure the network switch as well. Finally, LACP is based on link aggregation, which requires a switch that supports this feature. The benefit of LACP is that you can dynamically reconfigure the team by adding or removing individual NICs without losing network communication on the team. Load balancing mode should be set to Hyper-V switch port. Virtual machines in Hyper-V have their own unique MAC addresses that are different from the physical adapter's; when load balancing mode is set to Hyper-V switch port, VM traffic is distributed across the teamed NICs on a per-virtual-switch-port (effectively per-VM) basis.
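
The same team can also be built from PowerShell, which is handy on Core installs and for keeping both nodes identical. A minimal sketch, assuming the team name from the text and two hypothetical member NIC names (use the names of the VM-facing ports on your own hosts):

```powershell
# Switch-independent team with Hyper-V port load balancing, one member port from each card.
New-NetLbfoTeam -Name "Hyper-V External1" `
    -TeamMembers "VM-A","VM-B" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort

# Confirm the team and its members came up on both nodes.
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "Hyper-V External1"
```
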