Monthly Archives: March 2011

Fix Windows 7 And Windows Server 2008 R2 Random Freezes

An issue has been found where a computer that is running Windows Server 2008 R2 or Windows 7 stops responding and freezes randomly for no reason (KB2265716). Applications or services that are running on the computer stop working correctly. Additionally, you cannot log on to the computer by using the Remote Desktop Connection utility.

This issue occurs because of a deadlock condition between the Lsass.exe process, the Redirected Drive Buffering Subsystem (Rdbss.sys) driver, and the Winsock kernel. A supported hotfix (Fix322397) is available from Microsoft.

The hotfix might receive additional testing; if you are not severely affected by this problem, Microsoft recommends that you wait for the next software update that contains the final fix. To apply the hotfix, your computer must be running Windows Server 2008 R2 or Windows 7, and you must restart the computer after applying it.

The same fix will also be provided with Windows 7 Service Pack 1.

 Download Hotfix Fix322397 to fix Windows 7 and Windows Server 2008 R2 Random Freezes


Performing a Bare Metal Restore with DPM 2010

Found this great howto for Microsoft DPM 2010.

So, you’ve protected your server with BMR using DPM and now you need to restore. What’s next?

Well, I’ve been playing with the new BMR functionality on DPM 2010 for a while now, and while it is very slick in how it operates, there seems to be some confusion about exactly how to use it. Granted, creating a BMR backup of a server is quite simple; restoring requires a little bit of extra work outside the scope of DPM.

I’m not here to talk about what BMR is or when to use it instead of system state as that information is covered on the DPM TechNet Site. I’m more concerned with restoring a BMR backup to a new system.

Since the system in question is theoretically completely down (due to corruption of the operating system) or even brand new (in the case of hardware replacement), I have discovered that the easiest way to restore these systems is by booting to the Windows media (Windows Server 2008 or Windows Server 2008 R2 DVD) and initiating the recovery from there.

This will break down into two steps. Step one is done on the DPM server and step two will be performed on the system where the restore will take place. It looks like a lot of steps, but really it’s quite simple so give it a shot some time! Test it out to be sure you understand before you are hit with the need for it and don’t know how to get it to work.

Step 1 – Recover contents of WindowsImageBackup folder (the easy part):

1. In the DPM administrator console, navigate to the Recovery tab and navigate through the Protected data to the computer you need to restore and find the Bare Metal Recovery option:

2. Select the date and time and then select Recover…The Recovery Wizard should start:

3. Recover this data to a network share:

4. Select a destination for the copy. Note the space required in my case is 13.22 GB. There MUST be enough free space at the destination for the copy to be successful:

In this case I chose the E drive:
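Since the wizard won’t proceed without room at the destination, you can sanity-check free space ahead of time. Here is a quick sketch of that check in POSIX shell (the path and the 14 GB figure are just examples based on my 13.22 GB restore; on the Windows side you would simply eyeball the drive properties):

```shell
#!/bin/sh
# Sanity-check that a destination path has enough free space before the copy.
# has_space PATH SIZE_GB succeeds if PATH has at least SIZE_GB free.
has_space() {
    avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$avail_kb" -ge $(( $2 * 1024 * 1024 )) ]
}

# My restore needed 13.22 GB, so round up to 14:
has_space /tmp 14 && echo "enough room" || echo "pick another destination"
```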

5. Select to apply the security settings of the destination computer and then click Next:

6. Verify the summary information, then select “Recover”.

7. The recovery process is taking place. If you wish you can verify the progress via the monitoring tab:

8. Since we are going to be recovering to a network share (a folder on a computer that is protected by this DPM server), we will need to go to that location and create the folder where the restore will happen. Select that location and finish the restore.

Remember, this is just restoring the files to a location for use by the recovery process. This will not do any of the work to restore the files onto the actual computer. Also note, as pointed out before, you need to access this location when booted into the WinRE environment from the DVD. This means WinRE must have generic network access (get an IP address) and you must be able to specify valid credentials to access that share and files below it.

9. This is an IMPORTANT step! Navigate to the location the files were restored to and drill down through the restored folders until you find the folder where the backup is stored. The important note here is that you must share the folder above WindowsImageBackup, so that the WindowsImageBackup folder is at the root of the shared folder. If it is not at the root, the restore will not find the backup. This is what it will look like and where the share will be created:

This is the share that will need to be connected to by the client using WinRE. If you feel like I may be beating this dead horse, well I am. It is vital to create the share that you will be able to get to in WinRE (IP connectivity and authentication) and that the WindowsImageBackup folder is on the root of the share. (If I could attach flashy red lights to a blog, they would be around this step).
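If you want to double-check the layout before moving on, the rule can be sketched like this in POSIX shell (the actual share is created through Windows, so this only illustrates the "WindowsImageBackup at the root" requirement; the folder path is hypothetical):

```shell
#!/bin/sh
# The folder you share must have WindowsImageBackup directly at its root.
check_share_root() {
    if [ -d "$1/WindowsImageBackup" ]; then
        echo "OK: WindowsImageBackup is at the root of $1"
    else
        echo "WRONG: share the parent folder of WindowsImageBackup instead"
    fi
}

# Hypothetical restore location from Step 1:
check_share_root /e/DPMRecovery/BMRShare
```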

Step 2 – Restore the files from the WindowsImageBackup folder to a computer (these steps are based on the premise that you can get an IP address and authenticate via WinRE in your environment)

1. Start the computer you are going to restore the image to using the Windows DVD that matches the system you are restoring. If you are doing BMR for a Windows Server 2008 system, boot to the Windows Server 2008 DVD. If you are doing BMR for a Windows Server 2008 R2 system, boot to the Windows Server 2008 R2 DVD. On the initial screen, verify language/locale settings and click Next:

2. When the Install screen comes up, select Repair your computer in the lower left of the screen (most people move past this screen quickly so it’s easy to miss).

NOTE: If you need to verify the number or size of disks needed for the restore, look in the WindowsImageBackup folder for the MBRsystemRequirements.txt file. You should see information similar to the following:

Minimum number of disks: 1
Minimum Disk Sizes: Disk 0: 64423526400 bytes (boot)
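That byte count is easier to reason about in GB. A quick conversion, using the figure from my file:

```shell
#!/bin/sh
# Convert the minimum disk size from MBRsystemRequirements.txt into GB
bytes=64423526400
awk -v b="$bytes" 'BEGIN { printf "%.1f GB\n", b / (1024 * 1024 * 1024) }'
```

which works out to roughly 60 GB, so any replacement disk needs to be at least that large.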

3. You now have the System Recovery Options screen. This may list an existing OS if it finds one on the drive. We are going to be looking to do a restore, so we are not worried if there is one listed.

4. For Windows Server 2008, just click Next.

For Windows Server 2008 R2, be sure that you select the radio button Restore your computer using a system image that you created earlier before selecting Next.

5. This step is ONLY ON Windows Server 2008: When prompted to choose a recovery tool, select Windows Complete PC Restore:

6. You will now likely be prompted that Windows cannot find a system image; select Cancel.

7. On the next window, it will prompt you to point it to an image.

For Windows Server 2008, select Restore a different backup and then Next.

For Windows Server 2008 R2, select Select a system image and then Next.

8. This will bring up a screen allowing you to find the image you are going to restore from. Since we have put this on a network share, we need to start networking. To do this, click on Advanced:

9. This brings up a screen that will allow us to search the network.

On Windows Server 2008, choose Search for a backup on the network:

On Windows Server 2008 R2, choose Search for a system image on the network.

10. When prompted if you are sure, click Yes.

11. Now enter the path to the share you created at the end of Step 1 above. You can also connect using the IP address and share name. Click OK:
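Either form of the path works. As a quick illustration (the server name, IP address, and share name here are hypothetical examples, not values from my setup):

```shell
#!/bin/sh
# Both UNC forms below are acceptable when pointing WinRE at the share;
# the server name, IP, and share name are hypothetical examples.
for path in '\\DPMSRV01\BMRShare' '\\192.168.1.50\BMRShare'; do
    # crude sanity check: a UNC path looks like \\host\share
    printf '%s\n' "$path" | grep -Eq '^\\\\[^\\]+\\[^\\]+$' \
        && printf 'valid UNC path: %s\n' "$path"
done
```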


12. This will prompt you for credentials needed to access the share. Enter the credentials required for the share and click OK.

13. Now it will show a list of recovery images available from the share you indicated. There should only be one as you shared out only the recovery point specific for this computer. Select that recovery point from the list and click Next:

14. This will now scan the backup you selected for specific backups available in that recovery point. This again should yield just one date/time stamped recovery and list the hard drive associated with it. Select this recovery point and click Next:

15. Check the box to have it Format and repartition disks in order to have it wipe out the local disks before laying down the restore, then click Next:

16. Finally, verify the settings and click Finish to have it begin the restore process:

17. You will then get a confirmation:

18. After clicking OK you will see the progress as it takes place:

Once done, the machine will reboot. Bing-Bang-Boom, your system is back up and running just like it never happened. This is such a beautiful thing, and as long as you are protecting Windows Server 2008 or Windows Server 2008 R2, you can say goodbye to SRT! Sleep well tonight knowing your servers are just a quick 45-minute restore away.

Symantec Backup Exec Products Disaster Recovery shortcut list

Symantec Technical Services has compiled this list of shortcuts to various Disaster Recovery articles. These articles will assist with most recovery issues that may arise as a result of a power outage, hardware failure, natural disaster, etc…
Windows Server Restore

Disaster Recovery:

Disaster recovery of a local Windows 2000 or 2003 computer (includes non-authoritative restore of Active Directory for a domain controller)

Disaster recovery of a remote Windows 2000 or 2003 computer (includes non-authoritative restore of Active Directory for a domain controller)

Disaster Recovery of a remote Windows 2008 computer (includes both non-authoritative and authoritative restore of Active Directory for a domain controller)

How to perform a local recovery of a Microsoft Windows 2000/2003 Small Business Server

Manual disaster recovery of Windows computers

Manual disaster recovery of a local Windows computer (includes non-authoritative and authoritative restore of Active Directory for a domain controller)

Disaster recovery of a remote Windows computer (includes non-authoritative and authoritative restore of Active Directory for a domain controller)

Intelligent Disaster Recovery (IDR):

The Intelligent Disaster Recovery (IDR) process step by step

Is it possible to recover a full backup to a computer with different hardware using the Intelligent Disaster Recovery (IDR) option?

Backup Exec ™ Intelligent Disaster Recovery (IDR) cannot recover dynamic disks managed by VERITAS Volume Manager ™ or Microsoft Logical Disk Manager as part of a typical IDR restore process.

Microsoft Exchange:

Disaster recovery for Exchange

Disaster Recovery Procedure for Exchange 2000 or 2003

How to restore Exchange 2000 or 2003 to a recovery server in a different forest

How to restore to a Recovery Storage Group in Exchange 2007 using Exchange Management Shell

How to restore to an Exchange 2007 Recovery Storage Group using the Exchange Management Console

Microsoft SQL Restore:

How to perform local and redirected restore of SQL 2000 & 2005 Databases with Backup Exec for Windows Servers (BEWS)

Restoring the SQL master database

How does the automated master restore feature work with Symantec Backup Exec 9.x 10.x 11.x and 12.0

Recovering SQL manually

Microsoft SharePoint Restore:

Disaster recovery procedure for Microsoft Office SharePoint Server 2007 using Symantec Backup Exec for Windows Servers, version 11d or later

Disaster Recovery of a SharePoint Portal Server 2001

Manual disaster recovery of SharePoint server with just the SQL Agent database backup


    How to install cPanel/WHM on CentOS

    Found this great howto for setting up cPanel/WHM on CentOS.

    Hello Folks,

    Today we will be starting with the basics, how to install cPanel/WHM on CentOS. I will be using CentOS 5.5 64-bit as my base OS; this will be the same for RedHat and for 32-bit also.

    First we need to start with a fresh install of CentOS. You don’t need to install anything like Apache, MySQL, etc., as the cPanel installer will take care of it all for us.

    When reading this tutorial, I am going to assume you know the basics of a shell (i.e. using SSH; if you’re running Windows, I recommend PuTTY).

    Ok, the first step is to make sure you are logged into your shell; the screen should look something like the one below.

    Once we are both there, the next command will be to make the ‘cpinstall’ folder in your /home directory; you can do that by running the code below.

    # mkdir /home/cpinstall

    This will make the necessary folder on the file system for us to start with. Next we will want to move, or ‘cd’, into that folder. You can do that by running the below command.

    # cd /home/cpinstall

    Now that we are there, we will need to download the appropriate file to start the cPanel install. This file is always updated, so by running the below command you’ll always get the newest version of cPanel installed on your server/VPS.

    # wget

    If everything goes well with that, your shell window will look something like this.

    Now that we are up to this point, we can start the install process. Depending on how fast the system is, and how fast your network connection is, this can take anywhere from 30 minutes to 4 hours. Usually for me, I can say it takes about 2 hours. You can invoke the installer by running the command below.

    # sh latest

    After running that code, you should see a cPanel logo come on the screen and the install will start. There will be no need for any user intervention unless there is a problem. Once the screen looks like below you are safe to walk away.
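Put together, the whole sequence is just a few lines. Here it is as one sketch; the wget URL is intentionally left as a placeholder (as it is above), so grab the current installer URL from cPanel before running this for real:

```shell
#!/bin/sh
# The install steps above, collected into one script.
mkdir -p /home/cpinstall            # working folder for the installer
cd /home/cpinstall || exit 1
# wget <latest-installer-URL>       # always fetches the newest version
# sh latest                         # unattended install: 30 min to 4 hrs
pwd                                 # confirm where we ended up
```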

    Once the install is complete your SSH screen will look something like this:

    In my case, it was telling me I need to restart my server. Once restarted, you can access the WHM by going to http://IPHERE:2086.

    Once this is done, you’ll get a screen like this:

    Your default login username is ‘root’, and your password is your root password.

    Building a production-ready Hyper-V Cluster on the cheap

    Great article for a cheap Hyper-V build.

    My most recent project was to improve the resiliency and uptime for a small-ish server farm consisting of ~18 virtual servers spread across two standalone Hyper-V hosts. The objective was to convert the two hosts (Dell PowerEdge T710 servers) into a 2-node Hyper-V cluster as inexpensively as possible, while still maintaining acceptable performance and leaving some headroom for moderate growth.

    The workloads being virtualized consist of four Exchange 2010 servers (a 2-node DAG and 2 CAS/HT servers running NLB), a couple of file servers, and a hodgepodge of domain controllers, SQL, and web servers, with a few Citrix XenApp servers thrown in (most of the Citrix servers are physical boxes). The network consists of three distinct internal networks brokered by Juniper firewalls, named DMZ, Trust, and Vault.

    When it comes to clustering on 2008/2008R2, the storage solutions that get the most play are Fiber Channel and iSCSI. Both of these solutions offer the ability to connect multiple nodes and, when configured properly, can perform very well. Both also require quite a bit of hardware and tweaking to do so, however – including a redundant set of FC or gigE switches (to avoid a single point of failure) and in the case of iSCSI, further network tweaks for performance. However, there is a third shared storage option that doesn’t get talked about too much: Shared SAS.

    Shared SAS

    Serial attached SCSI is the successor to the older parallel SCSI interface we all knew and loved (and whose command set is still used in SAS, FC, and even SATA). SAS performs well (even the older 3gbit spec) and is relatively inexpensive, and while parallel SCSI is no longer supported for shared storage clusters under Server 2008 and beyond, SAS is fully supported!

    SAN budgets vary wildly depending on the target application, storage types, fancy features such as snapshots, thin provisioning, etc. but even the cheapest redundant solutions quickly get you into the thousands of dollars range. You can certainly set up software-based solutions such as Starwind or Sanmelody on commodity hardware you have laying around (and I have done this with great success for lab work) however to build those into redundant SANs requires expensive licenses that can rival the cost of turnkey solutions.

    Ebay to the rescue

    Ultimately we decided on purchasing a used Dell MD3000 shared-SAS SAN that still had a year of 4-hour response warranty left on it. The 3U MD3000 is still sold by Dell despite being upstaged by its 6gbps brother, the MD32xx. But while the newer model can only hold twelve 3.5″ drives, the MD3000 can handle 15. The real sweetener on this find was that the SAN came with eight 450GB 15k RPM SAS drives. To buy this MD3000 redundant configuration new with the eight disks and warranty would have cost us $20,068. All told (including the four SAS5/E adapters and cables) we paid under $6,000. When next July rolls around we will have to re-up on the warranty coverage, but that is still a hefty savings (now I know some of you are used to working with SAN budgets that exceed our entire annual operating budget, but this was big savings for us!).
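For the curious, the savings work out like this (using $6,000 as the ceiling, since we actually paid a bit under it):

```shell
#!/bin/sh
# Rough savings on the used MD3000 versus Dell's new-with-disks price
list_price=20068
paid=6000                               # we actually paid a bit under this
saved=$((list_price - paid))
pct=$((saved * 100 / list_price))
echo "saved at least \$${saved} (about ${pct}%)"
```

so the used unit came in at roughly 70% off list, even before haggling over the warranty renewal.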


    After storage is nailed down, the next big pain point of clustering (especially with Hyper-V) tends to be figuring out the best networking configuration. Spending a lot of money and effort on a cluster to improve uptime does little good if something as simple as a failed ethernet cable or switch port can bring down a VM. Strangely enough, this is exactly the case – a nic failure or switch failure that causes connectivity failure in a VM is not something a Hyper-V cluster can detect and work around. So your options either become scripting elaborate routines to check network connectivity and migrate the affected virtual machines, or to turn to NIC teaming.

    The Sorrowful Saga of NIC Teaming and Hyper-V

    To the surprise of many, Windows has never had a built-in way to aggregate NICs. Microsoft has always left teaming solutions (and support of them) to the hardware vendors. Both Broadcom and Intel have stepped in, with varying levels of success. Typically teaming is very easy to set up, but virtualization can introduce some quirks that need to be understood and worked around. At this time, both Broadcom and Intel offer support for teaming solutions under Hyper-V. As always, however, your mileage may vary. I have read way too many horror stories about both vendor solutions, but as of this writing Intel seems to have the edge and has worked out the kinks (revision 15 ProSet drivers and later). Intel also offers a special teaming mode called Virtual Machine Load Balancing which is very handy, providing both load balancing and failover options at the VM level.

    Our T710s came with the integrated quad port Broadcom nic, and a PCIe quad port Intel nic. The cluster upgrade plan called for more available ports, so each server was outfitted with an additional quad port Intel nic ($200 each on FleaBay), as well as another Intel single-port PCIe nic, for a lucky total of 13 ports per node.

    What could you possibly need all those NIC ports for?

    The 2008 R2 revision of Hyper-V clustering introduces a number of important network-based features and services that increase the need for dedicated gigE interfaces. The one most people have heard about is Live Migration – Microsoft’s answer to VMWare’s VMotion. Live Migration allows you to seamlessly transition a running virtual machine from one cluster node to another. This is obviously incredibly handy for managing server load, as well as shifting workloads off of physical hosts so that they can be taken offline for hardware or software maintenance. Production-ready Live Migration should have a dedicated gigE nic (10gigE works great too if you have deep pockets).

    Next is Cluster Shared Volumes – a handy feature in 2008 R2 that allows you to store multiple virtual machines on a single LUN (trying to provision per-VM LUNs is a headache I don’t even want to think about). In our case, the use of CSVs allowed us to take our two physical disk arrays on the SAN (8-drive RAID10 and 5-drive RAID5) and format each as a single LUN. I won’t go into too much detail here on CSV as there are a million blogs out there with great information. CSV should also have a dedicated NIC available to it, as it can be used to relay storage commands in case a node loses its connection to the shared storage (there must be a performance penalty for this, but I suppose it is good enough to limp along until you can live migrate the workloads to another fully functional node – which is why LM and CSV should have separate nics to work with).

    The nodes themselves need a NIC for basic network connectivity and administration, so now we’re up to 3 NIC ports per node and we haven’t even talked about the virtual machines yet! As previously mentioned, the cluster is pointless if a failed NIC or switch port can bring down a VM, so we decided to implement Intel nic teaming using the VMLB mode. We carved up our two Intel quad port nics into three teams (one for each of our aforementioned networks): TeamV for Vault (4 ports), TeamT for Trust (2 ports), and TeamD for DMZ (2 ports). This setup provides at least two ports per network to guard against both nic and switch failure, and avoids the need for configuring VLANs (which is a whole other mess for another blog) – we do have VLANs configured on our cisco switches, but I like to avoid complicating the Hyper-V configuration with them, as it is far easier to train administration staff without having to have them learn the ins and outs of networking at that level. If you prefer, you could certainly cable all ports into a giant team and use VLANs to separate out the traffic. At any rate, we’re now up to 11 NIC ports used (LM, CSV, host administration, and 8 teamed VM ports).

    Not all VMs need be highly available

    While most of our VMs are fairly critical and will be made highly available across the cluster, certain machines are not – either because they just aren’t as important, or because the services they provide offer high availability in other ways. For example, each of our nodes has a domain controller Hyper-V guest configured to run from local storage (not part of the cluster). As long as domain clients are configured with both DCs in their DNS settings, they can deal with one or the other going down for a while. Similarly, each of our nodes has an Exchange 2010 mailbox server (running in a DAG) and an Exchange 2010 CAS/HT server. This way, the failure of an entire physical node does not derail DC or Exchange services. It also allows us to play around with fancy storage options for the Exchange servers, but I will save that for another post…

    I mention Exchange because we use Network Load Balancing to provide high availability across our two CAS servers – and NLB creates port flooding on switches. By dedicating one of our NIC ports to each node’s CAS server, we are able to limit the port flooding on the cisco switches to only the ports those dedicated NICs plug into. There are likely more sophisticated solutions to this problem, but this is what we came up with (had to use those spare broadcom ports for SOMETHING!).

    Our count is now up to 12 NICs, but I happened to have a couple of spare Intel PCIe NICs laying around, so I figured it wouldn’t hurt to get an extra in there. This would allow me to set up a broadcom team to provide switch redundancy on the nodes’ admin IPs, which could always be reconfigured and set up to take over in the event of one of the primary cluster links going down – so 13 it is! Below is a picture of the color coded NIC cabling (the two short orange cables are crossovers for the LM and CSV links) – I have not attacked it with zip ties yet! You can also see each node’s two SAS adapters/cables in the photo.
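To keep the running count straight, here is the final per-node port math in one place:

```shell
#!/bin/sh
# Final per-node NIC port tally from the build described above
lm=1        # Live Migration (crossover link)
csv=1       # Cluster Shared Volumes (crossover link)
admin=1     # host administration
teamed=8    # TeamV (4) + TeamT (2) + TeamD (2) VM ports
nlb=1       # dedicated port for the CAS server's NLB traffic
spare=1     # extra Intel port enabling the standby broadcom admin team
echo "$((lm + csv + admin + teamed + nlb + spare)) ports per node"
```

which confirms the 13 ports per node mentioned earlier.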


    Now that the hardware part of things was all set, it was time to adjust the software side. The procedure basically broke down like this:

    1. Join each node to the domain (they were previously not domain members to avoid dependence on the virtual domain controllers), configuring DNS to point first to the other node’s virtual DC and second to an offsite domain controller with a reliable WAN link.
    2. Install latest NIC drivers and configure the teams. Configure private networks as desired (for example I have my crossover networks set to 10.10.10.x and 10.10.11.x).
    3. Remove all the VHDs from VMs to be made highly available, then export their configurations and copy the VHDs to cluster storage (this step can be done many ways but I prefer manually copying the VHDs instead of having them tied to the Hyper-V export process; saves time if you end up having to re-do the import for some reason).
    4. Delete existing virtual networks and create new ones using the nic team interfaces, ensuring that all virtual networks to be used for highly available VMs are named identically across the nodes.
    5. Install the Failover Clustering feature and use its administration tool to run the Cluster Validation wizard – hopefully this will pass!
    6. Go ahead and create the cluster, and the wizard should be smart enough to set up the proper cluster networks (three of them in my case, the two privately addressed interfaces and the admin interface) – the virtual networks have no protocols bound to them other than the Hyper-V protocol so they are ignored by the cluster wizard.
    7. Add any storage disks that were not automatically added by the wizard (it added all three of mine).
    8. Using the Failover Cluster management tool, enable CSVs and turn the appropriate storage disks into CSVs.
    9. Using the Hyper-V management tool, import the VM configurations you exported onto the CSVs as desired, and copy/move the VHDs into place and re-add them into the configurations, as well as specifying the appropriate virtual network(s).
    10. In Failover Clustering management, configure a service and select Virtual Machine, and select all the VMs you want to make highly available – they should pass validation!
    11. Fire up the VMs, and then attempt to Live Migrate one or two over to the other node. This worked like a charm for me… The only quirk I ran into was a Live Migration failure after editing a VM config from the Hyper-V management tool – it would appear that any settings changes need to be made from within Failover Clustering and not the Hyper-V tool to avoid this problem!

    The entire procedure took roughly 13 hours; however, a decent chunk of that time was spent waiting for large VHDs to copy, and/or dealing with unexpected surprises such as the fact that the MD3000 had the wrong rails with it (despite having the proper part # on the box) – nothing a pair of pliers wasn’t able to fix… The only single point of failure now is the ethernet drop the colo provides – and in the near future we may pony up for a second drop to remove that last potential failure point: