Want to write a good script, or use one that's already done? Here is the link:
http://gallery.technet.microsoft.com/scriptcenter/
Sunday, July 10, 2011
Thursday, April 14, 2011
Five Tips for Better Backup
Tip #1: Minimize the amount of data you protect
You can reduce the amount of data you back up while ensuring 100 percent recovery by using technologies that filter out unchanged and deleted data.
While tools that use VMware Changed Block Tracking (CBT) avoid backing up some unnecessary data, CBT does not prevent the backup and restore of deleted data. When Windows deletes a file, it does not erase the file's blocks; it simply marks that space as free, and the deleted data stays on disk until it is overwritten by new data. VMs that host applications with frequently changing data can accumulate gigabytes of deleted data. Unfortunately, those blocks are still flagged as changed, so backup tools relying only on CBT will back up that deleted data. That stretches backup windows, lengthens restore times, and overloads your network.
Our tip is to select a tool that does not back up deleted data. That way, you can back up often and with greater granularity. You’ll also save substantially on storage space, backup time, bandwidth and recovery time, enabling you to have better recovery point objectives (RPOs) and shorter recovery time objectives (RTOs).
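A toy sketch of the problem above (this is not VMware's actual CBT API; the block layout, file table and function names are made up for illustration). A backup that trusts only the changed-block bitmap sweeps up the deleted file's blocks, while a filesystem-aware backup skips them:

```python
# Toy model: a disk-wide set of changed block numbers, plus a file
# table mapping live files to the blocks they own.
disk_changed = set()   # block numbers written since the last backup
file_table = {}        # filename -> set of block numbers

def write_file(name, blocks):
    file_table[name] = set(blocks)
    disk_changed.update(blocks)

def delete_file(name):
    # Deleting only drops the file-table entry; the blocks (and their
    # changed flags) stay behind until something overwrites them.
    file_table.pop(name)

def cbt_only_backup():
    """Back up every block flagged changed -- deleted data included."""
    return set(disk_changed)

def filesystem_aware_backup():
    """Back up only changed blocks still owned by a live file."""
    live = set().union(*file_table.values()) if file_table else set()
    return disk_changed & live

write_file("report.docx", [1, 2, 3])
write_file("temp.log", [10, 11, 12, 13])
delete_file("temp.log")

print(sorted(cbt_only_backup()))          # [1, 2, 3, 10, 11, 12, 13]
print(sorted(filesystem_aware_backup()))  # [1, 2, 3]
```

The gap between the two sets is exactly the deleted data that a CBT-only tool keeps hauling over your network.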
Tip #2: Maximize backup speed and throughput
Many backup administrators manage data protection for their virtual systems as if they were protecting physical systems; this can seriously reduce the efficiency of virtual asset data protection. For example, administrators often put multiple VMs on a server that would have previously hosted only one physical application. This creates increased contention for network resources—particularly when backups and restores are being performed.
Virtualized systems are different and need different techniques for optimal protection. We recommend you use a tool that allows simultaneous backup and restore to avoid bottlenecks. In addition, use a tool that provides flexible backup methods (proxy, direct-to-target, LAN-free) to fit your environment and minimize workload impact.
To further increase network and system efficiency, choose a tool that eliminates the need for a backup server by sending backup images directly to target storage. This approach reduces network load by eliminating intermediate steps.
Tip #3: Keep your recovery options flexible
While agent-based systems have their benefits, they aren't always the most efficient or cost-effective choice for small organizations. When you back up virtual systems with an agent-based system, you typically have to pre-stage a VM before you can restore an entire VM: spawn a new VM from a clone or template, size the memory and disks correctly, name it correctly, and create the appropriate number of virtual disks. Once this is up and running, you must then install an agent, connect to the target, and restore the VM. One alternative to an agent-based system is a bare-metal restore routine; however, these are challenging to implement at best, and you may have to maintain duplicate hardware as well.
Fortunately, virtualization brings many simpler and more powerful recovery options. Use a tool that allows you to simply click on a VM to restore it, with no need for pre-staging. Find one that allows you to easily restore individual files and application objects. Set up your disaster recovery scheme so you can fail over to a VM on a remote server (either on campus or offsite) with a single click, and ensure the replication is automatically reversed so that once the source site comes back up, you can simply synchronize the changes and fail back to the source.
What about physical boxes? Almost every virtual environment has some servers that just can't be virtualized yet. Consider companion tools that work with your virtual data protection tool to offer continuous protection for physical servers. Using continuous protection, you can image physical systems into VMs, which can then be restored to a VM or a physical server. This approach gives you the flexibility to get your systems restored and your business back online fast.
Five Tips for Effective Backup and Recovery in Virtual Environments
What about long-term tape-based retention? Most organizations already have investments in agent-based software and tape systems. All you need is a single agent with visibility into an archive repository to sweep the archives off to tape. Consider a tool that offers sweep-to-tape integration that can be used with a traditional backup tool. Then, if you ever need to recover an old archive, you can simply restore it to the repository, import the manifest, and start restoring files or VMs as you please.
Tip #4: Minimize performance drains
As mentioned earlier, many backup administrators manage data protection for virtual machines as if they were managing separate individual physical systems. Another example of this is deploying backup agents on each VM and running backup jobs in defined backup windows in order to avoid hurting the performance of business operations on the system. Often backups are run during off-peak hours, usually at night.
Unfortunately, this approach has a significant impact on the virtual machine host and VMs. The host system must take on the extra processing load and absorb latency increases due to I/O contention during the entire backup window, slowing all VMs on the host until all scheduled backups are complete. Adding to this impact is increased network traffic and latency due to the increased volume of data traveling to the backup server.
Our tip is to use dynamic resource management to free unneeded resources; when resources are taken only when needed, limited or scarce resources can be shared among processes. You can also reduce performance impact by simplifying your backup infrastructure with a flexible tool that can adapt to your network layout (LAN, WAN, or storage network), shifting the load of data protection operations away from the networks critical to business performance. For even greater benefits, choose a tool that provides flexible backup methods (proxy, direct-to-target, LAN-free).
Reducing the impact of backups on your network, servers and applications will enable you to save on hardware and infrastructure costs. It will also help your current infrastructure perform better so you have room for growth without spending more money. In other words, with the right tools, you can do even more with less.
Tip #5: Protect to fit your needs and SLAs
You have different SLAs and infrastructure for different applications and data. Your data protection solution needs to adapt to fit your needs—not the other way around. Your data protection tool shouldn’t force you to conduct your data protection operations in a way that interferes with your production systems and networks. You should back up only as often as you need to meet your SLAs, in order to minimize effort and load on your production systems and networks.
Therefore, choose a flexible tool that offers a choice of networks and a method to be used for data protection: LAN, WAN, server-less. We recommend an image-based data protection tool because images are very portable, allowing you to recover when, where and how you need to for the greatest efficiency. We also advise choosing a tool with flexible licensing to provide the best fit for your environment while costing as little as possible.
Most of all, choose an architecture that fits the SLAs for your organization. The correct architecture for your business depends on the hardware and setup you have today; there is no one-size-fits-all. Understanding the options here is arguably the most important part of the equation when designing a virtualized disaster recovery system. Regardless of which image-based tool you are using, you need to configure it correctly, which includes, among other things, choosing the correct source method and understanding data flow and proper positioning of targets.
Finally, choose a tool that offers a variety of architectural options for deployment: network-based, direct-to-target, iSCSI, Fibre Channel, and both ESX and ESXi backups. This will ensure you can set up your backup regime in a way that makes sense for your environment.
Wednesday, February 17, 2010
Set Up Backup for Server 2003

When was the last time you backed up the important files on your computer? Last year when your best friend called in tears after the Blue Screen of Death ate her thesis?
Yeah, I thought so.
Hard drives fail. It's a fact of computing life. It's not a matter of whether or not your computer's disk will fry, it's a matter of when. The question is how much it will disrupt your life.
Don't expect yourself to remember to back up your data, or to stack your closet full of burned CDs and DVDs. Today we're going to set up automated nightly, weekly and monthly local and off-site backups for your PC using free software. Once you get this up and running, you'll never have to worry about losing data again.
What you'll need:
1. A Windows PC. (Sorry, Mac folks; you're another article.)
2. An external hard drive.
I've had great luck with a LaCie FireWire drive, which, of course, requires that your computer have a FireWire port. When choosing a size, go for 4-5 times the amount of data you want to back up (i.e., 4 times the size of your My Documents folder).
3. An FTP server.
This is optional, but if you want off-site backup, it's a must. See the previous post, Ask Lifehacker Readers: Web hosting provider?, for recommendations on companies that provide not only web hosting but FTP-able disk space.
Here's how to get your backups up and running.
1. Set up your hardware and software. Download and install the most excellent free software SyncBack Freeware v3.2.9. (SyncBackSE version 4.0 is also available to buy at $25; this tutorial will use v3 for the cheapies and those of you giving SyncBack a try for the first time.) Once your external drive is connected to your computer and turned on, name it "Backup" and browse to it in Explorer. (On my computer, it's the F: drive.) Create 3 folders named "Nightly," "Weekly" and "Monthly." We're going to store our backups in these folders.
2. Create the backup profile. Fire up SyncBack and create a new profile called "Nightly Local Backup." Set the source folder to your documents folder, and the destination to your backup drive's "Nightly" folder.
3. Select the directories to back up. You can back up the entire "My Documents" folder, but I didn't want to do that, because I've got about 75 gigabytes of music, photos and video in subdirectories of "My Documents" that don't change much and aren't world-ending. I don't have the space on my drive to keep copies of multi-gigabyte media in triplicate. So I chose the "backup selected subdirectories" option, which lets me tell SyncBack to ignore "My Music," "My Pictures," and "My Video" each night when it runs. To do so, click on the "Subdirectories" tab. If you've got tons of subdirectories, it'll take SyncBack some time to traverse the tree and show 'em to you. Go grab a drink of water, then come back and check off the directories you want backed up each night.
4. Set up e-mail notification of backup failure. Since we're a bunch of smart cookies, enable the advanced options in SyncBack by hitting the "Expert" button at the bottom. To keep tabs on whether or not your nightly backup is completing successfully, on the E-mail tab check off "E-mail the log file when the profile is done." I don't want an e-mail every day; I just want one if things go awry, so also check off "Only e-mail the log if an error occurs." Set your SMTP server options as well, and hit the "Test E-mail Settings" button to make sure you can receive messages.
5. Schedule the job. Now go to the "Misc" tab and hit the Schedule button. Here you'll tell Windows to run this Nightly backup profile, well, nightly. I set mine to run at 2:00 AM every night. Be sure to set your Windows password for this scheduled task by hitting the "Set Password" button.
Wash, rinse and repeat twice for the Weekly Local Backup and Monthly Local Backup profiles, but point them at the appropriate directories and set the schedules to, um, weekly and monthly, respectively. Once you're all set up, you can run each job as a test (it'll take a long time, depending on how much data you've got), or just leave things to run on their own. Once all 3 profiles have run, you'll have 3 copies of your most important data on your external drive, updated every night, week and month. If something goes wrong and a backup fails, you'll get an e-mail notification letting you know.
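If you'd rather script the nightly mirror yourself, here's a minimal Python sketch of the same idea: copy the documents folder into the Nightly folder while skipping the big media subdirectories from step 3 (the folder names and example paths are just placeholders; adjust to taste):

```python
import shutil

# Subdirectories to skip each night, as in step 3 above.
EXCLUDE = {"My Music", "My Pictures", "My Video"}

def nightly_backup(source, dest):
    """Mirror source into dest, skipping the excluded subdirectories."""
    def ignore(directory, names):
        # shutil calls this for every directory it visits; returning a
        # name tells copytree to skip that entry entirely.
        return [n for n in names if n in EXCLUDE]
    shutil.copytree(source, dest, ignore=ignore, dirs_exist_ok=True)

# Example (paths are placeholders):
# nightly_backup(r"C:\Documents and Settings\you\My Documents", r"F:\Nightly")
```

Scheduling it nightly is then just a matter of pointing the Windows Task Scheduler (or cron) at the script, the same way SyncBack does for you.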
This means if your hard drive fries? The most data you'll lose is a day's worth. If you overwrote an important file? Recover last week's or last month's copy.
UPDATE: Reader Patrick points out that if you make bad changes the last day of the week AND month, those changes will replicate to your backups and you can lose data. One way to avoid that is to schedule bi-monthly (every other month) backup as well. Thanks, Patrick!
Now, our backup plan doesn't stop there. If your computer's hard drive buys the farm, you're covered, but what if your house burns down or gets burglarized? You want your most important data somewhere OFF site as well. This is where your FTP server comes in. Create a last SyncBack profile called "Nightly Remote Backup" that sends all your important data over the wire from, say, your hard drive in New York to your FTP server in Atlanta. If you don't like the idea of your data on someone else's server, check out the compression tab: you can have your files zipped up and passworded before they get FTP'ed for a little extra security.
Update: An astute reader points out that compression is not enabled for FTP backup. So, create a profile that compresses and passwords your files and set it to run BEFORE your FTP profile which transfers the zips. Thanks, Ralph!
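The compress-first step can also be scripted. Here's a Python sketch that packs the important folder into a single zip before the transfer; note that the standard-library zipfile module can compress but cannot password-protect an archive on write, so the password step still needs a separate tool, and the FTP server name and credentials below are placeholders:

```python
import zipfile
from pathlib import Path

def make_nightly_zip(source_dir, zip_path):
    """Pack every file under source_dir into one zip, keeping relative paths."""
    source = Path(source_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for item in source.rglob("*"):
            if item.is_file():
                zf.write(item, item.relative_to(source))
    return zip_path

# Then ship the zip off-site with ftplib (placeholder host and login):
# from ftplib import FTP
# with FTP("ftp.example.com") as ftp:
#     ftp.login("user", "secret")
#     with open("nightly.zip", "rb") as fh:
#         ftp.storbinary("STOR nightly.zip", fh)
```

Running the zip job first and the FTP job second reproduces Ralph's fix: the transfer only ever sees finished archives.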
That's it! Once your automated backup system is up and running you can rest easy knowing that if Something Bad happens, chances are your data will be safe.
Backup geeks and the curious should be sure to paw through all of SyncBack's tabs and options, there's tons of them. For example, in the "Autoclose" tab, set SyncBack to shut down any programs with a word you specify in the title bar before it runs a backup job. The "Programs" tab lets you set commands to run before and after backup happens - handy for database or source repository dumps, exporting your Instiki wiki to HTML and anything else you want to move or mash before you back up.
There are a million and one programs and ways to backup your hard drive, and this is just one of them. How do you ensure your data's security and redundancy? Do tell in the comments or at tips at lifehacker.com.
Wednesday, June 10, 2009
Restore a Backup to a New System (Microsoft Windows Server 2003 R2)

1. Install a controller card in the server and load the drivers.
2. Do a full backup of the server with the system state. (Only the boot drive backup is required; data can be backed up too, but it will take longer.)
3. Install the same controller card in a workstation.
4. Restore the backup with the system state to the server drive attached to the controller card (see step 1).
5. Copy the files from the boot directory to the system drive (e.g., d:\boot to d:\).
6. Copy the files from the registry directory to the system32\config directory (e.g., d:\registry to d:\winnt\system32\config).
7. Install the controller card in the new server and attach the restored drive.
8. Boot the server, press F8, and choose Directory Services Restore Mode.
9. Restore the system state only from the backup.
10. Repeat step 8.
11. Install all drivers as required.
12. Create a new environment variable: devmgr_show_nonpresent_devices=1.
13. In Device Manager, choose View > Show hidden devices.
14. Uninstall any devices that are no longer present.
15. Reboot normally. The system now operates as it did on the old hardware.
16. Make any necessary changes to the IP address.
17. Restore data.
Additional note: a probable situation.
You are moving to a new hardware platform with RAID, and the RAID is not portable to a workstation. Follow the instructions above through step 16, then continue:
18. Make sure the RAID drivers are installed and working.
19. Back up the boot drive with the system state to a secondary drive (no partition).
20. Install the HDDs and create the RAID, then restore the backup made in step 19 with the system state to the RAID, using steps 5 and 6.
21. Shut down the system.
22. Remove the boot drive from the controller and adjust the BIOS to boot from the RAID.
23. Follow steps 8 and 9.
24. Shut down the server and remove the secondary drive; leave the controller card.
25. Follow steps 10 through 16.
26. Shut down the system and remove the controller card. (Or bill the client for the card and leave it in the server for next time.)
27. Start the system; everything should be working properly.
28. Restore data.
Labels: crush, microsoft, ntbackup, Restore backup, server 2003, window server 2003