
Testing Replicated Data on DataDomain appliances using Fastcopy

Many companies replicate data between sites.  The specific methodology varies (different vendors, different solutions, and so on), but the end goal is basically the same: data redundancy to ensure as smooth a transition as possible should a DR event occur.

To look at one specific replication scenario, a number of my clients use Veeam to back up their virtual machines to backup repositories located on Data Domain appliances, and that data is then replicated to a Data Domain appliance located at another site.  Now I know, assuming I have green checks in DD Enterprise Manager indicating my data is in sync, that the data is replicated and, if need be, usable at the secondary site.  But it’s still a good idea to test the data and verify its integrity.

Looking at Data Domain articles, it seems the first step in testing replicated data is to break the replication context with the source system so that you can convert the replicated data from read-only to read-write.  But I really don’t want to have to break replication connections just to test the integrity of my data, so I called DD support, who suggested I use fastcopy to make a read-write copy of the replicated data for DR/data integrity testing.

I asked, “What happens if I don’t have enough room to make a copy?” and was told that fastcopy uses “pointers”, “will not consume any additional space”, and that “the easiest method for r/w access for a DR test would be to fastcopy the replicated data.” That sounded too good to be true, so I decided to test it as follows:

1. Created a new VM, a new backup repository on the DD, and then replicated the new DD MTree from the source to the destination DD.  (Wasn’t going to try it out the first time with my production data)

2. After replication was successful, I created a new, blank MTree on the destination DD system.  Note: if you do not create a blank MTree first, the fastcopy command will fail, as it cannot create MTrees on its own.  (A consolidated CLI example of steps 2-4 appears after step 5.)

3. Execute the fastcopy command, an example of which is shown below:

  • filesys fastcopy source /data/col1/CAN-DR-Test destination /data/col1/VMTest
  • NOTE: The command is case sensitive; keep that in mind when specifying your source and destination MTrees


4. In DD Enterprise Manager, enable a CIFS share for the fastcopied MTree (VMTest in this example)

5. Add VMTest as a backup repository in Veeam, importing existing backups, and then perform a restore.
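
For reference, steps 2 through 4 can also be done from the DD OS command line instead of Enterprise Manager. The following is only a sketch based on the MTree names used above; exact CLI syntax can vary between DD OS releases, so check the built-in command help on your system first:

# step 2: create the empty destination MTree
mtree create /data/col1/VMTest

# step 3: fastcopy the replicated data into the new MTree
filesys fastcopy source /data/col1/CAN-DR-Test destination /data/col1/VMTest

# step 4: share the copy over CIFS so Veeam can reach it
cifs share create VMTest path /data/col1/VMTest clients "*"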

In this instance, the process worked perfectly and assured the client that their data was indeed replicating and accessible if needed.


Filed under BRS/DR, Data Domain, EMC, VMware

Veeam: Replicating Backup Data and Failover with Data Domain appliances

Consider the following scenario.  You purchase two Data Domain DD640 appliances, one to be deployed at your production site and the other at a DR or other remote site.  The DD640s will also serve as the backup repositories for Veeam Backup and Replication, which is used to back up VMs hosted on VMware servers, and the organization wants to replicate the Veeam backup data from DD01 to DD02.  DR and redundancy for the Veeam backup data can be accomplished by using Data Domain replication to replicate the \\DD01\Veeam CIFS data to \\DD02\Veeam-DR.

In this example, I’ll use Data Domain MTree replication to replicate data from DD01 to DD02 and show you how to access backup data should DD01 experience a failure.

1. Within DDEM, click Replication | Summary | Create Pair

2. On the Create Pair screen, select MTree as the Replication Type and then specify the Source Path, the Destination System, and the Destination Path.  Click OK.

3. DDEM begins to build the replication pair as shown below; it also begins the initial replication between the systems.  Click Close.

4. The replication State will display a warning until replication completes.  The Pre-Comp Remaining value should decline as the initial replication continues.

5. When the initial replication completes, check DD01 and DD02 and ensure the replication State reads Normal and the State Description reads Replicating.
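
If you prefer the command line to DDEM, the same pair can be created from the DD OS CLI. The commands below are a sketch using the hostnames above and an assumed MTree path of /data/col1/Veeam; the replication context is normally added on both systems, and syntax may differ slightly by DD OS release:

# on DD01 and DD02: define the replication context
replication add source mtree://DD01/data/col1/Veeam destination mtree://DD02/data/col1/Veeam-DR

# on DD01 (the source): kick off the initial replication
replication initialize mtree://DD02/data/col1/Veeam-DR

# on either system: check progress and state
replication status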

With the MTree replicated, the next step is to set up the CIFS share on DD02 using DDEM. (A CIFS share was already created on DD01.)


6. In DDEM, select DD02 and then click Data Management | CIFS | Shares | Create  

7. On the Create Share screen, enter a Share Name, Directory Path, and Comment, then click OK.

DD02 Veeam Failover Procedure

In the event of a failure on DD01, failing over Veeam backups to DD02 is a two-step procedure:

  • Configuring \\DD02\Veeam-DR as a Veeam Backup Repository
  • Re-mapping backup jobs to the new backup repository

 
The process to add \\DD02\Veeam-DR as a backup repository is almost identical to that of creating the first one on DD01.  Open the Veeam Backup and Replication console and perform the following steps:

Configure \\DD02\Veeam-DR as a Veeam Backup Repository

1. Right-click Backup Repositories and select Add Backup Repository.

2. On the Name screen, enter a Name for the backup repository and click Next.

3. On the Type screen, select Shared folder as the backup repository type and click Next.

4. On the Share screen, enter \\DD02\Veeam-DR as the Shared folder, then specify credentials with read/write access to the share on the Data Domain.  Under the Proxying server section, select Automatic selection and click Next.

5. On the Repository screen, set the Limit maximum concurrent jobs count to a number the backup repository/device can handle and click Next.

6. On the vPower NFS screen, select Enable vPower NFS server and select This server.  For the Folder, accept the default and click Next.

7. On the Review screen, review the proposed settings and check Import existing backups automatically and Import guest file system index.  Click Next to create the new backup repository.

8. On the Apply screen, click Finish to complete the configuration of the backup repository.  It could take a few minutes to create the repository since existing backup jobs are being imported.


Remapping Backup Jobs to DD02


To complete the failover to DD02, existing backup jobs must be remapped:

1. In the Veeam admin console, right-click an existing backup job and select Edit.

2. Click Storage and then change the Backup Repository to DD02.

3. With DD02 selected as the backup repository, click Map backup.

4. On the Select Backup screen, map to the corresponding backup job on DD02 and click OK.  When returned to the Edit Backup Job window, click Finish and repeat steps 1-4 for the remaining backup jobs to complete the failover process.


Filed under BRS/DR, Data Domain, VMware

Deleting old save sets from NetWorker

This is another in the series of “I’m posting this before I forget.”  I had to delete some older NetWorker save sets in order to free some space on a Data Domain DD160, and if I don’t put these down, I’m certain I’ll forget them.  To avoid having to search again, here are my steps (using server1.ballfield.local as an example):

1. I didn’t want to remove every old backup job, just those for a given server that were older than 3 months. To that end, I used MMINFO from the command line of the NetWorker server to get a list of all backups that are 3 months or older and redirect the output to a text file that I’ll use to build a BAT file for deleting the save sets:

mminfo -avot -c server1.ballfield.local -q "savetime<3 months ago" > c:\temp\server1_ss_3MO.txt

This command will give you a text file similar to the following:

MMINFO Output 

2. Next, I opened NetWorker to “spot check” a few of the SSIDs and verify that I was about to delete only the older save sets, as I was initially somewhat confused by the savetime query.  More about savetime can be found here.

NetWorker – Show Save Sets

Use SSIDs to verify age of the save set

 3. After verification, I edited my text file as shown below and saved it as a BAT file:

Use NSRMM to delete old save sets

NSRMM is used to remove save sets.  In this case, the command was:

nsrmm -dy -S SSID/CloneID

  • -d: delete the save set
  • -y: answers “Yes” to the confirmation prompt; if you do not specify -y, nsrmm will ask you for verification before each deletion
  • -S: specifies the SSID (and CloneID, if there is one) to delete
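
To give you an idea of what the finished BAT file looks like, here’s a minimal sketch; the SSIDs below are made up for illustration and would come from the mminfo output generated in step 1:

@echo off
rem delete old save sets for server1.ballfield.local (SSIDs from the mminfo output)
nsrmm -dy -S 4294967290
nsrmm -dy -S 4294967285
rem include the CloneID when one exists
nsrmm -dy -S 4294967281/1362411111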

4. In the command window, execute the BAT file.  Once the BAT file has finished deleting the older save sets, execute the command nsrim -X, which synchronizes the media database and completes the removal of the save sets from NetWorker.  NetWorker support advised that nsrim should not be executed while backups are running.

5. Finally, I wanted to “clean” the space on the DataDomain manually as opposed to waiting for it to automatically do so on its schedule.  I logged into the DD and on the Data Management | File System page, clicked Start Cleaning.  In this case, after deleting my older save sets, I had 400GB of space I could recover.

DataDomain – Start Cleaning

DataDomain – Cleaning finished/more space available
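
As an aside, if you’d rather not use the GUI, cleaning can also be started from the DD OS CLI; a quick sketch (output and options may vary by DD OS release):

# start cleaning immediately
filesys clean start

# check on its progress
filesys clean status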

As you can imagine, the backups function much better when there is free space to write to.  In addition to removing the older save sets, you may need to adjust the NetWorker browse and retention settings (shown below) on your machines to avoid the same issue again.

NetWorker Client – Browse and Retention Policies


Filed under BRS/DR, Data Domain, EMC

vRanger and Data Domain Appliances

As I’m sure most of you know, vRanger is a backup application that protects virtual machines hosted on VMware ESX servers. Data Domain appliances provide a storage solution which reduces the data storage requirements via inline data deduplication. The Data Domain appliance, in this case, is configured to present itself as a Windows server (BACKUP01) and serves as the target for the vRanger backups.

Together, vRanger and Data Domain storage appliances can achieve very high levels of compression; ratios of 100x or higher are possible. However, to realize these ratios in a production environment, specific configuration settings must be applied.

Let’s talk about the desired vRanger installation and configuration settings for optimal Data Domain compression and performance. Data Domain strongly recommends applying each of these configuration modifications, as data compression can be severely limited if they are not.

1. On BACKUP01, create a subdirectory in /backup to contain the backup data. For example: \\BACKUP01\backup\vmbackups. If a backup job is submitted directly to /backup, it may not start properly.
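
For example, from a Windows host with write access to the share, the folder might be created like this (the Z: drive letter is just an arbitrary choice):

net use Z: \\BACKUP01\backup
mkdir Z:\vmbackups
net use Z: /delete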

2. In vRanger, specify the Data Domain UNC path as the Backup destination.

3. The following graphic displays the proper vRanger GUI settings for a backup with Data Domain:

To ensure optimal compression on the Data Domain appliance, always perform uncompressed backups by enabling the -nozip command switch. In the vRanger GUI, check the Do not create an archive of the file box. Failure to use the -nozip command switch results in extremely poor compression. At this point, you may perform a test backup and view the vRanger backup directory on the Data Domain system. If a single zip file is seen, the -nozip option is not set properly.

Perform Full backups. Differential backups are not needed with Data Domain, as storage savings are realized through the data de-duplication compression technology.

Set the Archive Name option to create unique directories for each full backup to avoid overwrites, which can severely degrade the compression ratio. The ideal setting can be applied quickly by checking and then un-checking the Enable Automatic Differential Backup option; the Archive Name option should then be set to [config]_[year][month][day][hour][minute][second].

4. Schedule the backup job as required by your environment.

5. Should a restore be necessary, the restore process for a de-duplicated backup is no different than that of a “regular” backup. Click Restore within the vRanger GUI to perform the restore.


Filed under Data Domain