Example Migration Job
When you specify a Migration Job, you must specify all the standard directives as for a Job. However, certain directives, such as the Level, Client, and FileSet, though they must be defined, are ignored by the Migration job because the values from the original job are used instead.
As an example, suppose you have the following Job that you run every night. Note that there is no Storage directive in the Job resource; instead, there is a Storage directive in each of the Pool resources, and the Pool to be migrated (File) contains a Next Pool directive that defines the output Pool (where the data is written by the migration job).
# Define the backup Job
Job {
  Name = "NightlySave"
  Type = Backup
  Level = Incremental      # default
  Client = rufus-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Messages = Standard
  Pool = Default
}
# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  AutoPrune = yes
  Recycle = yes
  Next Pool = Tape
  Storage = File
  LabelFormat = "File"
}
# Tape pool definition
Pool {
  Name = Tape
  Pool Type = Backup
  AutoPrune = yes
  Recycle = yes
  Storage = DLTDrive
}
# Definition of File storage device
Storage {
  Name = File
  Address = rufus
  Password = "xxx"
  Device = "File"          # same as Device in Storage daemon
  Media Type = File        # same as MediaType in Storage daemon
}
# Definition of DLT tape storage device
Storage {
  Name = DLTDrive
  Address = rufus
  Password = "yyy"
  Device = "HP DLT 80"     # same as Device in Storage daemon
  Media Type = DLT8000     # same as MediaType in Storage daemon
}
We have included only the essential information; the Director, FileSet, Catalog, Client, Schedule, and Messages resources are omitted.
As you can see, running the NightlySave Job backs the data up to File storage, because the Default Pool's Storage directive points to the File Storage resource.
Now suppose we add the following Job resource to this conf file:
Job {
  Name = "migrate-volume"
  Type = Migrate
  Level = Full
  Client = rufus-fd
  FileSet = "Full Set"
  Messages = Standard
  Pool = Default
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = "File"
}
If we then run the job named migrate-volume, all volumes in the Pool named Default (as specified in the migrate-volume Job) that match the regular expression pattern File will be migrated to the tape storage DLTDrive, because the Next Pool directive in the Default Pool specifies that migrations should go to the pool named Tape, which uses Storage DLTDrive.
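With Selection Type = Volume, the Selection Pattern is a regular expression applied to each Volume name in the Pool. As a rough sketch of that matching (using Python's re module to approximate it; Bacula itself uses POSIX regular expressions, and the volume names below are hypothetical examples of what LabelFormat = "File" might produce):

```python
import re

# Hypothetical volume names: LabelFormat = "File" yields names that
# begin with "File"; "Tape0001" stands in for a volume in another pool.
volumes = ["File0001", "File0002", "Tape0001"]

# Selection Type = Volume: the pattern is matched against each volume name.
pattern = re.compile("File")
selected = [v for v in volumes if pattern.search(v)]
print(selected)  # ['File0001', 'File0002']
```

Only the volumes whose names match the pattern are considered for migration; the rest are left untouched.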
If instead we use a Job resource as follows:
Job {
  Name = "migrate"
  Type = Migrate
  Level = Full
  Client = rufus-fd
  FileSet = "Full Set"
  Messages = Standard
  Pool = Default
  Maximum Concurrent Jobs = 4
  Selection Type = Job
  Selection Pattern = ".*Save"
}
all jobs whose names end in Save will be migrated from the Default Pool to the Tape Pool, or equivalently from File storage to Tape storage.
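With Selection Type = Job, the same pattern mechanism is applied to job names rather than volume names. A minimal sketch (again approximating Bacula's POSIX regex matching with Python's re module; WeeklySave and MonthlyArchive are hypothetical job names added for contrast):

```python
import re

# Job names: NightlySave is from the configuration above; the other
# two are hypothetical, to show matching and non-matching cases.
jobs = ["NightlySave", "WeeklySave", "MonthlyArchive"]

# Selection Type = Job: the pattern is matched against each job name.
pattern = re.compile(".*Save")
migrated = [j for j in jobs if pattern.search(j)]
print(migrated)  # ['NightlySave', 'WeeklySave']
```

Any job whose name matches the pattern has its data migrated to the Next Pool of the Pool in which it was written.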