Job Resource

The Job resource defines a Job (Backup, Restore, etc.) that Bacula must perform. Each Job resource definition contains the name of a Client and a FileSet to backup, the Schedule for the Job, where the data are to be stored, and what media Pool can be used. In effect, each Job resource must specify What, Where, How, and When or FileSet, Storage, Backup/Restore/Level, and Schedule respectively.

Note

The FileSet must be specified for a restore job for historical reasons, but it is no longer used.

Only a single type (Backup, Restore, …) can be specified for any job. If you want to backup multiple FileSets on the same Client or multiple Clients, you must define a Job for each one. In addition to jobs defined with a Job resource, Bacula uses “Internal system jobs” and “Console connection jobs”. Those are internal processes, among others the ones with “JobId = 0”, and should be recognized as such when logs are displayed.

Note

You define only a single Job to do the Full, Differential, and Incremental backups since the different backup levels are tied together by a unique Job name. Normally, you will have only one Job per Client, but if a client has a really huge number of files (more than several million), you might want to split it into two Jobs, each with a different FileSet covering only part of the total files.

Multiple Storage Daemons are not currently supported for Jobs, so if you do want to use multiple Storage Daemons, you will need to create a different Job for each and ensure that for each Job the combination of Client and FileSet is unique. The Client and FileSet are what Bacula uses to restore a Client, so if there are multiple Jobs with the same Client and FileSet, or multiple Storage daemons are used, the restore will not work. This problem can be resolved by defining multiple FileSet definitions (the names must be different, but the contents of the FileSets may be the same).
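A minimal sketch of this arrangement (all resource names here are illustrative):

Job {
  Name = "big-client-home"
  Type = Backup
  Client = big-client-fd
  FileSet = "BigClientHome"     # e.g. includes only /home
  ...
}
Job {
  Name = "big-client-system"
  Type = Backup
  Client = big-client-fd
  FileSet = "BigClientSystem"   # e.g. includes everything except /home
  ...
}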

Job

Start of the Job resource. At least one Job resource is required.

Name

Name = <name> The Job name. This name can be specified on the run command in the console program to start a job. If the name contains spaces, it must be specified between quotes. It is generally a good idea to give your job the same name as the Client that it will backup. This permits easy identification of jobs.

When the job actually runs, the unique Job Name will consist of the name you specify here followed by the date and time the job was scheduled for execution. This directive is required.

Enabled

Enabled = <yes|no> This directive allows you to enable or disable a resource. When the Job resource is disabled, the Job will no longer be scheduled and it will not be available in the list of Jobs to be run. To use the Job again, you must enable it.

Tag

Tag = <string, string2, string3> The Tag directive specifies a list of tags to create when creating a new Job record. This directive is optional.

Type

Type = <job-type> The directive specifies the Job type, which may be one of the following: Backup, Restore, Verify, Admin, Migration, or Copy. This directive is required. Within a particular Job Type, there are also Levels as discussed in the next item.

  • Backup Run a backup Job. Normally you will have at least one Backup job for each client you want to save. Normally, unless you turn off cataloging, most of the important statistics and data concerning files backed up will be placed in the Catalog.

  • Restore Run a restore Job. Normally, you will specify only one Restore job which acts as a sort of prototype that you will modify using the console program in order to perform restores. Although certain basic information from a Restore job is saved in the catalog, it is very minimal compared to the information stored for a Backup job – for example, no File database entries are generated since no Files are saved.

    Restore jobs cannot be automatically started by the scheduler as is the case for Backup, Verify and Admin jobs. To restore files, you must use the restore command in the console.

  • Verify Run a Verify Job. In general, Verify jobs permit you to compare the contents of the catalog to the file system, or to what was backed up. In addition to verifying that a tape that was written can be read, you can also use Verify as a sort of tripwire intrusion detection.

  • Admin Run an Admin Job. Only the Director’s runscripts will be executed. The Client is not involved in an Admin job, so features such as Client Run Before Job are not available. Although an Admin job is recorded in the catalog, very little data is saved. An Admin job can be used to periodically run catalog pruning, if you do not want to do it at the end of each Backup Job (see the sketch after this list).

  • Migration Run a Migration Job (similar to a backup job) that reads data that was previously backed up to a Volume and writes it to another Volume (see the Migration and Copy chapter).

  • Copy Run a Copy Job that essentially creates two identical copies of the same backup. The Copy process is essentially identical to the Migration feature with the exception that the Job that is copied is left unchanged (see the Migration and Copy chapter).
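As an illustration of the Admin type, here is a hedged sketch of a Job that periodically runs catalog pruning via a Director-side console command (resource names and the schedule are illustrative; depending on your Bacula version, some of the listed directives may be optional for Admin jobs):

Job {
  Name = "CatalogPruning"
  Type = Admin
  Client = my-fd            # not contacted for Admin jobs
  FileSet = "Full Set"      # required syntactically, but unused here
  Messages = Standard
  Pool = Default
  Schedule = "NightlyAfterBackups"
  RunScript {
    RunsWhen = Before
    RunsOnClient = no       # Admin jobs run scripts on the Director only
    Console = "prune files client=%c yes"
  }
}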

Level

Level = <job-level> The Level directive specifies the default Job level to be run. Each different Job Type (Backup, Restore, …) has a different set of Levels that can be specified. The Level is normally overridden by a different value that is specified in the Schedule resource. This directive is not required, but the level must be specified either by a Level directive or as an override in the Schedule resource.

For a Backup Job, the Level may be one of the following:

  • Full When the Level is set to Full, all files in the FileSet, whether or not they have changed, will be backed up.

  • Incremental When the Level is set to Incremental, all files specified in the FileSet that have changed since the last successful backup of the same Job using the same FileSet and Client will be backed up. If the Director cannot find a previous valid Full backup, then the job will be upgraded into a Full backup. When the Director looks for a valid backup record in the catalog database, it looks for a previous Job with:

    • The same Job name.

    • The same Client name.

    • The same FileSet (any change to the definition of the FileSet, such as adding or deleting a file in the Include or Exclude sections, constitutes a different FileSet).

    • The Job was a Full, Differential, or Incremental backup.

    • The Job terminated normally (i.e. did not fail or was not canceled).

    • The Job started no longer ago than Max Full Interval.

    If any of the above conditions does not hold, the Director will upgrade the Incremental to a Full save. Otherwise, the Incremental backup will be performed as requested.

    The File daemon (Client) decides which files to backup for an Incremental backup by comparing the start time of the prior Job (Full, Differential, or Incremental) against the time each file was last “modified” (st_mtime) and the time its attributes were last “changed” (st_ctime). If the file was modified or its attributes changed on or after this start time, it will then be backed up.

    Some virus scanning software may change st_ctime while doing the scan. For example, if the virus scanning program attempts to reset the access time (st_atime), which Bacula does not use, it will cause st_ctime to change and hence Bacula will backup the file during an Incremental or Differential backup. In the case of Sophos virus scanning, you can prevent it from resetting the access time (st_atime), and hence changing st_ctime, by using the --no-reset-atime option. For other software, please see their manual.

    When Bacula does an Incremental backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the Bacula catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save.

    In addition, if you move a directory rather than copy it, the files in it do not have their modification time (st_mtime) or their attribute change time (st_ctime) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original.

    However, to track deleted files or directory changes in the catalog during an Incremental backup, you can use Accurate mode. This is a quite memory-consuming process. See Accurate mode for more details.

  • Differential When the Level is set to Differential, all files specified in the FileSet that have changed since the last successful Full backup of the same Job will be backed up. If the Director cannot find a valid previous Full backup for the same Job, FileSet, and Client, then the Differential job will be upgraded into a Full backup. When the Director looks for a valid Full backup record in the catalog database, it looks for a previous Job with:

    • The same Job name.

    • The same Client name.

    • The same FileSet (any change to the definition of the FileSet, such as adding or deleting a file in the Include or Exclude sections, constitutes a different FileSet).

    • The Job was a Full backup.

    • The Job terminated normally (i.e. did not fail or was not canceled).

    • The Job started no longer ago than Max Full Interval.

    If any of the above conditions does not hold, the Director will upgrade the Differential to a Full save. Otherwise, the Differential backup will be performed as requested.

    The File Daemon (Client) decides which files to backup for a differential backup by comparing the start time of the prior Full backup Job against the time each file was last “modified” (st_mtime) and the time its attributes were last “changed” (st_ctime). If the file was modified or its attributes were changed on or after this start time, it will then be backed up. The start time used is displayed after “Since” on the Job report. In rare cases, using the start time of the prior backup may cause some files to be backed up twice, but it ensures that no change is missed. As with the Incremental option, you should ensure that the clocks on your server and client are synchronized or as close as possible to avoid the possibility of a file being skipped. Note, on versions 1.33 or greater Bacula automatically makes the necessary adjustments to the time between the server and the client so that the times Bacula uses are synchronized.

    When Bacula does a Differential backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the Bacula catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save. However, to remove deleted files from the catalog during a Differential backup is quite a time consuming process and not currently implemented in Bacula. It is, however, a planned future feature.

    As noted above, if you move a directory rather than copy it, the files in it do not have their modification time (st_mtime) or their attribute change time (st_ctime) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original. Alternatively, you can move the directory, then use the touch program to update the timestamps.

    However, to track deleted files or directory changes in the catalog during a Differential backup, you can use Accurate mode. This is a quite memory-consuming process. See Accurate mode for more details.

    Every once in a while, someone asks why we need Differential backups as long as Incremental backups pick up all changed files. There are possibly many answers to this question, but the one that is the most important for me is that a Differential backup effectively merges all the Incremental and Differential backups since the last Full backup into a single Differential backup. This has two effects:

    1. It gives some redundancy since the old backups could be used if the merged backup cannot be read.

    2. More importantly, it reduces the number of Volumes that are needed to do a restore, effectively eliminating the need to read all the Volumes on which the preceding Incremental and Differential backups since the last Full were written.

  • VirtualFull When the backup Level is set to VirtualFull, Bacula will consolidate the previous Full backup plus the most recent Differential backup and any subsequent Incremental backups into a new Full backup. This new Full backup will then be considered as the most recent Full for any future Incremental or Differential backups. The VirtualFull backup is accomplished without contacting the client by reading the previous backup data and writing it to a volume in a different pool.

    Bacula’s virtual backup feature is often called Synthetic Backup or Consolidation in other backup products.
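In practice, the different backup levels are usually selected by Run directives in the Schedule rather than by a fixed Level in the Job. A typical cycle, similar to the one shipped in the sample bacula-dir.conf, looks like this:

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05            # monthly Full
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05     # daily Incrementals
}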

For a Restore Job, no level needs to be specified.

For a Verify Job, the Level may be one of the following:

  • InitCatalog does a scan of the specified FileSet and stores the file attributes in the Catalog database. Since no file data is saved, you might ask why you would want to do this. It turns out to be a very simple and easy way to have a Tripwire-like feature using Bacula. In other words, it allows you to save the state of a set of files defined by the FileSet and later check to see if those files have been modified or deleted and if any new files have been added. This can be used to detect system intrusion. Typically you would specify a FileSet that contains the set of system files that should not change (e.g. /sbin, /boot, /lib, /bin, etc.). Normally, you run the InitCatalog level verify one time when your system is first setup, and then once again after each modification (upgrade) to your system. Thereafter, when you want to check the state of your system files, you use a Verify level=Catalog. This compares the results of your InitCatalog with the current state of the files.

  • Catalog Compares the current state of the files against the state previously saved during an InitCatalog. Any discrepancies are reported. The items reported are determined by the Verify options specified on the Include directive in the specified FileSet (see the FileSet resource for more details). Typically this command will be run once a day (or night) to check for any changes to your system files.

    Note

    If you run two Verify Catalog jobs on the same client at the same time, the results will certainly be incorrect. This is because Verify Catalog modifies the Catalog database while running in order to track new files.

  • Data Read back the data stored on volumes and check data attributes such as size and the checksum of all the files.

    To run the Verify job, it is possible to use the “jobid” parameter of the “run” command.

    Note

    The current Verify Data implementation requires specifying the correct Storage resource in the Verify job. The Storage resource can be changed with the bconsole command line and with the run menu.

    It is also possible to use the accurate option to check catalog records at the same time. A Verify job with level=Data and accurate=yes can replace the level=VolumeToCatalog option.

    To run a Verify Job with the accurate option, you can either set the option in the Job definition or use accurate=yes on the command line.

    * run job=VerifyData level=Data jobid=10 accurate=yes
    
  • VolumeToCatalog This level causes Bacula to read the file attribute data written to the Volume from the last backup Job for the job specified on the VerifyJob directive. The file attribute data are compared to the values saved in the Catalog database and any differences are reported. This is similar to the DiskToCatalog level except that instead of comparing the disk file attributes to the catalog database, the attribute data written to the Volume is read and compared to the catalog database. Although the attribute data including the signatures (MD5 or SHA1) are compared, the actual file data is not compared (it is not in the catalog).

    Note

    If you run two Verify VolumeToCatalog jobs on the same client at the same time, the results will certainly be incorrect. This is because the Verify VolumeToCatalog modifies the Catalog database while running.

  • DiskToCatalog This level causes Bacula to read the files as they currently are on disk, and to compare the current file attributes with the attributes saved in the catalog from the last backup for the job specified on the VerifyJob directive. This level differs from the VolumeToCatalog level described above by the fact that it doesn’t compare against a previous Verify job but against a previous backup. When you run this level, you must supply the verify options on your Include statements. Those options determine what attribute fields are compared.

    This command can be very useful if you have disk problems because it will compare the current state of your disk against the last successful backup, which may be several jobs old.
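As a hedged sketch of a Verify job (resource names are illustrative), the following re-reads the Volume written by the last "BackupClient1" job and compares it to the catalog:

Job {
  Name = "VerifyVolume"
  Type = Verify
  Level = VolumeToCatalog
  VerifyJob = "BackupClient1"   # the Backup job whose last run is verified
  Client = client1-fd
  FileSet = "Full Set"
  Storage = File1
  Messages = Standard
  Pool = Default
}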

Accurate

Accurate = <yes|no> In accurate mode, the File Daemon knows exactly which files were present after the last backup. So it is able to handle deleted or renamed files.

When restoring a FileSet for a specified date (including “most recent”), Bacula is able to restore exactly the files and directories that existed at the time of the last backup prior to that date including ensuring that deleted files are actually deleted, and renamed directories are restored properly.

If no “accurate” keyword is specified in the FileSet Options resource, Bacula will use by default the “mcs” options:

  • m compare the modification time (st_mtime)

  • c compare the change time (st_ctime)

  • s compare the size

In this mode, the File daemon must keep data concerning all files in memory. So if you do not have sufficient memory, the backup may either be terribly slow or fail.

For 500,000 files, it will require approximately 64 Megabytes of RAM on your File Daemon to hold the required information.
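A minimal sketch of an accurate backup, assuming a FileSet named "MyFileSet" (the accurate option letters mirror the verify options; "5" adds an MD5 comparison and requires Signature = MD5):

Job {
  Name = "accurate-backup"
  Type = Backup
  Client = my-fd
  FileSet = "MyFileSet"
  Accurate = yes
  ...
}
FileSet {
  Name = "MyFileSet"
  Include {
    Options {
      Signature = MD5
      Accurate = mcs5   # compare mtime, ctime, size and MD5 checksum
    }
    File = /etc
  }
}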

Verify Job

VerifyJob = <Job-Resource-Name> If you run a verify job without this directive, the last job run will be compared with the catalog, which means that you must immediately follow a backup by a verify command. If you specify a Verify Job Bacula will find the last job with that name that ran. This permits you to run all your backups, then run Verify jobs on those that you wish to be verified (most often a VolumeToCatalog) so that the tape just written is re-read.

Plugin Options

PluginOptions = <Plugin-Command-Line> If you run a Verify Job with the level Data, it is possible to specify a Plugin command that will be used during the Job. For example, it can be used in conjunction with the Antivirus plugin. The directive can be overridden from the run menu, or from the run command line with the PluginOptions= keyword.

Job Defs

JobDefs = <JobDefs-Resource-Name> If a <JobDefs-Resource-Name> is specified, all the values contained in the named resource will be used as the defaults for the current Job. Any value that you explicitly define in the current Job resource, will override any defaults specified in the resource. The use of this directive permits writing much more compact resources where the bulk of the directives are defined in one or more JobDefs. This is particularly useful if you have many similar Jobs but with minor variations such as different Clients. A simple example of the use of JobDefs is provided in the default bacula-dir.conf file.
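A compact sketch along the lines of the default bacula-dir.conf (names are illustrative): the Job inherits everything from the JobDefs and overrides only the Client.

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File1
  Messages = Standard
  Pool = File
  Priority = 10
}
Job {
  Name = "BackupClient1"
  JobDefs = "DefaultJob"
  Client = client1-fd     # only this differs from the defaults
}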

Bootstrap

Bootstrap = <bootstrap-file> The Bootstrap directive specifies a bootstrap file that, if provided, will be used during Restore Jobs and is ignored for other Job types. The <bootstrap-file> contains the list of tapes to be used in a Restore Job as well as which files are to be restored. Specification of this directive is optional, and if specified, it is used only for a restore job. In addition, when running a Restore job from the console, this value can be changed.

If you use the restore command in the bconsole program, to start a Restore job, the <bootstrap-file> will be created automatically from the files you select to be restored.

For additional details of the bootstrap directive, see Restoring Files with the Bootstrap File chapter of this manual.

Write Bootstrap

Write Bootstrap = <bootstrap-file-specification> The directive specifies a file name where Bacula will write a bootstrap file for each Backup job run. This directive applies only to Backup Jobs. If the Backup job is a Full save, Bacula will erase any current contents of the specified file before writing the bootstrap records. If the Job is an Incremental or Differential save, Bacula will append the current bootstrap record to the end of the file.

Using this feature permits you to constantly have a bootstrap file that can recover the current state of your system. Normally, the file specified should be a mounted drive on another machine, so that if your hard disk is lost, you will immediately have a bootstrap record available. Alternatively, you should copy the bootstrap file to another machine after it is updated. Note, it is a good idea to write a separate bootstrap file for each Job backed up, including the job that backs up your catalog database.

If it begins with a vertical bar (|), Bacula will use the specification as the name of a program to which it will pipe the bootstrap record. It could for example be a shell script that emails you the bootstrap record.
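For example (the mail command and address here are illustrative):

Write Bootstrap = "|/usr/bin/mail -s \"Bootstrap for Job %j\" admin@example.com"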

Before opening the file or executing the specified command, Bacula performs character substitution as in the RunScript directive. To automatically manage your bootstrap files, you can use this in your JobDefs resources:

JobDefs {
    Write Bootstrap = "%c_%n.bsr"
    ...
}

For more details on using this file, see the The Bootstrap File chapter.

Client

Client = <client-resource-name> The Client directive specifies the Client (File Daemon) that will be used in the current Job. Only a single Client may be specified in any one Job. The Client runs on the machine to be backed up, and sends the requested files to the Storage daemon for backup, or receives them when restoring. For additional details, see the Client Resource section of this chapter. This directive is required.

FileSet

FileSet = <FileSet-resource-name> The FileSet directive specifies the FileSet that will be used in the current Job. The FileSet specifies which directories (or files) are to be backed up, and what options to use (e.g. compression, etc.). Only a single FileSet resource may be specified in any one Job. For additional details, see the FileSet Resource section. This directive is required.

Base

Base = <job-resource-name> The Base directive permits you to specify the list of jobs that will be used as a base during a Full backup. This directive is optional. See the Base Job chapter for more information.

Messages

Messages = <messages-resource-name> The Messages directive defines what Messages resource should be used for this job, and thus how and where the various messages are to be delivered. For example, you can direct some messages to a log file, and others can be sent by email. For additional details, see the Messages Resource Chapter of this manual. This directive is required.

Snapshot Retention

SnapshotRetention = <time-period-specification> The Snapshot Retention directive defines the length of time that Bacula will keep Snapshots in the Catalog database and on the Client after the Snapshot creation. When this time period expires, and if using the snapshot prune command, Bacula will prune (remove) Snapshot records that are older than the specified Snapshot Retention period and will contact the FileDaemon to delete Snapshots from the system.

The Snapshot retention period is specified as seconds, minutes, hours, days, weeks, months, quarters, or years. See the Configuration chapter for additional details of time specification.

The default is 0 seconds: Snapshots are deleted at the end of the backup. The Job SnapshotRetention directive overrides the Client SnapshotRetention directive.

Pool

Pool = <pool-resource-name> The Pool directive defines the pool of Volumes where your data can be backed up. Many Bacula installations will use only the Default pool. However, if you want to specify a different set of Volumes for different Clients or different Jobs, you will probably want to use Pools. For additional details, see the Pool Resource section of this chapter. This directive is required.

Full Backup Pool

FullBackupPool = <pool-resource-name> The Full Backup Pool specifies a Pool to be used for Full backups. It will override any Pool specification during a Full backup. This directive is optional.

Differential Backup Pool

DifferentialBackupPool = <pool-resource-name> The Differential Backup Pool specifies a Pool to be used for Differential backups. It will override any Pool specification during a Differential backup. This directive is optional.

Incremental Backup Pool

IncrementalBackupPool = <pool-resource-name> The Incremental Backup Pool specifies a Pool to be used for Incremental backups. It will override any Pool specification during an Incremental backup. This directive is optional.

Virtual Full Backup Pool

VirtualFullBackupPool = <pool-resource-name> The VirtualFull Backup Pool specifies a Pool to be used for VirtualFull backups. It will override any Pool specification during a VirtualFull backup. This directive is optional.

Next Pool

NextPool = <pool-resource-name> This directive, used on Copy, Migration, or VirtualFull jobs, specifies the pool to which Job data will be written. This directive is required for those job types/levels; otherwise it is optional. The Next Pool directive may also be specified in the Pool resource or on a Run directive in the Schedule resource. Any Next Pool directive in the Job resource will take precedence over the Pool definition, and any Next Pool specification on the Run directive in a Schedule resource will take ultimate precedence.
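A hedged sketch of a Copy job using Next Pool (pool and resource names are illustrative; Selection Type chooses which jobs to copy):

Job {
  Name = "CopyDiskToTape"
  Type = Copy
  Client = my-fd              # the Client is not contacted by a Copy job
  FileSet = "Full Set"
  Messages = Standard
  Pool = File                 # source pool whose jobs are read
  Next Pool = Tape            # destination pool the copies are written to
  Selection Type = PoolUncopiedJobs
}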

Backups To Keep

BackupsToKeep = <number> When this directive is present during a Virtual Full (it is ignored for other Job types), it will look for a Full backup that has more subsequent backups than the value specified. In the example below, the Job will simply terminate unless there is a Full backup followed by at least 31 backups of either level Differential or Incremental.

Job {
    Name = "VFull"
    Type = Backup
    Level = VirtualFull
    Client = "my-fd"
    File Set = "FullSet"
    Accurate = Yes
    Backups To Keep = 30
}

Assuming that the last Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full. The result is that you will end up with a Full followed by 30 Incremental backups.

Setting “BackupsToKeep = 0” will cause Bacula to run the Virtual Full if there is at least one subsequent incremental/differential job after the Full-level job; no incremental/differential job is kept after the Virtual Full job has run. This is the default behavior if this directive is not set.

Delete Consolidated Jobs

DeleteConsolidatedJobs = <yes/no> If set to yes, it will cause any old Job that is consolidated during a Virtual Full to be deleted. In the example above we saw that a Full plus one other job (either an Incremental or Differential) were consolidated into a new Full backup. The original Full plus the other Job consolidated will be deleted. The default value is no.

Schedule

Schedule = <schedule-name> The Schedule directive defines what schedule is to be used for the Job. The schedule in turn determines when the Job will be automatically started and what Job level (i.e. Full, Incremental, etc.) is to be run. This directive is optional, and if left out, the Job can only be started manually using the Console program. Although you may specify only a single Schedule resource for any one job, the Schedule resource may contain multiple Run directives, which allow you to run the Job at many different times, and each Run directive permits overriding the default Job Level, Pool, Storage, and Messages resources. This gives considerable flexibility in what can be done with a single Job. For additional details, see the Schedule Resource chapter.

Storage

Storage = <storage-resource-name> The Storage directive defines the name of the storage services where you want to backup the FileSet data. For additional details, see the Storage Resource Chapter. The Storage resource may also be specified in the Job’s Pool resource, in which case the value in the Pool resource overrides any value in the Job. This Storage resource definition is not required by either the Job resource or in the Pool, but it must be specified in one or the other; if not, an error will result. Storage can be specified either as a single item or as a comma-separated list of storages to use according to the StorageGroupPolicy. If the first storage daemons on the list are unavailable (because of network problems, or because they are broken or unreachable for some other reason), Bacula will take the first available one from the list (which is sorted according to the policy used) that is network reachable and healthy.

Storage Group Policy

StorageGroupPolicy = <Storage-Group-Policy-Name> The Storage Group Policy determines how Storage resources (from the Storage directive) are chosen from the Storage list. If no StorageGroupPolicy is specified, Bacula always tries to use the first available Storage from the provided list. If the first few storage daemons are unavailable (because of network problems, or because they are broken or unreachable for some other reason), Bacula will take the first one from the list (sorted according to the policy used) which is network reachable and healthy. Currently supported policies are:

  • ListedOrder This is the default policy, which uses the first available storage from the list provided.

  • LeastUsed This policy scans all storage daemons from the list and chooses the one with the least number of jobs currently running.

  • FreeSpace This policy queries each Storage Daemon in the list for its FreeSpace (as a sum of the devices specified in the SD config) and sorts the list according to the FreeSpace returned, so that the first item in the list is the SD with the largest amount of FreeSpace, while the last one is the one with the least FreeSpace available.

  • LastBackedUpTo This policy ensures that a job will use the same storage that the previous Full/Differential/Incremental used; it causes a single chain of Full+Differential+Incremental jobs to use the same storage. When doing a Full backup, the policy dictates that the job should choose the storage that was utilized earliest by the same job at Full level. The goal is to split the jobs to improve redundancy, but keep a single job chain in the same storage. This policy is compatible with the Single Item Restore feature.

  • FreeSpaceLeastUsed This policy ensures that a job is backed up to the storage with more free space and fewer running jobs than the others. Among the candidate storages, the least used one is selected. The candidate storages are determined by the StorageGroupPolicyThreshold directive: a storage is a candidate if its free space lies between MaxFreeSpace-StorageGroupPolicyThreshold and MaxFreeSpace, where MaxFreeSpace is the highest free space value of all storages in the group.

Example, with StorageGroupPolicyThreshold = 100GB and the storages’ free space being:

Storage1 = 500GB free
Storage2 = 200GB free
Storage3 = 400GB free
Storage4 = 500GB free

In this case MaxFreeSpace = 500GB, so Storage1, Storage4, and Storage3 are candidates (free space between 400GB and 500GB). If 5 jobs are running on Storage1, 2 on Storage4, and 3 on Storage3, then Storage4 will be the selected storage.

Storage Group Policy Threshold

StorageGroupPolicyThreshold = <threshold-size> Used in conjunction with the FreeSpaceLeastUsed StorageGroupPolicy to specify the free-space range for candidate storages.
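A minimal sketch of a storage group (storage names are illustrative):

Job {
  Name = "GroupedStorageBackup"
  Storage = File1, File2, File3    # tried according to the policy below
  StorageGroupPolicy = LeastUsed
  ...
}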

Max Start Delay

MaxStartDelay = <time> The time specifies the maximum delay between the scheduled time and the actual start time for the Job. For example, a job can be scheduled to run at 1:00am, but because other jobs are running, it may wait to run. If the delay is set to 3600 (one hour) and the job has not begun to run by 2:00am, the job will be canceled. This can be useful, for example, to prevent jobs from running during day time hours. The default is 0 which indicates no limit.

Max Run Time

MaxRunTime = <time> The time specifies the maximum allowed time that a job may run, counted from when the job starts, (not necessarily the same as when the job was scheduled).

By default, the watchdog thread will kill any Job that has run more than 200 days. The maximum watchdog timeout is independent of MaxRunTime and cannot be changed.

Incremental Max Run Time

IncrementalMaxRunTime = <time> The time specifies the maximum allowed time that an Incremental backup job may run, counted from when the job starts, (not necessarily the same as when the job was scheduled).

Differential Max Run Time

DifferentialMaxRunTime = <time> The time specifies the maximum allowed time that a Differential backup job may run, counted from when the job starts, (not necessarily the same as when the job was scheduled).

Max Run Sched Time

MaxRunSchedTime = <time> The time specifies the maximum allowed time that a job may run, counted from when the job was scheduled. This can be useful to prevent jobs from running during working hours. It can be thought of as Max Start Delay + Max Run Time.

Max Wait Time

MaxWaitTime = <time> The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from when the job starts (not necessarily the same as when the job was scheduled).
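A hedged sketch combining these time controls (the values are illustrative):

Job {
  Name = "NightlyBackup"
  Max Start Delay = 3 hours      # cancel if not started within 3h of the scheduled time
  Max Run Time = 8 hours         # cancel if still running 8h after it started
  Max Run Sched Time = 12 hours  # cancel if still running 12h after the scheduled time
  Max Wait Time = 1 hour         # cancel if blocked on a resource for more than 1h
  ...
}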

(Figure: Job time control directives)

Maximum Spawned Jobs

MaximumSpawnedJobs = <nb> The Job resource permits specifying a maximum number of spawned jobs. The default is 600. This directive can be useful if you have big hardware and you run a lot of Migration/Copy jobs which start at the same time.

Maximum Bandwidth

MaximumBandwidth = <speed> The speed parameter specifies the maximum allowed bandwidth in bytes per second that a job may use. You may specify the following (case-insensitive) speed parameter modifiers: kb/s (1,000 bytes per second), k/s (1,024 bytes per second), mb/s (1,000,000 bytes per second), or m/s (1,048,576 bytes per second).

The use of TLS, TLS PSK, CommLine compression, and Deduplication can interfere with the value set by this directive.

This functionality affects only the data transfers between File Daemon and Storage Daemon, and was introduced with Bacula 6.0.0.
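For example, to limit a job to roughly 5 MB/s:

Maximum Bandwidth = 5mb/s   # 5,000,000 bytes per second between FD and SD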

Max Full Interval

MaxFullInterval = <time> The time specifies the maximum allowed age (counting from start time) of the most recent successful Full backup that is required in order to run Incremental or Differential backup jobs. If the most recent Full backup is older than this interval, Incremental and Differential backups will be upgraded to Full backups automatically. If this directive is not present, or specified as 0, then the age of the previous Full backup is not considered.

Max Virtual Full Interval

MaxVirtualFullInterval = <time> The time specifies the maximum allowed age (counting from start time) of the most recent successful Full backup that is required in order to run Incremental, Differential or Full backup jobs. If the most recent Full backup is older than this interval, Incremental, Differential and Full backups will be converted to a VirtualFull backup automatically. If this directive is not present, or specified as 0, then the age of the previous Full backup is not considered.

Note

A VirtualFull job is not a real backup job. A VirtualFull will merge existing jobs to create a new virtual Full job in the catalog and will copy the existing data to new volumes.

The Client is not used in a VirtualFull job, so when using this directive, the Job that was supposed to run and save recently modified data on the Client will not run. Only the next regular Job defined in the Schedule will backup the data. It will not be possible to restore the data that was modified on the Client between the last Incremental/Differential and the VirtualFull.

Prefer Mounted Volumes

PreferMountedVolumes = <yes|no> If the Prefer Mounted Volumes directive is set to yes (default yes), the Storage daemon is requested to select either an Autochanger or a drive with a valid Volume already mounted in preference to a drive that is not ready. This means that all jobs will attempt to append to the same Volume (providing the Volume is appropriate – right Pool, … for that job), unless you are using multiple pools. If no drive with a suitable Volume is available, it will select the first available drive. Note, any Volume that has been requested to be mounted will be considered valid as a mounted volume by another job. Thus if multiple jobs start at the same time and they all prefer mounted volumes, the first job will request the mount, and the other jobs will use the same volume.

If the directive is set to no, the Storage daemon will prefer finding an unused drive, otherwise, each job started will append to the same Volume (assuming the Pool is the same for all jobs). Setting Prefer Mounted Volumes to no can be useful for those sites with multiple drive autochangers that prefer to maximize backup throughput at the expense of using additional drives and Volumes. This means that the job will prefer to use an unused drive rather than use a drive that is already in use.

Despite the above, we recommend against setting this directive to no since it tends to add a lot of swapping of Volumes between the different drives and can easily lead to deadlock situations in the Storage daemon.

A better alternative for using multiple drives is to use multiple pools so that Bacula will be forced to mount Volumes from those Pools on different drives.

Prune Jobs

PruneJobs = <yes|no> Normally, pruning of Jobs from the Catalog is specified on a Client by Client basis in the Client resource with the AutoPrune directive. If this directive is specified (not normally) and the value is yes, it will override the value specified in the Client resource. The default is no.

Prune Files

PruneFiles = <yes|no> Normally, pruning of Files from the Catalog is specified on a Client by Client basis in the Client resource with the AutoPrune directive. If this directive is specified (not normally) and the value is yes, it will override the value specified in the Client resource. The default is no.

Prune Volumes

PruneVolumes = <yes|no> Normally, pruning of Volumes from the Catalog is specified on a Pool by Pool basis in the Pool resource with the AutoPrune directive. Note, this is different from File and Job pruning which is done on a Client by Client basis. If this directive is specified (not normally) and the value is yes, it will override the value specified in the Pool resource. The default is no.

Runscript

RunScript {<body-of-runscript>} The RunScript directive behaves like a resource in that it requires opening and closing braces around a number of directives that make up the body of the runscript.

The specified Command (see below for details) is run as an external program prior to or after the current Job. This is optional. By default, the program is executed on the Client side as with the older ClientRunXXXJob directives.

Console options are special commands that are sent to the Director instead of the OS. At this time, console command outputs are redirected to the log with the jobid 0.

You can use the following console commands: delete, disable, enable, estimate, list, llist, memory, prune, purge, reload, status, setdebug, show, time, trace, update, version, .client, .jobs, .pool, .storage.

See the console chapter for more information. You need to specify all needed information on the command line; nothing will be prompted.

Example:

Console = "prune files client=%c"
Console = "update stats age=3"

You can specify more than one Command/Console option per RunScript.

The following options may be specified in the body of the runscript:

Table 12.1: Options for Run Script

Option             Value                                Default  Information
Runs On Success    Yes / No                             Yes      Run command if JobStatus is successful
Runs On Failure    Yes / No                             No       Run command if JobStatus isn’t successful
Runs On Client     Yes / No                             Yes      Run command on client
Runs When          Before / After / Always / Never /    Never    When to run commands
                   AfterVSS / AtJobCompletion / Queued
Fail Job On Error  Yes / No                             Yes      Fail job if script returns something other than 0
Command            String                                        Path to your script
Console            String                                        Console command
Timeout            Number                               0        Timeout for the command

Important

Regarding the Runs on Client option, scripts will run on Client only with Jobs that use a Client (Backup, Restore, some Verify Jobs). For other Jobs (Copy, Migration, Admin), RunsOnClient should be set to No.

Any output sent by the command to standard output will be included in the Bacula job report. The command string must be a valid program name or name of a shell script.

In addition, the command string is parsed and then fed to the OS, which means that the path will be searched to execute your specified command. However, there is no shell interpretation. As a consequence, if you invoke complicated commands or want any shell features such as redirection or piping, you must call a shell script and do it inside that script.

Before submitting the specified command to the operating system, Bacula performs character substitution of the following characters:

%% = %
%b = Job bytes
%c = Client’s name
%C = If the job is a cloned job (Only on director side)
%d = Daemon’s name (Such as host-dir or host-fd)
%D = Director’s name (Also valid on file daemon)
%e = Job exit status
%E = Non-fatal job errors
%f = Job file set (Only on director side)
%F = Job files
%h = Client address
%i = Numerical job id
%I = Migration/Copy job id (Only within job Type = "Copy/Migration" scope)
%j = Unique job id
%l = Job level
%n = Job name
%o = Job priority
%p = Pool name (Only on director side)
%P = Current process id
%R = Read bytes
%s = Since time
%S = Previous job name (Only on file daemon side)
%t = Job type (Backup, ...)
%v = Volume name (Only on director side)
%w = Storage name (Only on director side)
%x = Spooling enabled? ("yes" or "no")

Some character substitutions are not available in all situations. The job exit status code %e expands to the following values:

  • OK

  • Error

  • Fatal Error

  • Canceled

  • Differences

  • Unknown term code

Thus if you use it as a command parameter or option, you may need to enclose it within some sort of quotes.
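For example, because %e can expand to a multi-word value such as "Fatal Error", it should be quoted when passed as an argument (the notification script shown here is hypothetical):

RunScript {
  RunsWhen = After
  RunsOnClient = no
  RunsOnFailure = yes
  Command = "/opt/bacula/scripts/notify.sh %i \"%e\""   # hypothetical script taking jobid and exit status
}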

You can use the following shortcuts, which map to the older single-directive functionality:

Table 12.2: RunScript shortcuts

Keyword                Runs On Success  Runs On Failure  Fail Job On Error  Runs On Client  Runs When
Run Before Job                                           Yes                No              Before
Run After Job          Yes              No                                  No              After
Run After Failed Job   No               Yes                                 No              After
Client Run Before Job                                                       Yes             Before
Client Run After Job   Yes              No               No                 Yes             After

Examples:

RunScript {
    RunsWhen = Before
    FailJobOnError = No
    Command = "systemctl stop postgresql"
}
RunScript {
    RunsWhen = After
    RunsOnFailure = yes
    Command = "systemctl start postgresql"
}

Notes about the Run Queue Advanced Control with RunsWhen=Queued

It is possible to have advanced control of the Bacula Director run queue. When a new starting Job is added to the run queue, the Director will check a certain number of conditions before letting the Job start. In the condition list, we can find:

  • Execution time (when parameter)

  • Maximum Concurrent Jobs (for Director, Storage, Client, etc.)

  • Priority

  • etc.

By default, when all the conditions are met, the Director will move the Job to the active run queue and start the Job.

It is possible to add custom conditions via a RunScript defined with RunsWhen = Queued; the script’s exit code determines whether the Job must stay in the queue or whether the Director can go on to analyze the other conditions. The maximum execution time of the script is 15 seconds by default. The following exit codes can be used by the script:

  • 0 Job can run, the script can be executed again if other conditions are not met.

  • 1 Job must wait, the script will be executed again in 90 seconds.

  • 2 Job will be canceled.

  • -1 Job can run; no further calls of the script will be made while trying to acquire resources.

It can be used to control very precisely the Job execution flow. The output of the script is sent to the Job log.

For example, you might want to block new jobs from starting during a maintenance window. Here the maintenance mode is represented by a file on disk.

#!/bin/sh

if [ -f /opt/bacula/working/maintenance-mode ]; then
   echo "System under maintenance..."
   exit 1
fi
exit 0

Job {
  Name = Backup1
  RunScript {
    RunsWhen = Queued
    RunsOnClient = no
    FailJobOnError = no
    Command = /opt/bacula/bin/maintenance-check.sh
  }
  JobDefs = Defaults
  Client = xxxx-fd
  FileSet = FS_xxxx
}

In the following example, the script will control the number of Jobs running for a given Client, but will not take restore Jobs into account.

#!/usr/bin/perl -w
use strict;
use JSON;
my $client = shift || '';
my $status = `echo -ne ".api 2 api_opts=j\n.status dir running client=$client\n" | bconsole -u10 | grep '{'`;
my $info = JSON::decode_json($status);
my $nb_running = scalar( grep { $_->{status} eq 'R' && $_->{type} eq 'B' }  @{ $info->{running}} );
if ($nb_running >= 10) {
   print("Found $nb_running Jobs for $client\n");
   exit 1;
}
exit 0;

Job {
  Name = Backup1
  RunScript {
    RunsWhen = Queued
    RunsOnClient = no
    FailJobOnError = no
    Command = "/opt/bacula/bin/running-jobs.sh %c"
  }
  JobDefs = Defaults
  Client = xxxx-fd
  FileSet = FS_xxxx
}

Notes about ClientRunBeforeJob

For compatibility reasons, with this shortcut, the command is executed directly when the client receives it. If the command ends with an error, the other remote runscripts will be discarded. To be sure that all commands will be sent and executed, you have to use the RunScript syntax.

Special Shell Considerations

A “Command =” can be one of:

  • The full path to an executable program.

  • The name of an executable program that can be found in the $PATH

  • A complex shell command in the form of: "sh -c \"your commands go here\""
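For the third form, a hedged sketch (the command and path are illustrative) that uses shell redirection, which a plain Command string does not support:

Command = "sh -c \"dmesg > /tmp/dmesg-before-backup.log\""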

Special Windows Considerations

You can run scripts just after snapshots initializations with AfterVSS keyword.

In addition, for a Windows client, please take note that you must ensure a correct path to your script. The script or program can be a .com, .exe or a .bat file. If you just put the program name in then Bacula will search using the same rules that cmd.exe uses (current directory, Bacula bin directory, and PATH). It will even try the different extensions in the same order as cmd.exe. The command can be anything that cmd.exe or command.com will recognize as an executable file.

However, if you have slashes in the program name then Bacula figures you are fully specifying the name, so you must also explicitly add the three character extension.

System environment variables can be referenced as %var% and can be used as either part of the command name or arguments.

So if you have a script in the Bacula installation directory then the following lines should work fine:

    Client Run Before Job = systemstate
or
    Client Run Before Job = systemstate.bat
or
    Client Run Before Job = "systemstate"
or
    Client Run Before Job = "systemstate.bat"
or
    ClientRunBeforeJob = "\"C:/Program Files/Bacula/systemstate.bat\""

The outer set of quotes is removed when the configuration file is parsed. You need to escape the inner quotes so that they are there when the code that parses the command line for execution runs so it can tell what the program name is.

The special characters

&<>()@^\|

will need to be quoted, if they are part of a filename or argument.

If someone is logged in, a blank “command” window running the commands may appear during the execution of the command.

Suggestions for running external programs on Windows machines are:

  1. You might want to wrap more complex commands in a .bat or .cmd file which runs the actual commands, rather than trying to run (for example) regedit /e directly, because quoting and escaping command parameters correctly is nearly impossible on Windows.

  2. The script file should explicitly “exit 0” on successful completion.

  3. The path to the script file should be specified in Unix form:

ClientRunBeforeJob = "\"c:/Program Files/Bacula/systemstate.bat\""

rather than DOS/Windows form:

ClientRunBeforeJob = "C:\Program Files\Bacula\systemstate.bat" # INCORRECT

For Windows, note that there are certain limitations:

There are limitations of cmd.exe, which is used to execute the commands. Bacula prefixes the string you supply with cmd.exe /c. To test that your command works, you should type cmd /c "C:/Program Files/test.exe" at a cmd prompt and see what happens. Once the command is correct, insert a backslash (\) before each double quote ("), and then put quotes around the whole thing when putting it in the Director’s configuration file. You either need to have only one set of quotes or else use the short name and don’t put quotes around the command path. Note that this implies that command parameters with spaces may not be passed correctly.

Below is the output from cmd’s help as it relates to the command line passed to the /c option.

If /C or /K is specified, then the remainder of the command line after the switch is processed as a command line, where the following logic is used to process quote (") characters:

  1. If all of the following conditions are met, then quote characters on the command line are preserved:

    • no /S switch

    • exactly two quote characters

    • no special characters between the two quote characters, where special is one of: &<>()@^\|

    • there are one or more whitespace characters between the two quote characters

    • the string between the two quote characters is the name of an executable file.
  2. Otherwise, old behavior is to see if the first character is a quote character and if so, strip the leading character and remove the last quote character on the command line, preserving any text after the last quote character.

The following example of the use of the Client Run Before Job directive was submitted by a user:

You could write a shell script to back up a DB2 database to a FIFO. The shell script is:

#!/bin/sh
# ===== backupdb.sh
DIR=/u01/mercuryd
mkfifo $DIR/dbpipe
db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING &
sleep 1

The following line in the Job resource in the bacula-dir.conf file:

Client Run Before Job = "su - mercuryd -c /̈u01/mercuryd/backupdb.sh ’

When the job is run, you will get messages from the output of the script stating that the backup has started. Even though the command being run is backgrounded with &, the job will block until the command completes, and thus the backup stalls.

To remedy this situation, the “db2 BACKUP DATABASE” line should be changed to the following:

db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING > $DIR/backup.log 2>&1 < /dev/null &

Run Before Job

RunBeforeJob = <command> The specified <command> is run as an external program prior to running the current Job. This directive is not required, but if it is defined, and if the exit code of the program run is non-zero, the current Bacula job will be canceled.

Run Before Job = "echo test"

is equivalent to:

RunScript {
    Command = "echo test"
    RunsOnClient = No
    RunsWhen = Before
}

Run After Job

RunAfterJob = <Command> The specified <Command> is run as an external program if the current job terminates normally (without error or without being canceled). This directive is not required. If the exit code of the program run is non-zero, Bacula will print a warning message. Before submitting the specified command to the operating system, Bacula performs character substitution as described above for the RunScript directive.

See Run After Failed Job if you want to run a script after the job has terminated with any non-normal status.

Run After Failed Job

RunAfterFailedJob = <Command> The specified <Command> is run as an external program after the current job terminates with any error status. This directive is not required. The command string must be a valid program name or name of a shell script. If the exit code of the program run is non-zero, Bacula will print a warning message. Before submitting the specified command to the operating system, Bacula performs character substitution as described above for the RunScript directive.

Note

If you wish your script to run regardless of the exit status of the Job, you can use this:

RunScript {
    Command = "echo test"
    RunsWhen = After
    RunsOnFailure = yes
    RunsOnClient = no
    RunsOnSuccess = yes # default, you can drop this line
}

An example of the use of this directive is given in the Tips chapter of the Bacula Enterprise Problems Resolution Guide.

Client Run Before Job

ClientRunBeforeJob = <Command> This directive is the same as Run Before Job except that the program is run on the client machine. The same restrictions apply to Unix systems as noted above for the RunScript. ClientRunBeforeJob can be used with Backup and Restore jobs.

Client Run After Job

ClientRunAfterJob = <Command> The specified <Command> is run on the client machine as soon as data spooling is complete, in order to allow restarting applications on the client as soon as possible. ClientRunAfterJob can be used with Backup and Restore jobs.

Note

See the notes above in RunScript concerning Windows clients.

Rerun Failed Levels

RerunFailedLevels = <yes|no> If this directive is set to yes (default no), and Bacula detects that a previous job at a higher level (i.e. Full or Differential) has failed, the current job level will be upgraded to the higher level. This is particularly useful for laptops, which may often be unreachable; if a prior Full save has failed, you may wish the very next backup to be a Full save rather than whatever level it is started as.

There are several points that must be taken into account when using this directive: first, a failed job is defined as one that has not terminated normally, which includes any running job of the same name (you need to ensure that two jobs of the same name do not run simultaneously); secondly, the IgnoreFileSetChanges directive is not considered when checking for failed levels, which means that any FileSet change will trigger a rerun.

Spool Data

SpoolData = <yes|no> If this directive is set to yes (default no), the Storage Daemon will be requested to spool the data for this Job to disk rather than write it directly to the Volume (normally a tape).

Thus the data is written in large blocks to the Volume rather than small blocks. This directive is particularly useful when running multiple simultaneous backups to tape. Once all the data arrives or the spool files’ maximum sizes are reached, the data will be despooled and written to tape.

Spooling data prevents interleaving data from several jobs and reduces or eliminates tape drive stop-and-start, commonly known as “shoe-shine”.

We don’t recommend using this option if you are writing to a disk file; using this option will probably just slow down the backup jobs.

Note

When this directive is set to yes, Spool Attributes is also automatically set to yes.

Spool Attributes

SpoolAttributes = <yes|no> The default is yes: the Storage daemon will buffer the File attributes and Storage coordinates to a temporary file in the Working Directory, and then, when writing the Job data to the tape is completed, the attributes and storage coordinates will be sent to the Director. If set to no, the File attributes are sent by the Storage daemon to the Director as they are stored on tape.

Note

When Spool Data is set to yes, Spool Attributes is also automatically set to yes.

Spool Size

SpoolSize = <bytes> where the bytes specify the maximum spool size for this job. The default is taken from the Device Maximum Spool Size limit.

Where

Where = <directory> This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored. This permits files to be restored in a different location from which they were saved. If Where is not specified or is set to slash (/), the files will be restored to their original location. By default, we have set Where in the example configuration files to be /tmp/bacula-restores. This is to prevent accidental overwriting of your files.

Add Prefix

AddPrefix = <directory> This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored.

Add Suffix

AddSuffix = <extension> This directive applies only to a Restore job and specifies a suffix to add to all files being restored.

Using Add Suffix=.old, /etc/passwd will be restored to /etc/passwd.old

Strip Prefix

StripPrefix = <directory> This directive applies only to a Restore job and specifies a prefix to remove from the directory name of all files being restored.

Using Strip Prefix=/etc, /etc/passwd will be restored to /passwd

Under Windows, if you want to restore c:/files to d:/files, you can use:

Strip Prefix = c:
Add Prefix = d:

Regex Where

RegexWhere = <expressions> This directive applies only to a Restore job and specifies a regex filename manipulation of all files being restored.
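For illustration, a hedged sketch using a sed-like substitution of the form !from!to!, the style accepted by the restore command’s regexwhere option; the paths are assumptions:

RegexWhere = "!/prod!/test!"

With this expression, a file saved as /prod/www/index.html would be restored as /test/www/index.html.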

Replace

Replace = <replace-option> This directive applies only to a Restore job and specifies what happens when Bacula wants to restore a file or directory that already exists. You have the following options for <replace-option>:

  • always when the file to be restored already exists, it is deleted and then replaced by the copy that was backed up. This is the default value.

  • ifnewer if the backed up file (on tape) is newer than the existing file, the existing file is deleted and replaced by the backup.

  • ifolder if the backed up file (on tape) is older than the existing file, the existing file is deleted and replaced by the backup.

  • never if the backed up file already exists, Bacula skips restoring this file.

Restore Client

RestoreClient = <client-resource-name> The RestoreClient directive specifies the default Client (File Daemon) that will be used with the restore job. If this directive is not set, the restore Client defaults to the backup Client as usual. It is possible to define a dedicated restore job and run automatic (scheduled) restore tests of your backups, which will be redirected to the restore test Client.
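A hedged sketch of such a dedicated restore job; the restore test client name is an assumption, and the other resource names reuse those from the example Job resource later in this section:

Job {
    Name = "RestoreTest"                  # hypothetical job name
    Type = Restore
    Client = bacula-fd                    # the backup client recorded in the catalog
    RestoreClient = restore-test-fd       # assumed dedicated test client
    FileSet = "LinuxHome-fileset"
    Storage = DiskAutochanger
    Pool = DiskBackup365d
    Messages = Standard
    Where = /tmp/bacula-restores          # keep restored files out of the original paths
    Replace = never                       # never overwrite existing files
}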

Prefix Links

PrefixLinks = <yes|no> If a Where path prefix is specified for a recovery job, apply it to absolute links as well. The default is no. When set to yes, then while restoring files to an alternate directory, any absolute soft links will also be modified to point to the new alternate directory. Normally this is what is desired, i.e. everything is self-consistent. However, if you wish to later move the files to their original locations, all files linked with absolute names will be broken.

Maximum Concurrent Jobs

MaximumConcurrentJobs = <number> where <number> is the maximum number of Jobs from the current Job resource that can run concurrently. Note, this directive limits only Jobs with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs, such as in the Director, Client, or Storage resources, will also apply in addition to the limit specified here. The default is set to 1, but you may set it to a larger number. We strongly recommend that you read the WARNING documented under Maximum Concurrent Jobs in the Director’s resource.
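For example (a sketch; the value is illustrative):

Maximum Concurrent Jobs = 4   # up to four runs of this Job may overlap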

Reschedule On Error

RescheduleOnError = <yes|no> If this directive is enabled, and the job terminates in error, the job will be rescheduled as determined by the Reschedule Interval and Reschedule Times directives. If you cancel the job, it will not be rescheduled. The default is no (i.e. the job will not be rescheduled).

This specification can be useful for portables, laptops, or other machines that are not always connected to the network or switched on.

Reschedule Incomplete Jobs

RescheduleIncompleteJobs = <yes|no> If this directive is enabled, and the job terminates in incomplete status, the job will be rescheduled as determined by the RescheduleInterval and RescheduleTimes directives. If you cancel the job, it will not be rescheduled. The default is yes (i.e. Incomplete jobs will be rescheduled).

Reschedule Interval

RescheduleInterval = <time-specification> If you have specified RescheduleOnError = yes and the job terminates in error, it will be rescheduled after the interval of time specified by <time-specification>. See the time specification formats in the Configuration chapter for details of time specifications. If no interval is specified, the job will not be rescheduled on error. The default Reschedule Interval is 30 minutes (1800 seconds).

Reschedule Times

RescheduleTimes = <count> This directive specifies the maximum number of times to reschedule the job. If it is set to zero (0, the default) the job will be rescheduled an indefinite number of times.
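Taken together, a hedged sketch that retries a failed job three times at one-hour intervals (the values are illustrative):

Reschedule On Error = yes
Reschedule Interval = 1 hour
Reschedule Times = 3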

Allow Incomplete Jobs

AllowIncompleteJobs = <yes|no> If this directive is disabled, and the job terminates in incomplete status, the data of the job will be discarded and the job will be marked in error. Bacula will treat this job like a regular job in error. The default is yes.

Allow Duplicate Jobs

AllowDuplicateJobs = <yes|no> A duplicate job in the sense we use it here means a second or subsequent job with the same name starts. This happens most frequently when the first job runs longer than expected because no tapes are available. The default is yes.

If this directive is enabled, duplicate jobs will be run. If the directive is set to no, then only one job of a given name may run at one time, and the action that Bacula takes to ensure only one job runs is determined by the other directives (see below).

Allow Duplicate Jobs usage

If AllowDuplicateJobs is set to no and two jobs are present and none of the three directives given below permit cancelling a job, then the current job (the second one started) will be cancelled.

Cancel Lower Level Duplicates

CancelLowerLevelDuplicates = <yes|no> If AllowDuplicateJobs is set to no and this directive is set to yes, Bacula will choose among the duplicate jobs the one with the highest level. For example, it will cancel a previous Incremental to run a Full backup. It works only for Backup jobs. The default is no. If the levels of the duplicate jobs are the same, nothing is done and the other Cancel XXX Duplicates directives will be examined.

Cancel Queued Duplicates

CancelQueuedDuplicates = <yes|no> If AllowDuplicateJobs is set to no and this directive is set to yes, any job that is already queued to run but not yet running will be canceled. The default is no.

Cancel Running Duplicates

CancelRunningDuplicates = <yes|no> If AllowDuplicateJobs is set to no and this directive is set to yes, any job that is already running will be canceled. The default is no.
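Combining these, a hedged sketch that keeps only the highest-level job and drops queued duplicates; this is one common arrangement, not the only valid one:

Allow Duplicate Jobs = no
Cancel Lower Level Duplicates = yes   # prefer a Full over a queued Incremental
Cancel Queued Duplicates = yes        # drop duplicates still waiting to run
Cancel Running Duplicates = no        # never kill a job already in progress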

Run

Run = <job-name> The Run directive (not to be confused with the Run option in a Schedule) allows you to start other jobs or to clone jobs. By using the cloning keywords (see below), you can backup the same data (or almost the same data) to two or more drives at the same time. The <job-name> is normally the same name as the current Job resource (thus creating a clone). However, it may be any Job name, so one job may start other related jobs.

The part after the equal sign must be enclosed in double quotes, and can contain any string or set of options (overrides) that you can specify when entering the run command from the console. For example, storage=DiskAutochanger … . In addition, there are two special keywords that permit you to clone the current job. They are level=%l and since=%s. The %l in the level keyword permits entering the actual level of the current job and the %s in the since keyword permits putting the same time for comparison as used on the current job. Note, in the case of the since keyword, the %s must be enclosed in double quotes, and thus they must be preceded by a backslash since they are already inside quotes. For example:

run = "LinuxHome level=%l since=\"%s\" storage=DiskAutochanger"

A cloned job will not start additional clones, so it is not possible to recurse.

Note

All cloned jobs, as specified in the Run directives, are submitted for running before the original job is run (while it is being initialized). This means that any clone job will actually start before the original job, and may even block the original job from starting until the clone job finishes, unless you allow multiple simultaneous jobs. Even if you set a lower priority on the clone job, if no other jobs are running, it will start before the original job.

If you are trying to prioritize jobs by using the clone feature (Run directive), you will find it much easier to do using a RunScript resource, or a RunBeforeJob directive.

Priority

Priority = <number> This directive permits you to control the order in which your jobs will be run by specifying a positive non-zero number. The higher the number, the lower the job priority. Assuming you are not running concurrent jobs, all queued jobs of priority 1 will run before queued jobs of priority 2 and so on, regardless of the original scheduling order.

The priority only affects waiting jobs that are queued to run, not jobs that are already running. If one or more jobs of priority 2 are already running, and a new job is scheduled with priority 1, the currently running priority 2 jobs must complete before the priority 1 job is run, unless Allow Mixed Priority is set.

The default priority is 10.
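For example, a hedged sketch that runs a catalog backup ahead of the ordinary priority 10 jobs (the job name is illustrative):

Job {
    Name = "CatalogBackup"    # hypothetical job name
    # Type, Client, FileSet, Schedule, etc. omitted for brevity
    Priority = 8              # lower number = higher priority; runs before priority 10 jobs
}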

If you want to run concurrent jobs you should keep these points in mind:

  • See the Running Concurrent Jobs section in the Bacula Enterprise Problems Resolution Guide for how to set up concurrent jobs.

  • Bacula concurrently runs jobs of only one priority at a time. It will not simultaneously run a priority 1 and a priority 2 job.

  • If Bacula is running a priority 2 job and a new priority 1 job is scheduled, it will wait until the running priority 2 job terminates even if the Maximum Concurrent Jobs settings would otherwise allow two jobs to run simultaneously.

  • Suppose that Bacula is running a priority 2 job and a new priority 1 job is scheduled and queued waiting for the running priority 2 job to terminate. If you then start a second priority 2 job, the waiting priority 1 job will prevent the new priority 2 job from running concurrently with the running priority 2 job. That is, as long as there is a higher priority job waiting to run, no new lower priority jobs will start even if the Maximum Concurrent Jobs settings would normally allow them to run. This ensures that higher priority jobs will be run as soon as possible.

If you have several jobs of different priority, it may not be best to start them at exactly the same time, because Bacula must examine them one at a time. If Bacula starts a lower priority job first, then it will run before your higher priority jobs. If you experience this problem, you may avoid it by starting any higher priority jobs a few seconds before lower priority ones. This ensures that Bacula will examine the jobs in the correct order, and that your priority scheme will be respected.

Allow Mixed Priority

AllowMixedPriority = <yes|no> When set to yes (default no), this job may run even if lower priority jobs are already running. This means a high priority job will not have to wait for other jobs to finish before starting. The scheduler will only mix priorities when all running jobs have this directive set to yes.

Note

Only higher priority jobs will start early. Suppose the director will allow two concurrent jobs, and that two jobs with priority 10 are running, with two more in the queue. If a job with priority 5 is added to the queue, it will be run as soon as one of the running jobs finishes. However, new priority 10 jobs will not be run until the priority 5 job has finished.

The following is an example of a valid Job resource definition:

Job {
    Name = "LinuxHome"
    Type = Backup
    Level = Incremental   # default
    Client = bacula-fd
    FileSet = "LinuxHome-fileset"
    Storage = DiskAutochanger
    Pool = DiskBackup365d
    Schedule = "Daily-schedule"
    Messages = Standard
}

Check Malware

CheckMalware = <yes|no> When set to yes (default no), the job will check the files recorded in the catalog against a Malware database. The Malware database (see MalwareDatabaseCommand) will be updated automatically when needed. To use the Malware detection, the FileSet must be configured with Signature=MD5 or Signature=SHA256. See the Malware Detection section of this manual for more information.
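A hedged sketch; the FileSet shown is a variant of the example above with the required signature enabled, and the File path is an assumption:

Job {
    Name = "LinuxHome"
    # other directives as in the example Job resource above
    Check Malware = yes
}

FileSet {
    Name = "LinuxHome-fileset"
    Include {
        Options {
            Signature = SHA256    # MD5 or SHA256 is required for malware checking
        }
        File = /home              # assumed path for illustration
    }
}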
