Features in Bacula Community

This chapter presents the new features that have been added to the various released versions of Bacula Community.

New Features in 11.0.0

Catalog Performance Improvements

There is a new Bacula database format (schema) in this version of Bacula that eliminates the FileName table by placing the Filename into the File record of the File table. This substantially improves performance, particularly for large databases.

The update_xxx_catalog script will automatically update the Bacula database format, but you should realize that for very large databases (greater than 50GB), it may take some time and it will double the size of the database on disk during the migration.
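
For example, with a PostgreSQL catalog the update could be run roughly as follows (a sketch only; the script location and service name vary by installation):

# Stop the Director so the catalog is quiescent, then update the schema
sudo systemctl stop bacula-dir
sudo -u postgres /opt/bacula/scripts/update_postgresql_tables
sudo systemctl start bacula-dir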

This database format change can provide very significant improvements in the speed of metadata insertion into the database, and in some cases (backup of large email servers) can significantly reduce the size of the database.

Automatic TLS Encryption

Starting with Bacula 11.0.6, all daemons and consoles use TLS automatically for all network communications. It is no longer required to set up TLS keys in advance. It is possible to turn off automatic TLS PSK encryption using the TLS PSK Enable directive.
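
For example, to opt out of the automatic encryption on a File Daemon (a minimal sketch; the directive may also be used in the resources of the other daemons):

# bacula-fd.conf -- disable automatic TLS PSK (not generally recommended)
FileDaemon {
  Name = myclient-fd
  ...
  TLS PSK Enable = no
}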

Client Behind NAT Support with the Connect To Director Directive

A Client can now initiate a connection to the Director (permanently or on a schedule) to allow the Director to communicate with the Client when a new Job is started or when a bconsole command such as status client or estimate is issued.

This new network configuration option is particularly useful for Clients that are not directly reachable by the Director.

# cat /opt/bacula/etc/bacula-fd.conf
Director {
  Name = bac-dir
  Password = aigh3wu7oothieb4geeph3noo  # Password used to connect

  # New directives
  Address = bac-dir.mycompany.com       # Director address to connect
  Connect To Director = yes             # FD will call the Director
}


# cat /opt/bacula/etc/bacula-dir.conf
Client {
  Name = bac-fd
  Password = aigh3wu7oothieb4geeph3noo

  # New directive
  Allow FD Connections = yes
}

It is possible to schedule the Client connection at certain periods of the day:

# cat /opt/bacula/etc/bacula-fd.conf
Director {
  Name = bac-dir
  Password = aigh3wu7oothieb4geeph3noo  # Password used to connect

  # New directives
  Address = bac-dir.mycompany.com       # Director address to connect
  Connect To Director = yes             # FD will call the Director
  Schedule = WorkingHours
}

Schedule {
  Name = WorkingHours
  # Connect the Director between 12:00 and 14:00
  Connect = MaxConnectTime=2h on mon-fri at 12:00
}

Note that in the current version, if the File Daemon is started after 12:00, the next connection to the Director will occur at 12:00 the next day.

A Job can be scheduled in the Director around 12:00, and if the Client is connected, the Job will be executed as if the Client were directly reachable from the Director.
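
For example, a Director-side Schedule can be aligned with the Client connection window shown above (a sketch; the job and schedule names are assumptions):

# bacula-dir.conf -- run the backup while the NATed client is connected
Schedule {
  Name = "NoonBackup"
  Run = Level=Incremental mon-fri at 12:05
}

Job {
  Name = "backup-bac-fd"
  Client = bac-fd
  Schedule = "NoonBackup"
  ...
}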

Continuous Data Protection Plugin

Continuous Data Protection (CDP), also called continuous backup or real-time backup, refers to backup of Client data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time.

The Bacula CDP feature is composed of two components: an application (cdp-client or tray-monitor) that monitors a set of directories configured by the user, and a Bacula File Daemon plugin responsible for securing the data using the Bacula infrastructure.

The user application (cdp-client or tray-monitor) is responsible for monitoring files and directories. When a modification is detected, the new data is copied into a spool directory. At a regular interval, a Bacula backup job will contact the FileDaemon and will save all the files archived by the cdp-client. The locally copied data can be restored at any time without a network connection to the Director.

See the CDP (Continuous Data Protection) chapter for more information.

Global Autoprune Control Directive

The Director Autoprune directive can now globally control the Autoprune feature. This directive will take precedence over Pool or Client Autoprune directives.

Director {
  Name = mydir-dir
  ...
  AutoPrune = no     # switch off Autoprune globally
}

Event and Auditing

The Director daemon can now record events such as:

  • Console connection/disconnection

  • Daemon startup/shutdown

  • Command execution

The events may be stored in a new catalog table, written to disk, or sent via syslog.

Messages {
  Name = Standard
  catalog = all, events
  append = /opt/bacula/working/bacula.log = all, !skipped
  append = /opt/bacula/working/audit.log = events, !events.bweb
}

Messages {
  Name = Daemon
  catalog = all, events
  append = /opt/bacula/working/bacula.log = all, !skipped
  append = /opt/bacula/working/audit.log = events, !events.bweb
  append = /opt/bacula/working/bweb.log = events.bweb
}

The new message category “events” is not included in the default configuration files.

It is possible to filter out the events of a given source using the “!events.<source>” form. Up to 10 custom events can be specified per Messages resource.

All event types are recorded by default.

When stored in the catalog, the events can be listed with the “list events” command.

 * list events [type=<str> | limit=<int> | order=<asc|desc> | days=<int> |
                start=<time-specification> | end=<time-specification>]
+---------------------+------------+-----------+--------------------------------+
| time                | type       | source    | event                          |
+---------------------+------------+-----------+--------------------------------+
| 2020-04-24 17:04:07 | daemon     | *Daemon*  | Director startup               |
| 2020-04-24 17:04:12 | connection | *Console* | Connection from 127.0.0.1:8101 |
| 2020-04-24 17:04:20 | command    | *Console* | purge jobid=1                  |
+---------------------+------------+-----------+--------------------------------+

The .events command can be used to record an external event. The value given with the source parameter will be recorded as the event source, and the event type can have a custom name.

* .events type=baculum source=joe text="User login"

New Prune Command Option

The prune jobs all command will query the catalog to find all combinations of Client/Pool, and will run the pruning algorithm on each of them. At the end, all files and jobs not needed for restore that have passed the relevant retention times will be pruned.

The command prune jobs all yes can be scheduled in a RunScript, for example to prune the catalog once per day. All Clients and Pools will be analyzed automatically.

Job {
  ...
  RunScript {
    Console = "prune jobs all yes"
    RunsWhen = Before
    failjobonerror = no
    runsonclient = no
  }
}

Dynamic Client Address Directive

It is now possible to use a script to determine the address of a Client when a dynamic DNS option is not a viable solution:

Client {
  Name = my-fd
  ...
  Address = "|/opt/bacula/bin/compute-ip my-fd"
}

The command used to generate the address should print a single line containing a valid address and exit with status 0. An example would be:

Address = "|echo 127.0.0.1"

This option might be useful in some complex cluster environments.
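
As an illustration, such a helper script might look like the following minimal sketch (the compute-ip name comes from the example above; resolving the address with getent is an assumption about the environment):

#!/bin/sh
# Hypothetical /opt/bacula/bin/compute-ip <client-name>
# Print exactly one line containing the client's address; exit 0 on success.
name="$1"
addr=$(getent hosts "$name" | awk '{ print $1; exit }')
[ -n "$addr" ] || exit 1
echo "$addr"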

Volume Retention Enhancements

The Pool/Volume parameter Volume Retention can now be disabled to never prune a volume based on the Volume Retention time. When Volume Retention is disabled, only the Job Retention time will be used to prune jobs.

Pool {
  Volume Retention = 0
 ...
}

Windows Enhancements

  • Support for Windows files with non-UTF16 names.

  • Snapshot management has been improved, and a backup job now relies exclusively on the snapshot tree structure.

  • Support for the system.cifs_acl extended attribute backup with Linux CIFS has been added. It can be used to back up Windows security attributes from a CIFS share mounted on a Linux system. Note that only recent Linux kernels handle the system.cifs_acl feature correctly. The FileSet must use the XATTR Support=yes option, and the CIFS share must be mounted with the cifsacl option. See mount.cifs(8) for more information. A configuration sketch follows this list.
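
A minimal sketch of the pieces involved (the share and mount point names are assumptions):

# Mount the CIFS share with ACL support:
#   mount -t cifs -o cifsacl,username=backup //winsrv/share /mnt/share

FileSet {
  Name = "CIFS-Share"
  Include {
    Options {
      XATTR Support = yes
    }
    File = /mnt/share
  }
}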

GPFS ACL Support

The Bacula File Daemon now supports GPFS filesystem-specific ACLs. The GPFS libraries must be installed in the standard location. To check whether GPFS support is available on your system, the following commands can be used.

*setdebug level=1 client=stretch-amd64-fd
Connecting to Client stretch-amd64-fd at stretch-amd64:9102
2000 OK setdebug=1 trace=0 hangup=0 blowup=0 options= tags=

*st client=stretch-amd64-fd
Connecting to Client stretch-amd64-fd at stretch-amd64:9102

stretch-amd64-fd Version: 11.0.0 (01 Dec 2020)  x86_64-pc-linux-gnu-bacula-enterprise debian 9.11
Daemon started 21-Jul-20 14:42. Jobs: run=0 running=0.
 Ulimits: nofile=1024 memlock=65536 status=ok
 Heap: heap=135,168 smbytes=199,993 max_bytes=200,010 bufs=104 max_bufs=105
 Sizes: boffset_t=8 size_t=8 debug=1 trace=0 mode=0,2010 bwlimit=0kB/s
 Crypto: fips=no crypto=OpenSSL 1.0.2u  20 Dec 2019
 APIs: GPFS
 Plugin: bpipe-fd.so(2)

The APIs line indicates whether /usr/lpp/mmfs/libgpfs.so was loaded when the Bacula FD service started.

The standard ACL Support directive can be used to automatically enable support for GPFS ACL backup.
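
A minimal FileSet sketch (the GPFS mount point is an assumption):

FileSet {
  Name = "GPFS-Data"
  Include {
    Options {
      ACL Support = yes   # also picks up GPFS ACLs when the GPFS API is available
    }
    File = /gpfs/fs1
  }
}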

New Baculum Features

Multi-user interface improvements

New functions and improvements have been added to the multi-user interface and to restricted access support.

The Security page has new tabs:

  • Console ACLs

  • OAuth2 clients

  • API hosts

These new tabs help to configure OAuth2 accounts, create restricted Bacula Consoles for users, and create API hosts. They ease the process of creating users with restricted access to Bacula resources.

Add searching for jobs by filename in the restore wizard

In the restore wizard it is now possible to select the job to restore by the filename of a file stored in the backups. It is also possible to limit the results to a specific path.

Show more detailed job file list

The job file list now displays file details such as file attributes, UID, GID, size, mtime, and whether the file record is for a saved or a deleted file.

Add graphs to job view page

On the job view page, new pie and bar graphs for the selected job are available.

Implement graphical storage status

On the storage status page, two new status views are available (raw and graphical). The graphical status view is modern and refreshed asynchronously.

Add Russian translations

Global messages log window

A new window has been added to browse Bacula logs in a friendly way.

Job status weather

A job status weather indicator has been added to the job list page to express the current job condition.

Restore wizard improvements

The restore wizard can now list and browse file names stored in non-UTF encodings.

New API endpoints

  • /oauth2/clients

  • /oauth2/clients/client_id

  • /jobs/files

New parameters in API endpoints

  • /jobs/jobid/files - ‘details’ parameter

  • /storages/show - ‘output’ parameter

  • /storages/storageid/show - ‘output’ parameter
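
As an illustration only, a query against the new files endpoint might look like the following (the host, port, credentials, URL prefix and query parameter are assumptions about a typical API host setup):

# Hypothetical request: search job files by filename
curl -s -u apiuser:apipass \
  "http://baculum-api:9096/api/v2/jobs/files?filename=passwd"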

New Features in 9.6.0

Building 9.6.4 and later

Version 9.6.4 is a major security and bug fix release. We suggest that everyone upgrade as soon as possible.

One significant improvement in this version is to the AWS S3 cloud driver. First, the code base has been brought much closer to the Enterprise version (there is still a long way to go). Second, the community code now uses the latest version of libs3 as maintained by Bacula Systems. The libs3 code is available as a tar file for Bacula version 9.6.4 at:

http://www.bacula.org/downloads/libs3-20200523.tar.gz

Note: Version 9.6.4 must be compiled with the above libs3 version or later. To build libs3:

  • Remove any libs3 package loaded by your OS

  • Download above link

  • tar xvfz libs3-20200523.tar.gz

  • cd libs3-20200523

  • make # should have no errors

  • sudo make install
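
For convenience, the same steps as a short non-interactive script (assuming wget is available):

# Build and install the Bacula Systems fork of libs3
wget http://www.bacula.org/downloads/libs3-20200523.tar.gz
tar xvfz libs3-20200523.tar.gz
cd libs3-20200523
make              # should have no errors
sudo make install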

Then, when you run the Bacula ./configure <args>, it should automatically detect and use libs3. The output from ./configure will show whether or not libs3 was found during configuration, e.g.:

S3 support:                yes

Docker Plugin

Containers are a relatively new system-level virtualization concept that has less overhead than traditional virtualization, because containers use the underlying operating system to provide all the needed services, thus eliminating the need for multiple operating systems.

Docker containers rely on sophisticated file system level data abstraction with a number of read-only images to create templates used for container initialization.

With its Docker Plugin, Bacula will save the full container image, including all read-only and writable layers, into a single image archive.

It is not necessary to install a Bacula File daemon in each container, so each container can be backed up from a common image repository.

The Bacula Docker Plugin will contact the Docker service to read and save the contents of any system image or container image using snapshots (default behavior) and dump them using the Docker API.

The Docker Plugin whitepaper provides more detailed information.

Real-Time Statistics Monitoring

All Bacula daemons can now collect internal performance statistics periodically and provide mechanisms to store the values to a CSV file or to send the values to a Graphite daemon via the network. Graphite is an enterprise-ready monitoring tool (https://graphiteapp.org).

To activate the statistic collector feature, simply define a Statistics resource in the daemon of your choice:

Statistics {
  Name = "Graphite"
  Type = Graphite

  # Graphite host information
  Host = "localhost"
  Port = 2003
}

It is possible to change the interval used to collect the statistics with the Interval directive (5 minutes by default), and to use the Metrics directive to select the data to collect (all by default).
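
For example (a sketch; the wildcard selector shown on Metrics is an assumption):

Statistics {
  Name = "GraphiteFast"
  Type = Graphite
  Host = "localhost"
  Port = 2003
  Interval = 60               # seconds between collections (default is 300)
  Metrics = "bacula.jobs.*"   # collect only job metrics instead of all
}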

If the Graphite daemon cannot be reached, the statistics data are spooled on disk and are sent automatically when the Graphite daemon is available again.

The bconsole statistics command can be used to display the current statistics in various formats (text or json for now).

*statistics
Statistics available for:
     1: Director
     2: Storage
     3: Client
Select daemon type for statistics (1-3): 1
bacula.dir.config.clients=1
bacula.dir.config.jobs=3
bacula.dir.config.filesets=2
bacula.dir.config.pools=3
bacula.dir.config.schedules=2
...
*statistics storage
...
bacula.storage.bac-sd.device.File1.readbytes=214
bacula.storage.bac-sd.device.File1.readtime=12
bacula.storage.bac-sd.device.File1.readspeed=0.000000
bacula.storage.bac-sd.device.File1.writespeed=0.000000
bacula.storage.bac-sd.device.File1.status=1
bacula.storage.bac-sd.device.File1.writebytes=83013529
bacula.storage.bac-sd.device.File1.writetime=20356
...

The statistics bconsole command can accept parameters to be scripted, for example it is possible to export the data in JSON, or to select which metrics to display.

*statistics bacula.dir.config.clients bacula.dir.config.jobs json
[
  {
    "name": "bacula.dir.config.clients",
    "value": 1,
    "type": "Integer",
    "unit": "Clients",
    "description": "The number of defined clients in the Director."
  },
  {
    "name": "bacula.dir.config.jobs",
    "value": 3,
    "type": "Integer",
    "unit": "Jobs",
    "description": "The number of defined jobs in the Director."
  }
]

The .status statistics command can be used to query the status of the Statistic collector thread.

*.status dir statistics
Statistics backend: Graphite is running
 type=2 lasttimestamp=12-Sep-18 09:45
 interval=300 secs
 spooling=in progress
 lasterror=Could not connect to localhost:2003 Err=Connection refused

Update Statistics: running interval=300 secs lastupdate=12-Sep-18 09:45
*

New Features in 9.4.0

Cloud Backup

A major problem of Cloud backup is that data transmission to and from the Cloud is very slow compared to traditional backup to disk or tape. The Bacula Cloud drivers provide a means to quickly finish the backups and then to transfer the data from the local cache to the Cloud in the background. This is done by first splitting the data Volumes into small parts that are cached locally then uploading those parts to the Cloud storage service in the background, either while the job continues to run or after the backup Job has terminated. Once the parts are written to the Cloud, they may either be left in the local cache for quick restores or they can be removed (truncate cache).

Cloud Volume Architecture

Note: Regular Bacula disk Volumes are implemented as standard files that reside in the user defined Archive Directory. On the other hand, Bacula Cloud Volumes are directories that reside in the user defined Archive Directory. Each Cloud Volume’s directory contains the cloud Volume parts which are implemented as numbered files (part.1, part.2, …).

Cloud Restore

During a restore, if the needed parts are in the local cache, they will be immediately used, otherwise, they will be downloaded from the Cloud as needed. The restore starts with parts already in the local cache but will wait in turn for any part that needs to be downloaded. The Cloud part downloads proceed while the restore is running.

With most Cloud providers, uploads are usually free of charge, but downloads of data from the Cloud are billed. By using local cache and multiple small parts, you can configure Bacula to substantially reduce download costs.

The MaximumFileSize Device directive is still valid within the Storage Daemon and defines the granularity of a restore chunk. In order to limit the number of volume parts to download during a restore (especially when restoring single files), it might be useful to set MaximumFileSize to a value smaller than or equal to MaximumPartSize.
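
For example (a sketch; the values match the part size used in the Device example later in this chapter):

Device {
  ...
  Maximum Part Size = 10000000
  Maximum File Size = 10000000   # restore granularity, no larger than the part size
}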

Compatibility

Since a Cloud Volume contains the same data as an ordinary Bacula Volume, all existing types of Bacula data may be stored in the cloud: client-encrypted data, compressed data, plugin data, etc. All existing Bacula functionality, with the exception of deduplication, is available with the Bacula Cloud drivers.

Deduplication and the Cloud

At the current time, Bacula Global Endpoint Backup does not support writing to the cloud because the cloud would be too slow to support large hashed and indexed containers of deduplication data.

Virtual Autochangers and Disk Autochangers

If you use a Bacula Virtual Autochanger you will find it compatible with the new Bacula Cloud drivers. However, if you use a third party disk autochanger script such as Vchanger, unless or until it is modified to handle Volume directories, it may not be compatible with Bacula Cloud drivers.

Security

All data that is sent to and received from the cloud by default uses the HTTPS protocol, so your data is encrypted while being transmitted and received. However, data that resides in the Cloud is not encrypted by default. If you wish extra security of your data while it resides in the cloud, you should consider using Bacula’s PKI data encryption feature during the backup.

Cache and Pruning

The cache is treated much like a normal disk-based backup, so when configuring the Cloud, the administrator should take care to set “Archive Device” in the Device resource to a directory where he/she would normally store data backed up to disk. Obviously, unless he/she uses the truncate/prune cache commands, the Archive Device will continue to fill.

The cache retention can be controlled per Volume with the CacheRetention attribute. The default value is 0, meaning that the pruning of the cache is disabled.

The CacheRetention value for a volume can be modified with the update command or via the Pool directive CacheRetention for newly created volumes.
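
For example, to have newly created volumes in a Pool prune their local cache after a week (a sketch; the pool name is an assumption):

Pool {
  Name = CloudDefault
  Cache Retention = 7 days
  ...
}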

New Commands, Resource, and Directives for Cloud

To support Cloud, in Bacula Enterprise 8.8 there are new bconsole commands, new Storage Daemon directives and a new Cloud resource that is specified in the Storage Daemon’s Device resource.

New Cloud Bconsole Commands

Cloud The new cloud bconsole command allows you to do a number of things with cloud volumes. The options are the following:

  • None. If you specify no arguments to the command, bconsole will prompt with:

Cloud choice:

   1: List Cloud Volumes in the Cloud
   2: Upload a Volume to the Cloud
   3: Prune the Cloud Cache
   4: Truncate a Volume Cache
   5: Done
Select action to perform on Cloud (1-5):

The different choices should be rather obvious.

  • Truncate This command will attempt to truncate the local cache for the specified Volume. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:

    • Storage=xxx

    • Volume=xxx

    • AllPools

    • AllFromPool

    • Pool=xxx

    • MediaType=xxx

    • Drive=xxx

    • Slots=nnn

  • Prune This command will attempt to prune the local cache for the specified Volume. Bacula will respect the CacheRetention volume attribute to determine if the cache can be truncated or not. Only parts that are uploaded to the cloud will be deleted from the cache. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:

    • Storage=xxx

    • Volume=xxx

    • AllPools

    • AllFromPool

    • Pool=xxx

    • MediaType=xxx

    • Drive=xxx

    • Slots=nnn

  • Upload This command will attempt to upload the specified Volumes. It will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, you may specify any of the following additional command line options:

    • Storage=xxx

    • Volume=xxx

    • AllPools

    • AllFromPool

    • Pool=xxx

    • MediaType=xxx

    • Drive=xxx

    • Slots=nnn

  • List This command will list volumes stored in the Cloud. If a volume name is specified, the command will list all parts for the given volume. To avoid the prompts, you may specify any of the following additional command line options:

    • Storage=xxx

    • Volume=xxx

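For example, non-interactive forms built from the options above might look like this (using the CloudStorage device name from the example below; exact resource names depend on your configuration):

* cloud upload storage=CloudStorage volume=Volume0001
* cloud prune storage=CloudStorage volume=Volume0001
* cloud truncate storage=CloudStorage volume=Volume0001
* cloud list storage=CloudStorage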

Cloud Additions to the DIR Pool Resource

Within the bacula-dir.conf file, in each Pool resource, an additional keyword CacheRetention can be specified.

Cloud Additions to the SD Device Resource

Within the bacula-sd.conf file, in each Device resource, there is a new keyword Cloud that must be specified on the Device Type directive, along with two new directives, Maximum Part Size and Cloud.

New Cloud SD Device Directives

  • Device Type The Device Type has been extended to include the new keyword Cloud to specify that the device supports cloud Volumes. Example:

    Device Type = Cloud
    
  • Cloud The new Cloud directive permits specification of a new Cloud Resource. As with other Bacula resource specifications, one specifies the name of the Cloud resource. Example:

    Cloud = S3Cloud
    
  • Maximum Part Size This directive allows one to specify the maximum size for each part. Smaller part sizes will reduce restore costs, but may require a small additional overhead to handle multiple parts. The maximum number of parts permitted in a Cloud Volume is 524,288. The maximum size of any given part is approximately 17.5TB.

Example Cloud Device Specification

An example of a Cloud Device Resource might be:

Device {
  Name = CloudStorage
  Device Type = Cloud
  Cloud = S3Cloud
  Archive Device = /opt/bacula/backups
  Maximum Part Size = 10000000
  Media Type = CloudType
  LabelMedia = yes
  Random Access = Yes;
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

As you can see from the above, the Cloud directive in the Device resource contains the name (S3Cloud) of the Cloud resource that is shown below.

Note also that the Archive Device is specified in the same manner as one would use for a File device. However, in place of containing files with Volume names, the archive device for the Cloud drivers will contain the local cache, which consists of directories named after the Volumes; these directories contain the parts associated with the particular Volume. So with the above Device resource, two cache Volumes would have the following layout on disk:

/opt/bacula/backups
   /opt/bacula/backups/Volume0001
      /opt/bacula/backups/Volume0001/part.1
      /opt/bacula/backups/Volume0001/part.2
      /opt/bacula/backups/Volume0001/part.3
      /opt/bacula/backups/Volume0001/part.4
   /opt/bacula/backups/Volume0002
      /opt/bacula/backups/Volume0002/part.1
      /opt/bacula/backups/Volume0002/part.2
      /opt/bacula/backups/Volume0002/part.3

The Cloud Resource

The Cloud resource has a number of directives that may be specified as exemplified in the following example:

Default US East location:

Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "BaculaVolumes"
  AccessKey = "BZIXAIS39DP9YNER5DFZ"
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = No
  Upload = EachPart
  Region = "us-east-1"
  MaximumUploadBandwidth = 5MB/s
}

Central Europe location:

Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3-eu-central-1.amazonaws.com"
  BucketName = "BaculaVolumes"
  AccessKey = "BZIXAIS39DP9YNER5DFZ"
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = No
  Upload = EachPart
  Region = "eu-central-1"
  MaximumUploadBandwidth = 4MB/s
}

For Amazon Cloud, refer to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region to get a complete list of regions and corresponding endpoints and use them respectively as Region and HostName directive.

For CEPH S3 interface:

Cloud {
  Name = CEPH_S3
  Driver = "S3"
  HostName = ceph.mydomain.lan
  BucketName = "CEPHBucket"
  AccessKey = "xxxXXXxxxx"
  SecretKey = "xxheeg7iTe0Gaexee7aedie4aWohfuewohxx0"
  Protocol = HTTPS
  Upload = EachPart

  UriStyle = Path            # Must be set for CEPH
}

The directives of the above Cloud resource for the S3 driver are defined as follows:

Name = Device-Name

The name of the Cloud resource. This is the logical Cloud name, and may be any string up to 127 characters in length. Shown as S3Cloud above.

Description = Text

The description is used for display purposes, as is the case with all resources.

Driver = DriverName

This defines which driver to use. It can be S3. There is also a File driver, which is used mostly for testing.

Host Name = Name

This directive specifies the hostname to be used in the URL. Each Cloud service provider has a different and unique hostname. The maximum size is 255 characters and may contain a tcp port specification.

Bucket Name = Name

This directive specifies the bucket name that you wish to use on the Cloud service. This name is normally a unique name that identifies where you want to place your Cloud Volume parts. With Amazon S3, the bucket must be created previously on the Cloud service. The maximum bucket name size is 255 characters.

Access Key = String

The access key is your unique user identifier given to you by your cloud service provider.

Secret Key = String

The secret key is the security key that was given to you by your cloud service provider. It is equivalent to a password.

Protocol = HTTP | HTTPS

The protocol defines the communications protocol to use with the cloud service provider. The two protocols currently supported are: HTTPS and HTTP. The default is HTTPS.

Uri Style = VirtualHost | Path

This directive specifies the URI style to use to communicate with the cloud service provider. The two Uri Styles currently supported are: VirtualHost and Path. The default is VirtualHost.

Truncate Cache = Truncate-kw

This directive specifies when Bacula should automatically remove (truncate) the local cache parts. Local cache parts can only be removed if they have been uploaded to the cloud. The currently implemented values are:

No

Do not remove cache. With this option you must manually delete the cache parts with a bconsole Truncate Cache command, or do so with an Admin Job that runs a Truncate Cache command. This is the default.

AfterUpload

Each part will be removed just after it is uploaded. Note: if this option is specified, all restores will require a download from the Cloud. Note: Not yet implemented.

AtEndOfJob

With this option, at the end of the Job, every part that has been uploaded to the Cloud will be removed (truncated). Note: Not yet implemented.

Upload = Upload-kw

This directive specifies when local cache parts will be uploaded to the Cloud. The options are:

No

Do not upload cache parts. With this option you must manually upload the cache parts with a bconsole Upload command, or do so with an Admin Job that runs an Upload command. This is the default.

EachPart

With this option, each part will be uploaded when it is complete, i.e. when the next part is created or at the end of the Job.

AtEndOfJob

With this option all parts that have not been previously uploaded will be uploaded at the end of the Job. Note: Not yet implemented.

Maximum Upload Bandwidth = speed

The default is unlimited, but by using this directive, you may limit the upload bandwidth used globally by all devices referencing this Cloud resource.

Maximum Download Bandwidth = speed

The default is unlimited, but by using this directive, you may limit the download bandwidth used globally by all devices referencing this Cloud resource.

Region = String

The Cloud resource can be configured to use a specific endpoint within a region. This directive is required for AWS-V4 regions, e.g. Region="eu-central-1".

File Driver for the Cloud

As mentioned above, one may specify the keyword File on the Driver directive of the Cloud resource. Instead of writing to the Cloud, Bacula will instead create a Cloud Volume but write it to disk. The rest of this section applies to the Cloud resource directives when the File driver is specified.

The following Cloud directives are ignored: Bucket Name, Access Key, Secret Key, Protocol, Uri Style. The directives Truncate Cache and Upload work on the local cache in the same manner as they do for the S3 driver.

The main difference to note is that the Host Name specifies the destination directory for the Cloud Volume files, and this Host Name must be different from the Archive Device name, or there will be a conflict between the local cache (in the Archive Device directory) and the destination Cloud Volumes (in the Host Name directory).

As noted above, the File driver is mostly used for testing purposes, and we do not particularly recommend using it. However, if you have a particularly slow backup device you might want to stage your backup data into an SSD or disk using the local cache feature of the Cloud device, and have your Volumes transferred in the background to a slow File device.
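
A minimal sketch of such a staging setup (the directory names are assumptions):

Cloud {
  Name = FileCloud
  Driver = "File"
  # Destination directory for the Cloud Volumes; must differ from the
  # Archive Device directory that holds the local cache
  HostName = "/mnt/slow-disk/cloud-volumes"
  Truncate Cache = No
  Upload = EachPart
}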

WORM Tape Support

Automatic WORM (Write Once Read Multiple) tapes detection has been added in 10.2.

When a WORM tape is detected, the catalog volume entry is changed automatically to set Recycle=no. It will prevent the volume from being automatically recycled by Bacula.

There is no change in how the Job and File records are pruned from the catalog as that is a separate issue that is currently adequately implemented in Bacula.

When a WORM tape is detected, the SD will show WORM in the device state output (the debug level must be greater than or equal to 6); otherwise the status shows as !WORM.

Device state:
   OPENED !TAPE LABEL APPEND !READ !EOT !WEOT !EOF WORM !SHORT !MOUNTED ...

The output of the used volume status has been modified to include the WORM state. It shows worm=1 for a WORM cassette and worm=0 otherwise. Example:

Used Volume status:
Reserved volume: TestVolume001 on Tape device "nst0" (/dev/nst0)
   Reader=0 writers=0 reserves=0 volinuse=0 worm=1

The following programs are needed for the WORM tape detection:

  • sdparm

  • tapeinfo

The new Storage Device directive Worm Command must be configured as well as the Control Device directive (used with the Tape Alert feature).

Device {
  Name = "LTO-0"
  Archive Device = "/dev/nst0"
  Control Device = "/dev/sg0"    # from lsscsi -g
  Worm Command = "/opt/bacula/scripts/isworm %l"
...
}

New Features in 9.2.0

This chapter describes the new features that have been added to Bacula in version 9.2.0.

In general, this is a fairly substantial release because it contains a very large number of bug fixes backported from the Bacula Enterprise version. There are also a few new features backported from Bacula Enterprise.

Enhanced Autochanger Support

Note: this feature was actually backported into version 9.0.0, but the documentation was added much after the 9.0.0 release. To call your attention to this new feature, we have also included the documentation here.

To make Bacula function properly with multiple Autochanger definitions, in the Director’s configuration, you must adapt your bacula-dir.conf Storage directives.

Each autochanger that you have defined in an Autochanger resource in the Storage daemon's bacula-sd.conf file must have a corresponding Autochanger resource defined in the Director's bacula-dir.conf file. Normally you will already have a Storage resource that points to the Storage daemon's Autochanger resource, so you need only change the name of the Storage resource to Autochanger. In addition, the Autochanger = yes directive is not needed in the Director's Autochanger resource: since the resource name is Autochanger, the Director already knows that it represents an autochanger.

In addition to the above change (Storage to Autochanger), you must modify any additional Storage resources that correspond to devices that are part of the Autochanger device. Instead of the previous Autochanger = yes directive they should be modified to be Autochanger = xxx where you replace the xxx with the name of the Autochanger.

For example, in the bacula-dir.conf file:

Autochanger {             # New resource
  Name = Changer-1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO-Changer-1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 50
}

Storage {
  Name = Changer-1-Drive0
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive0
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

Storage {
  Name = Changer-1-Drive1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

...

Note that Storage resources Changer-1-Drive0 and Changer-1-Drive1 are not required since they make up part of an autochanger, and normally, Jobs refer only to the Autochanger resource. However, by referring to those Storage definitions in a Job, you will use only the indicated drive. This is not normally what you want to do, but it is very useful and often used for reserving a drive for restores. See the Storage daemon example .conf below and the use of AutoSelect = no.

So, in summary, the changes are:

  • Change Storage to Autochanger in the LTO4 resource.

  • Remove the Autochanger = yes from the Autochanger LTO4 resource.

  • Change the Autochanger = yes directive in each of the Storage resources that belong to the Autochanger so that it points to the Autochanger resource; for the example above, this is the directive Autochanger = LTO4.

    Please note that if you define two different autochangers, you must give a unique Media Type to the Volumes in each autochanger. More specifically, you may have multiple Media Types, but you cannot have Volumes with the same Media Type in two different autochangers. If you attempt to do so, Bacula will most likely reference the wrong autochanger (Storage) and not find the correct Volume.

New Prune Command Option

The bconsole prune command can now run the pruning algorithm on all volumes from a Pool or on all Pools.

  • prune allfrompool pool=Default yes

  • prune allfrompool allpools yes

BConsole Features

Delete a Client

The delete client bconsole command deletes the database record of a client that is no longer defined in the configuration file. It also removes all other records (Jobs, Files, …) associated with the deleted client.
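
For example (the client name is hypothetical):

* delete client=old-laptop-fd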

Status Schedule Enhancements

The status schedule command can now accept multiple client or job keywords on the command line. The limit parameter is disabled when the days parameter is used. The output is now ordered by day.
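
For example, to display the schedule for two specific jobs over the next week (the job names are hypothetical):

* status schedule job=BackupCatalog job=backup-bac-fd days=7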

Restore option noautoparent

During a bconsole restore session, parent directories are automatically selected to avoid issues with permissions. It is possible to disable this feature with the noautoparent command line parameter.
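
For example (a sketch; the client name is hypothetical):

* restore client=my-fd noautoparent select current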

Tray Monitor Restore Screen

It is now possible to restore files from the Tray Monitor GUI program.

New Features in 9.0.0

This chapter describes the new features that have been added to Bacula in version 9.0.0.

Enhanced Autochanger Support

To make Bacula function properly with multiple Autochanger definitions, in the Director’s configuration, you must adapt your bacula-dir.conf Storage directives.

Each autochanger that you have defined in an Autochanger resource in the Storage daemon's bacula-sd.conf file must have a corresponding Autochanger resource defined in the Director's bacula-dir.conf file. Normally you will already have a Storage resource that points to the Storage daemon's Autochanger resource, so you need only change the name of the Storage resource to Autochanger. In addition, the Autochanger = yes directive is not needed in the Director's Autochanger resource: since the resource name is Autochanger, the Director already knows that it represents an autochanger.

In addition to the above change (Storage to Autochanger), you must modify any additional Storage resources that correspond to devices that are part of the Autochanger device. Instead of the previous Autochanger = yes directive they should be modified to be Autochanger = xxx where you replace the xxx with the name of the Autochanger.

For example, in the bacula-dir.conf file:

Autochanger {             # New resource
  Name = Changer-1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO-Changer-1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 50
}

Storage {
  Name = Changer-1-Drive0
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive0
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

Storage {
  Name = Changer-1-Drive1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

...

Note that Storage resources Changer-1-Drive0 and Changer-1-Drive1 are not required since they make up part of an autochanger, and normally, Jobs refer only to the Autochanger resource. However, by referring to those Storage definitions in a Job, you will use only the indicated drive. This is not normally what you want to do, but it is very useful and often used for reserving a drive for restores. See the Storage daemon example .conf below and the use of AutoSelect = no.

So, in summary, the changes are:

  • Change Storage to Autochanger in the LTO4 resource.

  • Remove the Autochanger = yes from the Autochanger LTO4 resource.

  • Change the Autochanger = yes directive in each of the Storage resources that belong to the Autochanger so that it points to the Autochanger resource; for the example above, this is the directive Autochanger = LTO4.

Source Code for Windows

With this version of Bacula, we have included the old source code for Windows and also updated it to contain the code from the latest Bacula Enterprise version. The project is also directly distributing binaries for Windows rather than relying on Bacula Systems to supply them.

Maximum Virtual Full Interval Option

Two new director directives have been added: Max Virtual Full Interval and Virtual Full Backup Pool.

The Max Virtual Full Interval directive behaves similarly to Max Full Interval, but for Virtual Full jobs. If Bacula sees that there has not been a Full backup in Max Virtual Full Interval time, then it will upgrade the job to Virtual Full. If you have both Max Full Interval and Max Virtual Full Interval set, then Max Full Interval should take precedence.

The Virtual Full Backup Pool directive allows one to change the pool as well. You probably want to use these two directives in conjunction with each other, but that may depend on the specifics of your setup. If you set Max Virtual Full Interval without setting Virtual Full Backup Pool, then Bacula will use whatever the “default” pool is set to, which is the same behavior as with Max Full Interval.
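
A minimal sketch combining the two directives (the names and interval are assumptions):

Job {
  Name = "Backup-my-fd"
  Type = Backup
  Client = "my-fd"
  ...
  Max Virtual Full Interval = 30 days
  Virtual Full Backup Pool = ConsolidatedPool
}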

Progressive Virtual Full

In Bacula version 9.0.0, we have added a new Directive named Backups To Keep that permits you to implement Progressive Virtual Fulls within Bacula. Sometimes this feature is known as Incremental Forever with Consolidation.

To implement the Progressive Virtual Full feature, simply add the Backups To Keep directive to your Virtual Full backup Job resource. The value specified on the directive indicates the number of backup jobs that should not be merged into the Virtual Full (i.e. the number of backup jobs that should remain after the Virtual Full has completed). The default is zero, which reverts to a standard Virtual Full that consolidates all the backup jobs that it finds.

Backups To Keep Directive

The new BackupsToKeep directive is specified in the Job Resource and has the form:

Backups To Keep = 30

where the value (30 in the above example) is the number of backups to retain. When this directive is present during a Virtual Full (it is ignored for other Job types), it will look for the most recent Full backup that has more subsequent backups than the value specified. In the above example the Job will simply terminate unless there is a Full backup followed by at least 31 backups of either level Differential or Incremental.

Assuming that the last Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full. The result is that you will end up with a Full followed by 30 Incremental backups. The Job Resource in bacula-dir.conf to accomplish this would be:

Job {
  Name = "VFull"
  Type = Backup
  Level = VirtualFull
  Client = "my-fd"
  File Set = "FullSet"
  Accurate = Yes
  Backups To Keep = 30
}

Delete Consolidated Jobs

The new directive Delete Consolidated Jobs expects a yes or no value that if set to yes will cause any old Job that is consolidated during a Virtual Full to be deleted. In the example above we saw that a Full plus one other job (either an Incremental or Differential) were consolidated into a new Full backup. The original Full plus the other Job consolidated will be deleted. The default value is no.
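
Building on the earlier example, the directive would simply be added to the Virtual Full Job resource (a sketch):

Job {
  Name = "VFull"
  Type = Backup
  Level = VirtualFull
  Client = "my-fd"
  File Set = "FullSet"
  Accurate = Yes
  Backups To Keep = 30
  Delete Consolidated Jobs = yes
}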

Virtual Full Compatibility

Virtual Full as well as Progressive Virtual Full works with any standard backup Job.

However, it should be noted that Virtual Full jobs are not compatible with any plugins that you may be using.

TapeAlert Enhancements

There are some significant enhancements to the TapeAlert feature of Bacula. Several directives are used slightly differently, which unfortunately causes a compatibility problem with the old TapeAlert implementation. Consequently, if you are already using TapeAlert, you must modify your bacula-sd.conf in order for Tape Alerts to work. See below for the details …

What is New

First, you must define an Alert Command directive in the Device resource that calls the new tapealert script, which is installed in the scripts directory (normally /opt/bacula/scripts). It is defined as follows:

Device {
  Name = ...
  Archive Device = /dev/nst0
  Alert Command = "/opt/bacula/scripts/tapealert %l"
  Control Device = /dev/sg1 # must be SCSI ctl for /dev/nst0
  ...
}

In addition, the Control Device directive in the Storage Daemon's conf file must be specified in each Device resource to permit Bacula to detect tape alerts on specific devices (normally only tape devices).

Once the above mentioned two directives (Alert Command and Control Device) are in place in each of your Device resources, Bacula will check for tape alerts at two points:

  • After the Drive is used and it becomes idle.

  • After each read or write error on the drive.

At each of the above times, Bacula will call the new tapealert script, which uses the tapeinfo program. The tapeinfo utility is part of the apt sg3-utils and rpm sg3_utils packages that must be installed on your systems. Then after each alert that Bacula finds for that drive, Bacula will emit a Job message that is either INFO, WARNING, or FATAL depending on the designation in the Tape Alert published by the T10 Technical Committee on SCSI Storage Interfaces (www.t10.org). For the specification, please see: www.t10.org/ftp/t10/document.02/02-142r0.pdf

As a somewhat extreme example, if tape alerts 3, 5, and 39 are set, you will get the following output in your backup job.

17-Nov 13:37 rufus-sd JobId 1: Error: block.c:287
Write error at 0:17 on device "tape"
(/home/kern/bacula/k/regress/working/ach/drive0)
Vol=TestVolume001. ERR=Input/output error.

17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
Volume="TestVolume001" alert=3: ERR=The operation has stopped because
an error has occurred while reading or writing data which the drive
cannot correct.  The drive had a hard read or write error

17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
Volume="TestVolume001" alert=5: ERR=The tape is damaged or the drive
is faulty.  Call the tape drive supplier helpline.  The drive can no
longer read data from the tape

17-Nov 13:37 rufus-sd JobId 1: Warning: Disabled Device "tape"
(/home/kern/bacula/k/regress/working/ach/drive0) due to tape alert=39.

17-Nov 13:37 rufus-sd JobId 1: Warning: Alert: Volume="TestVolume001"
alert=39: ERR=The tape drive may have a fault.  Check for availability
of diagnostic information and run extended diagnostics if applicable.
The drive may have had a failure which may be identified by stored
diagnostic information or by running extended diagnostics (eg Send
Diagnostic).  Check the tape drive users manual for instructions on
running extended diagnostic tests and retrieving diagnostic data.

Without the tape alert feature enabled, you would only get the first error message above, which is the error return Bacula receives when the error occurs. Notice also that in the above output, alert number 5 is a critical error, which causes two things to happen: first, the tape drive is disabled, and second, the Job is failed.

If you attempt to run another Job using the Device that has been disabled, you will get a message similar to the following:

17-Nov 15:08 rufus-sd JobId 2: Warning:
     Device "tape" requested by DIR is disabled.

and the Job may be failed if no other drive can be found.

Once the problem with the tape drive has been corrected, you can clear the tape alerts and re-enable the device with the Bacula bconsole command such as the following:

enable Storage=Tape

Note, when you enable the device, the list of prior tape alerts for that drive will be discarded.

Since it is possible to miss tape alerts, Bacula maintains a temporary list of the last 8 alerts, and each time Bacula calls the tapealert script, it will keep up to 10 alert status codes. Normally there will only be one or two alert errors for each call to the tapealert script.

Once a drive has one or more tape alerts, you can see them by using the bconsole status command as follows:

status storage=Tape

which produces the following output:

Device Vtape is "tape" (/home/kern/bacula/k/regress/working/ach/drive0)
mounted with:
    Volume:      TestVolume001
    Pool:        Default
    Media type:  tape
    Device is disabled. User command.
    Total Bytes Read=0 Blocks Read=1 Bytes/block=0
    Positioned at File=1 Block=0
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Hard Error
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Read Failure
    Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Diagnostics Required

If you want to see the long message associated with each of the alerts, simply set the debug level to 10 or more and re-issue the status command:

setdebug storage=Tape level=10
status storage=Tape

    ...
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
      flags=0x0 alert=The operation has stopped because an error has occurred
       while reading or writing data which the drive cannot correct. The drive had
       a hard read or write error
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       flags=0x0 alert=The tape is damaged or the drive is faulty. Call the tape
       drive supplier helpline.  The drive can no longer read data from the tape
    Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x1
       alert=The tape drive may have a fault. Check for availability of diagnostic
       information and run extended diagnostics if applicable.   The drive may
       have had a failure which may be identified by stored diagnostic information
       or by running extended diagnostics (eg Send Diagnostic). Check the tape
       drive users manual for instructions on running extended diagnostic tests
       and retrieving diagnostic data.
    ...

The next time you enable the Device by either using bconsole or you restart the Storage Daemon, all the saved alert messages will be discarded.

Handling of Alerts

Tape Alerts numbered 7,8,13,14,20,22,52,53, and 54 will cause Bacula to disable the current Volume.

Tape Alerts numbered 14,20,29,30,31,38, and 39 will cause Bacula to disable the drive.

Please note certain tape alerts such as 14 have multiple effects (disable the Volume and disable the drive).

New Console ACL Directives

By default, if a Console ACL directive is not set, Bacula will assume that the ACL list is empty. If the current Bacula Director configuration uses restricted Consoles and allows restore jobs, it is mandatory to configure the new directives.

DirectoryACL

This directive is used to specify a list of directories that can be accessed by a restore session. Without this directive, a restricted console cannot restore any file. Multiple directory names may be specified by separating them with commas, and/or by specifying multiple DirectoryACL directives. For example, the directive may be specified as:

DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"

With the above specification, the console can access the following directories:

  • /etc/passwd

  • /etc/group

  • /home/bacula/.bashrc

  • /home/test/.ssh/config

  • /home/test/Desktop/Images/something.png

But not to the following files or directories:

  • /etc/security/limits.conf

  • /home/bacula/.ssh/id_dsa.pub

  • /home/guest/something

  • /usr/bin/make

If a directory starts with a Windows pattern (ex: c:/), Bacula will automatically ignore the case when checking directory names.

New Bconsole list Command Behavior

The bconsole list commands can now be used safely from a restricted bconsole session. The information displayed will respect the ACL configured for the Console session. For example, if a restricted Console has access to JobA, JobB and JobC, information about JobD will not appear in the list jobs command.

New Console ACL Directives

It is now possible to configure a restricted Console to distinguish Backup and Restore job permissions. The BackupClientACL can restrict backup jobs on a specific set of clients, while the RestoreClientACL can restrict restore jobs.

# cat /opt/bacula/etc/bacula-dir.conf
...

Console {
 Name = fd-cons             # Name of the FD Console
 Password = yyy
...
 ClientACL = localhost-fd           # everything allowed
 RestoreClientACL = test-fd         # restore only
 BackupClientACL = production-fd    # backup only
}

The ClientACL directive takes precedence over the RestoreClientACL and the BackupClientACL. In the Console resource above, it means that the bconsole linked to the Console named “fd-cons” will be able to run:

  • backup and restore for localhost-fd

  • backup for production-fd

  • restore for test-fd

At restore time, jobs for the clients localhost-fd, test-fd and production-fd will be available.

If all is set for ClientACL, backup and restore will be allowed for all clients, regardless of the use of RestoreClientACL or BackupClientACL.

Client Initiated Backup

A console program such as the new tray-monitor or bconsole can now be configured to connect to a File Daemon. There are many new features available (see the New Tray Monitor section below), but probably the most important is the ability for the user to initiate a backup of his own machine. The connection established by the FD to the Director for the backup will be used by the Director for the backup, thus not only can clients (users) initiate backups, but a File Daemon that is NATed (cannot be reached by the Director) can now be backed up without using advanced tunneling techniques, provided that the File Daemon can connect to the Director.

Configuring Client Initiated Backup

In order to ensure security, there are a number of new directives that must be enabled in the new tray-monitor, the File Daemon and in the Director. A typical configuration might look like the following:

# cat /opt/bacula/etc/bacula-dir.conf
...

Console {
 Name = fd-cons             # Name of the FD Console
 Password = yyy

 # These commands are used by the tray-monitor, it is possible to restrict
 CommandACL = run, restore, wait, .status, .jobs, .clients
 CommandACL = .storages, .pools, .filesets, .defaults, .estimate

 # Adapt for your needs
 jobacl = *all*
 poolacl = *all*
 clientacl = *all*
 storageacl = *all*
 catalogacl = *all*
 filesetacl = *all*
}

# cat /opt/bacula/etc/bacula-fd.conf
...

Console {              # Console to connect the Director
  Name = fd-cons
  DIRPort = 9101
  address = localhost
  Password = "yyy"
}

Director {
  Name = remote-cons   # Name of the tray monitor/bconsole
  Password = "xxx"     # Password of the tray monitor/bconsole
  Remote = yes         # Allowed to send commands to the Console defined above
}

cat /opt/bacula/etc/bconsole-remote.conf
....

Director {
  Name = localhost-fd
  address = localhost        # Specify the FD address
  DIRport = 9102             # Specify the FD Port
  Password = "notused"
}

Console {
  Name = remote-cons         # Name used in the auth process
  Password = "xxx"
}

cat ~/.bacula-tray-monitor.conf
Monitor {
  Name = remote-cons
}

Client {
  Name = localhost-fd
  address = localhost     # Specify the FD address
  Port = 9102             # Specify the FD Port
  Password = "xxx"
  Remote = yes
}

New Tray Monitor

A new tray monitor has been added to the 9.0 release. The tray monitor offers the following features:

  • Director, File and Storage Daemon status page

  • Support for the Client Initiated Backup protocol. To use the Client Initiated Backup option from the tray monitor, the Client option Remote should be checked in the configuration.

  • Wizard to run new job

  • Display an estimation of the number of files and the size of the next backup job

  • Ability to configure the tray monitor configuration file directly from the GUI

  • Ability to monitor a component and adapt the tray monitor task bar icon if jobs are running.

  • TLS Support

  • Better network connection handling

  • Default configuration file is stored under $HOME/.bacula-tray-monitor.conf

  • Ability to schedule jobs

  • Available on Linux and Windows platforms

Schedule Jobs via the Tray Monitor

The Tray Monitor can periodically scan a specific directory (Command Directory) and process *.bcmd files to find jobs to run.

The format of a .bcmd command file is the following:

<component name>:<run command>
<component name>:<run command>
...

<component name> = string
<run command>    = string (bconsole command line)

For example:

localhost-fd: run job=backup-localhost-fd level=full
localhost-dir: run job=BackupCatalog

The command file should contain at least one command. The component specified in the first part of the command line should be defined in the tray monitor. Once the command file is detected by the tray monitor, a popup is displayed to the user and it is possible for the user to cancel the job directly.

The file can be created with tools such as cron or the Task Scheduler on Windows. The script can also check the network connection at that time, to avoid network errors:

#!/bin/sh
# Write the command file only if the Director is reachable
if ping -c 1 director > /dev/null 2>&1
then
   echo "my-dir: run job=backup" > /path/to/commands/backup.bcmd
fi

Accurate Option for Verify Volume Data Job

Since Bacula version 8.4.1, it has been possible to have a Verify Job configured with level=Data that will reread all records from a job and optionally check the size and the checksum of all files. Starting with Bacula version 9.0, it is now possible to use the accurate option to check catalog records at the same time. A Verify job with level=Data and accurate=yes can replace the level=VolumeToCatalog option.

For more information on how to set up a Verify Data job, see the Verify Volume Data section.

To run a Verify Job with the accurate option, it is possible to set the option in the Job definition or to use accurate=yes on the command line.

* run job=VerifyData jobid=10 accurate=yes
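
For the Job definition route, a minimal sketch based on the Verify Job example shown in the 7.4.0 notes below (resource names are illustrative):

# Verify Job with the accurate option set in the Job resource
Job {
  Name = VerifyData
  Type = Verify
  Level = Data
  Accurate = yes            # also check the catalog records
  Client = 127.0.0.1-fd
  FileSet = Dummy           # will be adapted during the job
  Storage = File
  Messages = Standard
  Pool = Default
}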

FileDaemon Saved Messages Resource Destination

It is now possible to send the list of all saved files to a Messages resource with the saved message type. It is not recommended to send this flow of information to the Director and/or the Catalog when the client FileSet is very large. To avoid side effects, the all keyword does not include the saved message type; the saved type must be set explicitly.

# cat /opt/bacula/etc/bacula-fd.conf
...
Messages {
  Name = Standard
  director = mydirector-dir = all, !terminate, !restored, !saved
  append = /opt/bacula/working/bacula-fd.log = all, saved, restored
}

Minor Enhancements

New Bconsole “.estimate” Command

The new .estimate command can be used to get statistics about a job to run. The command uses the catalog database to approximate the size and the number of files of the next job. On a PostgreSQL database, the command uses a regression slope to compute the values. On MySQL, where these statistical functions are not available, the command uses a simple average estimation. The correlation number is given for each value.

*.estimate job=backup
level=I
nbjob=0
corrbytes=0
jobbytes=0
corrfiles=0
jobfiles=0
duration=0
job=backup

*.estimate job=backup level=F
level=F
nbjob=1
corrbytes=0
jobbytes=210937774
corrfiles=0
jobfiles=2545
duration=0
job=backup

Traceback and Lockdump

After the reception of a signal, traceback and lockdump information are now stored in the same file.

Bconsole list jobs command options

The list jobs bconsole command now accepts new command line options:

  • joberrors Display jobs with JobErrors

  • jobstatus=T Display jobs with the specified status code

  • client=cli Display jobs for a specified client

  • order=asc/desc Change the output order of the job list. The jobs are sorted by start time and JobId; the sort can be ascending (asc) or descending (desc, the default).
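
For example, a sketch combining these options to list a client's terminated jobs, oldest first (the client name is illustrative):

*list jobs client=localhost-fd jobstatus=T order=asc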

New Bconsole “Tee All” Command

The @tall command allows logging all input/output from a console session.

*@tall /tmp/log
*st dir
...

New Job Edit Codes %I

In various places such as RunScripts, you have now access to %I to get the JobId of the copy or migration job started by a migrate job.

Job {
  Name = Migrate-Job
  Type = Migrate
  ...
  RunAfter = "echo New JobId is %I"
}

.api version 2

In Bacula version 9.0 and later, we introduced a new .api version to help external tools parse various Bacula bconsole outputs.

The api_opts option can use the following arguments:

  • C Clear current options

  • tn Use a specific time format (1 ISO format, 2 Unix Timestamp, 3 Default Bacula time format)

  • sn Use a specific separator between items (new line by default).

  • Sn Use a specific separator between objects (new line by default).

  • o Convert all keywords to lowercase and convert all non-alphabetic characters to _

  .api 2 api_opts=t1s43S35
  .status dir running
==================================
jobid=10
job=AJob
...

New Debug Options

In Bacula version 9.0 and later, we introduced a new options parameter for the setdebug bconsole command.

The following arguments to the new option parameter are available to control debug functions.

  • 0 Clear debug flags

  • i Turn off, ignore bwrite() errors on restore on File Daemon

  • d Turn off decomp of BackupRead() streams on File Daemon

  • t Turn on timestamps in traces

  • T Turn off timestamps in traces

  • c Truncate trace file if trace file is activated

  • l Turn on recording events on P() and V()

  • p Turn on the display of the event ring when doing a bactrace

The following command will enable debugging for the File Daemon, truncate an existing trace file, and turn on timestamps when writing to the trace file.

* setdebug level=10 trace=1 options=ct fd

It is now possible to use a class of debug messages called tags to control the debug output of Bacula daemons.

  • all Display all debug messages

  • bvfs Display BVFS debug messages

  • sql Display SQL related debug messages

  • memory Display memory and poolmem allocation messages

  • scheduler Display scheduler related debug messages

* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql

The tags option is composed of a list of tags. Tags are separated by , or + or - or !. To disable a specific tag, use - or ! in front of the tag. Note that more tags are planned for future versions.

Communication Line Compression

Bacula version 9.0.0 and later now includes communication line compression. It is turned on by default: if the two communicating Bacula components (Dir, FD, SD, bconsole) are both version 6.6.0 or greater, communication line compression will be enabled. If for some reason you do not want communication line compression, you may disable it with the following directive:

Comm Compression = no

This directive can appear in the following resources:

bacula-dir.conf: Director resource
bacula-fd.conf: Client (or FileDaemon) resource
bacula-sd.conf: Storage resource
bconsole.conf: Console resource
bat.conf: Console resource
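
For example, a minimal sketch disabling it for one File Daemon (the daemon name is illustrative):

# cat /opt/bacula/etc/bacula-fd.conf
FileDaemon {
  Name = bac-fd
  ...
  Comm Compression = no     # disable communication line compression
}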

In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is enabled (the default). If the compression is not effective, Bacula turns it off on a record-by-record basis.

If you are backing up data that is already compressed, the communication line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, Bacula reports None in the Job report.

Deduplication Optimized Volumes

This version of Bacula includes a new alternative (or additional) volume format that optimizes the placement of files so that an underlying deduplicating filesystem such as ZFS can optimally deduplicate the backup data that is written by Bacula. These are called Deduplication Optimized Volumes or Aligned Volumes for short. The details of how to use this feature and its considerations are in the Deduplication Optimized Volumes whitepaper.

This feature is available if you have Bacula Community produced binaries and the Aligned Volumes plugin.

baculabackupreport

I have added a new script called baculabackupreport to the scripts directory. This script was written by Bill Arlofski. It prints a summary of the backups that occurred in the prior number of hours specified on the command line. You need to edit the first few lines of the file to ensure that your email address is correct and that the database type you are using is correct (the default is PostgreSQL). Once you do that, you can run it manually with:

/opt/bacula/scripts/baculabackupreport 24

I have put the above line in my scripts/delete_catalog_backup script so that it will be mailed to me nightly.

New Message Identification Format

We are starting to add unique message identifiers to each message (other than debug and the Job report) that Bacula prints. At the current time only two files in the Storage Daemon have these message identifiers and over time with subsequent releases we will modify all messages.

The message identifier will be kept unique for each message and, once assigned to a message, it will not change even if the text of the message changes. This means that the message identifier will be the same no matter what language the text is displayed in, and more importantly, it will allow us to produce listings of the messages with, in some cases, additional explanations or instructions on how to correct the problem. All this will take several years since it is a lot of work and requires some new programs that are not yet written to manage these message identifiers.

The format of the message identifier is:

[AAnnnn]

where A is an upper case character and nnnn is a four digit number. The first character indicates the software component (daemon), the second letter indicates the severity, and the number is unique for a given component and severity.

For example:

[SF0001]

The first character, representing the component, is at the current time one of the following:

S      Storage daemon
D      Director
F      File daemon

The second character, representing the severity or level, can be:

A      Abort
F      Fatal
E      Error
W      Warning
S      Security
I      Info
D      Debug
O      OK (i.e. operation completed normally)

So in the example above [SF0001] indicates it is a message id, because of the brackets and because it is at the beginning of the message, and that it was generated by the Storage daemon as a fatal error.

As mentioned above it will take some time to implement these message ids everywhere, and over time we may add more component letters and more severity levels as needed.

New Features in 7.4.3

RunScripts

There are two new RunScript short cut directives implemented in the Director. They are:

Job {
  ...
  ConsoleRunBeforeJob = "console-command"
  ...
}

Job {
  ...
  ConsoleRunAfterJob = "console-command"
  ...
}

As with other RunScript commands, you may have multiple copies of either the ConsoleRunBeforeJob or the ConsoleRunAfterJob in the same Job resource definition.

Please note that not all console commands are permitted, and that if you run a console command that requires a response, the results are not determined (i.e. it will probably fail).

New Features in 7.4.0

Verify Volume Data

It is now possible to have a Verify Job configured with level=Data to reread all records from a job and optionally check the size and the checksum of all files.

# Verify Job definition
Job {
  Name = VerifyData
  Level = Data
  Client = 127.0.0.1-fd     # Use local file daemon
  FileSet = Dummy           # Will be adapted during the job
  Storage = File            # Should be the right one
  Messages = Standard
  Pool = Default
}

# Backup Job definition
Job {
  Name = MyBackupJob
  Type = Backup
  Client = windows1
  FileSet = MyFileSet
  Pool = 1Month
  Storage = File
}

FileSet {
  Name = MyFileSet
  Include {
    Options {
      Verify = s5
      Signature = MD5
    }
    File = /
  }
}

To run the Verify job, it is possible to use the ``jobid’’ parameter of the ``run’’ command.

*run job=VerifyData jobid=10
Run Verify Job
JobName:     VerifyData
Level:       Data
Client:      127.0.0.1-fd
FileSet:     Dummy
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  MyBackupJob.2015-11-11_09.41.55_03
Verify List: /opt/bacula/working/working/VerifyVol.bsr
When:        2015-11-11 09:47:38
Priority:    10
OK to run? (yes/mod/no): yes
Job queued. JobId=14

...

11-Nov 09:46 my-dir JobId 13: Bacula 7.4.0 (13Nov15):
  Build OS:               x86_64-unknown-linux-gnu archlinux
  JobId:                  14
  Job:                    VerifyData.2015-11-11_09.46.29_03
  FileSet:                MyFileSet
  Verify Level:           Data
  Client:                 127.0.0.1-fd
  Verify JobId:           10
  Verify Job:
  Start time:             11-Nov-2015 09:46:31
  End time:               11-Nov-2015 09:46:32
  Files Expected:         1,116
  Files Examined:         1,116
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Verify differences
  SD termination status:  OK
  Termination:            Verify Differences

The current Verify Data implementation requires specifying the correct Storage resource in the Verify job. The Storage resource can be changed with the bconsole command line and with the menu.

Bconsole ``list jobs’’ command options

The list jobs bconsole command now accepts new command line options:

  • joberrors Display jobs with JobErrors

  • jobstatus=T Display jobs with the specified status code

  • client=cli Display jobs for a specified client

  • order=asc/desc Change the output order of the job list. The jobs are sorted by start time and JobId; the sort can be ascending (asc) or descending (desc, the default).

Minor Enhancements

New Bconsole “Tee All” Command

The ``@tall’’ command allows logging all input/output from a console session.

*@tall /tmp/log
*st dir
...

Windows Encrypted File System (EFS) Support

The Bacula Windows File Daemon for the community version 7.4.0 now automatically supports files and directories that are encrypted on the Windows filesystem.

SSL Connections to MySQL

There are five new Directives for the Catalog resource in the bacula-dir.conf file that you can use to encrypt the communications between Bacula and MySQL for additional security.

dbsslkey

takes a string variable that specifies the filename of an SSL key file.

dbsslcert

takes a string variable that specifies the filename of an SSL certificate file.

dbsslca

takes a string variable that specifies the filename of an SSL CA (certificate authority) certificate.

dbsslcipher

takes a string variable that specifies the cipher to be used.
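
A minimal sketch of a Catalog resource using these directives (all file paths are illustrative):

# cat /opt/bacula/etc/bacula-dir.conf
Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = ""
  dbsslkey = /opt/bacula/etc/ssl/client-key.pem     # SSL key file
  dbsslcert = /opt/bacula/etc/ssl/client-cert.pem   # SSL certificate file
  dbsslca = /opt/bacula/etc/ssl/ca-cert.pem         # SSL CA certificate file
}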

Max Virtual Full Interval

This is a new Job resource directive that specifies the maximum time, in seconds, between Virtual Full jobs. It is much like the Max Full Interval directive, but applies to Virtual Full jobs rather than Full jobs.
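
As a sketch, the directive accepts the usual Bacula time qualifiers (the job name is illustrative):

Job {
  Name = MyBackupJob
  ...
  Max Virtual Full Interval = 30 days   # force a new Virtual Full at least every 30 days
}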

New List Volumes Output

The list and llist commands have been modified so that when listing Volumes a new pseudo field expiresin will be printed. This field is the number of seconds in which the retention period will expire. If the retention period has already expired the value will be zero. Any non-zero value means that the retention period is still in effect.

An example, with many columns shortened for display purposes, is:

*list volumes
Pool: Default
+----+---------------+-----------+---------+-------------+-----------+
| id | volumename    | volstatus | enabled | volbytes    | expiresin |
+----+---------------+-----------+---------+-------------+-----------+
|  1 | TestVolume001 | Full      |       1 | 249,940,696 |         0 |
|  2 | TestVolume002 | Full      |       1 | 249,961,704 |         1 |
|  3 | TestVolume003 | Full      |       1 | 249,961,704 |         2 |
|  4 | TestVolume004 | Append    |       1 | 127,367,896 |         3 |
+----+---------------+-----------+---------+-------------+-----------+

New Features in 7.2.0

New Job Edit Codes %E %R

In various places such as RunScripts, you have now access to %E to get the number of non-fatal errors for the current Job and %R to get the number of bytes read from disk or from the network during a job.

Enable/Disable commands

The bconsole enable and disable commands have been extended from enabling/disabling Jobs to also include Clients, Schedules, and Storage devices. Examples:

disable Job=NightlyBackup Client=Windows-fd

will disable the Job named NightlyBackup as well as the client named Windows-fd.

disable Storage=LTO-changer Drive=1

will disable the first drive in the autochanger named LTO-changer.

Please note that doing a reload command will set any values changed by the enable/disable commands back to the values in the bacula-dir.conf file.

The Client and Schedule resources in the bacula-dir.conf file now permit the directive Enable = yes or Enable = no.
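
A minimal sketch (the client name is illustrative):

Client {
  Name = Windows-fd
  ...
  Enable = no     # keep this client disabled until explicitly enabled
}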

Snapshot Management

Bacula 7.2 is now able to handle Snapshots on Linux/Unix systems. Snapshots can be automatically created and used to backup files. It is also possible to manage Snapshots from Bacula’s bconsole tool through a unique interface.

Snapshot Backends

The following Snapshot backends are supported with Bacula Enterprise 8.2:

  • BTRFS

  • ZFS

  • LVM

By default, Snapshots are mounted (or directly available) under the .snapshots directory on the root filesystem. (On ZFS, the default is .zfs/snapshots.)

The Snapshot backend program is called bsnapshot and is available in the bacula-enterprise-snapshot package. In order to use the Snapshot Management feature, the package must be installed on the Client.

The bsnapshot program can be configured using /opt/bacula/etc/bsnapshot.conf file. The following parameters can be adjusted in the configuration file:

  • trace=<file> Specify a trace file

  • debug=<num> Specify a debug level

  • sudo=<yes/no> Use sudo to run commands

  • disabled=<yes/no> Disable snapshot support

  • retry=<num> Configure the number of retries for some operations

  • snapshot_dir=<dirname> Use a custom name for the Snapshot directory. (.SNAPSHOT, .snapdir, etc…)

  • lvm_snapshot_size=<lvpath:size> Specify a custom snapshot size for a given LVM volume

# cat /opt/bacula/etc/bsnapshot.conf
trace=/tmp/snap.log
debug=10
lvm_snapshot_size=/dev/ubuntu-vg/root:5%

Application Quiescing

When using Snapshots, it is very important to quiesce applications that are running on the system. The simplest way to quiesce an application is to stop it. Usually, taking the Snapshot is very fast, and the downtime is only a couple of seconds. If downtime is not acceptable and/or the application provides a way to quiesce itself, a more advanced script can be used. An example is described in SnapRunScriptExample.

New Director Directives

The use of the Snapshot Engine on the FileDaemon is determined by the new Enable Snapshot FileSet directive. The default is no.

FileSet {
  Name = LinuxHome

  Enable Snapshot = yes

  Include {
    Options { Compression = LZO }
    File = /home
  }
}

By default, Snapshots are deleted from the Client at the end of the backup. To keep Snapshots on the Client and record them in the Catalog for a determined period, it is possible to use the Snapshot Retention directive in the Client or in the Job resource. The default value is 0 seconds. If, for a given Job, both Client and Job Snapshot Retention directives are set, the Job directive will be used.

Client {
   Name = linux1
   ...

   Snapshot Retention = 5 days
}

To automatically prune Snapshots, it is possible to use the following RunScript command:

Job {
   ...
   Client = linux1
   ...
   RunScript {
      RunsOnClient = no
      Console = "prune snapshot client=%c yes"
      RunsAfter = yes
   }
}

In RunScripts, the AfterSnapshot keyword for the RunsWhen directive will allow a command to be run just after the Snapshot creation. AfterSnapshot is a synonym for the AfterVSS keyword.

Job {
 ...
  RunScript {
    Command = "/etc/init.d/mysql start"
    RunsWhen = AfterSnapshot
    RunsOnClient = yes
  }
  RunScript {
    Command = "/etc/init.d/mysql stop"
    RunsWhen = Before
    RunsOnClient = yes
  }
}

Job Output Information

Information about Snapshots is displayed in the Job output. The list of all devices used by the Snapshot Engine is displayed, and the Job summary indicates whether Snapshots were available.

JobId 3:    Create Snapshot of /home/build
JobId 3:    Create Snapshot of /home/build/subvol
JobId 3:    Delete snapshot of /home/build
JobId 3:    Delete snapshot of /home/build/subvol
...
JobId 3: Bacula 127.0.0.1-dir 7.2.0 (23Jul15):
  Build OS:               x86_64-unknown-linux-gnu archlinux
  JobId:                  3
  Job:                    Incremental.2015-02-24_11.20.27_08
  Backup Level:           Full
...
  Snapshot/VSS:           yes
...
  Termination:            Backup OK

New ``snapshot’’ Bconsole Commands

The new snapshot command will display by default the following menu:

*snapshot
Snapshot choice:
     1: List snapshots in Catalog
     2: List snapshots on Client
     3: Prune snapshots
     4: Delete snapshot
     5: Update snapshot parameters
     6: Update catalog with Client snapshots
     7: Done
Select action to perform on Snapshot Engine (1-7):

The snapshot command can also have the following parameters:

[client=<client-name> | job=<job-name> | jobid=<jobid>]
 [delete | list | listclient | prune | sync | update]

It is also possible to use traditional list, llist, update, prune or delete commands on Snapshots.

*llist snapshot jobid=5
 snapshotid: 1
       name: NightlySave.2015-02-24_12.01.00_04
 createdate: 2015-02-24 12:01:03
     client: 127.0.0.1-fd
    fileset: Full Set
      jobid: 5
     volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
     device: /home/btrfs
       type: btrfs
  retention: 30
    comment:

* snapshot listclient
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:8102
Snapshot      NightlySave.2015-02-24_12.01.00_04:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
  Device:     /home
  CreateDate: 2015-02-24 12:01:03
  Type:       btrfs
  Status:     OK
  Error:

With the Update catalog with Client snapshots option (or snapshot sync), the Director contacts the FileDaemon, lists snapshots of the system and creates catalog records of the Snapshots.

*snapshot sync
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:8102
Snapshot      NightlySave.2015-02-24_12.35.47_06:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
  Device:     /home
  CreateDate: 2015-02-24 12:35:47
  Type:       btrfs
  Status:     OK
  Error:
Snapshot added in Catalog

*llist snapshot
 snapshotid: 13
       name: NightlySave.2015-02-24_12.35.47_06
 createdate: 2015-02-24 12:35:47
     client: 127.0.0.1-fd
    fileset:
      jobid: 0
     volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
     device: /home
       type: btrfs
  retention: 0
    comment:

LVM Backend Restrictions

LVM Snapshots are quite primitive compared to ZFS, BTRFS, NetApp and other systems. For example, it is not possible to use Snapshots if the Volume Group (VG) is full. The administrator must keep some free space in the VG to create Snapshots. The amount of free space required depends on the activity of the Logical Volume (LV). bsnapshot uses 10% of the LV by default. This number can be configured per LV in the bsnapshot.conf file.

[root@system1]# vgdisplay
  --- Volume group ---
  VG Name               vg_ssd
  System ID
  Format                lvm2
...
  VG Size               29,81 GiB
  PE Size               4,00 MiB
  Total PE              7632
  Alloc PE / Size       125 / 500,00 MiB
  Free  PE / Size       7507 / 29,32 GiB
...

It is also not advisable to leave snapshots on the LVM backend. Having multiple snapshots of the same LV on LVM will slow down the system.

Debug Options

To get low level information about the Snapshot Engine, the debug tag ``snapshot’’ should be used in the setdebug command.

* setdebug level=10 tags=snapshot client
* setdebug level=10 tags=snapshot dir

Minor Enhancements

Storage Daemon Reports Disk Usage

The status storage command now reports the space available on disk devices:

...
Device status:

Device file: "FileStorage" (/bacula/arch1) is not open.
    Available Space=5.762 GB
==

Device file: "FileStorage1" (/bacula/arch2) is not open.
    Available Space=5.862 GB

Data Encryption Cipher Configuration

Bacula Enterprise version 8.0 and later now allows configuration of the data encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128, but it is now possible to choose between the following ciphers:

  • AES128 (default)

  • AES192

  • AES256

  • blowfish

The digest algorithm was set to SHA1 or SHA256 depending on the local OpenSSL options. We advise you not to modify the PkiDigest default setting. Please refer to the OpenSSL documentation to understand the pros and cons regarding these options.

FileDaemon {
  ...
  PkiCipher = AES256
}

New Option Letter ``M’’ for Accurate Directive in FileSet

Added in version 8.0.5, the new ``M’’ option letter for the Accurate directive in the FileSet Options block allows comparing the modification time and/or creation time against the last backup timestamp. This is in contrast to the existing option letters ``m’’ and/or ``c’’, mtime and ctime, which are checked against the stored catalog values, which can vary across different machines when using the BaseJob feature.

The advantage of the new ``M’’ option letter for Jobs that refer to BaseJobs is that it instructs Bacula to backup files based on the last backup time, which is more useful because the mtime/ctime timestamps may differ on various Clients, causing files to be needlessly backed up.

  Job {
    Name = USR
    Level = Base
    FileSet = BaseFS
...
  }

  Job {
    Name = Full
    FileSet = FullFS
    Base = USR
...
  }

  FileSet {
    Name = BaseFS
    Include {
      Options {
        Signature = MD5
      }
      File = /usr
    }
  }

  FileSet {
    Name = FullFS
    Include {
      Options {
        Accurate = Ms      # check for mtime/ctime of last backup timestamp and Size
        Signature = MD5
      }
      File = /home
      File = /usr
    }
  }

New Debug Options

In Bacula Enterprise version 8.0 and later, we introduced a new options parameter for the setdebug bconsole command.

The following arguments to the new option parameter are available to control debug functions.

  • 0 Clear debug flags

  • i Turn off, ignore bwrite() errors on restore on File Daemon

  • d Turn off decomp of BackupRead() streams on File Daemon

  • t Turn on timestamps in traces

  • T Turn off timestamps in traces

  • c Truncate trace file if trace file is activated

  • l Turn on recording events on P() and V()

  • p Turn on the display of the event ring when doing a bactrace

The following command will enable debugging for the File Daemon, truncate an existing trace file, and turn on timestamps when writing to the trace file.

* setdebug level=10 trace=1 options=ct fd

It is now possible to use a class of debug messages called tags to control the debug output of Bacula daemons.

  • all Display all debug messages

  • bvfs Display BVFS debug messages

  • sql Display SQL related debug messages

  • memory Display memory and poolmem allocation messages

  • scheduler Display scheduler related debug messages

* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql

The tags option is composed of a list of tags. Tags are separated by , or + or - or !. To disable a specific tag, use - or ! in front of the tag. Note that more tags are planned for future versions.

Read Only Storage Devices

This version of Bacula allows you to define a Storage daemon device to be read-only. If the Read Only directive is specified and enabled, the drive can only be used for read operations. The Read Only directive can be defined in any bacula-sd.conf Device resource, and is most useful for reserving one or more drives for restores. An example is:

Read Only = yes
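
A minimal Device resource sketch (the resource name and device path are illustrative):

Device {
  Name = RestoreDrive         # drive reserved for restores
  Media Type = LTO-4
  Archive Device = /dev/nst1
  ...
  Read Only = yes
}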

New Truncate Command

We have added a new truncate command to bconsole which will truncate a volume if the volume is purged, and if the volume is also marked Action On Purge = Truncate. This feature was originally added in Bacula version 5.0.1, but the mechanism for actually doing the truncate required the user to enter a complicated command such as:

purge volume action=truncate storage=File pool=Default

The above command is now simplified to be:

truncate storage=File pool=Default

New Resume Command

The new resume command does exactly the same thing as a restart command, but for some users the name may be more logical because in general the restart command is used to resume running a Job that was incomplete.

New Prune Expired Volume Command

In Bacula Enterprise 6.4, it is now possible to prune all volumes (from a pool, or globally) that are ``expired’’. This option can be scheduled after or before the backup of the catalog and can be combined with the Truncate On Purge option. The prune expired volume command may be used instead of the manual_prune.pl script.

* prune expired volume

* prune expired volume pool=FullPool

To schedule this option automatically, it can be added to the Catalog backup job definition.

Job {
  Name = CatalogBackup
  ...
  RunScript {
    Console = "prune expired volume yes"
    RunsWhen = Before
  }
}

New Job Edit Codes %P %C

In various places such as RunScripts, you have now access to %P to get the current Bacula process ID (PID) and %C to know if the current job is a cloned job.

Enhanced Status and Error Messages

We have enhanced the Storage daemon status output to be more readable. This is important when there are a large number of devices. In addition to formatting changes, it also includes more details on which devices are reading and writing.

A number of error messages have been enhanced to have more specific data on what went wrong.

If a file changes size while being backed up, the old and new sizes are reported.

Miscellaneous New Features

  • Allow unlimited line lengths in .conf files (previously limited to 2000 characters).

  • Allow /dev/null in ChangerCommand to indicate a Virtual Autochanger.

  • Add a -fileprune option to the manual_prune.pl script.

  • Add a -m option to make_catalog_backup.pl to do maintenance on the catalog.

  • Safer code that cleans up the working directory when starting the daemons. It limits what files can be deleted, hence enhances security.

  • Added a new .ls command in bconsole to permit browsing a client’s filesystem.

  • Fixed a number of bugs, including some obscure seg faults, and a race condition that occurred infrequently when running Copy, Migration, or Virtual Full backups.

  • Upgraded to a newer version of Qt4 for bat. All indications are that this will improve bat’s stability on Windows machines.

  • The Windows installers now detect and refuse to install on an OS that does not match the 32/64 bit value of the installer.

FD Storage Address

When the Director is behind a NAT, in a WAN area, it uses an external IP address to connect to the Storage Daemon, while the File Daemon should use an internal IP address to contact the Storage Daemon.

The normal way to handle this situation is to use a canonical name such as storage-server that will be resolved on the Director side as the WAN address and on the Client side as the LAN address. It is now possible to configure this parameter using the new directive FDStorageAddress in the Storage or Client resource.

Storage {
     Name = storage1
     Address = 65.1.1.1
     FD Storage Address = 10.0.0.1
     SD Port = 9103
     ...
}

Client {
     Name = client1
     Address = 65.1.1.2
     FD Storage Address = 10.0.0.1
     FD Port = 9102
     ...
}

Note that using the Client FDStorageAddress directive will not allow the use of multiple Storage Daemons; all Backup or Restore requests will be sent to the specified FDStorageAddress.

Maximum Concurrent Read Jobs

This is a new directive that can be used in the bacula-dir.conf file in the Storage resource. The main purpose is to limit the number of concurrent Copy, Migration, and VirtualFull jobs so that they don’t monopolize all the Storage drives causing a deadlock situation where all the drives are allocated for reading but none remain for writing. This deadlock situation can occur when running multiple simultaneous Copy, Migration, and VirtualFull jobs.

The default value is set to 0 (zero), which means there is no limit on the number of read jobs. Note, limiting the read jobs does not apply to Restore jobs, which are normally started by hand. A reasonable value for this directive is one half the number of drives that the Storage resource has, rounded down. Doing so will leave the same number of drives for writing and will generally avoid over-committing drives and a deadlock.
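
For example, a sketch for a Storage resource backed by four drives (the resource name is illustrative):

Storage {
  Name = Autochanger
  ...
  Maximum Concurrent Jobs = 4
  Maximum Concurrent Read Jobs = 2   # half of the four drives, leaving two for writing
}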

Incomplete Jobs

During a backup, if the Storage daemon experiences a disconnection from the File daemon (normally a comm line problem or possibly an FD failure), under conditions that the SD determines to be safe, it will mark the failed job as Incomplete rather than Failed. This is done only if there is sufficient valid backup data written to the Volume. The advantage of an Incomplete job is that it can be restarted by the new bconsole restart command from the point where it left off rather than from the beginning of the job, as is the case with a canceled job.

The Stop Command

Bacula has been enhanced to provide a stop command, very similar to the cancel command, with the main difference that the Job that is stopped is marked as Incomplete so that it can be restarted later by the restart command from where it left off (see below). The stop command with no arguments will, like the cancel command, prompt you with the list of running jobs, allowing you to select one, which might look like the following:

*stop
Select Job:
     1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
     2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
     3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.

The Restart Command

The new Restart command allows console users to restart a canceled, failed, or incomplete Job. For canceled and failed Jobs, the Job will restart from the beginning. For incomplete Jobs the Job will restart at the point that it was stopped either by a stop command or by some recoverable failure.

If you enter the restart command in bconsole, you will get the following prompts:

*restart
You have the following choices:
     1: Incomplete
     2: Canceled
     3: Failed
     4: All
Select termination code:  (1-4):

If you select the All option, you may see something like:

Select termination code:  (1-4): 4
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes  | jobstatus |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
|     1 | Incremental | 2012-03-26 12:15:21 | B    | F     |        0 |         0 | A         |
|     2 | Incremental | 2012-03-26 12:18:14 | B    | F     |      350 | 4,013,397 | I         |
|     3 | Incremental | 2012-03-26 12:18:30 | B    | F     |        0 |         0 | A         |
|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 | 3,548,058 | I         |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
Enter the JobId list to select:

Then you may enter one or more JobIds to be restarted, which may take the form of a list of JobIds separated by commas, and/or JobId ranges such as 1-4, which indicates you want to restart JobIds 1 through 4, inclusive.

Job Bandwidth Limitation

The new Job Bandwidth Limitation directive may be added to the File daemon’s and/or Director’s configuration to limit the bandwidth used by a Job on a Client. It can be set in the File daemon’s conf file for all Jobs run in that File daemon, or it can be set for each Job in the Director’s conf file. The speed is always specified in bytes per second.

For example:

FileDaemon {
  Name = localhost-fd
  Working Directory = /some/path
  Pid Directory = /some/path
  ...
  Maximum Bandwidth Per Job = 5Mb/s
}

The above example would cause any jobs running with the FileDaemon to not exceed 5 megabytes per second of throughput when sending data to the Storage Daemon. Note, the speed is always specified in bytes per second (not in bits per second), and the case (upper/lower) of the specification characters is ignored (i.e. 1MB/s = 1Mb/s).

You may specify the following speed parameter modifiers: k/s (1,000 bytes per second), kb/s (1,024 bytes per second), m/s (1,000,000 bytes per second), or mb/s (1,048,576 bytes per second).

For example:

Job {
  Name = locahost-data
  FileSet = FS_localhost
  Accurate = yes
  ...
  Maximum Bandwidth = 5Mb/s
  ...
}

The above example would cause Job localhost-data to not exceed 5MB/s of throughput when sending data from the File daemon to the Storage daemon.

A new console command, setbandwidth, permits dynamically setting the maximum throughput of a running Job or of future jobs of a Client.

* setbandwidth limit=1000 jobid=10

Please note that the value specified for the limit command line parameter is always in units of 1024 bytes (i.e. the number is multiplied by 1024 to give the number of bytes per second). As a consequence, the above limit of 1000 will be interpreted as a limit of 1000 * 1024 = 1,024,000 bytes per second.

Always Backup a File

When the Accurate mode is turned on, you can decide to always backup a file by using the new A Accurate option in your FileSet. For example:

Job {
   Name = ...
   FileSet = FS_Example
   Accurate = yes
   ...
}

FileSet {
 Name = FS_Example
 Include {
   Options {
     Accurate = A
   }
   File = /file
   File = /file2
 }
 ...
}

This project was funded by Bacula Systems based on an idea of James Harper and is available with the Bacula Enterprise Edition.

Setting Accurate Mode at Runtime

You are now able to specify the Accurate mode on the run command and in the Schedule resource.

* run accurate=yes job=Test

Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential accurate=yes 2nd-5th sun at 23:05
  Run = Incremental  accurate=no  mon-sat at 23:05
}

It can allow you to save memory and CPU resources on the catalog server in some cases.

These advanced tuning options are available with the Bacula Enterprise Edition.

Additions to RunScript variables

You can have access to JobBytes, JobFiles and Director name using %b, %F and %D in your runscript command. The Client address is now available through %h.

RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"

LZO Compression

LZO compression was added to the Unix File Daemon. From the user's point of view, it works like GZIP compression (just replace compression=GZIP with compression=LZO).

For example:

Include {
   Options { compression=LZO }
   File = /home
   File = /data
}

LZO provides much faster compression and decompression speed but lower compression ratio than GZIP. It is a good option when you backup to disk. For tape, the built-in compression may be a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your backup. On a modern CPU it should be able to run almost as fast as:

  • your client can read data from disk, unless you have very fast disks like SSDs or a large/fast RAID array.

  • the data transfer between the file daemon and the storage daemon, even on a 1Gb/s link.

Note that Bacula only uses one compression level, LZO1X-1.

The code for this feature was contributed by Laurent Papier.

Purge Migration Job

The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated will be purged at the end of the migration job.

For example:

Job {
  Name = "migrate-job"
  Type = Migrate
  Level = Full
  Client = localhost-fd
  FileSet = "Full Set"
  Messages = Standard
  Storage = DiskChanger
  Pool = Default
  Selection Type = Job
  Selection Pattern = ".*Save"
...
  Purge Migration Job = yes
}

This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.

Changes in the Pruning Algorithm

We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. This should no longer be the case. Now, Bacula won't automatically prune a Job if this particular Job is needed to restore data. Example:

JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now

In this example, if the Job Retention defined in the Pool or in the Client resource allows Jobs with JobIds 1, 2, 3 and 4 to be pruned, Bacula will detect that JobId 1 and JobId 4 are essential to restore data at the current state and will prune only JobId 2 and JobId 3.

Important: this change affects only the automatic pruning step after a Job and the prune jobs Bconsole command. If a volume expires after the VolumeRetention period, important jobs can still be pruned.

Ability to Verify any specified Job

You now have the ability to tell Bacula which Job it should verify instead of automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.

To verify a given job, just specify its JobId as an argument when starting the Verify job.

*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):

New Features in 7.0.0

Storage daemon to Storage daemon

Bacula version 7.0 permits SD to SD transfer of Copy and Migration Jobs. This permits what is commonly referred to as replication or off-site transfer of Bacula backups. It occurs automatically if the source SD and destination SD of a Copy or Migration job are different. The following picture shows how this works.

../../../_images/RNimage11.png

SD Calls Client

If the SD Calls Client directive is set to true in a Client resource, then in any Backup, Restore, Verify, Copy, or Migration Job where that client is involved, the client will wait for the Storage daemon to contact it. By default this directive is set to false, and the Client will call the Storage daemon. This directive can be useful if your Storage daemon is behind a firewall that permits outgoing connections but not incoming ones. The following picture shows the communications connection paths in both cases.

../../../_images/RNimage12.png
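
A minimal sketch of the directive in the Director's Client resource (the client name is illustrative):

Client {
  Name = client1
  Address = 65.1.1.2
  ...
  SD Calls Client = yes   # the Storage daemon opens the connection to this client
}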

Next Pool

In previous versions of Bacula the Next Pool directive could be specified in the Pool resource for use with Migration and Copy Jobs. The Next Pool concept has been extended in Bacula version 7.0.0 to allow you to specify the Next Pool directive in the Job resource as well. If specified in the Job resource, it will override any value specified in the Pool resource.

In addition to being permitted in the Job resource, the nextpool=xxx specification can be specified as a run override in the run directive of a Schedule resource. Any nextpool specification in a run directive will override any other specification in either the Job or the Pool.

In general, more information is displayed in the Job log on exactly which Next Pool specification is ultimately used.
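
As a sketch, the Job-level directive and the run override might look like the following (resource names are illustrative):

Job {
  Name = migrate-job
  Type = Migrate
  ...
  Next Pool = TapePool    # overrides any Next Pool set in the Pool resource
}

Schedule {
  Name = MigrationCycle
  # a NextPool run override wins over both the Job and the Pool settings
  Run = Full NextPool=TapePool 1st sun at 23:05
}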

status storage

The bconsole status storage command has been modified to attempt to eliminate duplicate storage resources and show only one that references any given storage daemon. This might be confusing at first, but it tends to produce a much more compact list of storage resources from which to select when there are multiple storage devices in the same storage daemon.

If you want the old behavior (always display all storage resources) simply add the keyword select to the command - i.e. use status select storage.

status schedule

A new status command option called scheduled has been implemented in bconsole. By default it will display 20 lines of the next scheduled jobs. For example, with the default bacula-dir.conf configuration file, a bconsole command status scheduled produces:

Scheduled Jobs:
Level        Type   Pri  Scheduled        Job Name     Schedule
======================================================================
Differential Backup 10  Sun 30-Mar 23:05 BackupClient1 WeeklyCycle
Incremental  Backup 10  Mon 24-Mar 23:05 BackupClient1 WeeklyCycle
Incremental  Backup 10  Tue 25-Mar 23:05 BackupClient1 WeeklyCycle
...
Full         Backup 11  Mon 24-Mar 23:10 BackupCatalog WeeklyCycleAfterBackup
Full         Backup 11  Wed 26-Mar 23:10 BackupCatalog WeeklyCycleAfterBackup
...
====

Note, the output is listed by the Jobs found, and is not sorted chronologically.

This command has a number of options, most of which act as filters:

  • days=nn This specifies the number of days to list. The default is 10 but can be set from 0 to 500.

  • limit=nn This specifies the limit to the number of lines to print. The default is 100 but can be any number in the range 0 to 2000.

  • time="YYYY-MM-DD HH:MM:SS" Sets the start time for listing the scheduled jobs. The default is to use the current time. Note, the time value must be specified inside double quotes and must be in the exact form shown above.

  • schedule=schedule-name This option restricts the output to the named schedule.

  • job=job-name This option restricts the output to the specified Job name.

Data Encryption Cipher Configuration

Bacula version 7.0 and later now allows configuring the data encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128; it is now possible to choose between the following ciphers:

  • AES128 (default)

  • AES192

  • AES256

  • blowfish

The digest algorithm was set to SHA1 or SHA256 depending on the local OpenSSL options. We advise you not to modify the PkiDigest default setting. Please refer to the OpenSSL documentation for the pros and cons of these options.

FileDaemon {
  ...
  PkiCipher = AES256
}

New Truncate Command

We have added a new truncate command to bconsole, which will truncate a Volume if the Volume is purged and if the Volume is also marked Action On Purge = Truncate. This feature was originally added in Bacula version 5.0.1, but the mechanism for actually doing the truncate required the user to enter a command such as:

purge volume action=truncate storage=File pool=Default

The above command is now simplified to be:

truncate storage=File pool=Default

Migration/Copy/VirtualFull Performance Enhancements

The Bacula Storage daemon now permits multiple jobs to simultaneously read the same disk Volume, which gives substantial performance enhancements when running Migration, Copy, or VirtualFull jobs that read disk Volumes. Our testing shows that when running multiple simultaneous jobs, the jobs can finish up to ten times faster with this version of Bacula. This is built-in to the Storage daemon, so it happens automatically and transparently.

VirtualFull Backup Consolidation Enhancements

By default Bacula selects jobs automatically for a VirtualFull; however, you may want to create the Virtual backup based on a particular existing backup (point in time).

For example, if you have the following backup Jobs in your catalog:

+-------+---------+-------+----------+----------+-----------+
| JobId | Name    | Level | JobFiles | JobBytes | JobStatus |
+-------+---------+-------+----------+----------+-----------+
| 1     | Vbackup | F     | 1754     | 50118554 | T         |
| 2     | Vbackup | I     | 1        | 4        | T         |
| 3     | Vbackup | I     | 1        | 4        | T         |
| 4     | Vbackup | D     | 2        | 8        | T         |
| 5     | Vbackup | I     | 1        | 6        | T         |
| 6     | Vbackup | I     | 10       | 60       | T         |
| 7     | Vbackup | I     | 11       | 65       | T         |
| 8     | Save    | F     | 1758     | 50118564 | T         |
+-------+---------+-------+----------+----------+-----------+

and you want to consolidate only the first 3 jobs and create a virtual backup equivalent to Job 1 + Job 2 + Job 3, you will use jobid=3 in the run command, then Bacula will select the previous Full backup, the previous Differential (if any) and all subsequent Incremental jobs.

run job=Vbackup jobid=3 level=VirtualFull

If you want to consolidate a specific job list, you must specify the exact list of jobs to merge in the run command line. For example, to consolidate the last Differential and all subsequent Incrementals, you will use jobid=4,5,6,7 or jobid=4-7 on the run command line. As one of the Jobs in the list is a Differential backup, Bacula will set the new job level to Differential. If the list is composed only of Incremental jobs, the new job will have its level set to Incremental.

run job=Vbackup jobid=4-7 level=VirtualFull

When using this feature, Bacula will automatically discard jobs that are not related to the current Job. For example, specifying jobid=7,8, Bacula will discard JobId 8 because it is not part of the same backup Job.

We do not recommend it, but if you really want to consolidate jobs that have different names (and so probably different clients, filesets, etc…), you must use the alljobid= keyword instead of jobid=.

run job=Vbackup alljobid=1-3,6-8 level=VirtualFull

FD Storage Address

When the Director is behind a NAT, in a WAN area, it uses an external IP address to connect to the Storage Daemon, while the File Daemon should use an internal IP address to contact the Storage Daemon.

The normal way to handle this situation is to use a canonical name such as storage-server that will be resolved on the Director side as the WAN address and on the Client side as the LAN address. It is now possible to configure this parameter using the new directive FDStorageAddress in the Storage or Client resource.

../../../_images/RNimage13.png

Storage {
     Name = storage1
     Address = 65.1.1.1
     FD Storage Address = 10.0.0.1
     SD Port = 9103
     ...
}

Client {
     Name = client1
     Address = 65.1.1.2
     FD Storage Address = 10.0.0.1
     FD Port = 9102
     ...
}

Note that using the Client FDStorageAddress directive will not allow the use of multiple Storage Daemons; all Backup or Restore requests will be sent to the specified FDStorageAddress.

Job Bandwidth Limitation

The new Job Bandwidth Limitation directive may be added to the File daemon’s and/or Director’s configuration to limit the bandwidth used by a Job on a Client. It can be set in the File daemon’s conf file for all Jobs run in that File daemon, or it can be set for each Job in the Director’s conf file. The speed is always specified in bytes per second.

For example:

FileDaemon {
  Name = localhost-fd
  Working Directory = /some/path
  Pid Directory = /some/path
  ...
  Maximum Bandwidth Per Job = 5Mb/s
}

The above example would cause any jobs running with the FileDaemon to not exceed 5 megabytes per second of throughput when sending data to the Storage Daemon. Note, the speed is always specified in bytes per second (not in bits per second), and the case (upper/lower) of the specification characters is ignored (i.e. 1MB/s = 1Mb/s).

You may specify the following speed parameter modifiers: k/s (1,000 bytes per second), kb/s (1,024 bytes per second), m/s (1,000,000 bytes per second), or mb/s (1,048,576 bytes per second).

For example:

Job {
  Name = locahost-data
  FileSet = FS_localhost
  Accurate = yes
  ...
  Maximum Bandwidth = 5Mb/s
  ...
}

The above example would cause Job localhost-data to not exceed 5MB/s of throughput when sending data from the File daemon to the Storage daemon.

A new console command, setbandwidth, permits dynamically setting the maximum throughput of a running Job or of future jobs of a Client.

* setbandwidth limit=1000 jobid=10

Please note that the value specified for the limit command line parameter is always in units of 1024 bytes (i.e. the number is multiplied by 1024 to give the number of bytes per second). As a consequence, the above limit of 1000 will be interpreted as a limit of 1000 * 1024 = 1,024,000 bytes per second.

This project was funded by Bacula Systems.

Maximum Concurrent Read Jobs

This is a new directive that can be used in the bacula-dir.conf file in the Storage resource. The main purpose is to limit the number of concurrent Copy, Migration, and VirtualFull jobs so that they don’t monopolize all the Storage drives causing a deadlock situation where all the drives are allocated for reading but none remain for writing. This deadlock situation can occur when running multiple simultaneous Copy, Migration, and VirtualFull jobs.

The default value is set to 0 (zero), which means there is no limit on the number of read jobs. Note, limiting the read jobs does not apply to Restore jobs, which are normally started by hand. A reasonable value for this directive is one half the number of drives that the Storage resource has rounded down. Doing so, will leave the same number of drives for writing and will generally avoid over committing drives and a deadlock.

Director job Codes in Message Resource Commands

Before submitting the specified mail command to the operating system, Bacula performs character substitution as in RunScript commands. Bacula will now also perform specific Director character substitutions.

The code for this feature was contributed by Bastian Friedrich.
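
A minimal sketch of a Messages resource whose mailcommand uses such substitutions (%d expands to the Director's name; the address is illustrative):

Messages {
  Name = Standard
  mailcommand = "/opt/bacula/bin/bsmtp -h localhost -f \"Bacula %d\" -s \"Bacula: %t %e of %c %l\" %r"
  mail = admin@mycompany.com = all, !skipped
}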

Additions to RunScript variables

The following variables are now available in runscripts:

  • current PID using %P

  • if the job is a clone job using %C

RunAfterJob = "/bin/echo Pid=%P isCloned=%C"

Read Only Storage Devices

This version of Bacula permits defining a Storage daemon device to be read-only. That is, if the ReadOnly directive is specified and enabled, the drive can only be used for read operations. The ReadOnly directive can be defined in any bacula-sd.conf Device resource, and is most useful to reserve one or more drives for restores. An example is:

Read Only = yes

New Prune Expired Volume Command

It is now possible to prune all volumes (from a pool, or globally) that are ``expired’’. This option can be scheduled after or before the backup of the Catalog and can be combined with the Truncate On Purge option. The Expired Prune option can be used instead of the manual_prune.pl script.

* prune expired volumes

* prune expired volumes pool=FullPool

To schedule this option automatically, it can be added to the BackupCatalog job definition.

Job {
  Name = CatalogBackup
  ...
  RunScript {
    Console = "prune expired volume yes"
    RunsWhen = Before
  }
}

DisableCommand Directive

There is a new Directive named Disable Command that can be put in the File daemon Client or Director resource. If it is in the Client, it applies globally, otherwise the directive applies only to the Director in which it is found. The Disable Command adds security to your File daemon by disabling certain commands. The commands that can be disabled are:

backup
cancel
setdebug=
setbandwidth=
estimate
fileset
JobId=
level =
restore
endrestore
session
status
.status
storage
verify
RunBeforeNow
RunBeforeJob
RunAfterJob
Run
accurate

One or more of these command keywords can be placed in quotes and separated by spaces on the Disable Command directive line. Note: the commands must be written exactly as they appear above.
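
A minimal sketch in the File daemon configuration (the Director name and the choice of disabled commands are illustrative):

# cat /opt/bacula/etc/bacula-fd.conf
Director {
  Name = bac-dir
  Password = "xxx"
  # forbid this Director from changing debug or bandwidth settings
  Disable Command = "setdebug=" "setbandwidth="
}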

Multiple Console Directors

Support for multiple bconsole and bat Directors in the bconsole.conf and bat.conf files has been implemented and/or improved.

Restricted Consoles

Better support for restricted consoles has been implemented for bconsole and bat.

Configuration Files

In previous versions of Bacula the configuration files for each component were limited to a maximum of 499 bytes per configuration file line. This version of Bacula permits unlimited input line lengths. This can be especially useful for specifying more complicated Migration/Copy SQL statements and in creating long restricted console ACL lists.

Maximum Spawned Jobs

The Job resource now permits specifying a Maximum Spawn Jobs value. The default is 300. This directive can be useful if you have big hardware and run many Migration/Copy jobs that start at the same time. In prior versions of Bacula, Migration/Copy was limited to spawning a maximum of 100 jobs at a time.
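For example, a Copy Job permitted to spawn more Jobs than the default might look like the following sketch (the name and value are hypothetical):

Job {
  Name = "copy-to-tape"
  Type = Copy
  ...
  Maximum Spawn Jobs = 500
}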

Progress Meter

The new File daemon has been enhanced to send its progress (files processed and bytes written) to the Director every 30 seconds. These figures can then be displayed with a bconsole status dir command.

Scheduling a 6th Week

Prior versions of Bacula permitted specifying the 1st through 5th week of a month (first through fifth) as a keyword on the run directive of a Schedule resource. This version of Bacula also permits specifying the 6th week of a month with the keyword sixth or 6th.
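A minimal sketch of a Schedule resource using the new keyword (the resource name and time are hypothetical):

Schedule {
  Name = SixthWeek
  Run = Full sixth sun at 23:05
}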

Scheduling the Last Day of a Month

This version of Bacula now permits specifying the lastday keyword in the run directive of a Schedule resource. If lastday is specified, it applies only to those months specified on the run directive. Note: by default all months are specified.
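For example, the following sketch runs a Full backup on the last day of the listed months only (the resource name, months and time are hypothetical):

Schedule {
  Name = MonthEnd
  Run = Full mar,jun,sep,dec lastday at 23:30
}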

Improvements to Cancel and Restart bconsole Commands

The restart bconsole command now allows selection of either canceled or failed jobs to be restarted. In addition, both the cancel and restart bconsole commands permit entering a number of JobIds separated by commas, or a range of JobIds indicated by a dash between the beginning and end of the range (e.g. 3-10). Finally, both commands also accept the special keyword all to select all the appropriate Jobs.

bconsole Performance Improvements

In previous versions of Bacula, certain bconsole commands could wait a long time due to catalog lock contention. This was especially noticeable when a large number of jobs were running and inserting their attributes into the catalog. This version uses a separate catalog connection that should significantly improve performance.

New .bvfs_decode_lstat Command

There is a new bconsole command, .bvfs_decode_lstat. It requires one argument, lstat="lstat value to decode". An example command in bconsole and its output might be:

.bvfs_decode_lstat lstat="A A EHt B A A A JP BAA B BTL/A7 BTL/A7 BTL/A7 A A C"

st_nlink=1
st_mode=16877
st_uid=0
st_gid=0
st_size=591
st_blocks=1
st_ino=0
st_ctime=1395650619
st_mtime=1395650619
st_atime=1395650619
st_dev=0
LinkFI=0

New Debug Options

In Bacula Enterprise version 8.0 and later, we introduced new options to the setdebug command.

If the options parameter is set, the following arguments can be used to control debug functions.

  • 0 clear debug flags

  • i turn off, ignore bwrite() errors on restore on the File Daemon

  • d turn off decompression of BackupRead() streams on the File Daemon

  • t turn on timestamps in traces

  • T turn off timestamps in traces

  • c truncate the trace file if the trace file is activated

  • l turn on recording events on P() and V()

  • p turn on the display of the event ring when doing a bactrace

The following command will truncate the trace file and will turn on timestamps in the trace file.

* setdebug level=10 trace=1 options=ct fd

It is now possible to use class of debug messages called tags to control the debug output of Bacula daemons.

  • all display all debug messages

  • bvfs display BVFS debug messages

  • sql display SQL related debug messages

  • memory display memory and poolmem allocation messages

  • scheduler display scheduler related debug messages

* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql

The tags option is composed of a list of tags separated by , + - or ! characters. To disable a specific tag, put - or ! in front of it. Note that more tags will come in future versions.

New Features in 5.2.2

This chapter presents the new features that have been added to the currently released Community version of Bacula.

Additions to RunScript variables

You can access the Director name using %D in your runscript command.

RunAfterJob = "/bin/echo Director=%D"

New Features in 5.2.1

This chapter presents the new features that were added in the Community release version 5.2.1.

There are additional features (plugins) available in the Enterprise version that are described in another chapter. A subscription to Bacula Systems is required for the Enterprise version.

LZO Compression

LZO compression has been added to the File daemon. From the user's point of view, it works like GZIP compression (just replace compression=GZIP with compression=LZO).

For example:

Include {
   Options { compression=LZO }
   File = /home
   File = /data
}

LZO provides much faster compression and decompression but a lower compression ratio than GZIP. It is a good option when you back up to disk. For tape, hardware compression is almost always a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your backup. With a modern CPU it should be able to run almost as fast as:

  • your client can read data from disk, unless you have very fast disks such as SSDs or a large/fast RAID array.

  • the data can be transferred between the File daemon and the Storage daemon, even on a 1Gb/s link.

Note, Bacula uses compression level LZO1X-1.

The code for this feature was contributed by Laurent Papier.

New Tray Monitor

Since the old integrated Windows tray monitor doesn’t work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.

../../../_images/RNimage14.png ../../../_images/RNimage15.png

To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:

Console {
    Name = win2003-mon
    Password = "xxx"
    CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
    ClientACL = *all*               # you can restrict to a specific host
    CatalogACL = *all*
    JobACL = *all*
    StorageACL = *all*
    ScheduleACL = *all*
    PoolACL = *all*
    FileSetACL = *all*
    WhereACL = *all*
}

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition and the Community Edition.

Purge Migration Job

The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated during a migration will be purged at the end of the migration job.

For example:

Job {
  Name = "migrate-job"
  Type = Migrate
  Level = Full
  Client = localhost-fd
  FileSet = "Full Set"
  Messages = Standard
  Storage = DiskChanger
  Pool = Default
  Selection Type = Job
  Selection Pattern = ".*Save"
...
  Purge Migration Job = yes
}

This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.

Changes in Bvfs (Bacula Virtual FileSystem)

Bat now has a bRestore panel that uses Bvfs to display files and directories.

../../../_images/RNimage16.png

The Bvfs module works correctly with BaseJobs, Copy and Migration jobs.

This project was funded by Bacula Systems.

General notes

  • All fields are separated by a tab

  • You can specify limit= and offset= to page through records in very big directories

  • All operations (except cache creation) are designed to run instantly

  • At this time, Bvfs works faster with a PostgreSQL catalog than with MySQL. If you can contribute new, faster SQL queries we will be happy; otherwise, please don't complain about speed.

  • The cache creation time depends on the number of directories. As Bvfs shares information across jobs, the first creation can be slow

  • Due to potential encoding problems, it is advised to always use pathid in queries.

Get dependent jobs from a given JobId

Bvfs allows you to query the catalog against any combination of jobs. You can combine all Jobs and all FileSets for a Client in a single session.

To get all JobId needed to restore a particular job, you can use the .bvfs_get_jobids command.

.bvfs_get_jobids jobid=num [all]

.bvfs_get_jobids jobid=10
1,2,5,10
.bvfs_get_jobids jobid=10 all
1,2,3,5,10

In this example, a normal restore will need to use JobIds 1,2,5,10 to compute a complete restore of the system.

With the all option, the Director will use all the FileSets defined for this client.

Generating Bvfs cache

The .bvfs_update command computes the directory cache for the JobIds specified as argument, or for all jobs if unspecified.

.bvfs_update [jobid=numlist]

Example:

.bvfs_update jobid=1,2,3

You can run the cache update process in a RunScript after the catalog backup.
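For example, a sketch of such a RunScript attached to the catalog backup Job (the placement and names are only an illustration, following the BackupCatalog examples in this document):

Job {
  Name = CatalogBackup
  ...
  RunScript {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}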

Get all versions of a specific file

Bvfs allows you to find all versions of a specific file for a given Client with the .bvfs_versions command. To avoid problems with encoding, this function uses only PathId and FilenameId. The jobid argument is mandatory but unused.

.bvfs_versions client=filedaemon pathid=num filenameid=num jobid=1
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
...

Example:

.bvfs_versions client=localhost-fd pathid=1 fnid=47 jobid=1
1  47  52  12  gD HRid IGk D Po Po A P BAA I A   /uPgWaxMgKZlnMti7LChyA  Vol1  1

List directories

Bvfs allows you to list directories in a specific path.

.bvfs_lsdirs pathid=num path=/apath jobid=numlist limit=num offset=num
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
...

You need to specify either pathid or path. Using path="" will list "/" on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.

.bvfs_lsdirs pathid=4 jobid=1,11,12
4       0       0       0       A A A A A A A A A A A A A A     .
5       0       0       0       A A A A A A A A A A A A A A     ..
3       0       0       0       A A A A A A A A A A A A A A     regress/

In this example, to list directories present in regress/, you can use

.bvfs_lsdirs pathid=3 jobid=1,11,12
3       0       0       0       A A A A A A A A A A A A A A     .
4       0       0       0       A A A A A A A A A A A A A A     ..
2       0       0       0       A A A A A A A A A A A A A A     tmp/

List files

Bvfs allows you to list files in a specific path.

.bvfs_lsfiles pathid=num path=/apath jobid=numlist limit=num offset=num
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
...

You need to specify either pathid or path. Using path="" will list "/" on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.

.bvfs_lsfiles pathid=4 jobid=1,11,12
4       0       0       0       A A A A A A A A A A A A A A     .
5       0       0       0       A A A A A A A A A A A A A A     ..
1       0       0       0       A A A A A A A A A A A A A A     regress/

In this example, to list files present in regress/, you can use

.bvfs_lsfiles pathid=1 jobid=1,11,12
1   47   52   12    gD HRid IGk BAA I BMqcPH BMqcPE BMqe+t A     titi
1   49   53   12    gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B     toto
1   48   54   12    gD HRie IGk BAA I BMqcPH BMqcPE BMqe+3 A     tutu
1   45   55   12    gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B     ficheriro1.txt
1   46   56   12    gD HRie IGk BAA I BMqe/K BMqcPE BMqe+3 D     ficheriro2.txt

Restore set of files

Bvfs allows you to create a SQL table that contains files that you want to restore. This table can be provided to a restore command with the file option.

.bvfs_restore fileid=numlist dirid=numlist hardlink=numlist path=b2num
OK
restore file=?b2num ...

To include a directory (with dirid), Bvfs needs to run a query to select all files. This query can be time consuming.

The hardlink list is always composed of a series of two numbers (jobid, fileindex). This information can be found in the LinkFI field of the LStat packet.

The path argument represents the name of the table in which Bvfs will store results. The format of this table name is b2[0-9]+ (it must start with b2, followed by digits).

Example:

.bvfs_restore fileid=1,2,3,4 hardlink=10,15,10,20 jobid=10 path=b20001
OK

Cleanup after Restore

To drop the table used by the restore command, you can use the .bvfs_cleanup command.

.bvfs_cleanup path=b20001

Clearing the BVFS Cache

To clear the BVFS cache, you can use the .bvfs_clear_cache command.

.bvfs_clear_cache yes
OK

Changes in the Pruning Algorithm

We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. This should no longer be the case. Now, Bacula will not automatically prune a Job if that particular Job is needed to restore data. Example:

JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now

In this example, if the Job Retention defined in the Pool or in the Client resource permits pruning the Jobs with JobIds 1, 2, 3 and 4, Bacula will detect that JobIds 1 and 4 are essential to restore data to the current state and will prune only JobIds 2 and 3.

Important: this change affects only the automatic pruning step after a Job and the prune jobs bconsole command. If a Volume expires after the Volume Retention period, important jobs can still be pruned.

Ability to Verify any specified Job

You now have the ability to tell Bacula which Job it should verify, instead of it automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.

To verify a given job, just specify its JobId as an argument when starting the Verify job.

*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):

This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.

Additions to RunScript variables

You can have access to JobBytes and JobFiles using %b and %F in your runscript command. The Client address is now available through %h.

RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h"

Additions to the Plugin API

The bfuncs structure has been extended to include a number of new entrypoints.

bfuncs

The bFuncs structure defines the callback entry points within Bacula that the plugin can use to register events, get Bacula values, set Bacula values, and send messages to the Job output or debug output.

The exact definition as of this writing is:

typedef struct s_baculaFuncs {
   uint32_t size;
   uint32_t version;
   bRC (*registerBaculaEvents)(bpContext *ctx, ...);
   bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
       int type, utime_t mtime, const char *fmt, ...);
   bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
       int level, const char *fmt, ...);
   void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
       size_t size);
   void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);

   /* New functions follow */
   bRC (*AddExclude)(bpContext *ctx, const char *file);
   bRC (*AddInclude)(bpContext *ctx, const char *file);
   bRC (*AddIncludeOptions)(bpContext *ctx, const char *opts);
   bRC (*AddRegex)(bpContext *ctx, const char *item, int type);
   bRC (*AddWild)(bpContext *ctx, const char *item, int type);
   bRC (*checkChanges)(bpContext *ctx, struct save_pkt *sp);

} bFuncs;
AddExclude

can be called to exclude a file. The file string passed may include wildcards that will be interpreted by the fnmatch subroutine. This function can be called multiple times, and each time the file specified will be added to the list of files to be excluded. Note, this function only permits adding excludes of specific file or directory names, or files matched by the rather simple fnmatch mechanism. See below for information on doing wild-card and regex excludes.

NewPreInclude

can be called to create a new Include block. This block will be added after the currently defined Include block. This function can be called multiple times, but each time it will create a new Include section (not normally needed). This function should be called only if you want to add an entirely new Include block.

NewInclude

can be called to create a new Include block. This block will be added before any user defined Include blocks. This function can be called multiple times, but each time, it will create a new Include section (not normally needed). This function should be called only if you want to add an entirely new Include block.

AddInclude

can be called to add new files/directories to be included. They are added to the current Include block. If NewInclude has not been called, the current Include block is the last one that the user created. This function should be used only if you want to add totally new files/directories to be included in the backup.

NewOptions

adds a new Options block to the current Include in front of any other Options blocks. This permits the plugin to add exclude directives (wild-cards and regexes) in front of the user Options, and thus prevent certain files from being backed up. This can be useful if the plugin backs up files that should not also be backed up by the main Bacula code. This function may be called multiple times, and each time it creates a new prepended Options block. Note: normally you want to call this entry point prior to calling AddOptions, AddRegex, or AddWild.

AddOptions

allows the plugin to set options in the current Options block, which is normally created with the NewOptions call just prior to adding Include Options. The permitted options are passed as a character string, where each character has a specific meaning as defined below:

a

always replace files (default).

e

exclude rather than include.

h

no recursion into subdirectories.

H

do not handle hard links.

i

ignore case in wildcard and regex matches.

M

compute an MD5 sum.

p

use a portable data format on Windows (not recommended).

R

backup resource forks and Finder Info.

r

read from a fifo

S1

compute an SHA1 sum.

S2

compute an SHA256 sum.

S3

compute an SHA512 sum.

s

handle sparse files.

m

use st_mtime only for file differences.

k

restore the st_atime after accessing a file.

A

enable ACL backup.

Vxxx:

specify verify options. Must terminate with :

Cxxx:

specify accurate options. Must terminate with :

Jxxx:

specify base job Options. Must terminate with :

Pnnn:

specify integer nnn paths to strip. Must terminate with :

w

replace files only if newer.

Zn

specify gzip compression level n.

K

do not use st_atime in backup decision.

c

check if file changed during backup.

N

honor no dump flag.

X

enable backup of extended attributes.

AddRegex

adds a regex expression to the current Options block. The following options are permitted:

(a blank) regex applies to whole path and filename.

F

regex applies only to the filename (directory or path stripped).

D

regex applies only to the directory (path) part of the name.

AddWild

adds a wildcard expression to the current Options block. The following options are permitted:

(a blank) wildcard applies to whole path and filename.

F

wildcard applies only to the filename (directory or path stripped).

D

wildcard applies only to the directory (path) part of the name.
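To make the use of these entry points concrete, here is a minimal hedged sketch of File daemon plugin code that adds excludes through the bFuncs table shown above. The file names are hypothetical, and the meaning of the AddWild type argument (a blank meaning "whole path and filename") is an assumption based on the option list above:

/* Sketch: exclude the plugin's own data files from the main backup,
 * using the new entry points from the bFuncs table above.
 * Assumptions: "bfuncs" was saved from loadPlugin(), and the type
 * argument of AddWild takes the option characters listed above,
 * with ' ' (blank) meaning "whole path and filename". */
static bFuncs *bfuncs;      /* saved in loadPlugin() */

static bRC exclude_plugin_files(bpContext *ctx)
{
   /* Exclude one specific file by name (fnmatch semantics) */
   bfuncs->AddExclude(ctx, "/myapp/data/cache.db");
   /* Exclude temporary files by wildcard on the whole path */
   bfuncs->AddWild(ctx, "/myapp/data/*.tmp", ' ');
   return bRC_OK;
}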

checkChanges

calls the check_changes() function in the Bacula core, which can use the Accurate code to compare the file information passed as argument with the previous file information. The delta_seq attribute of the save_pkt will be updated, and the call will return bRC_Seen if the core code would not have decided to back up the file.

Bacula events

The list of events has been extended to include:

typedef enum {
  bEventJobStart        = 1,
  bEventJobEnd          = 2,
  bEventStartBackupJob  = 3,
  bEventEndBackupJob    = 4,
  bEventStartRestoreJob = 5,
  bEventEndRestoreJob   = 6,
  bEventStartVerifyJob  = 7,
  bEventEndVerifyJob    = 8,
  bEventBackupCommand   = 9,
  bEventRestoreCommand  = 10,
  bEventLevel           = 11,
  bEventSince           = 12,

  /* New events */
  bEventCancelCommand                   = 13,
  bEventVssBackupAddComponents          = 14,
  bEventVssRestoreLoadComponentMetadata = 15,
  bEventVssRestoreSetComponentsSelected = 16,
  bEventRestoreObject                   = 17,
  bEventEndFileSet                      = 18,
  bEventPluginCommand                   = 19,
  bEventVssBeforeCloseRestore           = 20,
  bEventVssPrepareSnapshot              = 21

} bEventType;
bEventCancelCommand

is called whenever the currently running Job is canceled

bEventVssBackupAddComponents

bEventVssPrepareSnapshot

is called before creating VSS snapshots. It provides a char[27] table where the plugin can add the Windows drives that will be used during the Job. You need to add them without duplicates; you can use the add_drive() and copy_drives() functions in fd_common.h for this purpose.

ACL enhancements

The following enhancements have been made to the Bacula File daemon with regard to Access Control Lists (ACLs):

  • Added support for the new aclx_get interface on AIX 5.3 and later, which supports POSIX and NFSv4 ACLs.

  • Added support for the new ACL types on FreeBSD 8.1 and later, which support POSIX and NFSv4 ACLs.

  • Some generic cleanups for internal ACL handling.

  • Fixed ACL storage on OSX.

  • Cleaned up the configure checks for ACL detection: configure now tests only for the interface type appropriate to the operating system, which should give fewer false positives in detection. Also, once ACLs are detected, no other ACL checks are performed.

This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.

XATTR enhancements

The following enhancements have been made to the Bacula File daemon with regard to Extended Attributes (XATTRs):

  • Added support for IRIX extended attributes using the attr_get interface.

  • Added support for Tru64 (OSF1) extended attributes using the getproplist interface.

  • Added support for AIX extended attributes available in AIX 6.x and higher using the listea/getea/setea interface.

  • Added some debugging to the generic xattr code to make it easier to debug.

  • Cleaned up the configure checks for XATTR detection: configure now tests only for the interface type appropriate to the operating system, which should give fewer false positives in detection. Also, once xattrs are detected, no other xattr checks are performed.

This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.

Class Based Database Backend Drivers

The main Bacula Director code is independent of the SQL backend in version 5.2.0 and greater. This means that the Bacula Director can be packaged by itself, then each of the different SQL backends supported can be packaged separately. It is possible to build all the DB backends at the same time by including multiple database options at the same time.

./configure can be run with multiple database configure options.

--with-sqlite3
--with-mysql
--with-postgresql

Order of testing for databases is:

  • postgresql

  • mysql

  • sqlite3

Each configured backend generates a file named libbaccats-<sql_backend_name>-<version>.so. A dummy catalog library is also created, named libbaccats-<version>.so.

At configure time the first detected backend is used as the so-called default backend, and at install time the dummy libbaccats-<version>.so is replaced with the default backend type.

If you configure all three backends, you get three backend libraries, and PostgreSQL is installed as the default.

When you want to switch to another database, first save any old catalog you may have; then you can copy one of the three backend libraries over libbaccats-<version>.so.

An actual command, depending on your Bacula version, might be:

cp libbaccats-postgresql-5.2.2.so libbaccats-5.2.2.so

where the 5.2.2 must be replaced by the Bacula release version number.

Then you must update the default backend in the following files:

create_bacula_database
drop_bacula_database
drop_bacula_tables
grant_bacula_privileges
make_bacula_tables
make_catalog_backup
update_bacula_tables

And re-run all the above scripts. Please note, this means you will have a new empty database and if you had a previous one it will be lost.

All current database backend drivers for catalog information have been rewritten to use a set of C++ classes with multiple inheritance, which abstract the database-specific internals and ensure a more stable, generic interface with the rest of the SQL code. From now on there is a strict boundary between the SQL code and the low-level database functions. This new interface should also make it easier to add a new backend for a currently unsupported database. As part of the rewrite the SQLite 2 code was removed (only SQLite 3 is now supported). An extra bonus of the new code is that you can configure multiple backends, build all of them in one compile session, and select the correct database backend at install time. This should make things a lot easier for package maintainers.

We also added cursor support for the PostgreSQL backend; this improves memory usage for large installations.

This project was implemented by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with both the Bacula Enterprise Edition and the Community Edition.

Hash List Enhancements

The htable hash table class has been extended with extra hash functions so that, in addition to char pointer hashes, it can handle 32-bit and 64-bit hash keys. The hash table initialization routines have also been enhanced with support for passing a hint as to the number of initial pages to use for the size of the hash table; until now, the hash table always used a fixed value of 10 Mb. The private hash functions of the mountpoint entry cache have been rewritten to use the new htable class with a small memory footprint.

This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.

Release Version 5.0.3

There are no new features in version 5.0.3. This version simply fixes a number of bugs found in version 5.0.2 during the ongoing development process.

Release Version 5.0.2

There are no new features in version 5.0.2. This version simply fixes a number of bugs found in version 5.0.1 during the ongoing development process.

New Features in 5.0.1

This chapter presents the new features that are in the released Bacula version 5.0.1. This version mainly fixes a number of bugs found in version 5.0.0 during the ongoing development process.

Truncate Volume after Purge

The Pool directive ActionOnPurge=Truncate instructs Bacula to truncate the volume when it is purged with the new purge volume action command. It is useful to prevent disk-based volumes from consuming too much space.

Pool {
  Name = Default
  Action On Purge = Truncate
  ...
}

As usual, you can also set this property with the update volume command:

*update volume=xxx ActionOnPurge=Truncate
*update volume=xxx actiononpurge=None

To ask Bacula to truncate your Purged volumes, you need to use the following command in interactive mode or in a RunScript as shown after:

*purge volume action=truncate storage=File allpools
# or by default, action=all
*purge volume action storage=File pool=Default

It is possible to specify the volume name, the media type, the pool, the storage, etc. (see help purge). Be sure that your storage device is idle when you decide to run this command.

Job {
 Name = CatalogBackup
 ...
 RunScript {
   RunsWhen=After
   RunsOnClient=No
   Console = "purge volume action=all allpools storage=File"
 }
}

Important note: This feature doesn’t work as expected in version 5.0.0. Please do not use it before version 5.0.1.

Allow Higher Duplicates

This directive did not work correctly and has been deprecated (disabled) in version 5.0.1. Please remove it from your bacula-dir.conf file, as it will be removed in a future release.

Cancel Lower Level Duplicates

This directive was added in Bacula version 5.0.1. It compares the level of a new backup job to old jobs of the same name, if any, and will cancel the job which has the lower level. If the levels are the same (i.e. both are Full backups), then nothing is done and the other Cancel XXX Duplicate directives will be examined.

New Features in 5.0.0

Maximum Concurrent Jobs for Devices

Maximum Concurrent Jobs is a new Device directive in the Storage Daemon configuration that permits setting the maximum number of Jobs that can run concurrently on a specified Device. Using this directive, it is possible to have different Jobs using multiple drives, because when the Maximum Concurrent Jobs limit is reached, the Storage Daemon will start new Jobs on any other available compatible drive. This facilitates writing to multiple drives with multiple Jobs that all use the same Pool.
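A minimal sketch of such a Device resource in bacula-sd.conf (the names, device path and value are hypothetical):

Device {
  Name = Drive-1
  Archive Device = /dev/nst0
  Media Type = LTO-6
  # Allow at most two Jobs to write to this drive concurrently
  Maximum Concurrent Jobs = 2
}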

This project was funded by Bacula Systems.

Restore from Multiple Storage Daemons

Previously, you were able to restore from multiple devices in a single Storage Daemon. Now, Bacula is able to restore from multiple Storage Daemons. For example, if your full backup runs on a Storage Daemon with an autochanger, and your incremental jobs use another Storage Daemon with lots of disks, Bacula will switch automatically from one Storage Daemon to another within the same Restore job.

You must upgrade your File Daemon to version 3.1.3 or greater to use this feature.

This project was funded by Bacula Systems with the help of Equiinet.

File Deduplication using Base Jobs

A base job is sort of like a Full save except that you will want the FileSet to contain only files that are unlikely to change in the future (i.e. a snapshot of most of your system after installing it). After the base job has been run, when you are doing a Full save, you specify one or more Base jobs to be used. All files that have been backed up in the Base job/jobs but not modified will then be excluded from the backup. During a restore, the Base jobs will be automatically pulled in where necessary.

This is something none of the competition does, as far as we know (except perhaps BackupPC, which is a Perl program that saves to disk only). It is a big win for the user: it makes Bacula stand out as offering a unique optimization that immediately saves time and money. Basically, imagine that you have 100 nearly identical Windows or Linux machines containing the OS and user files. For the OS part, a Base job will be backed up once, and rather than making 100 copies of the OS, there will be only one. If one or more of the systems have some files updated, no problem: they will be automatically restored.

This project was funded by Bacula Systems.

AllowCompression = yesno

This new directive may be added to a Storage resource within the Director's configuration to allow users to selectively disable client compression for any job which writes to this storage resource.

For example:

Storage {
  Name = UltriumTape
  Address = ultrium-tape
  Password = storage_password # Password for Storage Daemon
  Device = Ultrium
  Media Type = LTO 3
  AllowCompression = No # Tape drive has hardware compression
}

The above example would cause any jobs running with the UltriumTape storage resource to run without compression from the client file daemons. This effectively overrides any compression settings defined at the FileSet level.

This feature is probably most useful if you have a tape drive which supports hardware compression. By setting the AllowCompression = No directive for your tape drive storage resource, you can avoid additional load on the file daemon and possibly speed up tape backups.

This project was funded by Collaborative Fusion, Inc.

Accurate Fileset Options

In previous versions, the accurate code used the file creation and modification times to determine if a file was modified or not. Now you can specify which attributes to use (time, size, checksum, permission, owner, group, …), similar to the Verify options.

FileSet {
  Name = Full
  Include {
    Options {
       Accurate = mcs
       Verify   = pin5
    }
    File = /
  }
}
  • i compare the inodes

  • p compare the permission bits

  • n compare the number of links

  • u compare the user id

  • g compare the group id

  • s compare the size

  • a compare the access time

  • m compare the modification time (st_mtime)

  • c compare the change time (st_ctime)

  • d report file size decreases

  • 5 compare the MD5 signature

  • 1 compare the SHA1 signature

Important note: If you decide to use checksum in Accurate jobs, the File Daemon will have to read all files even if they normally would not be saved. This increases the I/O load, but also the accuracy of the deduplication. By default, Bacula will check modification/creation time and size.

This project was funded by Bacula Systems.

Tab-completion for Bconsole

If you build bconsole with readline support, you will be able to use the new auto-completion mode. This mode supports all commands, gives help inside commands, and lists resources when required. It also works in restore mode.

To use this feature, you need the readline development package installed on your system, and you must use the following options when running configure:

./configure --with-readline=/usr/include/readline --disable-conio ...

The new bconsole won’t be able to tab-complete with older directors.

This project was funded by Bacula Systems.

Pool File and Job Retention

We added two new Pool directives, FileRetention and JobRetention, that take precedence over Client directives of the same name. They allow you to control the Catalog pruning algorithm Pool by Pool. For example, you can decide to increase retention times for an Archive or OffSite Pool.

It seems obvious to us, but apparently not to some users: given the definition above, the Pool File and Job Retention periods are a global override for the normal Client-based pruning, which means that when a Job is pruned, the pruning applies globally to that particular Job.

Currently, there is a bug in the implementation that causes any Pool retention periods specified to apply to all Pools for that particular Client. Thus we suggest that you avoid using these two directives until this implementation problem is corrected.

Read-only File Daemon using capabilities

This feature implements support for keeping ReadAll capabilities after a UID/GID switch, which allows the File Daemon to keep root read access while dropping write permission.

It introduces a new bacula-fd option (-k) specifying that ReadAll capabilities should be kept after the UID/GID switch.

root@localhost:~# bacula-fd -k -u nobody -g nobody

The code for this feature was contributed by our friends at AltLinux.

Bvfs API

To help developers of restore GUI interfaces, we have added new dot commands that permit browsing the catalog in a very simple way.

  • .bvfs_update [jobid=x,y,z] This command is required to update the Bvfs cache in the catalog. You need to run it before any access to the Bvfs layer.

  • .bvfs_lsdirs jobid=x,y,z path=/path | pathid=101 This command will list all directories in the specified path or pathid. Using pathid avoids problems with character encoding of path/filenames.

  • .bvfs_lsfiles jobid=x,y,z path=/path | pathid=101 This command will list all files in the specified path or pathid. Using pathid avoids problems with character encoding.

You can use limit=xxx and offset=yyy to limit the amount of data that will be displayed.

* .bvfs_update jobid=1,2
* .bvfs_update
* .bvfs_lsdirs path=/ jobid=1,2

This project was funded by Bacula Systems.

Testing your Tape Drive

To determine the best configuration of your tape drive, you can run the new speed command available in the btape program.

This command can have the following arguments:

file_size=n

Specify the Maximum File Size for this test (between 1 and 5GB). This counter is in GB.

nb_file=n

Specify the number of files to be written. The amount of data should be greater than your memory (file_size * nb_file).

skip_zero

This flag permits skipping tests with constant data.

skip_random

This flag permits skipping tests with random data.

skip_raw

This flag permits skipping tests with raw access.

skip_block

This flag permits skipping tests with Bacula block access.

*speed file_size=3 skip_raw
btape.c:1078 Test with zero data and bacula block structure.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 44.128 MB/s
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 43.531 MB/s

btape.c:1090 Test with random data, should give the minimum throughput.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
+++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 7.271 MB/s
+++++++++++++++++++++++++++++++++++++++++++
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 7.365 MB/s

When using compression, the random test will give you the minimum throughput of your drive. The test using a constant string will give you the maximum speed of your hardware chain (CPU, memory, SCSI card, cable, drive, tape).

You can change the block size in the Storage Daemon configuration file.
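A hedged sketch of where the block size is set, assuming the Maximum Block Size Device directive (the names, device path and value are only examples; tune the value against the speed results):

Device {
  Name = Drive-0
  Archive Device = /dev/nst0
  Media Type = LTO-3
  Maximum Block Size = 262144
}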

New Block Checksum Device Directive

You may now turn off the Block Checksum (CRC32) code that Bacula uses when writing blocks to a Volume. This is done by adding:

Block Checksum = no

Doing so can reduce the Storage daemon CPU usage slightly. It will also permit Bacula to read a Volume that has corrupted data.

The default is yes - i.e. the checksum is computed on write and checked on read.

We do not recommend turning this off, particularly on older tape drives or for disk Volumes, where doing so may allow corrupted data to go undetected.

New Bat Features

Those new features were funded by Bacula Systems.

Media List View

By clicking on Media, you can see the list of all your volumes. You will be able to filter by Pool, Media Type, Location, etc., and sort the results directly in the table. The old Media view is now known as Pool.

../../../_images/RNimage17.png

Media Information View

By double-clicking on a volume (on the Media list, in the Autochanger content or in the Job information panel), you can access a detailed overview of your Volume.

../../../_images/RNimage18.png

Job Information View

By double-clicking on a Job record (on the Job run list or in the Media information panel), you can access a detailed overview of your Job.

../../../_images/RNimage19.png

Autochanger Content View

By double-clicking on a Storage record (on the Storage list panel), you can access a detailed overview of your Autochanger.

../../../_images/RNimage20.png

To use this feature, you need to use the latest mtx-changer script version (With new listall and transfer commands).

Bat on Windows

We have ported bat to Windows and it is now installed by default when the installer is run. It works quite well on Win32, but has not had a lot of testing there, so your feedback would be welcome. Unfortunately, even though it is installed by default, it does not yet work on 64 bit Windows operating systems.

New Win32 Installer

The Win32 installer has been modified in several very important ways.

  • You must deinstall any current version of the Win32 File daemon before upgrading to the new one. If you forget to do so, the new installation will fail. To correct this failure, you must manually shutdown and deinstall the old File daemon.

  • All files (other than menu links) are installed in c:/Program Files/Bacula.

  • The installer no longer sets this file to require administrator privileges by default. If you want to do so, please do it manually using the cacls program. For example:

     cacls "C:\Program Files\Bacula" /T /G SYSTEM:F Administrators:F
    
    The server daemons (Director and Storage daemon) are no longer included in the Windows installer. If you want the Windows servers, you will either need to build them yourself (note they have not been ported to 64 bits), or you can contact Bacula Systems about this.
    

Win64 Installer

We have corrected a number of problems that required manual editing of the conf files. In most cases, it should now install and work. bat is by default installed in c:/Program Files/Bacula/bin32 rather than c:/Program Files/Bacula as is the case with the 32 bit Windows installer.

Linux Bare Metal Recovery USB Key

We have made a number of significant improvements to the Bare Metal Recovery USB key. Please see the README files in the rescue release for more details.

We are working on an equivalent USB key for Windows bare metal recovery, but it will take some time to develop it (best estimate 3Q2010 or 4Q2010).

bconsole Timeout Option

You can now use the -u option of bconsole to set a timeout in seconds for commands. This is useful with GUI programs that use bconsole to interface to the Director.
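A short usage sketch (the configuration path is hypothetical):

# Run bconsole with a 30-second per-command timeout
bconsole -u 30 -c /opt/bacula/etc/bconsole.conf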

Important Changes

  • Migration, Copy, and Virtual Full Jobs are now allowed to read and write in the same Pool. The Storage daemon ensures that you do not read and write to the same Volume.

  • The Device Poll Interval default is now 5 minutes (previously, devices were not polled by default).

  • Virtually all the features of mtx-changer have now been parametrized, which allows you to configure mtx-changer without changing the script itself. There is a new configuration file, mtx-changer.conf, that contains variables you can set to configure mtx-changer. This configuration file will not be overwritten during upgrades. We encourage you to submit any changes that are made to mtx-changer and to parametrize them in mtx-changer.conf, so that all configuration is done by changing only mtx-changer.conf.

  • The new mtx-changer script has two new options, listall and transfer. Please configure them as appropriate in mtx-changer.conf.

  • To enhance security of the BackupCatalog job, we provide a new script (make_catalog_backup.pl) that does not expose your catalog password. If you want to use the new script, you will need to manually change the BackupCatalog Job definition.

  • The bconsole help command now accepts an argument, which if provided produces information on that command (ex: help run).

Truncate volume after purge

Note that the Truncate Volume after purge feature doesn’t work as expected in 5.0.0 version. Please, don’t use it before version 5.0.1.

Custom Catalog queries

If you wish to add specialized commands that list the contents of the catalog, you can do so by adding them to the query.sql file. This query.sql file is now empty by default. The file examples/sample-query.sql has a number of sample commands you might find useful.

Deprecated parts

The following items have been deprecated for a long time, and are now removed from the code.

  • Gnome console

  • Support for SQLite 2

Misc Changes

  • Updated Nagios check_bacula

  • Updated man files

  • Added OSX package generation script in platforms/darwin

  • Added Spanish and Ukrainian Bacula translations

  • Enable/disable command shows only Jobs that can change

  • Added show disabled command to show disabled Jobs

  • Many ACL improvements

  • Added Level to FD status Job output

  • Begin Ingres DB driver (not yet working)

  • Split RedHat spec files into bacula, bat, mtx, and docs

  • Reorganized the manuals (fewer separate manuals)

  • Added lock/unlock order protection in lock manager

  • Allow 64 bit sizes for a number of variables

  • Fixed several deadlocks or potential race conditions in the SD
