New Features in Bacula Enterprise

This chapter presents the new features added in the various versions of Bacula Enterprise. These features are only available with a subscription from Bacula Systems.

Bacula Enterprise 14

Microsoft 365 Plugin Enhancements

The Microsoft 365 Plugin has received a number of new features that complete the coverage of the most important Microsoft 365 service modules. It has also gained enhancements to existing modules that improve the user experience and allow better control over the data to back up, in order to offer the highest level of privacy with respect to the data being backed up.

More details on any of the following subsections can be found in the Microsoft 365 Plugin documentation.

Teams Module

The Teams module adds support for backing up and restoring Microsoft Teams, including:

  • Team Entity

  • Team Settings

  • Team Members and Roles

  • Team Installed Apps

  • Team Channels

  • Team Channel Tabs

  • Team Channel Messages and Replies

  • Team Channel Attachments

  • Team Channel Hosted Contents

As in any other M365 Plugin module, it is possible to restore the data locally to the File Daemon or natively into the M365 service as the original team or as a new one.

In order to enable the Teams module in a M365 Plugin backup, it is necessary to include the teams service in the FileSet's Plugin line and to select the team name using the group directive.

FileSet {
  Name = my-teams-fs
  Include {
    Options {
      ...
    }
    Plugin = "m365: tenant=xxy-xx-xx objectid=yyy-yyy service=teams group=MyTeam"
  }
}

Chat Module

The Chat module adds support to backup and restore Microsoft Chats, including:

  • Chat entity

  • Chat settings

  • Chat installed apps

  • Chat tabs

  • Chat messages and replies

  • Chat hosted contents

As with any other M365 Plugin module, it is possible to restore the data locally to the File Daemon or natively into the M365 service as a new Chat.

In order to enable the Chats module in a M365 Plugin backup, it is necessary to include the chats service in the fileset and to select the associated entities you want to back up (users and/or groups). If no entity is specified, all chats will be included.

FileSet {
  Name = my-chats-fs
  Include {
    Options {
      ...
    }
    Plugin = "m365: tenant=xxxy-x-xx objectid=yy-yy-yyy-yy service=chats"
  }
}

Email Indexing

The email backup module has been improved to allow filtering and querying of the data stored during backups performed with it. The information is now indexed in specific tables of the catalog, where details about emails and attachments are stored.

The layout of these tables is shown below (syntax shown is based on PostgreSQL):

CREATE TABLE MetaEmail
(
    EmailTenant             text,
    EmailOwner              text,
    EmailId                 text,
    EmailTime           timestamp without time zone,
    EmailTags               text,
    EmailSubject            text,
    EmailFolderName         text,
    EmailFrom               text,
    EmailTo                 text,
    EmailCc                 text,
    EmailInternetMessageId  text,
    EmailBodyPreview        text,
    EmailImportance         text,
    EmailConversationId     text,
    EmailIsRead             smallint,
    EmailIsDraft            smallint,
    EmailHasAttachment      smallint,
    EmailSize               integer,
    Plugin                  text,
    FileIndex               int,
    JobId                   int
);

CREATE TABLE MetaAttachment
(
    AttachmentTenant        text,
    AttachmentOwner         text,
    AttachmentName          text,
    AttachmentEmailId       text,
    AttachmentContentType   text,
    AttachmentIsInline      smallint,
    AttachmentSize          int,
    Plugin                  text,
    FileIndex               int,
    JobId                   int
);

A collection of associated new Catalog indexes is also included.
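For instance, a plain SQL query against these tables might look like the following sketch (run directly in PostgreSQL; the owner value and size threshold are illustrative, the identifiers are those defined in the table layouts above):

```sql
-- Hypothetical query: emails with attachments larger than 1 MB
-- for one mailbox, newest first
SELECT e.EmailTime, e.EmailSubject, a.AttachmentName, a.AttachmentSize
  FROM MetaEmail e
  JOIN MetaAttachment a ON a.AttachmentEmailId = e.EmailId
 WHERE e.EmailOwner = 'test@localhost'
   AND a.AttachmentSize > 1048576
 ORDER BY e.EmailTime DESC;
```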

It is possible to use the information in these tables with regular SQL mechanisms (directly from the database engine or through bconsole SQL commands). However, a new bconsole .jlist command is also available, which queries this metadata as in the examples below:

# List all emails:
.jlist metadata type=email tenant="tenantname.microsoftonline.com" owner="test@localhost"

# Sample output:
[{"jobid": 1,"fileindex": 2160,"emailtenant": "tenantname.microsoftonline.com","emailowner"...

# Get emails with attachments
.jlist metadata type=email tenant="xxx.." owner="test@localhost" hasattachment=1

# Will search in bodypreview
.jlist metadata type=email tenant="xxx..." owner="test@localhost" bodypreview=veronica

# Will search specific fields
.jlist metadata type=email tenant="xxx..." owner="tes..." from=eric to=john subject=regress bodypreview=regards

# Will search for all text fields for "eric", return the next page of 512 elements, order by time desc
.jlist metadata type=email tenant="xxx..." owner="tes..." all=eric orderby=time order=desc limit=512 offset=512

# Will search in all text fields for "eric iaculis"
.jlist metadata type=email all="eric iaculis"

# Will search all text fields and apply filters on the time, the size, the read flag and attachment presence
.jlist metadata type=email tenant="xxx..." owner="tes..." all="spam" isread=1 hasattachment=1 mintime="2021-09-23 00:00:00" maxtime="2022-09-23 00:00:00" minsize=100 maxsize=100000
.jlist metadata type=attachment tenant="xxx..." owner="tes..." name="cv.pdf" id=xxxxxxxxxxx minsize=100 maxsize=100000

BWeb Management Console Wizard

A completely new BWeb Management Dashboard is included to significantly simplify the following actions associated with Microsoft 365 Plugin backups:

  • Connect with a new tenant

  • Easily connect using either the ’common’ or the ’standalone’ mode

  • List and manage configured tenants in each File Daemon

  • List and manage logged-in users for delegated authentication features

  • Wizard to add new Microsoft 365 Plugin fileset

Some example screenshots are provided below:

BWeb M365 Management Console: Tenant List

BWeb M365 Management Console: Model Selection

BWeb M365 Management Console: Services

Advanced Email Privacy Filters

Bacula Systems is aware of the many privacy concerns that can arise when tools like our Microsoft 365 Plugin enable the backup and restore of data belonging to large numbers of different users. The backup administrator can restore potentially private data at will. Moreover, emails are often among the most sensitive items in an organization in terms of privacy and security.

One of the many strategies this plugin offers to mitigate this problem is the possibility to exclude messages. This is a very powerful feature where flexible expressions allow you to:

  • select a subset of messages and simply exclude them from the backup with the new email_messages_exclude_expr fileset parameter

  • or exclude them only from the index (the catalog) with the new email_messages_exclude_index_expr fileset parameter

  • not only exclude messages, but also select only a subset of email fields to be included in the protected information: fields can be excluded from the backup with the new email_fields_exclude fileset parameter

  • or excluded only from the index (the catalog) with the new email_fields_exclude_index fileset parameter

  • the fields to exclude are given directly as a comma-separated list in the email_fields_exclude and email_fields_exclude_index parameters.

Then, for email_messages_exclude_expr and email_messages_exclude_index_expr, a valid boolean expression written in the JavaScript language and using those fields is required. Some examples are provided below:

# Expression to exclude messages where subject includes the word 'private'
emailSubject.includes('private')

# Complex expression to exclude messages that are not read and are Draft or their folder name is named Private
!emailIsRead && (emailIsDraft || emailFolderName == 'Private')

# Expression to exclude messages received or sent before a given date
emailTime < Date.parse('2012-11-01')

# Expression to exclude messages using a regex based on emailFrom
/.*private.com/.test(emailFrom)

An expression tester is now included as a new query command, where it is possible to validate the behavior of different expressions against a static, predefined set of data. Check the Microsoft 365 Plugin Whitepaper for more details.

Data Owner Restore Protection

A second solution to enhance the privacy of the data managed by cloud plugins like the Microsoft 365 Plugin is presented in this section.

The Data Owner Restore Protection feature is enabled at configuration time with the parameter owner_restore_protection (please check the Microsoft 365 Plugin documentation for further information). Once it is enabled, any restore operation will request the intervention of the owner of the data. The restore job will be paused and will show a message in the Job log asking to access a Microsoft 365 page and enter a security code. This same information will also be sent by email to the affected user.

If the user does not complete the operation within 15 minutes, the restore will fail and no data will be restored. However, if the user is aware of the operation and wishes to approve it, as soon as they complete the login process the restore will resume and the data will be processed and restored to the configured destination.

TOTP Console Authentication Plugin

The TOTP (Time-based One-Time Password) Authentication Plugin is compatible with RFC 6238. Many smartphone applications are available to store the keys and compute TOTP codes.
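As an aside, the computation defined by RFC 6238 is compact enough to sketch. The Python below illustrates what authenticator apps derive from the shared key; it is illustrative only and is not the plugin's code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code using HMAC-SHA1 (the default of most apps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps elapsed since the Unix epoch
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # -> 94287082
```

Because both sides compute the same code from the shared secret and the current time, the Director only needs to compare the submitted code against its own computation within the time window.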

The standard password, and possibly TLS authentication and encryption, are still used to accept an incoming console connection. Once the connection is accepted, the Console will prompt for a second level of authentication with a TOTP code computed from a shared secret key.

To enable this feature, you need to install the bacula-enterprise-totp-dir-plugin package on your Director system, then set the PluginDirectory directive of the Director resource and configure the AuthenticationPlugin directive of a given restricted Console in the Director configuration file.

# in bacula-dir.conf
Director {
  Name = myname-dir
  ...
  Plugin Directory = /opt/bacula/plugins
}

Console {
   Name = "totpconsole"
   Password = "xxx"

   Authentication Plugin = "totp"
}

The matching Console configuration in bconsole.conf has no extra settings compared to a standard restricted Console.

# in bconsole.conf
Console {
  Name = totpconsole
  Password = "xxx"              # Same as in bacula-dir.conf/Console
}
Director {
  Name = mydir-dir
  Address = localhost
  Password = notused
}

At the first console connection, if the TLS link is correctly set up (using the shared secret key), the plugin will generate a specific random key for the console and display a QR code in the console output. The user must then scan the QR code with their smartphone using an app such as Aegis (open source) or Google Authenticator. The plugin can also be configured to send the QR code via an external program.

Note

The program qrencode (>=4.0) is used to convert the otpauth URL to a QR code. If the program is not installed, the QR code can't be displayed.

More information can be found in Console Multi-Factor Authentication Plugins.

To use the TOTP Authentication plugin with BWeb Management Console, it is required to perform the following steps:

  • Create a system user named admin via the adduser command

  • Assign a password via the passwd command

  • Activate the security option and the system_authentication parameter in the BWeb Management Console / Configuration / BWeb Configuration page

  • Login with the admin user and the password defined earlier

For each user that needs to be added:

  • Access the User administration page in BWeb Management Console / Configuration / Manage Users

  • Add a user username with the TOTP Authentication option of the Authentication parameter

  • Create a TOTP authentication key on the command line with the bacula account with btotp -c -n bweb:username

    The bweb: prefix is a requirement to distinguish between different login targets, namely bconsole without a prefix and BWeb with this one. The username can be freely chosen.

    Tip

    If the btotp command to create the secret is not run under the account the web server runs as, the permissions and ownership of the generated file in the TOTP key storage directory will have to be modified:

    [root@ ~]# ls -al /opt/bacula/etc/conf.d/totp/
    total 8
    drwx------.  2 bacula bacula  53  9. Mar 14:39 .
    drwx------. 10 bacula bacula 128  9. Mar 06:55 ..
    -rw-------.  1 bacula bacula  31  9. Mar 06:55 KNSWG5LSMU
    [root@ ~]# /opt/bacula/bin/btotp -c -n bweb:Newuser
    /opt/bacula/etc/conf.d/totp//MJ3WKYR2JZSXO33VONSXE
    [root@ ~]# ls -al /opt/bacula/etc/conf.d/totp/
    total 12
    drwx------.  2 bacula bacula  82 10. Mar 04:58 .
    drwx------. 10 bacula bacula 128  9. Mar 06:55 ..
    -rw-------.  1 bacula bacula  31  9. Mar 06:55 KNSWG5LSMU
    -rw-------.  1 root   root    31 10. Mar 04:58 MJ3WKYR2JZSXO33VONSXE
    [root@ ~]# chown bacula. /opt/bacula/etc/conf.d/totp/MJ3WKYR2JZSXO33VONSXE
    [root@ ~]# ls -al /opt/bacula/etc/conf.d/totp/
    total 12
    drwx------.  2 bacula bacula  82 10. Mar 04:58 .
    drwx------. 10 bacula bacula 128  9. Mar 06:55 ..
    -rw-------.  1 bacula bacula  31  9. Mar 06:55 KNSWG5LSMU
    -rw-------.  1 bacula disk    31 10. Mar 04:58 MJ3WKYR2JZSXO33VONSXE
    [root@bsys-demo ~]#
    

    For security reasons, it may be best to set up a dedicated management account with sudo rules that allow calling the btotp program as a restricted user so that it executes with proper permissions.

  • Display the TOTP QR Code on the command line with the bacula account with btotp -q -n bweb:username

Note

The program qrencode (>=4.0) is used to convert the otpauth URL to a QR code. If the program is not installed, the QR code can't be displayed.

Tip

It is possible to create additional BWeb users with administrative privileges (“Administrator” profile) and “TOTP Authentication” Password Type. Those users will be able to administer all functions of BWeb (and Bacula through it). At this point you could even disable the admin account created using the adduser command, but please note that system_authentication (enable_system_auth) needs to remain set (in the BWeb configuration) in order for the TOTP authentication to remain functional.

FileDaemon Security Enhancements

Restore and Backup Job User

New FileDaemon directives let the File Daemon control in which user's context Backup and Restore Jobs run. The directive values can be assigned as uid:gid or username:groupname and are applied per configured Director. Backup and Restore jobs will then run as the specified user. If the directive is set for Restore Jobs, it overrides the restore user set with the 'jobuser' and 'jobgroup' arguments of the 'restore' command.

# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  BackupJobUser = 1001:1001
  RestoreJobUser = restoreuser:restoregroup
}

This facility requires that the running File Daemon can change its user context, and is only available on recent Linux systems with proper capabilities set up.
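On Linux, switching the uid/gid of a running process requires the CAP_SETUID and CAP_SETGID capabilities. When the File Daemon runs as a systemd service, granting them could look like the following drop-in sketch (the unit name and file path are assumptions; consult your packaging):

```
# /etc/systemd/system/bacula-fd.service.d/capabilities.conf (illustrative)
[Service]
AmbientCapabilities=CAP_SETUID CAP_SETGID
```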

Allowed Backup and Restore Directories

New FileDaemon directives provide control of which client directories are allowed to be accessed for backup on a per-director basis. Directives can be specified as a comma-separated list of directories. Simple versions of the AllowedBackupDirectories and ExcludedBackupDirectories directives might look as follows:

# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedBackupDirectories = "/path/to/allowed/directory"
}
Director {
  Name = my-other-dir
  ...
  ExcludedBackupDirectories = "/path/to/excluded/directory"

}

These directives work on the FD side and are fully independent of the Include/Exclude sections of the FileSet defined in the Director's configuration file. Nothing is backed up if none of the files defined in the FileSet are inside the FD's allowed directories.

Allowed Restore Directories

This new directive controls which directories the File Daemon can use as a restore destination on a per-director basis. The directive can have a list of directories assigned. A simple version of the AllowedRestoreDirectories directive can look like this:

# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedRestoreDirectories = "/path/to/directory"

}

Allowed Script Directories

This File Daemon configuration directive controls from which directories the Director can execute client scripts and programs (e.g. using the Runscript feature or with a FileSet's 'File=' directive). The directive can have a list of directories assigned. A simple version of the AllowedScriptDirectories directive could be:

# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedScriptDirectories = "/path/to/directory"

}

When this directive is set, the File Daemon also checks the programs to be run against a set of forbidden characters.

When the following resource is defined inside the Director’s config file, Bacula won’t back up any file for the Fileset:

FileSet {
  Name = "Fileset_1"
  Include {
     File = "\\|/path/to/binary &"
  }
}

This is because of the ’&’ character, which is not allowed when the Allowed Script Directories is used on the Client side.

This is the full list of disallowed characters:

$ ! ; \ & < > ` ( )

To disable all commands sent by the Director, it is possible to use the following configuration in the File Daemon configuration:

AllowedScriptDirectories = none

Security Plugin

The Bacula Enterprise FileDaemon Security Plugin Framework can be used to produce security reports during Backup jobs. This version of the security plugin is only compatible with Linux/Unix systems and is delivered with a set of rules that can detect potential misconfiguration of the Director and of Bacula files and directories.

To enable the plugin, you need to install the bacula-enterprise-security-plugin package on your Client and configure the Plugin Directory directive in the FileDaemon resource.

# in bacula-fd.conf
FileDaemon {
  Name = myname-fd
  ...
  Plugin Directory = /opt/bacula/plugins
  ...
  Plugin Options = "security: interval=2days"
}

The plugin will automatically run once a day and will create a security report available in the Catalog. If a serious security issue is detected on the server, a message will be printed in the Job log and a security event will be created. To configure the minimum interval between runs, the PluginOptions directive in the FileDaemon resource can be used; the security plugin has an interval parameter that accepts a time duration.

The security report produced by the plugin is accessible via the bconsole list command:

* list restoreobjects jobid=1 objecttype=security
+-------+-----------------+-----------------+------------+------------+--------------+
| jobid | restoreobjectid | objectname      | pluginname | objecttype | objectlength |
+-------+-----------------+-----------------+------------+------------+--------------+
|     1 |               1 | security-report | security:  |         30 |          696 |
+-------+-----------------+-----------------+------------+------------+--------------+

* list restoreobjects jobid=1 objecttype=security id=1
{"data":[{"source":"bacula-basic","version":1,"error":0,"events":[{"message":"Permissions on ..

Proxmox and QEMU Incremental Backup Plugin

The new QEMU plugin can back up QEMU hypervisors using the QMP transaction feature and dump disks with the QMP API. The QEMU plugin can be used to handle Proxmox QEMU virtual machines for Full and Incremental backups.

More information can be found in the QEMU Plugin user’s guide.

FreeSpace Storage Daemon Policy

A new Storage Group policy, FreeSpace, has been introduced. It queries each Storage Daemon in the list for its free space and sorts the list by the values returned, so that the first item in the list is the SD with the largest amount of free space and the last one is the SD with the least free space available. For an Autochanger with many devices pointing to the same mountpoint, the size of only a single device is taken into consideration for the FreeSpace policy.

The policy can be used in the same manner as the other ones:

Pool {
   ...
   Storage = File1, File2, File3
   StorageGroupPolicy = FreeSpace
   ...
}

Antivirus Plugin

The FileDaemon Antivirus plugin provides integration between the ClamAV Antivirus daemon and Bacula Verify Jobs, allowing post-backup virus detection within Bacula Enterprise.

More information can be found in the Antivirus Plugin user’s guide.

Volume Protection

Warning

This feature is only for file-based devices.

This feature can only be used when Bacula runs as a systemd service, because only then, with proper capabilities set for the daemon, is it allowed to manage the volume files' attributes.
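On Linux, setting the Append Only and Immutable file attributes requires the CAP_LINUX_IMMUTABLE capability. A systemd drop-in for the Storage Daemon might grant it as sketched below (the unit name and file path are assumptions; consult your packaging):

```
# /etc/systemd/system/bacula-sd.service.d/capabilities.conf (illustrative)
[Service]
AmbientCapabilities=CAP_LINUX_IMMUTABLE
```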

For file-based volumes, Bacula will set the Append Only attribute during the first backup job that uses a new volume. This prevents volumes from losing data by being overwritten.

The Append Only file attribute is cleared when the volume is being relabeled.

Bacula is now also able to set the Immutable file attribute on a file volume which is marked as Full.

When a volume is Full and has the Immutable flag set, it cannot be relabeled and reused until the expiration period elapses. This helps to protect volumes from being reused too early, according to the protection period set.

If the volume's filesystem does not support the Append Only or Immutable flags, a warning message is printed in the job log and Bacula proceeds with the usual backup workflow.

There are three new directives available on a per-device basis to control the Volume Protection behavior:

SetVolumeAppendOnly

Determines if Bacula should set the Append_Only attribute when writing on the volume for the first time.

SetVolumeImmutable

Determines if Bacula should set the Immutable attribute when marking volume as Full.

MinimumVolumeProtectionTime

Specifies how much time has to elapse before Bacula is able to clear the attribute.
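Putting the three directives together, a file-based Device in bacula-sd.conf might be configured as follows (the resource values and the "30 days" duration are illustrative):

```
# in bacula-sd.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /opt/bacula/volumes
  SetVolumeAppendOnly = yes
  SetVolumeImmutable = yes
  MinimumVolumeProtectionTime = 30 days
}
```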

Nutanix Filer Plugin

The Nutanix Incremental Accelerator plugin is designed to simplify and optimize the backup and restore performance of your Nutanix NAS hosting a large number of files.

When using the plugin for Incremental backups, Bacula Enterprise will query the Nutanix REST API for a previous backup snapshot, then quickly determine a list of all files modified since the last backup instead of having to walk recursively through the entire filesystem. Once Bacula has the backup list, it will use a standard NFS or CIFS network share to access the files.

The Nutanix HFC documentation provides information about this new plugin.

BWeb Management Console Enhancements

The BWeb Management Console menu organisation has been improved. The Job Administration and Bacula Configuration parts are now accessible via a single menu. All wizards for common administration tasks are now grouped in a main location accessible via a floating button on all pages.

ZStandard FileSet Compression Option

The ZSTD compression algorithm is now available in the FileSet Options directive Compression. It is possible to configure ZSTD level 1 (zstd1), level 10 (zstd10) and level 19 (zstd19). The default zstd compression level is 10.
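For example, a FileSet enabling ZSTD level 10 could look like the following sketch (the FileSet name and path are illustrative):

```
FileSet {
  Name = zstd-fs
  Include {
    Options {
      Signature = MD5
      Compression = zstd10
    }
    File = /data
  }
}
```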

Call Home Plugin

Note

The Call Home Plugin needs further work on the service side at Bacula Systems. As a consequence, the features described below are not yet functional.

The callhome Director plugin can be used to automatically check if your contract with Bacula Systems is correct and if the system is compliant with your subscription.

To use this option, the bacula-enterprise-callhome-dir-plugin package must be installed on the Director's system, the PluginDirectory directive in the Director resource of bacula-dir.conf must be set to /opt/bacula/plugins, and the CustomerId directive in the Director resource must be set to your CustomerId, available in the Welcome package that you have received.

Director {
  Name = mydir-dir
  ...
  CustomerId = mycustomerid_found_in_the_welcome_package
  PluginDirectory = /opt/bacula/plugins
}

At a regular interval, the Director will contact the Bacula Systems server www.baculasystems.com (94.103.98.75) on the SSL port 443 to check the contract status based on the CustomerId. Bacula Systems will analyze and collect some information in this process:

  • The number of Clients

  • The number of Jobs in the catalog

  • The name of the Director

  • The Bacula version

  • The Uname of the Director platform

  • The list of the plugins installed

  • The size of all Jobs

  • The size of all volumes

RunScript Enhancements

A new Director RunScript RunsWhen keyword, AtJobCompletion, has been implemented. It runs the command at the end of the job and can update the job status if the command fails.

Job {
    ...
  Runscript {
    RunsOnClient = no
    RunsWhen = AtJobCompletion
    Command = "mail command"
    AbortJobOnError = yes
  }
}

This directive has been added because the RunsWhen keyword After was not designed to update the job status if the command fails.

Miscellaneous

  • Amazon Cloud Driver

    A new Amazon Cloud Driver is available for beta testing. In the long term, it will enhance and replace the existing S3 cloud driver. The aws tool provided by Amazon is needed to use this cloud driver. The Amazon cloud driver is available within the bacula-enterprise-cloud-storage-s3 package.

  • Bacula Enterprise Installation Manager Enhancements

  • Swift Plugin Keystone v3 Authentication Support

  • Metadata Catalog Support

  • Plugins List in the Catalog

    The list of the installed Plugins is now stored in the Client catalog table.

  • JSON Output

    The console has been improved to support JSON output when listing catalog objects and various daemon outputs. The new ".jlist" command is a shortcut for the standard "list" command and displays the results as a JSON table. All options and filters of the "list" command can be used with the ".jlist" command. Only catalog objects are listed with the "list" or ".jlist" commands; resources such as Schedules, FileSets, etc. are not handled by the "list" command.

    See the “help list” bconsole output for more information about the “list” command. The Bacula configuration can be displayed in JSON format with the standard “bdirjson”, “bsdjson”, “bfdjson” and “bbconsjson” tools.

*.jlist jobs
{"type": "jobs", "data":[{"jobid": 1,"job": "CopyJobSave.2021-10-04_18.35.55_03",...
*.api 2 api_opts=j
*.status dir header
{"header":{"name":"127.0.0.1-dir","version":"12.8.2 (09 September 2021)"...

Bacula Enterprise 12.8.2

New Accurate Option to Save Only File’s Metadata

The new ’o’ Accurate directive option for a Fileset has been added to save only the metadata of a file when possible.

The new ’o’ option should be used in conjunction with one of the signature checking options (1, 2, 3, or 5). When the ’o’ option is specified, the signature is computed only for files that have one of the other accurate options specified triggering a backup of the file (for example an inode change, a permission change, etc…).

In cases where only the file's metadata has changed (i.e. the signature is identical), only the file's attributes will be backed up. If the file's data has changed (hence a different signature), the file will be backed up in the usual way (the attributes as well as the file's contents will be saved on the volume).

For example:

Job {
  Name = JobTest
  JobDefs = DefaultJob
  FileSet = TestFS
  Accurate = yes
}

FileSet {
  Name = TestFS
  Options {
    Signature = MD5
    Accurate = pino5
  }
  File = /data
}

The backup job will compare permission bits, inodes and number of links, and if any of them changes it will also compute the file's signature to verify whether only the metadata must be backed up or the full file must be saved.

Bacula Enterprise 12.8.0

Microsoft 365 Plugin

Microsoft 365 is a cloud-based software solution offered by Microsoft as a service. It is intended to be used by customers who want to externalize their businesses services like email, collaboration, video conferencing, file sharing, and others.

The Bacula Systems M365 Plugin is designed to handle the following pieces of the Microsoft 365 ecosystem:

  • Granular Exchange Online Mailboxes (BETA [1]_)

  • OneDrive for Business and Sharepoint Document libraries

  • Sharepoint Sites

  • Contacts/People

  • Calendars

  • Events

The Plugin has many advanced features, including:

  • Microsoft Graph API based backups

  • Multi-service backup in the same task

  • Multi-service parallelization capabilities

  • Multi-thread single service processes

  • Generation of user-friendly report for restore operations

  • Network resiliency mechanisms

  • Latest Microsoft Authentication mechanisms

  • Discovery/List/Query capabilities

  • Restore objects to Microsoft 365 (to original entity or to any other entity)

  • Restore any object to filesystem

  • Incremental and Differential backup level

Please see the Bacula Enterprise M365 Plugin whitepaper for more information.

Storage Group

It is now possible to access more than one Storage resource for each Job/Pool. Storage can be specified as a comma separated list of Storage resources to use.

Along with specifying a storage list, it is now possible to specify a Storage Group Policy that will be used for accessing the list elements. If no policy is specified, Bacula always tries to take the first available Storage from the list. If some storage daemons are unavailable (broken, or unreachable due to network problems or for some other reason), Bacula will take the first one from the list (sorted according to the policy used) which is reachable over the network and healthy.

Currently supported policies are:

ListedOrder - This is the default policy, which uses the first available storage from the list provided

LeastUsed - This policy scans all storage daemons from the list and chooses the one with the least number of jobs currently running

Storage Groups can be used as follows (as a part of Job and Pool resources):

Job {
    ...
    Storage = File1, File2, File3
    ...
}
Pool {
    ...
    Storage = File4, File5, File6
    StorageGroupPolicy = LeastUsed
    ...
}

When a Job or Pool with a Storage Group is used, the user can observe some messages related to the choice of Storage, such as:

messages
31-maj 19:23 VBox-dir JobId 1: Start Backup JobId 1, Job=StoreGroupJob.2021-05-31_19.23.36_03
31-maj 19:23 VBox-dir JobId 1: Possible storage choices: "File1, File2"
31-maj 19:23 VBox-dir JobId 1: Storage daemon "File1" didn’t accept Device "FileChgr1-Dev1" becaus
31-maj 19:23 VBox-dir JobId 1: Selected storage: File2, device: FileChgr2-Dev1, StorageGroupPolic

Hyper-V VSS Single Item Restore

It is now possible to restore individual files from Hyper-V VSS Virtual Machine backups. The Hyper-V Single File Restore whitepaper provides information about it.

New Hyper-V Plugin

Hyper-V implements a VSS writer on all versions of Windows Server where Hyper-V is supported. This VSS writer allows developers to utilize the existing VSS infrastructure to backup virtual machines to Bacula using the Bacula Enterprise VSS Plugins.

Starting with Windows Server 2016, Hyper-V also supports backup through the Hyper-V WMI API. This approach still utilizes VSS inside the virtual machine for backup purposes, but no longer uses VSS in the host operating system. It allows individual guest VMs to be backed up separately and incrementally. This approach is more scalable than using VSS in the host; however, it is only available on Windows Server 2016 and later. The new Bacula Enterprise Hyper-V Plugin "hv" uses this technology for backup and restore to/from Bacula.

The Microsoft Hyper-V whitepaper provides more information.

VMware vSphere Plugin Enhancements

vSphere Permissions

The vsphere-ctl command can now check the permissions of the current user on the vCenter system and diagnose issues if any are detected.

/opt/bacula/bin/vsphere-ctl query list_missing_permissions

Configuration

It is now possible to manage the vsphere_global.conf parameter file with the vsphere-ctl config * command.

vsphere-ctl config create - creates an entry inside vsphere_global.conf

vsphere-ctl config delete - deletes an entry inside vsphere_global.conf

vsphere-ctl config list - lists all entries inside vsphere_global.conf

/opt/bacula/bin/vsphere-ctl config create
[root@localhost bin]# ./vsphere-ctl config create
Enter url: 192.168.0.15
Enter user: administrator@vsphere.local
Enter password:
Connecting to "https://192.168.0.15/sdk"...
OK: successful connection
Select an ESXi host to backup:
    1) 192.168.0.8
    2) 192.168.0.26
Select host: 1
Computing thumbprint of host "192.168.0.8"
OK: thumbprint for "192.168.0.8" is 04:24:24:13:3C:AD:63:84:A1:9F:E5:14:82
OK: added entry [192_168_0_8] to ../etc/vsphere_global.conf

Instant Recovery

The VMWare Instant Recovery feature has been enhanced to handle errors during migration and the NFS Datastore creation more effectively. The cleanup procedure has also been reviewed.

Backup and Restore

During the Backup or the Restore process, the thumbprint of the target ESXi Host system is now verified and a clear message is printed if the expected thumbprint is incorrect.

Restore

Support for SATA disk controllers has been added.

BWeb Management Suite

Event Dashboard

BWeb Management Suite has a new dashboard to browse Bacula events stored in the Bacula Catalog (see Event and Auditing).

../_images/bweb-event-dashboard.png

OpenShift Plugin

The Bacula Enterprise OpenShift Plugin is now certified and available directly from the Redhat OpenShift system.

Please see the OpenShift whitepaper for more information.

Bacula Enterprise Ansible Collection

Ansible Collections are a new and flexible standard to distribute content like playbooks and roles. This new format helps to easily distribute and automate your environment. These pre-packaged collections can also be modified to meet the needs of your environment, especially by using templates and variables.

Our Bacula Enterprise Ansible Collection will help you to easily deploy Directors, Clients, and Storages in your environment. Since Bacula Enterprise version 12.6.4, a new option was introduced to the BWeb configuration split script to allow the configuration to be “re-split” when deploying new resources with the Bacula Enterprise Ansible Collection playbooks.

Our collection will create configuration files that can be integrated into your current BWeb configuration by using the tests/re-split-configuration.yml playbook provided in the collection. This is useful when BWeb is being used to manage your Bacula Enterprise environment.

We strongly recommend using the BWeb configuration split script if you use the Bacula Enterprise Ansible Collection to deploy new Clients and Storages and your Bacula Enterprise environment uses BWeb to manage configuration files, because it will check that all the resources being added to the current BWeb structure are correctly defined.

Bacula Enterprise plugins can also be deployed using the Ansible Collection. Please adapt the templates provided to take advantage of the specific configuration needs of your environment.

More information about Ansible Galaxy Collections may be found in a blog post called “Getting Started With Ansible Content Collections” available on the official Ansible website here: https://www.ansible.com/blog/getting-started-with-ansible-collections

The Bacula Enterprise Ansible Collection is publicly available in Ansible Galaxy:

https://galaxy.ansible.com/baculasystems/bacula_enterprise

Misc

Plugin Object Status Support

The Object table now has an ObjectStatus field that can be used by plugins to report more precise information about the backup of an Object generated by a plugin.

SAP HANA 1.50 Support

The Bacula Enterprise SAP HANA Plugin is now certified with the SAP HANA 1.50 protocol version (SAP HANA 2 SP5).

Network Buffer Management

The new SDPacketCheck FileDaemon directive can be used to control the network flow in some specific use cases.

See SDPacketCheck directive in the client configuration for more information.
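As an illustrative sketch only (the value and its semantics here are assumptions; consult the SDPacketCheck directive documentation for the actual syntax), the directive is set in the FileDaemon resource:

```
FileDaemon {
  Name = myclient-fd
  ...
  # Hypothetical value; check the directive reference for valid settings
  SD Packet Check = 10
}
```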

IBM Lintape Driver (BETA)

The new Use Lintape Storage Daemon directive has been added to support the Lintape Kernel driver.

See Use LinTape directive in the Storage Daemon Device{} resource for more information.
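For illustration, the directive would be added to a tape Device resource in bacula-sd.conf; the device name and path below are assumptions, not tested values:

```
Device {
  Name = LTO-Drive
  Media Type = LTO
  # Device node provided by the IBM lin_tape kernel driver (example path)
  Archive Device = /dev/IBMtape0n
  Use Lintape = yes
  ...
}
```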

Bacula Enterprise 12.6.0

VMware Instant Recovery

It is now possible to recover a vSphere Virtual Machine in a matter of minutes by running it directly from a Bacula Volume.

Any changes made to the Virtual Machine's disks are virtual and temporary: the backed-up disk data remains in a read-only state, so users may write to the Virtual Machine's disks without fear of corrupting their backups. Once the Virtual Machine is started, it is possible to migrate the temporary Virtual Machine to a production datastore via VMotion.

Please see the Single Item Restore whitepaper and the vSphere Plugin whitepaper for more information.

New Features in BWeb Management Suite

New FileSet Editing Window

A new FileSet editing window is available. It is now possible to configure the different plugins with dynamic controls within BWeb.

../_images/bweb-newfileset.png

Tag Support

BWeb now supports user-defined tags. It is possible to assign tags to various catalog records (such as Jobs, Clients, Objects, and Volumes).

../_images/126-tagmenu.png
../_images/126-tagtable.png

Virtual Machine Dashboard

A new Virtual Machine dashboard is available in the Job / Virtual Machines menu. This dashboard lists all Virtual Machines, and it is possible to back up or restore any of them directly from this new interface.

../_images/bweb-vm-dashboard.png

VSS Plugin Enhancements

The VSS Plugin has been improved to automatically detect the volumes to include in the Snapshot Set, depending on the Writers and the Components that are included or excluded during the Backup job. The alldrives plugin or the use of a dummy file is no longer needed.

See the VSS whitepaper for more information.

Hyper-V Cluster Support

The new VSS Plugin supports the Hyper-V Cluster mode using Cluster Shared Volumes (CSV).

See the VSS whitepaper for more information.

Windows Cluster Volume Support

The Bacula FileDaemon now supports the Cluster Shared Volumes (CSV) natively. Note that due to a Microsoft restriction with the Snapshot Sets, it is not possible to mix standard volumes with CSV volumes within a single Job.
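For illustration, a FileSet backing up data on a Cluster Shared Volume might look like the sketch below; the path is an assumption based on the default C:\ClusterStorage mount point:

```
FileSet {
  Name = csv-fs
  Include {
    Options {
      Signature = MD5
    }
    # CSV volumes are mounted under C:\ClusterStorage by default
    File = "C:/ClusterStorage/Volume1"
  }
}
```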

External LDAP Console Authentication

The new Bacula Plugable Authentication Module (BPAM) API framework introduced in Bacula Enterprise 12.6 comes with the first plugin which handles user authentication against any LDAP Directory Server (including OpenLDAP and Active Directory).

# bconsole
*status director
...
Plugin: ldap-dir.so
...

When the LDAP plugin is loaded, you can configure a named Console resource to use LDAP to authenticate users. BConsole will prompt for a User and Password, which will be verified by the Director. TLS PSK (activated by default) is recommended. To use this plugin, you must set the PluginDirectory directive in the Director resource and add the new Authentication Plugin directive to a Console resource, as shown below:

Director {
    ...
    Plugin Directory = /opt/bacula/plugins
    }

Console {

    Name = "ldapconsole"
    Password = "xxx"

    # New directive
    Authentication Plugin = "ldap:<parameters>"
    ...
    }

where parameters are the space separated list of one or more plugin parameters:

url - LDAP Directory service location, e.g. "url=ldap://10.0.0.1/"

binddn - DN used to connect to the LDAP Directory service to perform the required query

bindpass - Password for the binddn used to connect to the LDAP Directory service

query - A query performed on the LDAP Directory service to find the user for authentication. The query string is composed as <basedn>/<filter>, where <basedn> is the DN search starting point and <filter> is a standard LDAP search filter which supports dynamic string substitution: %u will be replaced by the credential's username and %p by the credential's password, e.g. query=dc=bacula,dc=com/(cn=%u).

starttls - Instructs the BPAM LDAP Plugin to use the StartTLS extension if the LDAP Directory service supports it, falling back to no TLS if this extension is not available.

starttlsforce - Same as starttls, but reports an error instead of falling back to no TLS.

Working configuration examples:

bacula-dir.conf - Console resource configuration for BPAM LDAP Plugin with OpenLDAP authentication example.

Console {

    Name = "bacula_ldap_console"
    Password = "xxx"

    # New directive (on a single line)
    Authentication Plugin = "ldap:url=ldap://ldapsrv/ binddn=cn=root,dc=bacula,dc=com bindpass=secret query=dc=bacula,dc=com/(cn=%u) starttls"
    ...
    }

bacula-dir.conf - Console resource configuration for BPAM LDAP Plugin with Active Directory authentication example.

Console {

    Name = "bacula_ad_console"
    Password = "xxx"

    # New directive (on a single line)
    Authentication Plugin ="ldap:url=ldaps://ldapsrv/ binddn=cn=bacula,ou=Users,dc=bacula,dc=com bindpass=secret query=dc=bacula,dc=com/(&(objectCategory=person)(objectClass=user)(sAMAccountName=%u))"
    ...
    }

Plugin Objects

In Bacula Enterprise 12.6.0, File Daemon plugins generate Objects recorded in the Catalog, making it easy to find and restore plugin Objects such as databases or virtual machines. The Objects are easy to list, count, and manage, and can be restored without knowing any details about the Job, the Client, or the FileSet. Each plugin can create multiple Objects of a specific type.

As of now, the following plugins support Object Management:

  • PostgreSQL (in dump mode)

  • MySQL (in dump mode)

  • MSSQL VDI

  • vSphere

  • VSS Hyper-V

  • Xenserver

  • Proxmox

*list objects
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
+----------+-------+-----------------+------------+------------+
| objectid | jobid | objectcategory  | objecttype | objectname |
+----------+-------+-----------------+------------+------------+
|        1 |     1 | Database        | PostgreSQL | postgres   |
|        2 |     1 | Database        | PostgreSQL | template1  |
|        3 |     1 | Virtual Machine | VMWare     | VM_1       |
+----------+-------+-----------------+------------+------------+

*list objects category="Database"
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
+----------+-------+----------------+------------+------------+
| objectid | jobid | objectcategory | objecttype | objectname |
+----------+-------+----------------+------------+------------+
|        2 |     1 | Database       | PostgreSQL | template1  |
|        4 |     1 | Database       | PostgreSQL | database1  |
+----------+-------+----------------+------------+------------+

Objects can be easily deleted:

*delete
In general it is not a good idea to delete either a
Pool or a Volume since they may contain data.
You have the following choices:
1: volume
2: pool
3: jobid
4: snapshot
5: client
6: tag
7: object
Choose catalog item to delete (1-7): 7
Enter ObjectId to delete: 1

It is also possible to delete specified groups of objects:

*delete object objectid=2,3-7,9

There is a new item in the restore menu to restore Objects easily:

*restore objectid=2
    OR
*restore
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
    1: List last 20 Jobs run
    2: List Jobs where a given File is saved
    ...
    11: Enter a list of directories to restore for found JobIds
    12: Select full restore to a specified Job date
    13: Select object to restore
    <-----------
    14: Cancel
Select item: (1-14): 13
List of the Object Types:
    1: PostgreSQL Database
    2: VMWare Virtual Machine
Select item: (1-2): 1
Automatically selected : database1
Objects available:
+----------+------------+--------------+-------------------+---------------------+------------+
| objectid | objectname | client       | objectsource      | starttime           | objectsize |
+----------+------------+--------------+-------------------+---------------------+------------+
|        2 | template1  | 127.0.0.1-fd | PostgreSQL Plugin | 2020-10-15 13:10:15 |      10240 |
|        4 | database1  | 127.0.0.1-fd | PostgreSQL Plugin | 2020-10-15 13:10:17 |      10240 |
+----------+------------+--------------+-------------------+---------------------+------------+

Enter ID of Object to be restored: 2
Automatically selected Client: 127.0.0.1-fd
Bootstrap records written to /opt/bacula/working/127.0.0.1-dir.restore.1.bsr

The Job will require the following (*=>InChanger):
   Volume(s)                Storage(s)               SD Device(s)
===========================================================================
   TestVolume001            File1

Volumes marked with "*" are in the Autochanger.

1 file selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName:         RestoreFiles
...
Catalog:         MyCatalog
Priority:        10
Plugin Options:  *None*
OK to run? (yes/mod/no): yes
Job queued. JobId=5

Objects can easily be managed from various BWeb Management Suite screens. (See objectbweb).

Support for MariaDB 10 in the MySQL Plugin’s Binary Backup Mode

Starting with MariaDB 10, the MariaDB team has introduced new backup tools based on the Percona backup tools. The MySQL FileDaemon Plugin can now dynamically determine which backup tool to use during a binary backup.

Tag Support

It is now possible to assign custom Tags to various catalog records in Bacula such as:

  • Volume

  • Client

  • Job

*tag
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Available Tag operations:
    1: Add
    2: Delete
    3: List
Select Tag operation (1-3): 1
Available Tag target:
    1: Client
    2: Job
    3: Volume
Select Tag target (1-3): 1
The defined Client resources are:
    1: 127.0.0.1-fd
    2: test1-fd
    3: test2-fd
    4: test-rst-fd
    5: test-bkp-fd
Select Client (File daemon) resource (1-5): 1
Enter the Tag value: test1

1000 Tag added
*tag add client=127.0.0.1-fd name=#important
1000 Tag added
*tag list client
+--------------+----------+--------------+
| tag          | clientid | client       |
+--------------+----------+--------------+
| #tagviamenu3 |        1 | 127.0.0.1-fd |
| test1        |        1 | 127.0.0.1-fd |
| #tagviamenu2 |        1 | 127.0.0.1-fd |
| #tagviamenu1 |        1 | 127.0.0.1-fd |
| #important   |        1 | 127.0.0.1-fd |
+--------------+----------+--------------+

*tag list client name=#important
+----------+--------------+
| clientid | client       |
+----------+--------------+
|        1 | 127.0.0.1-fd |
+----------+--------------+

It is possible to assign Tags to a Job record with the new Tag directive in a Job resource.

Job {
    Name = backup
    ...
    Tag = "#important", "#production"
}

+-------------+-------+--------+
| tag         | jobid | job    |
+-------------+-------+--------+
| #important  |     2 | backup |
| #production |     2 | backup |
+-------------+-------+--------+

The Tags are also accessible from various BWeb Management Suite screens. (See tagbweb).

Support for SHA256 and SHA512 in FileSet

The support for strong signature algorithms SHA256 and SHA512 has been added to Verify Jobs. It is now possible to check if data generated by a Job that uses an SHA256 or SHA512 signature is valid.

FileSet {
    Options {
        Signature = SHA512
        Verify = pins3
    }
    File = /etc
}

In the FileSet Verify option directive, the following codes have been added:

2 - for SHA256

3 - for SHA512
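A Verify Job can then be used to check the stored data against the recorded signatures. The sketch below is illustrative only; the resource names (Client, FileSet, Storage) are assumptions for this example:

```
Job {
  Name = VerifyBackup
  Type = Verify
  Level = Data
  # Hypothetical resource names; adapt to your configuration
  Client = myclient-fd
  FileSet = my-sha512-fs
  Storage = File1
  Pool = Default
  Messages = Standard
}
```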

Support for MySQL Cluster Bacula Catalog

The Bacula Director Catalog service can now use MySQL in cluster mode with the replication option sql_require_primary_key=ON. The support is dynamically activated.

Support of Windows Operating System in bee_installation_manager

Bacula Enterprise can now be installed in a very simple and straightforward way with the bee_installation_manager procedure on the Windows operating system. The program will use the Customer Download Area information to help users install Bacula Enterprise in just a few seconds.

The procedure to install Bacula Enterprise on Windows can now be automated as follows:

# wget https://baculasystems.com/ml/bee_installation_manager.exe
# bee_installation_manager.exe

Please see the Bacula Enterprise Installation Manager whitepaper for more information.

Windows Installer Silent Mode Enhancement

The following command line options can be used to control the regular Bacula installer values in silent mode:

  • -ConfigClientName

  • -ConfigClientPort

  • -ConfigClientPassword

  • -ConfigClientMaxJobs

  • -ConfigClientInstallService

  • -ConfigClientStartService

  • -ConfigStorageName

  • -ConfigStoragePort

  • -ConfigStorageMaxJobs

  • -ConfigStoragePassword

  • -ConfigStorageInstallService

  • -ConfigStorageStartService

  • -ConfigDirectorName

  • -ConfigDirectorPort

  • -ConfigDirectorMaxJobs

  • -ConfigDirectorPassword

  • -ConfigDirectorDB

  • -ConfigDirectorInstallService

  • -ConfigDirectorStartService

  • -ConfigMonitorName

  • -ConfigMonitorPassword

The following options control the installed components:

  • -ComponentFile

  • -ComponentStorage

  • -ComponentTextConsole

  • -ComponentBatConsole

  • -ComponentTrayMonitor

  • -ComponentAllDrivesPlugin

  • -ComponentWinBMRPlugin

  • -ComponentCDPPlugin

Example

bacula-enterprise-win64-12.4.0.exe /S -ComponentFile -ConfigClientName foo -ConfigClientPassword bar

This will install only the File Daemon, with bacula-fd.conf configured.

bacula-enterprise-win64-12.4.0.exe /S -ComponentStorage -ComponentFile
    -ConfigClientName foo -ConfigClientPassword bar
    -ConfigStorageName foo2 -ConfigStoragePassword bar2

This will install the Storage Daemon and the File Daemon, with bacula-sd.conf and bacula-fd.conf configured.

New Global Endpoint Deduplication Storage System (BETA)

A new Dedup engine comes with a new storage format for the data on disk. The new format keeps the data of a backup grouped together, which significantly increases the speed of both backup and restore operations. The new dedup vacuum command integrates a procedure that compacts scattered data in order to clear large, contiguous areas for new data and to reduce fragmentation. Please contact the Bacula Systems Customer Success team if you are interested in joining the beta program.

Bacula Enterprise 12.4.1

New Message Identification Format

We are starting to add unique message identifiers to each message (other than debug messages and the Job report) that Bacula prints. At the current time, only two files in the Storage Daemon have these message identifiers; over time, with subsequent releases, we will modify all messages.

The message identifier will be kept unique for each message, and once assigned to a message it will not change even if the text of the message changes. This means that the message identifier will be the same no matter what language the text is displayed in, and, more importantly, it will allow us to publish listings of the messages with, in some cases, additional explanations or instructions on how to correct the problem. All this will take several years, since it is a lot of work and requires some new programs, not yet written, to manage these message identifiers.

The format of the message identifier is:

[AAnnnn]

where A is an upper case character and nnnn is a four digit number, where the first character indicates the software component (daemon); the second letter indicates the severity, and the number is unique for a given component and severity.

For example:

[SF0001]

The first character representing the component at the current time one of the following:

S     Storage daemon
D     Director
F     File daemon

The second character representing the severity or level can be:

A       Abort
F       Fatal
E       Error
W       Warning
S       Security
I       Info
D       Debug
O       OK (i.e. operation completed normally)

So in the example above, [SF0001] indicates that it is a message id (because of the brackets and because it appears at the beginning of the message) and that it was generated by the Storage daemon as a fatal error. As mentioned above, it will take some time to implement these message ids everywhere, and over time we may add more component letters and severity levels as needed.
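For illustration only (this helper is not part of Bacula; the function name is hypothetical), the identifier format described above can be decoded mechanically:

```python
import re

# Decode the [AAnnnn] identifier: first letter = component (daemon),
# second letter = severity, then a four-digit number unique per pair.
MSG_ID = re.compile(r"^\[([SDF])([AFEWSIDO])(\d{4})\]")

COMPONENTS = {"S": "Storage daemon", "D": "Director", "F": "File daemon"}
SEVERITIES = {"A": "Abort", "F": "Fatal", "E": "Error", "W": "Warning",
              "S": "Security", "I": "Info", "D": "Debug", "O": "OK"}

def parse_message_id(line):
    """Return (component, severity, number) if the line starts with an id."""
    m = MSG_ID.match(line)
    if m is None:
        return None
    comp, sev, num = m.groups()
    return COMPONENTS[comp], SEVERITIES[sev], int(num)

# [SF0001] -> generated by the Storage daemon as a fatal error, number 1
print(parse_message_id("[SF0001] Fatal device error"))
```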

GPFS ACL Support

The new Bacula Enterprise FileDaemon supports the GPFS filesystem-specific ACLs. The GPFS libraries must be installed in the standard location. To determine whether GPFS support is available on your system, the following commands can be used.

*setdebug level=1 client=stretch-amd64-fd
Connecting to Client stretch-amd64-fd at stretch-amd64:9102
2000 OK setdebug=1 trace=0 hangup=0 blowup=0 options= tags=
*st client=stretch-amd64-fd

Connecting to Client stretch-amd64-fd at stretch-amd64:9102
stretch-amd64-fd Version: 12.4.0 (20 July 2020) x86_64-pc-linux-gnu-bacula-enterprise debian 9.11
Daemon started 21-Jul-20 14:42. Jobs: run=0 running=0.
Ulimits: nofile=1024 memlock=65536 status=ok
Heap: heap=135,168 smbytes=199,993 max_bytes=200,010 bufs=104 max_bufs=105
Sizes: boffset_t=8 size_t=8 debug=1 trace=0 mode=0,2010 bwlimit=0kB/s
Crypto: fips=no crypto=OpenSSL 1.0.2u 20 Dec 2019
APIs: GPFS
Plugin: bpipe-fd.so(2)

The APIs line will indicate if the /usr/lpp/mmfs/libgpfs.so was loaded at the start of the Bacula FD service or not.

The standard ACL Support directive can be used to automatically enable support for GPFS ACL backup.
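A minimal FileSet sketch enabling ACL support follows; the FileSet name and path are examples only:

```
FileSet {
  Name = gpfs-fs
  Include {
    Options {
      Signature = MD5
      # Enables ACL backup, including GPFS ACLs when the GPFS API is loaded
      ACL Support = yes
    }
    # Example GPFS mount point
    File = /gpfs/fs1
  }
}
```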

Bacula Enterprise 12.4

RHV Incremental Backup Support

The new Bacula Enterprise RHV Plugin supports Virtual Machine incremental backup.

Please see the RHV Plugin whitepaper for more information.

RHV Proxy Backup Support

The new Bacula Enterprise RHV Plugin can use a proxy to backup Virtual Machines.

Please see the RHV Plugin whitepaper for more information.

HDFS Hadoop Plugin

The Bacula Enterprise HDFS Plugin can save the objects stored in an HDFS cluster.

During a backup, the HDFS Hadoop Plugin will contact the Hadoop File System to generate a system snapshot and retrieve files one by one. During an incremental or a differential backup session, the Bacula File Daemon will read the differences between two Snapshots to determine which files should be backed up.

Please see the HDFS Plugin whitepaper for more information.

NDMP SMTAPE Incremental Support

Bacula Enterprise NDMP Plugin now supports the SMTAPE Incremental feature.

Please see the NDMP Plugin whitepaper for more information.

NDMP EMC Unity Global Endpoint Deduplication Support

The NDMP plugin has been enhanced to greatly increase the deduplication ratio of EMC DUMP images and TAR images.

When the NDMP system is identified as an EMC host, or the format is TAR or DUMP and the target storage device supports the Bacula Global Endpoint Deduplication option, the NDMP data stream will be analyzed automatically. The following message will be displayed in the Job log.

JobId 1: EMCTAR analyzer for Global Endpoint Deduplication enabled

Please see the NDMP Plugin whitepaper for more information.

vSphere Virtual Machine Overwrite During Restore

The new Bacula Enterprise vSphere Plugin can now overwrite existing Virtual Machines during the restore process.

Please see the vSphere Plugin whitepaper for more information.

BWeb Management Console New Features

Remote Client Installation

BWeb Management Console can now deploy Bacula Enterprise File Daemons to remote client machines.

Event and Auditing

The Director daemon can now record events such as:

  • Console connection/disconnection

  • Daemon startup/shutdown

  • Command execution

The events may be stored in a new catalog table, to disk, or sent via syslog.

Messages {
    Name = Standard
    catalog = all, events
    append = /opt/bacula/working/bacula.log = all, !skipped
    append = /opt/bacula/working/audit.log = events, !events.bweb
}
Messages {
    Name = Daemon
    catalog = all, events
    append = /opt/bacula/working/bacula.log = all, !skipped
    append = /opt/bacula/working/audit.log = events, !events.bweb
    append = /opt/bacula/working/bweb.log = events.bweb
}

The new message category "events" is not included in the default configuration files.

It is possible to filter out some events using the "!events.<name>" form. Up to 10 custom event types can be specified per Messages resource.

All event types are recorded by default.

When stored in the catalog, the events can be listed with the “list events” command.

* list events [type=<str> | limit=<int> | order=<asc|desc> | days=<int> |
start=<time-specification> | end=<time-specification>]

+---------------------+------------+---------+--------------------------------+
| time                | type       | source  | event                          |
+---------------------+------------+---------+--------------------------------+
| 2020-04-24 17:04:07 | daemon     | Daemon  | Director startup               |
| 2020-04-24 17:04:12 | connection | Console | Connection from 127.0.0.1:8101 |
| 2020-04-24 17:04:20 | command    | Console | purge jobid=1                  |
+---------------------+------------+---------+--------------------------------+

The .events command can be used to record an external event. The source will be recorded as given by the source parameter. The event type can have a custom name.

* .events type=bweb source=joe text="User login"

The Director EventsRetention directive can be used to control the pruning of the Event catalog table.
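As an illustrative sketch (the retention period shown is an example value, not a recommendation), the directive is set in the Director resource:

```
Director {
  Name = mydir-dir
  ...
  # Prune events older than 30 days from the catalog
  Events Retention = 30 days
}
```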

Misc

Bacula Enterprise Installation Manager

Bacula Enterprise can now be installed in a very simple and straightforward way with the bee_installation_manager procedure. The program will use the Customer Download Area information to help users install Bacula Enterprise in just a few seconds.

The procedure to install Bacula Enterprise on Redhat, Debian and Ubuntu can now be automated as follows:

# wget https://www.baculasystems.com/ml/bee_installation_manager
# chmod +x ./bee_installation_manager
# ./bee_installation_manager

Please see the Bacula Enterprise Installation Manager whitepaper for more information.

VMware PERL SDK Replacement

The VMware Perl SDK is no longer required to configure VMware backup jobs with the vSphere Plugin. To use the vSphere - BWeb integration, it is now only necessary to install the bacula-enterprise-vsphere plugin package on the BWeb server.

SAP HANA TOOLOPTION

The TOOLOPTION parameter can be used to customize some backint parameters at runtime. The following job options can be modified:

  • job

  • pool

  • level

hdbsql -i 00 -u SYSTEM -p X -d SYSTEMDB "BACKUP DATA INCREMENTAL USING BACKINT ('Inc2') TOOLOPTION 'level=full'"

QT5 on Windows

Microsoft Windows graphical programs are now using QT5.

Bacula Enterprise 12.2

Kubernetes Plugin

The Bacula Enterprise Kubernetes Plugin can save all the important Kubernetes Resources which build applications or services. This includes the following namespaced objects:

  • Config Map

  • Daemon Set

  • Deployment

  • Endpoint

  • Limit Range

  • Pod

  • Persistent Volume Claim

  • Pod Template

  • Replica Set

  • Replication Controller

  • Resource Quota

  • Secret

  • Service

  • Service Account

  • Stateful Set

  • PVC Data Archive

and non namespaced objects:

  • Namespace

  • Persistent Volume

All namespaced objects which belong to a particular namespace are grouped together for easy browsing and recovery of backup data.

Please see the Kubernetes Plugin whitepaper for more information.

RHV Single Item Restore Support

BWeb Management Suite and a console tool named “mount-vm” allow the restore of single files from Redhat Virtualization VM backups.

Please see the RHV Plugin and the Single Item Restore whitepaper for more information.

FIPS Support

The Federal Information Processing Standards (FIPS) define U.S. and Canadian Government security and interoperability requirements for cryptographic modules. It describes the approved security functions for symmetric and asymmetric key encryption, message authentication, and hashing.

For more information about the FIPS 140-2 standard and its validation program, see the National Institute of Standards and Technology (NIST) and the Communications Security Establishment Canada (CSEC) Cryptographic Module Validation Program at http://csrc.nist.gov/groups/STM/cmvp.

Bacula Enterprise adds FIPS compliance through the Bacula Enterprise Cryptographic Module "OpenSSL-FIPS", which has been certified on a number of platforms and by various vendors, including Red Hat. Bacula Enterprise daemons and tools can now display information about the current FIPS status and require a FIPS-compliant crypto library to be used on all Bacula components (for example, the MD5 hash function is not included in FIPS, and an error will be reported if it is in use).

On Red Hat Enterprise Linux 8, a specific procedure is required to activate FIPS mode with the fips-mode-setup tool. More information can be found at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/assembly_installing-a-rhel-8-system-with-fips-mode-enabled_security-hardening

To activate the FIPS requirements in a Bacula component (Console, Client, Director, Storage), the FIPS Require directive must be set.

root@localhost:~/ head -3 /opt/bacula/etc/bacula-fd.conf
FileDaemon {
    Name = localhost-fd
    FIPS Require = yes
    }
root@localhost:~/ head -3 /opt/bacula/etc/bacula-dir.conf
Director {
    Name = localhost-dir
    FIPS Require = yes
    }
root@localhost:~/ head -3 /opt/bacula/etc/bacula-sd.conf
Storage {
    Name = localhost-sd
    FIPS Require = yes
    }
root@localhost:~/ head -3 /opt/bacula/etc/bconsole.conf
Director {
    Name = localhost-dir
    FIPS Require = yes
    }

The FIPS status is displayed in the "Crypto" section of the status command output header for each daemon.

root@localhost:~/ bconsole
*status dir
rhel7-64-dir Version: 12.1.0 (03 July 2019) x86_64-redhat-linux-gnu-bacula-enterprise redhat Enterprise
Daemon started 02-Jul-19 11:54, conf reloaded 02-Jul-2019 11:54:11
Jobs: run=0, running=0 mode=0,2010
Crypto: fips=yes crypto=OpenSSL 1.0.2k-fips 26 Jan 2017

The OpenSSL cryptographic module information is also displayed.

Amazon “Glacier” Support

The Bacula Enterprise S3 Cloud Storage can now automatically restore volumes stored on Amazon Glacier, allowing for more flexible tiered backup storage in the cloud.

Please see the Cloud S3 whitepaper for more information.

DB2 Plugin

The DB2 Plugin is designed to simplify the backup and restore operations of a DB2 database system. The plugin simplifies backup operations so that the backup administrator does not need to know about internals of DB2 backup techniques or write complex scripts. The DB2 Plugin supports Point In Time Recovery (PITR) techniques, and Incremental and Incremental Delta backup levels.

Please see the DB2 Plugin whitepaper for more information.

vSphere vApp Properties Support

The virtual machine description (OVF) is now initialized with all vApp properties.

<Property ovf:qualifiers="MinLen(1) MaxLen(64)" ovf:userConfigurable="true"
    ovf:value="thisisatest.net" ovf:type="string" ovf:key="vsm_hostname">
</Property>

Please see the vSphere Plugin whitepaper for more information.

NDMP Global Endpoint Deduplication Enhancement

The NDMP plugin has been enhanced to greatly increase the deduplication ratio of NetApp NDMP dump images and TAR images.

When the NDMP system is identified as a NetApp host or the format is TAR and the target storage device supports the Bacula Global Endpoint Deduplication option, the NDMP data stream will be analyzed automatically. The following message will be displayed in the Job log.

JobId 1: NetApp Dump analyzer for Global Endpoint Deduplication enabled
or
JobId 1: TAR analyzer for Global Endpoint Deduplication enabled

Please see the NDMP Plugin whitepaper for more information.

BWeb Management Suite

Client Registration Module

The BWeb management suite simplifies the configuration and the deployment of new clients with QR codes (for Android systems), and the “Registration Wizard”.

Restricted Console Wizard

The updated BWeb Console Wizard simplifies the configuration of restricted Consoles.

Android Phone Support Enhancements

The support for Android Phones has been improved. It is now possible to:

  • configure the Bacula FileDaemon with a QR code generated from BWeb Management Suite;

  • start backup jobs from the main interface;

  • restore files from the main interface;

Volume Retention Enhancements

The Pool/Volume parameter Volume Retention can now be disabled to never prune a volume based on the Volume Retention time. When Volume Retention is disabled, only the Job Retention time will be used to prune jobs.

Pool {
    Volume Retention = 0
    ...
}

New BCloud Features

  • Add support for the Connect To Director feature (for clients behind NAT).

Global Endpoint Deduplication Changes

The Global Endpoint Deduplication feature was re-organized to support multiple Deduplication engines in a single Storage Daemon instance. The Deduplication engine can now be configured via a new Dedup configuration resource.

Please see the Global Endpoint Deduplication whitepaper for more information.

Bacula Enterprise 12.0.2

The Bacula Docker Plugin can now handle external Docker volumes.

The Docker Plugin whitepaper provides more detailed information.

Bacula Enterprise 12.0

Docker Plugin

Containers provide lightweight, operating-system-level virtualization with little overhead.

Docker containers rely on sophisticated file system level data abstraction with a number of read-only images to create templates used for container initialization.

The Bacula Enterprise Docker Plugin will save the full container image including all read-only and writable layers into a single image archive.

It is not necessary to install a Bacula File Daemon in each container; each container can be backed up from a common image repository.

The Bacula Docker Plugin will contact the Docker service to read and save the contents of any system image or container image using snapshots (default behavior) and dump them using the Docker API.

The Docker Plugin whitepaper provides more detailed information.

Docker Client Package

The File Daemon package can now be installed via a Docker image.

Sybase ASE Plugin

The Sybase ASE Plugin is designed to simplify the backup and restore operations of a Sybase Adaptive Server Enterprise. The backup administrator does not need to know the internals of Sybase ASE backup techniques or write complex scripts. The Sybase ASE Plugin supports Point In Time Recovery (PITR) with Sybase Backup Server Archive API backup and restore techniques.

The Plugin is able to do incremental and differential backups of the database at block level. This plugin is available on 32-bit and 64-bit Linux platforms supported by Sybase, and supports Sybase ASE 12.5, 15.5, 15.7 and 16.0.

Please see the Sybase ASE Plugin whitepaper for more information.

Continuous Data Protection Plugin

Continuous Data Protection (CDP), also called continuous backup or real-time backup, refers to backup of Client data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time.

../_images/cdp1.png

The CDP feature is composed of two components: an application (cdp-client or tray-monitor) that will monitor a set of directories configured by the user, and a Bacula FileDaemon plugin responsible for securing the data using the Bacula infrastructure.

The user application (cdp-client or tray-monitor) is responsible for monitoring files and directories. When a modification is detected, the new data is copied into a spool directory. At a regular interval, a Bacula backup job will contact the FileDaemon and will save all the files archived by the cdp-client. The locally copied data can be restored at any time without a network connection to the Director.

See the CDP (Continuous Data Protection) chapter for more information.

Automatic TLS Encryption

Starting with 12.0, all daemons and consoles use TLS automatically for all network communications. It is no longer required to set up TLS keys in advance. It is possible to turn off automatic TLS PSK encryption using the TLS PSK Enable directive.
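
As a sketch (assuming the directive is set in the daemon's main resource, here the File Daemon; check the core documentation for the exact placement), automatic encryption could be turned off like this:

```
# bacula-fd.conf -- hypothetical sketch: turn off the automatic
# TLS PSK encryption for this daemon
FileDaemon {
    Name = my-fd
    ...
    TLS PSK Enable = no
}
```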

Client Behind NAT Support with the Connect To Director Directive

A Client can now initiate a connection to the Director (permanently or scheduled) to allow the Director to communicate to the Client when a new Job is started or a bconsole command such as status client or estimate is issued.

This new network configuration option is particularly useful for Clients that are not directly reachable by the Director.

# cat /opt/bacula/etc/bacula-fd.conf
Director {
    Name = bac-dir
    Password = aigh3wu7oothieb4geeph3noo     # Password used to connect
    # New directives
    Address = bac-dir.mycompany.com          # Director address to connect
    Connect To Director = yes                # FD will call the Director
}
# cat /opt/bacula/etc/bacula-dir.conf
Client {
    Name = bac-fd
    Password = aigh3wu7oothieb4geeph3noo
    # New directive
    Allow FD Connections = yes
}

../_images/client-behind-nat.png

It is possible to schedule the Client connection at certain periods of the day:

# cat /opt/bacula/etc/bacula-fd.conf
Director {
    Name = bac-dir
    Password = aigh3wu7oothieb4geeph3noo    # Password used to connect
    # New directives

    Address = bac-dir.mycompany.com       # Director address to connect
    Connect To Director = yes             # FD will call the Director
    Schedule = WorkingHours
}

Schedule {
    Name = WorkingHours
    # Connect the Director between 12:00 and 14:00
    Connect = MaxConnectTime=2h on mon-fri at 12:00
}

Note that in the current version, if the File Daemon is started after 12:00, the next connection to the Director will occur at 12:00 the next day.

A Job can be scheduled in the Director around 12:00, and if the Client is connected, the Job will be executed as if the Client was reachable from the Director.
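
For example (a sketch; resource names are illustrative), a Director-side Schedule can start the backup shortly after the Client's connection window opens at 12:00:

```
# bacula-dir.conf -- hypothetical sketch
Schedule {
    Name = "NATClients"
    # start while the Client is expected to be connected
    Run = Incremental mon-fri at 12:05
}
Job {
    Name = "backup-bac-fd"
    Client = bac-fd
    Schedule = "NATClients"
    ...
}
```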

Android Phone Support

The FileDaemon and the Tray Monitor are now available on the Android platform.

Proxmox Clustering Features

With BWeb Management Console 12.0, it is now possible to analyze a Proxmox cluster configuration and dynamically adjust the Bacula configuration in the following cases:

  • Virtual machine added to the cluster

  • Virtual machine removed from the cluster

  • Virtual machine migrated between cluster nodes

The Proxmox whitepaper provides more information.

BWeb Management Console Dashboards

With BWeb Management Console 12.0, it is now possible to customize the size and the position of all boxes displayed in the interface. The Page Composer page can be used to graphically design pages and create dashboards with a library of predefined widgets or with Graphite-provided graphics.

../_images/bweb_graphite_graphs_dashboard.png

Miscellaneous

Global Control Directive

The Director Autoprune directive can now globally control the Autoprune feature. This directive will take precedence over Pool or Client Autoprune directives.

Director {
    Name = mydir-dir
    ...
    AutoPrune = no
}

vSphere Plugin ESXi 6.7 Support

The new vSphere Plugin is now using VDDK 6.7.1 and should have a more efficient backup process with empty or unallocated blocks.

New Documentation

The documentation was improved to automatically handle external references in PDF as well as in HTML.

Linux BMR UEFI Support

The Linux BMR version 2.2.1 now supports the UEFI boot system. Note that it is necessary to back up the related file system, usually mounted at /boot/efi and formatted with a MS-DOS or vfat file system.

MSSQL Plugin Enhancements

The Bacula Enterprise Microsoft SQL Server Plugin (MSSQL) has been improved to handle database recovery models more precisely. The target_backup_recovery_models parameter makes it possible to enable database backups depending on their recovery model. The simple_recovery_models_incremental_action parameter controls the plugin behavior when an incompatible incremental backup is requested for a database using the simple recovery model: the plugin can upgrade to a full backup (the default), ignore the database and emit a job warning (ignore_with_error), or ignore the database and emit a “skipped” message (ignore). Please refer to the specific plugin documentation for more information.
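
As an illustration only (the parameter values shown are assumptions; the valid values are listed in the plugin documentation), a FileSet plugin line could combine both parameters:

```
FileSet {
    Name = mssql-fs
    Include {
        Options {
            ...
        }
        # hypothetical values -- see the MSSQL Plugin documentation
        Plugin = "mssql: target_backup_recovery_models=full simple_recovery_models_incremental_action=ignore"
    }
}
```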

MySQL Percona Enhancements

The new MySQL Percona Plugin was optimized and no longer requires large temporary files.

Dynamic Client Address Directive

It is now possible to use a script to determine the address of a Client when the dynamic DNS option is not a viable solution:

Client {
    Name = my-fd
    ...
    Address = "|/opt/bacula/bin/compute-ip my-fd"
}

The command used to generate the address should return a single line with a valid address and end with exit code 0. An example would be

Address = "|echo 127.0.0.1"

This option might be useful in some complex cluster environments.

Bacula Enterprise 10.2.1

New Prune Command Option

The prune jobs all command will query the catalog to find all combinations of Client/Pool, and will run the pruning algorithm on each of them. At the end, all files and jobs not needed for restore that have passed the relevant retention times will be pruned.

The prune command prune jobs all yes can be scheduled in a RunScript to prune the catalog once per day for example. All Clients and Pools will be analyzed automatically.

Job {
    ...
    RunScript {
        Console = "prune jobs all yes"
        RunsWhen = Before
        failjobonerror = no
        runsonclient = no
    }
}

Bacula Enterprise 10.2

Bacula Daemon Real-Time Statistics Monitoring

All daemons can now collect internal performance statistics periodically and provide mechanisms to store the values to a CSV file or to send the values to a Graphite daemon via the network. Graphite is an enterprise-ready monitoring tool (https://graphiteapp.org).

For more information see the section about Daemon Real-Time Statistics Monitoring in the core documentation for the Bacula Director.
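
As an illustration only (the Statistics resource name, directive names, and values below are assumptions based on the description above; the authoritative syntax is in the core documentation), a Director could push its metrics to a Graphite daemon periodically:

```
# bacula-dir.conf -- hypothetical sketch, directive names to be
# verified against the Daemon Real-Time Statistics documentation
Statistics {
    Name = "graphite-stats"
    Type = Graphite            # or CSV to store values in a file
    Host = "graphite.lan"      # Graphite carbon daemon
    Port = 2003
    Interval = 300             # collect every 5 minutes
}
```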

Red Hat Virtualization System Plugin

RHV is an open, software-defined platform that virtualizes Linux and Microsoft Windows workloads. Built on RHEL and KVM, it features management tools that virtualize resources, processes, and applications, providing a stable foundation for a cloud-native and containerized future.

The Red Hat Virtualization (RHV) plugin provides virtual machine bare metal recovery for the Red Hat Virtualization system. This plugin provides image level full backup and recovery with a full set of options to select different backup sets and to personalize the restore operations.

The Red Hat Virtualization (RHV) whitepaper provides more information.

New Cloud Storage Drivers

The different cloud drivers are now distributed in separate packages. Accordingly, an upgrade from a previous version may need some manual interaction.

bacula-enterprise-cloud-storage-azure
bacula-enterprise-cloud-storage-google
bacula-enterprise-cloud-storage-oracle
bacula-enterprise-cloud-storage-s3
bacula-enterprise-cloud-storage-common

Google Cloud Driver

Support for Google Cloud Storage has been added in version 10.2. The behavior is identical to that of the Amazon S3 Cloud Storage driver.

See the Google Cloud Storage whitepaper for more information.

Oracle Cloud Driver

Support for the Oracle S3 Cloud Storage has been added in version 10.2. The behavior is identical to that of the Amazon S3 Cloud Storage driver.

See the Cloud Storage whitepaper for more information.

MySQL Percona Plugin Enhancements

Making the databases consistent for restore is called Prepare in the Percona terminology. This prepare operation is commonly done when the databases are restored.

Rather than doing the Prepare work to make the database consistent at restore time, the Prepare can automatically be done by the plugin during the backup phase by adding the plugin option prepare. Prepare can take two values: fd (default) and sd.

When the prepare=fd option is specified, the prepare will be done on the File Daemon machine at backup time, prior to sending the prepared binary data to the Storage Daemon.

As an alternative to doing the prepare on the File daemon, it can be done on the Storage daemon by using the plugin option prepare=sd.
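
For example (a sketch; the plugin name and fileset syntax are assumptions patterned after the other plugin examples in this chapter), the prepare could be offloaded to the Storage Daemon:

```
FileSet {
    Name = percona-fs
    Include {
        Options {
            ...
        }
        # hypothetical plugin line -- see the MySQL Plugin whitepaper
        Plugin = "xtrabackup: prepare=sd"
    }
}
```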

See the MySQL Plugin whitepaper for more information.

ACSLS Tape Changer Support

The ACSLS acts as a library management system. ACSLS manages the physical aspects of tape cartridge storage and retrieval through a system administrator interface and a programmatic interface. These real-time interfaces control and monitor tape libraries, including access control.

Support for the ACSLS system has been added with version 10.2. Please see the ACSLS whitepaper for more information.

WORM Tape Support

Automatic WORM tape detection has been added in version 10.2.

When a WORM tape is detected, the catalog volume entry is changed automatically to set Recycle=no. It will prevent the volume from being automatically recycled by Bacula.

There is no change in how the Job and File records are pruned from the catalog as that is a separate issue that is currently adequately implemented in Bacula.

When a WORM tape is detected, the SD will show WORM in the device state output if the SD runs with a debug level greater than or equal to 6. Otherwise, the status shows as !WORM.

Device state:
    OPENED !TAPE LABEL APPEND !READ !EOT !WEOT !EOF WORM !SHORT !MOUNTED ...

The output of the used volume status has been modified to include the WORM state. It shows worm=1 for a WORM tape and worm=0 otherwise. Example:

Used Volume status:
    Reserved volume: TestVolume001 on Tape device "nst0" (/dev/nst0)
    Reader=0 writers=0 reserves=0 volinuse=0 worm=1

The following programs are needed for WORM tape detection:

  • sdparm

  • tapeinfo

The new Storage Device directive Worm Command must be configured as well as the Control Device directive (used also with the Tape Alert feature).

Device {
    Name = "LTO-0"
    # below device names should be replaced with
    # /dev/tape/by-id/... in production environments!
    Archive Device = "/dev/nst0"
    Control Device = "/dev/sg0"    # from lsscsi -g
    Worm Command = "/opt/bacula/scripts/isworm %l"
...
}

Bacula Enterprise 10.0

The Client Registration Wizard is a simple GUI program that can register the local client with the BCloud Service configured on the network. The program is available in the bacula-enterprise-registration-wizard package. The end user must run the program as root (or via sudo) and supply the following parameters:

  • Username

  • Password

  • BCloud Service URL

The BCloud Service URL can be determined via a DNS query. The program will automatically query the following DNS SRV entry:

_bcloud._tcp.example.com. 18000 IN SRV 0 5 443 backup.example.com.

where the local domain (“example.com” here) is determined by the local resolver.

../_images/registration-wizard.png

Username and password are the only information required to log in to the BCloud Service interface. Once registered, the local configuration files are updated and TLS communication is configured.

New Prune Command Options

The bconsole prune command can now run the pruning algorithm on all volumes from a Pool or on all Pools.

* prune allfrompool pool=Default yes
* prune allfrompool allpools yes

REST API version 2

The Bacula Enterprise REST API has been updated and now supports the following new features:

  • Run job

  • Cancel job

  • Run restore

  • Update Bvfs cache

  • Bvfs lsdir command

  • Bvfs lsfiles command

  • Bvfs restore command

  • Bvfs file version list

  • Set Bacula config files through BWeb/BConfig interface

  • Commit BWeb workset changes

  • List resources

  • List jobs of a specific type

  • Manage TLS certificates

Please see the REST API whitepaper for more information.

BWeb Management Suite

BWeb Management Suite was translated into Russian and Japanese.

Bacula Enterprise 8.8

General

Cloud Backup

A major problem of Cloud backup is that data transmission to and from the Cloud is very slow compared to traditional backup to disk or tape. The Bacula Cloud drivers provide a means to quickly finish the backups and then to transfer the data from the local cache to the Cloud in the background. This is done by first splitting the data Volumes into small parts that are cached locally, and then uploading those parts to the Cloud storage service in the background, either while the job continues to run or after the backup job has terminated. Once the parts are written to the Cloud, they may either be left in the local cache for quick restores or they can be removed (truncate cache).

Cloud Volume Architecture

../_images/nativeCloudStorage-diagram.png

The picture shows two Volumes (Volume0001 and Volume0002) with their parts in the cache. Below the cache, one can see that Volume0002 has been uploaded to, or synchronized with, the Cloud.

Note: Normal Bacula disk Volumes are implemented as standard files that reside in the user-defined Archive Directory. On the other hand, Cloud Volumes are directories that reside in the user-defined Archive Directory. The directory contains the cloud Volume parts, implemented as numbered files (part.1, part.2, …).

Cloud Restore

During a restore, if the needed parts are available in the local cache, they will immediately be used. Otherwise, they will be downloaded from cloud storage as needed. The restore starts with parts already in the local cache but will wait as needed for any part that must be downloaded. The download proceeds while the restore is running.

With most cloud providers uploads are free of charge, but downloads of data from the cloud are billed. By using the local cache and multiple small parts, Bacula can be configured to substantially reduce download costs.

The Maximum File Size Device directive is valid within the Storage Daemon’s cloud device configuration and defines the granularity of a restore chunk. In order to minimize the number of volume parts to download during a restore (in particular when restoring single files), it is useful to set the Maximum File Size to a value smaller than or equal to the configured Maximum Part Size.
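
For instance (a sketch consistent with the example cloud device shown later in this chapter), both directives could be aligned at 10 MB:

```
Device {
    Name = CloudStorage
    Device Type = Cloud
    Cloud = S3Cloud
    Archive Device = /opt/bacula/backups
    Maximum Part Size = 10000000
    # restore granularity: keep at or below Maximum Part Size
    Maximum File Size = 10000000
    Media Type = File
}
```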

Compatibility

Since a Cloud Volume contains the same data as an ordinary Bacula Volume, all existing types of data may be stored in the cloud – that is, client data encryption, client-side compression, and plugin usage are all available. In fact, all existing functionality, with the exception of deduplication, is compatible with the Cloud drivers.

Deduplication and the Cloud

At the current time, Bacula Global Endpoint Backup does not support writing to the cloud because cloud storage would be too slow to support large hashed and indexed containers of deduplication data.

Virtual Autochangers and Disk Autochangers

Bacula Virtual Autochangers are compatible with the Bacula Cloud drivers. However, if you use a third party disk autochanger script such as Vchanger, unless or until it is modified to handle Volume directories, it may not be compatible with Bacula Cloud drivers.

Security

All data that is sent to and received from the cloud uses the HTTPS protocol by default, so data is encrypted while being transmitted and received. However, data that resides in the cloud is not encrypted by default. If extra security of the backed-up data is required, Bacula’s PKI data encryption feature should be used during the backup.
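
Bacula's PKI data encryption is enabled on the File Daemon; a minimal sketch (key file paths are illustrative) might be:

```
# bacula-fd.conf -- hypothetical sketch of client-side data encryption
FileDaemon {
    Name = my-fd
    ...
    PKI Signatures = Yes
    PKI Encryption = Yes
    PKI Keypair = "/opt/bacula/etc/my-fd.pem"        # client keypair
    PKI Master Key = "/opt/bacula/etc/master.cert"   # optional recovery key
}
```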

New Commands, Resource, and Directives for Cloud

To support Cloud storage devices, some new bconsole commands, new Storage Daemon directives, and a new Cloud resource that is referenced in the Storage Daemon’s Device resource are available as of version 8.8.

Cache and Pruning

The cache is treated much like a normal disk-based backup, so when configuring the Cloud the administrator should take care to set the Archive Device in the Device resource to a directory that would also be suitable for storing backup data. Obviously, unless the truncate/prune cache commands are used, the Archive Device will continue to fill.

The cache retention can be controlled per Volume with the Cache Retention attribute. The default value is 0, meaning that pruning of the cache is disabled.

The Cache Retention value for a volume can be modified with the update command, or configured via the Pool directive Cache Retention for newly created volumes.
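
For example, to let Bacula prune cached parts of new volumes after 30 days (a minimal sketch; the Pool name is illustrative):

```
Pool {
    Name = CloudPool
    ...
    # cached parts uploaded to the cloud may be pruned after 30 days
    Cache Retention = 30 days
}
```

For an existing volume, the same value can be set interactively with the bconsole update volume command.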

New Cloud bconsole Commands

  • truncate cache

  • upload

  • cloud The new cloud bconsole command allows inspecting and manipulating cloud volumes in different ways. The options are the following:

    • None. If you specify no arguments to the command, bconsole will prompt with:

    Cloud choice:
    1: List Cloud Volumes in the Cloud
    2: Upload a Volume to the Cloud
    3: Prune the Cloud Cache
    4: Truncate a Volume Cache
    5: Done
    Select action to perform on Cloud (1-5):

    The different choices should be rather obvious.

    • truncate This command will attempt to truncate the local cache for the specified Volume. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:

      • Storage=xxx

      • Volume=xxx

      • AllPools

      • AllFromPool

      • Pool=xxx

      • MediaType=xxx

      • Drive=xxx

      • Slots=nnn

    • prune This command will attempt to prune the local cache for the specified Volume. Bacula will respect the Cache Retention volume attribute to determine if the cache can be truncated or not. Only parts that are uploaded to the cloud will be deleted from the cache. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:

      • Storage=xxx

      • Volume=xxx

      • AllPools

      • AllFromPool

      • Pool=xxx

      • MediaType=xxx

      • Drive=xxx

      • Slots=nnn

    • upload This command will attempt to upload the specified Volumes. It will prompt for the information needed to determine the Volume name or names. To avoid the prompts, any of the following additional command line options can be specified:

      • Storage=xxx

      • Volume=xxx

      • AllPools

      • AllFromPool

      • Pool=xxx

      • MediaType=xxx

      • Drive=xxx

      • Slots=nnn

    • list This command will list volumes stored in the Cloud. If a volume name is specified, the command will list all parts for the given volume. To avoid the prompts, the operator may specify any of the following additional command line options:

      • Storage=xxx

      • Volume=xxx
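
Assuming the option keywords listed above can be given directly on the command line (a sketch; storage and volume names are illustrative), the prompts can be avoided like this:

```
* cloud list storage=CloudStorage
* cloud upload storage=CloudStorage volume=Volume0001
* cloud truncate storage=CloudStorage volume=Volume0001
* cloud prune storage=CloudStorage allpools
```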

Cloud Additions to the DIR Resource

In bacula-dir.conf “Pool” resources, the directive Cache Retention can be specified. It is only effective for cloud storage backed volumes, and is ignored for volumes stored on any other storage device.

Cloud Additions to the SD Device Resource

A Device resource configured in the bacula-sd.conf file can use the Cloud keyword in the Device Type directive, as well as the two new directives Maximum Part Size and Cloud.

New Cloud SD Device Directives

Device Type The Device Type directive has been extended to include the new Cloud keyword, which specifies that the device supports cloud Volumes. Example:

Device Type = Cloud

Cloud The new Cloud directive references a Cloud resource. As with other Bacula resource references, the name of the resource is used as the value. Example:

Cloud = S3Cloud

Maximum Part Size This directive allows specification of the maximum size for each part of any volume written by the current device. Smaller part sizes will reduce restore costs, but will cause additional but small overhead to handle multiple parts. The maximum number of parts permitted per Cloud Volume is 524,288. The maximum size of any given part is approximately 17.5 TB.

Example Cloud Device Specification

An example of a Cloud Device Resource might be:

Device {
    Name = CloudStorage
    Device Type = Cloud
    Cloud = S3Cloud
    Archive Device = /opt/bacula/backups
    Maximum Part Size = 10000000
    Media Type = File
    LabelMedia = yes
    Random Access = Yes;
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
}

As can be seen above, the Cloud directive in the Device resource contains the name (S3Cloud), which references the Cloud resource that is shown below.

Note also that the Archive Device is specified in the same manner as used for a File device, i.e. by indicating a directory name. However, in place of containing regular files as Volumes, the archive device for the Cloud drivers will contain the local cache, which consists of a directory per Volume, and these directories contain the parts associated with the particular Volume. So with the above resource, and the two cached Volumes shown in the figure above, the following layout on disk would result:

/opt/bacula/backups
    /opt/bacula/backups/Volume0001
        /opt/bacula/backups/Volume0001/part.1
        /opt/bacula/backups/Volume0001/part.2
        /opt/bacula/backups/Volume0001/part.3
        /opt/bacula/backups/Volume0001/part.4
    /opt/bacula/backups/Volume0002
        /opt/bacula/backups/Volume0002/part.1
        /opt/bacula/backups/Volume0002/part.2
        /opt/bacula/backups/Volume0002/part.3

The Cloud Resource

The Cloud resource has a number of directives that may be specified as exemplified in the following example:

For the default US East location:

Cloud {
    Name = S3Cloud
    Driver = "S3"
    HostName = "s3.amazonaws.com"
    BucketName = "BaculaVolumes"
    AccessKey = "BZIXAIS39DP9YNER5DFZ"
    SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
    Protocol = HTTPS
    URIStyle = VirtualHost
    Truncate Cache = No
    Upload = EachPart
    Region = "us-east-1"
    Maximum Upload Bandwidth = 5MB/s
}

For a Central Europe location:

Cloud {
    Name = S3Cloud
    Driver = "S3"
    HostName = "s3-eu-central-1.amazonaws.com"
    BucketName = "BaculaVolumes"
    AccessKey = "BZIXAIS39DP9YNER5DFZ"
    SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
    Protocol = HTTPS
    UriStyle = VirtualHost
    Truncate Cache = No
    Upload = EachPart
    Region = "eu-central-1"
    Maximum Upload Bandwidth = 4MB/s
}

For Amazon Cloud, refer to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region to get a complete list of regions and corresponding endpoints and use them respectively as Region and HostName directives.

or in the following example for CEPH S3 interface:

Cloud {
    Name = CEPH_S3
    Driver = "S3"
    HostName = ceph.mydomain.lan
    BucketName = "CEPHBucket"
    AccessKey = "xxxXXXxxxx"
    SecretKey = "xxheeg7iTe0Gaexee7aedie4aWohfuewohxx0"
    Protocol = HTTPS
    Upload = EachPart
    UriStyle = Path
    # Must be set for CEPH
}

For Azure:

Cloud {
    Name = MyCloud
    Driver = "Azure"
    HostName = "MyCloud" #not used but needs to be specified
    BucketName = "baculaAzureContainerName"
    AccessKey = "baculaaccess"
    SecretKey = "/Csw1SECRETUmZkfQ=="
    Protocol = HTTPS
    UriStyle = Path
}

The directives of the above resource for the S3 driver are defined as follows:

Name = <Cloud-Name> The name of the Cloud resource. This is the logical Cloud name, and may be any string up to 127 characters in length. Shown as S3Cloud above.

Description = <Text> The description is used for display purposes as is the case with all resources.

Driver = <Driver-Name> This defines which driver to use. The Cloud drivers currently implemented include S3 (also used for S3-compatible services such as CEPH) and Azure. There is also a File driver, which is used mostly for testing.

Host Name = <Name> This directive specifies the hostname to be used in the URL. Each Cloud service provider has a different and unique hostname. The maximum size is 255 characters and may contain a TCP port specification.

Bucket Name = <Name> This directive specifies the bucket name that you wish to use on the Cloud service. This name is normally a unique name that identifies where you want to place your Cloud Volume parts. With Amazon S3, the bucket must be created beforehand on the Cloud service. With Azure Storage, it is generally referred to as a Container, and it can be created automatically by Bacula when it does not exist. The maximum bucket name size is characters.

Access Key = <String> The access key is your unique user identifier given to you by your cloud service provider.

Secret Key = <String> The secret key is the security key that was given to you by your cloud service provider. It is equivalent to a password.

Protocol = <HTTP | HTTPS> The protocol defines the communications protocol to use with the cloud service provider. The two protocols currently supported are: HTTPS and HTTP. The default is HTTPS.

Uri Style = <VirtualHost | Path> This directive specifies the URI style to use to communicate with the cloud service provider. The two Uri Styles currently supported are: VirtualHost and Path. The default is VirtualHost.

Truncate Cache = <truncate-kw> This directive specifies when Bacula should automatically remove (truncate) the local cache parts. Local cache parts can only be removed if they have been uploaded to the cloud. The currently implemented values are:

No Do not remove the cache. With this option you must manually delete the cache parts with a bconsole truncate cache command, or do so with an Admin Job that runs a truncate cache command. This is the default.

AfterUpload Each part will be removed just after it is uploaded. Note, if this option is specified, all restores will require a download from the Cloud.

AtEndOfJob With this option, at the end of the Job, every part that has been uploaded to the Cloud will be removed (truncated).

Upload = <upload-kw> This directive specifies when local cache parts will be uploaded to the Cloud. The options are:

No Do not upload cache parts. With this option you must manually upload the cache parts with a bconsole command, or do so with an Admin Job that runs an upload command. This is the default.

EachPart With this option, each part will be uploaded when it is complete, i.e. when the next part is created or at the end of the Job.

AtEndOfJob With this option, all parts that have not been previously uploaded will be uploaded at the end of the Job.

Maximum Concurrent Uploads = <number> The default is 3, but by using this directive, you may set it to any value you want.

Maximum Concurrent Downloads = <number> The default is 3, but by using this directive, you may set it to any value you want.

Maximum Upload Bandwidth = <speed> The default is unlimited, but by using this directive, you may limit the upload bandwidth used globally by all devices referencing this resource.

Maximum Download Bandwidth = <speed> The default is unlimited, but by using this directive, you may limit the download bandwidth used globally by all devices referencing this Cloud resource.

Region = <String> The Cloud resource can be configured to use a specific endpoint within a region. This directive is required for AWS-V4 regions. Example: Region = "eu-central-1"

BlobEndpoint = <String> This directive can be used to specify a custom URL for Azure Blob storage (see https://docs.microsoft.com/en-us/azure/storage/blobs/storage-custom-domain-name).

EndpointSuffix = <String> Use this directive to specify a custom URL suffix for Azure. Example: EndpointSuffix = "core.chinacloudapi.cn"

File Driver for the Cloud

As mentioned above, one may specify the keyword File on the Driver directive of the Cloud resource. Instead of writing to the Cloud, Bacula will instead create a Cloud Volume but write it to disk. The rest of this section applies to the resource directives when the File driver is specified.

The following Cloud directives are ignored: Bucket Name, Access Key, Secret Key, Protocol, URI Style. The directives Truncate Cache and Upload work on the local cache in the same manner as they do for the S3 driver.

The main difference to note is that the Host Name specifies the destination directory for the Cloud Volume files, and this Host Name must be different from the Archive Device name, or there will be a conflict between the local cache (in the Archive Device directory) and the destination Cloud Volumes (in the Host Name directory).

As noted above, the File driver is mostly used for testing purposes, and we do not particularly recommend using it. However, if you have a particularly slow backup device you might want to stage your backup data into an SSD or disk using the local cache feature of the Cloud device, and have your Volumes transferred in the background to a slow File device.
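
A hypothetical Cloud resource using the File driver might look as follows (directory names are illustrative; note that HostName serves as the destination directory and must differ from the device's Archive Device):

```
Cloud {
    Name = FileCloud
    Driver = "File"
    # destination directory for the Cloud Volumes; must not be
    # the same as the Archive Device of the referencing device
    HostName = "/opt/bacula/cloud-volumes"
    Truncate Cache = No
    Upload = EachPart
}
```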

Progressive Virtual Full

Previously, Progressive Virtual Full backups were implemented with a Perl script that had to be run regularly. With version 8.8.0, a new Job directive named Backups To Keep has been added, which permits implementing Progressive Virtual Fulls entirely within Bacula itself.

../_images/pvf-slidingbackups.png

Figure 43.2: Backup Sequence Slides Forward One Day, Each Day

To use the Progressive Virtual Full feature, the Backups To Keep directive is added to a Job resource. The value specified for the directive indicates the number of backup jobs that should not be merged into the Virtual Full. The default is zero, which behaves the same way the prior pvf script worked.

Backups To Keep Directive

The new BackupsToKeep directive is specified in the Job resource and has the form:

Backups To Keep = 30

where the value (30 in this example) is the number of backups to retain. When this directive is present during a Virtual Full (it is ignored for any other Job type), Bacula will check whether the latest Full backup has more subsequent backups than the value specified. In the above example, the Job would simply terminate unless there is a Full backup followed by at least 31 backups of either Differential or Incremental level.

Assuming that the latest Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full backup. The result is a Full backup followed by 30 Incremental ones. The Job resource in bacula-dir.conf to accomplish this would be:

Job {
    Name = "VFull"
    Type = Backup
    Level = VirtualFull
    Client = "my-fd"
    FileSet = "FullSet"
    Accurate = Yes
    Backups To Keep = 30
}

Delete Consolidated Jobs

The additional directive Delete Consolidated Jobs expects a <yes|no> value. If set to yes, any old Job that is consolidated during a Virtual Full will be deleted. In the example above, a Full plus one other job (either an Incremental or Differential) were consolidated into a new Full backup; the original Full and the other consolidated Job would be deleted if this directive were set to yes. The default value is no.
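A hedged sketch of how this directive might be combined with Backups To Keep in a Virtual Full Job resource (the job name is illustrative):

```conf
Job {
    Name = "VFull"
    Type = Backup
    Level = VirtualFull
    ...
    Backups To Keep = 30
    Delete Consolidated Jobs = yes  # purge the jobs merged into the new Virtual Full
}
```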

Virtual Full Compatibility

Virtual Full Backup is not supported with all the plugins.

TapeAlert Enhancements

There are some significant enhancements to the TapeAlert feature of Bacula. Several directives are used slightly differently, and there is a minor compatibility problem with the old TapeAlert implementation.

What is New

First, the Alert Command directive needs to be added in the Device resource that calls the new tapealert script that is installed in the scripts directory (normally: /opt/bacula/scripts):

Device {
    Name = ...
    Archive Device = /dev/nst0
    Alert Command = "/opt/bacula/scripts/tapealert %l"
    Control Device = /dev/sg1 # must be SCSI ctl for Archive Device
    ...
}

The Control Device directive in the Storage Daemon’s configuration was previously used only for the SAN Shared Storage feature. With Bacula version 8.8, it is also used for the TapeAlert command to permit detection of tape alerts on a specific device (normally only tape devices).

Once the above mentioned two directives (Alert Command and Control Device) are in place in all resources, Bacula will check for tape alerts at two points:

  • After the Drive is used and it becomes idle.

  • After each read or write error on the drive.

At each of the above times, Bacula will call the new tapealert script, which uses the tapeinfo program. The tapeinfo utility is part of the apt sg3-utils and rpm sg3_utils packages. Then, for each tape alert that it finds for that drive, it will emit a Job message that is either INFO, WARNING, or FATAL depending on the severity assigned to the alert in the published Tape Alert specification. For the specification, please see: http://www.t10.org/ftp/t10/document.02/02-142r0.pdf

As a somewhat extreme example, if tape alerts 3, 5, and 39 are set, you will get the following output in your backup job:

17-Nov 13:37 rufus-sd JobId 1: Error: block.c:287 Write error at 0:17
on device "tape" (/home/kern/bacula/k/regress/working/ach/drive0)
Vol=TestVolume001. ERR=Input/output error.

17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
Volume="TestVolume001" alert=3: ERR=The operation has stopped because
an error has occurred while reading or writing data which the drive
cannot correct. The drive had a hard read or write error

17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
Volume="TestVolume001" alert=5: ERR=The tape is damaged or the drive
is faulty. Call the tape drive supplier helpline. The drive can no
longer read data from the tape

17-Nov 13:37 rufus-sd JobId 1: Warning: Disabled Device "tape"
(/home/kern/bacula/k/regress/working/ach/drive0) due to tape
alert=39.

17-Nov 13:37 rufus-sd JobId 1: Warning: Alert: Volume="TestVolume001"
alert=39: ERR=The tape drive may have a fault. Check for availability
of diagnostic information and run extended diagnostics if applicable.
The drive may have had a failure which may be identified by stored
diagnostic information or by running extended diagnostics (eg Send
Diagnostic). Check the tape drive users manual for instructions on
running extended diagnostic tests and retrieving diagnostic data.

Without the tape alert feature enabled, you would only get the first error message above, which is the error Bacula received. Notice also, in this case the alert number 5 is a critical error, which causes two things to happen: First, the tape drive is disabled, and second, the Job is failed.

If you attempt to run another Job using the Device that has been disabled, you will get a message similar to the following:

17-Nov 15:08 rufus-sd JobId 2: Warning: Device "tape" requested by
DIR is disabled.

and the Job may be failed if no other usable drive can be found.

Once the problem with the tape drive has been corrected, you can clear the tape alerts and re-enable the device with a bconsole command such as the following:

enable Storage=Tape

Note, when you enable the device, the list of prior tape alerts for that drive will be discarded.

Since it is possible to miss tape alerts, Bacula maintains a temporary list of the last 8 alerts, and each time Bacula calls the tapealert script, it will keep up to 10 alert status codes. Normally there will only be one or two alert errors for each call to the tapealert script.

Once a drive has one or more tape alerts, they can be inspected by using the bconsole status command as follows:

status storage=Tape

which produces the following output:

Device Vtape is "tape" (/home/kern/bacula/k/regress/working/ach/drive0)
mounted with:
    Volume:
    TestVolume001
    Pool:
    Default
    Media type: tape
    Device is disabled. User command.
    Total Bytes Read=0 Blocks Read=1 Bytes/block=0
    Positioned at File=1 Block=0
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
        alert=Hard Error
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
        alert=Read Failure
    Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
        alert=Diagnostics Required

If you want to see the long message associated with each of the alerts, simply set the debug level to 10 or more and re-issue the status command:

setdebug storage=Tape level=10
status storage=Tape
...
Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
    flags=0x0 alert=The operation has stopped because an error has occurred
    while reading or writing data which the drive cannot correct. The drive had
    a hard read or write error
Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
    flags=0x0 alert=The tape is damaged or the drive is faulty. Call the tape
    drive supplier helpline. The drive can no longer read data from the tape
Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x1
    alert=The tape drive may have a fault. Check for availability of diagnostic
    information and run extended diagnostics if applicable.
    The drive may
    have had a failure which may be identified by stored diagnostic information
    or by running extended diagnostics (eg Send Diagnostic). Check the tape
    drive users manual for instructions on running extended diagnostic tests
    and retrieving diagnostic data.
    ...

The next time you enable the Device by either using bconsole or you restart the Storage Daemon, all the saved alert messages will be discarded.

Handling of Alerts

Tape Alerts numbered 7, 8, 13, 14, 20, 22, 52, 53, and 54 will cause Bacula to disable the current Volume.

Tape Alerts numbered 14, 20, 29, 30, 31, 38, and 39 will cause Bacula to disable the drive.

Please note certain tape alerts such as 14 have multiple effects (disable the Volume and disable the drive).

Multi-Tenancy Enhancements

New BWeb Management Suite Self User Restore

The BWeb Management Suite can be configured to allow authorized users to restore their own files on their own Unix or Linux system through BWeb. More information can be found in the BWeb Management Suite user’s guide.

New Console ACL Directives

By default, if a Console ACL directive is not set, Bacula will assume that the ACL list is empty. If the current Bacula configuration uses restricted Consoles and allows restore jobs, it is mandatory to configure the new directives.

Directory ACL

This directive is used to specify a list of directories that can be accessed by a restore session. Without this directive, the console cannot restore any file. Multiple directory names may be specified by separating them with commas, and/or by specifying multiple DirectoryACL directives. For example, the directive may be specified as:

DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"

With the above specification, the console can access the following files:

  • /etc/passwd

  • /etc/group

  • /home/bacula/.bashrc

  • /home/test/.ssh/config

  • /home/test/Desktop/Images/something.png

But not the following files or directories:

  • /etc/security/limits.conf

  • /home/bacula/.ssh/id_dsa.pub

  • /home/guest/something

  • /usr/bin/make

If a directory starts with a Windows pattern (ex: c:/), Bacula will automatically ignore the case when checking directories.
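Putting this together, a hedged sketch of a restricted Console resource using the Directory ACL directive (the console name and password are placeholders):

```conf
Console {
    Name = restricted-cons        # hypothetical console name
    Password = "xxx"              # placeholder password
    ...
    DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"
}
```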

UserId ACL

This directive is used to specify a list of UID/GID that can be accessed from a restore session. Without this directive, the console cannot restore any file. During the restore session, the Director will compute the restore list and will exclude files and directories that cannot be accessed. Bacula uses the LStat database field to retrieve st_mode, st_uid and st_gid information for each file and compare them with the UserId ACL elements. If a parent directory doesn’t have a proper catalog entry, access to this directory will be automatically granted.

UID/GID names are resolved with the getpwnam() function within the Director. The UID/GID mapping might be different from one system to another.

Windows systems are not compatible with the UserId ACL feature. The use of Directory ACL is required to restore Windows systems from a restricted Console.

Multiple UID/GID names may be specified by separating them with commas, and/or by specifying multiple UserId ACL directives. For example, the directive may be specified as:

UserIdACL = "bacula", "100", "100:100", ":100", "bacula:bacula"
# ls -l /home
total 28
drwx------ 45 bacula bacula 12288 Oct 24 17:05 bacula
drwx------ 45 test   test   12288 Oct 24 17:05 test
drwx--x--x 45 test2  test2  12288 Oct 24 17:05 test2
drwx------  2 root   root   16384 Aug 30 14:57 backup
-rwxr--r--  1 root   root    1024 Aug 30 14:57 afile

In the example above, if the uid of the user test is 100, the following files will be accessible:

  • bacula/*

  • test/*

  • test2/*

The directory backup will not be accessible.

Restore Job Security Enhancement

The bconsole restore command can now accept the new jobuser= and jobgroup= parameters to restrict the restore process to a given user account. Files and directories created during the restore session will be restricted.

* restore jobuser=joe jobgroup=users

The Restore Job restriction can be used on Linux and on FreeBSD. If the restore Client OS doesn’t support the needed thread-level user impersonation, the restore job will be aborted.

New Bconsole “list” Command Behavior

The bconsole list commands can now be used safely from a restricted bconsole session. The information displayed will respect the ACL configured for the Console session. For example, if a Console has access to JobA, JobB and JobC, information about JobD will not appear in the list jobs command.

Bacula Enterprise 8.6.3

New Console ACL Directives

It is now possible to configure a restricted Console to distinguish Backup and Restore jobs permissions. The Backup Client ACL can restrict backup jobs to a specific set of clients, while the Restore Client ACL can restrict restore jobs.

# cat /opt/bacula/etc/bacula-dir.conf
...

Console {
    Name = fd-cons   # Name of the FD Console
    Password = yyy
    ...
    ClientACL = localhost-fd   # everything allowed
    RestoreClientACL = test-fd # restore only
    BackupClientACL = production-fd  # backup only
}

The Client ACL directive takes precedence over the Restore Client ACL and the Backup Client ACL settings. In the resource above, this means that the bconsole linked to the named “fd-cons” will be able to run:

  • backup and restore for “localhost-fd”

  • backup for “production-fd”

  • restore for “test-fd”

At restore time, jobs for client “localhost-fd”, “test-fd” and “production-fd” will be available.

If *all* is set for Client ACL, backup and restore will be allowed for all clients, regardless of the Restore Client ACL or Backup Client ACL settings.

Bacula Enterprise 8.6.0

Client Initiated Backup

A console program such as the new tray-monitor or bconsole can now be configured to connect to a File Daemon. There are many new features available (see the New Tray Monitor section), but probably the most important one is the ability for the user to initiate a backup of her own machine. The connection established by the FD to the Director for the backup can be used by the Director for the backup, thus not only can clients (users) initiate backups, but a File Daemon that is NATed (cannot be reached by the Director) can now be backed up without using advanced tunneling techniques.

The flow of information is shown in the picture below.

Configuring Client Initiated Backup

In order to ensure security, there are a number of new directives that must be enabled in the new tray-monitor, the File Daemon and in the Director. A typical configuration might look like the following:

# cat /opt/bacula/etc/bacula-dir.conf
...
Console {
    Name = fd-cons  # Name of the FD Console
    Password = yyy

    # These commands are used by the tray-monitor, it is possible to restrict
    CommandACL = run, restore, wait, .status, .jobs, .clients
    CommandACL = .storages, .pools, .filesets, .defaults, .info
    # Adapt for your needs
    jobacl = *all*
    poolacl = *all*
    clientacl = *all*
    storageacl = *all*
    catalogacl = *all*
    filesetacl = *all*
}
# cat /opt/bacula/etc/bacula-fd.conf
...

Console { # Console to connect to the Director
    Name = fd-cons
    DIRPort = 9101
    address = localhost
    Password = "yyy"
}

Director {
    Name = remote-cons # Name of the tray monitor/bconsole
    Password = "xxx" # Password of the tray monitor/bconsole
    Remote = yes # Allow sending commands to the Console defined above
}

cat /opt/bacula/etc/bconsole-remote.conf
....

Director {
    Name = localhost-fd
    address = localhost # Specify the FD address
    DIRport = 9102 # Specify the FD port
    Password = "notused"
}

Console {
    Name = remote-cons # Name used in the auth process
    Password = "xxx"

}

cat ~/.bacula-tray-monitor.conf

Monitor {
    Name = remote-cons
    }

Client {
    Name = localhost-fd
    address = localhost # Specify the FD address
    Port = 9102 # Specify the FD port
    Password = "xxx"
    Remote = yes
}
misc/images/conf-nat.png

A more detailed description with complete examples is available in the Tray monitor chapter of this manual.

New Tray Monitor

A new tray monitor has been added to the 8.6 release, which offers the following features:

misc/images/conf-nat2.png
  • Director, File and Storage Daemon status page

  • Support for the Client Initiated Backup protocol. To use the Client Initiated Backup option from the tray monitor, the Client option “Remote” should be checked in the configuration.

  • Wizard to run new jobs

  • Display of an estimate of the number of files and the size of the next backup job

  • Ability to edit the tray monitor configuration file directly from the GUI

  • Ability to monitor a component and adapt the tray monitor task bar icon if jobs are running.

  • TLS Support

  • Better network connection handling

  • Default configuration file is stored under $HOME/.bacula-tray-monitor.conf

  • Ability to “schedule” jobs

  • Available for Linux and Windows platforms

misc/images/tray-monitor-status.png

Scheduling Jobs via the Tray Monitor

The Tray Monitor can periodically scan a specific directory configured as Command Directory and process “*.bcmd” files to find jobs to run.

The format of the “file.bcmd” command file is the following:

<component name>:<run command>
<component name>:<run command>
...

<component name> = string
<run command> = string (bconsole command line)

For example:

localhost-fd: run job=backup-localhost-fd level=full
localhost-dir: run job=BackupCatalog

A command file should contain at least one command. The component specified in the first part of the command line should be defined in the tray monitor. Once the command file is detected by the tray monitor, a popup is displayed to the user and it is possible for the user to cancel the job.

misc/images/tray-monitor-conf-fd.png
misc/images/tray-monitor-run1.png
misc/images/tray-monitor-run2.png

Command files can be created with tools such as cron or the Task Scheduler on Windows. It is possible and recommended to verify network connectivity at that time to avoid network errors:

#!/bin/sh
if ping -c 1 director &> /dev/null
then
    echo "my-dir: run job=backup" > /path/to/commands/backup.bcmd
fi

Concurrent VSS Snapshot Support

It is now possible to run multiple concurrent jobs that use VSS snapshots on the File Daemon for Microsoft Windows.

Accurate Option for Verify “Volume Data” Job

As of Bacula version 8.4.1, it has been possible to have a Verify Job configured with level=Data that will reread all records from a job and optionally check size and checksum of all files.

Starting with 8.6, it is now possible to use the accurate option to check catalog records at the same time. Using a Verify job with level=Data and accurate=yes can replace the level=VolumeToCatalog option.

For more information on how to set up a Verify Data job, see the Verify Volume Data section.

To run a Verify Job with the accurate option, it is possible to set the option in the Job definition or to use accurate=yes on the command line.

* run job=VerifyData jobid=10 accurate=yes

Single Item Restore Optimisation

Bacula version 8.6.0 can generate indexes stored in the catalog to speed up file access during a Single Item Restore session for VMWare or for Exchange. The index can be displayed in bconsole with the list filemedia command.

* list filemedia jobid=1

File Daemon Saved Messages Resource Destination

It is now possible to send the list of all saved files to a Messages resource with the saved message type. It is not recommended to send this flow of information to the Director and/or the Storage Daemon when the client is large. To avoid side effects, the all keyword doesn’t include the saved message type. The saved message type should be explicitly set.

# cat /opt/bacula/etc/bacula-fd.conf
...
Messages {
    Name = Standard
    director = mydirector-dir = all, !terminate, !restored, !saved
    append = /opt/bacula/working/bacula-fd.log = all, saved, restored
}

BWeb New Features

The 8.6 release adds some new BWeb features, such as:

  • Two sets of wizards to help users configure Copy/Migration jobs

  • A wizard to run jobs

  • SSH integration in BWeb Security Center to restart components remotely

  • Global Endpoint Deduplication Overview screen

misc/images/dedup_usage.png
misc/images/copy_job_wizard.png
misc/images/migrate_job_wizard.png
misc/images/run_job_wizard.png
misc/images/ssh_remote_commands.png

Minor Enhancements

New Bconsole .estimate Command

The new .estimate command can be used to get statistics about a job to run. The command uses the database to estimate the size and the number of files of the next job. On a PostgreSQL database, the command uses regression slope to compute values. On SQLite or MySQL, where these statistical functions are not available, the command uses a simple “average” estimation. The correlation number is given for each value.

*.estimate job=backup
level=I
nbjob=0
corrbytes=0
jobbytes=0
corrfiles=0
jobfiles=0
duration=0
job=backup
*.estimate job=backup level=F
level=F
nbjob=1
corrbytes=0
jobbytes=210937774
corrfiles=0
jobfiles=2545
duration=0
job=backup

Traceback and Lockdump

After the reception of a signal by any of the Bacula daemon binaries, traceback and lockdump information are now stored in the same file.

Bacula Enterprise 8.4.10

Plugin for Microsoft SQL Server

A plugin for Microsoft SQL Server (MSSQL) is now available. The plugin uses MSSQL advanced backup and restore features (like PITR, Log backup, Differential backup, …).

Job {
    Name = MSSQLJob
    Type = Backup
    Client = windows1
    FileSet = MSSQL
    Pool = 1Month
    Storage = File
    Level = Incremental
}
FileSet {
    Name = MSSQL
    Enable VSS = no
    Include {
        Options {
            Signature = MD5
        }
        Plugin = "mssql"
    }
}
FileSet {
    Name = MSSQL2
    Enable VSS = no
    Include {
        Options {
            Signature = MD5
        }
        Plugin = "mssql: database=production"
    }
}

Bacula Enterprise 8.4.1

Verify Volume Data

It is now possible to have a Verify Job configured with level=Data to reread all records from a job and optionally check the size and the checksum of all files.

# Verify Job definition
Job {
    Name = VerifyData
    Type = Verify
    Level = Data
    Client = 127.0.0.1-fd
    FileSet = Dummy
    Storage = File
    Messages = Standard
    Pool = Default
}
# Backup Job definition
Job {
    Name = MyBackupJob
    Type = Backup
    Client = windows1
    FileSet = MyFileSet
    Pool = 1Month
    Storage = File
}
FileSet {
    Name = MyFileSet
    Include {
        Options {
            Verify = s5
            Signature = MD5
        }
        File = /
    }
}

To run the Verify job, it is possible to use the “jobid” parameter of the run command.

*run job=VerifyData jobid=10
Run Verify Job
JobName: VerifyData
Level: Data
Client: 127.0.0.1-fd
FileSet: Dummy
Pool: Default (From Job resource)
Storage: File (From Job resource)
Verify Job: MyBackupJob.2015-11-11_09.41.55_03
Verify List: /opt/bacula/working/working/VerifyVol.bsr
When: 2015-11-11 09:47:38
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=14
...
11-Nov 09:46 my-dir JobId 13: Bacula Enterprise 8.4.1 (13Nov15):
Build OS: x86_64-unknown-linux-gnu archlinux
JobId: 14
Job: VerifyData.2015-11-11_09.46.29_03

FileSet: MyFileSet
Verify Level: Data
Client: 127.0.0.1-fd
Verify JobId: 10
Verify Job:
Start time: 11-Nov-2015 09:46:31
End time: 11-Nov-2015 09:46:32
Files Expected: 1,116
Files Examined: 1,116
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: Verify differences
SD termination status: OK
Termination: Verify Differences

The current Verify Data implementation requires specifying the correct Storage resource in the Verify job. The Storage resource can be changed with the bconsole command line and with the menu.

Bconsole list jobs Command Options

The list jobs bconsole command now accepts new command line options:

  • joberrors Display jobs with JobErrors

  • jobstatus=T Display jobs with the specified status code

  • client=client-name Display jobs for a specified client

  • order=asc/desc Change the output format of the job list. The jobs are sorted by start time and JobId, the sort can use ascending (asc) or descending (desc) order, the latter being the default.
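For example, a hypothetical invocation combining several of these options (the client name is illustrative):

```conf
*list jobs client=linux1 jobstatus=T order=asc
```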

Minor Enhancements

New Bconsole tee all Command

The “@tall” command allows logging all input and output of a console session.

*@tall /tmp/log
*st dir
...
@tall

MySQL Plugin Restore Options

It is now possible to specify the database name during a restore in the Plugin Option menu. It is still possible to use the “Where” parameter to specify the target database name.

PostgreSQL Plugin

We added a “timeout” option to the plugin command line that is set to 60s by default. Users may want to change this value when the cluster is slow to complete SQL queries used during the backup.
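For instance, a hedged sketch of a FileSet using the plugin with a raised timeout. The value 120 is illustrative, and the surrounding FileSet structure follows the standard plugin examples in this chapter:

```conf
FileSet {
  Name = PgSQL
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "postgresql: timeout=120"  # seconds; the default is 60
  }
}
```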

Bacula Enterprise 8.4

VMWare Single File Restore

It is now possible to explore VMWare virtual machines backup jobs (Full, Incremental and Differential) made with the vSphere plugin to restore individual files and directories. The Single Item Restore feature comes with both a console interface and a Management Suite specific interface. See the VMWare Single File Restore whitepaper for more information.

misc/images/bweb-vmware-sir.png

Microsoft Exchange Single MailBox Restore

It is now possible to explore Microsoft Exchange databases backups made with the VSS plugin to restore individual mailboxes. The Single Item Restore feature comes with both a console interface and a web interface. See the Exchange Single Mailbox Restore whitepaper for more information.

Bacula Enterprise 8.2.8

New Job Edit Codes %I

In various places such as RunScripts, you have now access to %I to get the JobId of the copy or migration job started by a migrate job.

Job {
    Name = Migrate-Job
    Type = Migrate
    ...
    RunAfter = "echo New JobId is %I"
}

Bacula Enterprise 8.2.2

New Job Edit Codes %E %R

In various places such as RunScripts, you have now access to %E to get the number of non-fatal errors for the current Job and %R to get the number of bytes read from disk or from the network during a job.
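A short sketch of how these edit codes might be used in a RunScript (the echo command is purely illustrative):

```conf
Job {
  Name = NightlyBackup
  Type = Backup
  ...
  RunAfter = "echo JobErrors=%E BytesRead=%R"
}
```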

Enable/Disable commands

The bconsole enable and disable commands have been extended from enabling/disabling Jobs to include Clients, Schedule, and Storage devices. Examples:

disable Job=NightlyBackup Client=Windows-fd

will disable the Job named NightlyBackup as well as the client named Windows-fd.

disable Storage=LTO-changer Drive=1

will disable the first drive in the autochanger named LTO-changer.

Please note that doing a reload command will set any values changed by the enable/disable commands back to the values in the bacula-dir.conf file.

These resources in the bacula-dir.conf file now permit the directive Enabled = yes or Enabled = no.

Bacula Enterprise 8.2

Snapshot Management

Bacula Enterprise 8.2 is now able to handle Snapshots on Linux/Unix systems. Snapshots can be automatically created and used to back up files. It is also possible to manage Snapshots from Bacula’s bconsole tool through a unique interface.

Snapshot Backends

The following Snapshot backends are supported with 8.2:

  • BTRFS

  • ZFS

  • LVM

Note

Some restrictions described in the LVM Backend Restrictions section below apply to the LVM backend.

By default, Snapshots are mounted (or directly available) under .snapshots directory on the root filesystem. (On ZFS, the default is .zfs/snapshots).

The Snapshot backend program is called bsnapshot and is available in the bacula-enterprise-snapshot package. In order to use the Snapshot Management feature, the package must be installed on the Client.

The bsnapshot program can be configured using the /opt/bacula/etc/bsnapshot.conf file. The following parameters can be adjusted in the configuration file:

  • trace=<file> Specify a trace file

  • debug=<num> Specify a debug level

  • sudo=<yes|no> Use sudo to run commands

  • disabled=<yes|no> Disable snapshot support

  • retry=<num> Configure the number of retries for some operations

  • snapshot_dir=<dirname> Use a custom name for the Snapshot directory. (.SNAPSHOT, .snapdir, etc.)

  • lvm_snapshot_size=<lvpath:size> Specify a custom snapshot size for a given LVM volume

  • mountopts=<devpath:options> Specify a custom mount option for a given device (available in 10.0.4)

# cat /opt/bacula/etc/bsnapshot.conf
trace=/tmp/snap.log
debug=10
lvm_snapshot_size=/dev/ubuntu-vg/root:5
mountopts=nouuid
mountopts=/dev/ubuntu-vg/root:nouuid,nosuid

Application Quiescing

When using Snapshots, it is very important to quiesce applications that are running on the system. The simplest way to quiesce an application is to stop it. Usually, taking the Snapshot is very fast, and the downtime is only about a couple of seconds. If downtime is not possible and/or the application provides a way to quiesce, a more advanced script can be used.

New Director Directives

The use of the Snapshot Engine on the FileDaemon is determined by the new Enable Snapshot FileSet directive. The default is no.

FileSet {
  Name = LinuxHome

  Enable Snapshot = yes

  Include {
    Options { Compression = LZO }
    File = /home
  }
}

By default, Snapshots are deleted from the Client at the end of the backup. To keep Snapshots on the Client and record them in the Catalog for a determined period, it is possible to use the Snapshot Retention directive in the Client or in the Job resource. The default value is 0 seconds. If, for a given Job, both Client and Job Snapshot Retention directives are set, the Job directive will be used.

Client {
  Name = linux1
  ...

  Snapshot Retention = 5 days
}

To automatically prune Snapshots, it is possible to use the following RunScript command:

Job {
  ...
  Client = linux1
  ...
  RunScript {
    RunsOnClient = no
    Console = "prune snapshot client=%c yes"
    RunsAfter = yes
  }
}

In RunScripts, the AfterSnapshot keyword for the RunsWhen directive will allow a command to be run just after the Snapshot creation.

AfterSnapshot is a synonym for the AfterVSS keyword.

 Job {
   ...
    RunScript {
      Command = "/etc/init.d/mysql start"
      RunsWhen = AfterSnapshot
      RunsOnClient = yes
    }
    RunScript {
      Command = "/etc/init.d/mysql stop"
      RunsWhen = Before
      RunsOnClient = yes
    }
 }

Job Output Information

Information about Snapshots is displayed in the Job output. The list of all devices used by the Snapshot Engine is displayed, and the Job summary indicates whether Snapshots were available.

JobId 3: Create Snapshot of /home/build
JobId 3: Create Snapshot of /home/build/subvol
JobId 3: Delete snapshot of /home/build
JobId 3: Delete snapshot of /home/build/subvol
...
JobId 3: Bacula 127.0.0.1-dir 8.2.0 (23Feb15):
  Build OS:             x86_64-unknown-linux-gnu archlinux
  JobId:                3
  Job:                  Incremental.2015-02-24_11.20.27_08
  Backup Level: Full
...
  Snapshot/VSS: yes
...
  Termination: Backup OK

New snapshot Bconsole Commands

The new snapshot command will display by default the following menu:

*snapshot
Snapshot choice:
     1: List snapshots in Catalog
     2: List snapshots on Client
     3: Prune snapshots
     4: Delete snapshot
     5: Update snapshot parameters
     6: Update catalog with Client snapshots
     7: Done
Select action to perform on Snapshot Engine (1-7):

The snapshot command can also have the following parameters:

[client=<client-name> | job=<job-name> | jobid=<jobid>]
 [delete | list | listclient | prune | sync | update]

It is also possible to use the traditional list, llist, update, prune or delete commands on Snapshots.

*llist snapshot jobid=5
 snapshotid: 1
 name: NightlySave.2015-02-24_12.01.00_04
 createdate: 2015-02-24 12:01:03
 client: 127.0.0.1-fd
 fileset: Full Set
 jobid: 5
 volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
 device: /home/btrfs
 type: btrfs
 retention: 30
 comment:
* snapshot listclient
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102
Snapshot      NightlySave.2015-02-24_12.01.00_04:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
  Device:     /home
  CreateDate: 2015-02-24 12:01:03
  Type:       btrfs
  Status:     OK
  Error:

With the Update catalog with Client snapshots option (or snapshot sync), the Director contacts the FileDaemon, lists snapshots of the system and creates catalog records of the Snapshots.

*snapshot sync
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102
Snapshot        NightlySave.2015-02-24_12.35.47_06:
  Volume:       /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
  Device:       /home
  CreateDate:   2015-02-24 12:35:47
  Type:         btrfs
  Status:       OK
  Error:
Snapshot added in Catalog

*llist snapshot
 snapshotid: 13
 name:       NightlySave.2015-02-24_12.35.47_06
 createdate: 2015-02-24 12:35:47
 client:     127.0.0.1-fd
 fileset:
 jobid: 0
 volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
 device: /home
 type: btrfs
 retention: 0
 comment:

LVM Backend Restrictions

LVM Snapshots are quite primitive compared to ZFS, BTRFS, NetApp and other systems. For example, it is not possible to use Snapshots if the VG is full. The administrator must keep some free space in the VG to create Snapshots. The amount of free space required depends on the activity of the LV. bsnapshot uses 10% of the LV by default. This number can be configured per LV in the bsnapshot.conf file.

[root@system1]# vgdisplay
  --- Volume group ---
  VG Name              vg_ssd
...
  VG Size              29,81 GiB
...
  Alloc PE / Size      125 / 500,00 MiB
  Free PE / Size       7507 / 29,32 GiB   <--- Free Space
...
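The per-LV snapshot size mentioned above is set in bsnapshot.conf. A minimal sketch (the file path, LV name, and percentage below are examples, not defaults; check the bsnapshot.conf documentation shipped with your version for the exact keys it supports):

```
# /opt/bacula/etc/bsnapshot.conf (example values)
trace=/tmp/bsnapshot.log
# reserve 5% of the VG free space for snapshots of this LV
lvm_snapshot_size=/dev/vg_ssd/lv_home:5%
```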

It is also not advisable to leave snapshots on the LVM backend. Having multiple snapshots of the same LV on LVM will slow down the system.

Only the ext3, ext4 and XFS filesystems are supported with the Snapshot LVM backend.

Note

XFS and EXT3 are available in 8.2.7 and later.

Debug Options

To get low level information about the Snapshot Engine, the debug tag “snapshot” should be used in the setdebug command.

* setdebug level=10 tags=snapshot client
* setdebug level=10 tags=snapshot dir

Bacula Enterprise 8.0

Global Endpoint Deduplication

The Global Endpoint Deduplication solution minimizes network transfers and Bacula Volume size using deduplication technology.

The new Global Endpoint Deduplication Storage daemon directives are:

Device Type = Dedup sets the Storage device for deduplication. Deduplication is performed only on disk volumes.

Dedup Directory = this directive specifies where the deduplicated blocks will be stored. Blocks that are deduplicated will be placed in this directory rather than in the Bacula Volume, which will only contain a reference pointer to the deduplicated blocks.

Dedup Index Directory = in addition to the deduplicated blocks, when deduplication is enabled the Storage daemon keeps an index of the deduplicated block locations. This index is consulted frequently during the deduplication backup process, so it should be placed on the fastest device possible (e.g. an SSD).

See below for a FileSet example using the new dedup directive.

Configuration Example

In the Storage Daemon configuration file, you must define a Device with DeviceType = Dedup. It is also possible to configure where the Storage Daemon will store blocks and indexes. Blocks are stored in the Dedup Directory; this directory is common to all Dedup devices and should have a large amount of free space. Indexes are stored in the Dedup Index Directory; indexes receive a lot of random update access and can benefit from SSD drives.

# from bacula-sd.conf
Storage {
    Name = my-sd
    Working Directory = /opt/bacula/working
    Pid Directory = /opt/bacula/working
    Plugin Directory = /opt/bacula/plugins
    Dedup Directory = /opt/bacula/dedup
    Dedup Index Directory = /opt/bacula/ssd   # defaults to the Dedup Directory
}
Device {
    Name = DedupDisk
    Archive Device = /opt/bacula/storage
    Media Type = DedupVolume
    Label Media = yes
    Random Access = yes
    Automatic Mount = yes
    Removable Media = no
    Always Open = no
    Device Type = Dedup  # Required
}

The Global Endpoint Deduplication Client cache system can speed up restore jobs by getting blocks from the local client disk instead of requesting them over the network. Note that if blocks are not available locally, the FileDaemon will get blocks from the Storage Daemon. This feature can be enabled with the Dedup Index Directory directive in the FileDaemon resource. When using this option, the File Daemon will have to maintain the cache during Backup jobs.

# from bacula-fd.conf
FileDaemon {
    Name = my-fd
    Working Directory = /opt/bacula/working
    Pid Directory = /opt/bacula/working
    # Optional, Keep indexes on the client for faster restores
    Dedup Index Directory = /opt/bacula/dedupindex
}

It is possible to configure the Global Endpoint Deduplication system in the Director with a FileSet directive called Dedup. Each FileSet Include section can specify a different deduplication behavior depending on your needs.

FileSet {
    Name = FS_BASE
    # Send everything to the Storage Daemon as usual
    # and let the Storage Daemon do the deduplication
    Include {
        Options {
            Dedup = storage
        }
        File = /opt/bacula/etc
    }
    # Send only references and new blocks to the Storage Daemon
    Include {
        Options {
            Dedup = bothsides
        }
        File = /VirtualBox
    }
    # Do not try to dedup my encrypted directory
    Include {
        Options {
            Dedup = none
        }
        File = /encrypted
    }
}

The FileSet Dedup directive accepts the following values:

  • storage All the deduplication work is done on the SD side if the device type is dedup (default value). This option is useful if you want to avoid the extra client-side disk space overhead that will occur with the bothsides option.

  • none Force the FD and SD to not use deduplication.

  • bothsides The deduplication work is done on both the FD and the SD. Only references and new blocks will be transferred over the network.

Storage Daemon to Storage Daemon

Version 8.0 now permits SD-to-SD transfer of Copy and Migration Jobs. This permits what is commonly referred to as replication or off-site transfer of Bacula backups. It occurs automatically if the source SD and destination SD of a Copy or Migration job are different; the SD-to-SD transfers need no additional configuration directives. The following picture shows how this works.

misc/images/sd-to-sd.png

Windows Mountpoint Support

Version 8.0 is now able to detect Windows mountpoints and include volumes automatically in the VSS snapshot set. To back up all local disks on a Windows server, the following FileSet is now accepted. It deprecates the alldrives plugin.

FileSet {
    Name = "All Drives"
    Include {
        Options {
            Signature = MD5
        }
        File = /
    }
}

If you have mountpoints, the onefs=no option should be used as it is with Unix systems.

FileSet {
    Name = "All Drives with mountpoints"
    Include {
        Options {
            Signature = MD5
            OneFS = no
        }
    File = C:/
    # will include mountpoint C:/mounted/...
    }
}

To exclude a mountpoint from a backup when OneFS = no, use the Exclude block as usual:

FileSet {
    Name = "All Drives with mountpoints"
    Include {
        Options {
            Signature = MD5
            OneFS = no
        }
        File = C:/      # will include all mounted mountpoints under C:/
                        # including C:/mounted (see Exclude below)
    }
    Exclude {
        File = C:/mounted
    }
}
# will not include C:/mounted

SD Calls Client

If the SD Calls Client directive is set to true in a Client resource, then for any Backup, Restore, or Verify Job involving that client, the client will wait for the Storage daemon to contact it. By default this directive is set to false, and the Client will call the Storage daemon as it always has. This directive can be useful if your Storage daemon is behind a firewall that permits outgoing connections but not incoming ones. The picture shows the communications connection paths in both cases.

misc/images/sd-calls-client.png
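A sketch of how this might look in the Director configuration (the resource name, address, and password below are placeholders):

```
# bacula-dir.conf
Client {
    Name = roaming-fd
    Address = client.example.com
    Password = "xxx"
    # The SD will open the connection to this client,
    # instead of the client calling the SD
    SD Calls Client = yes
}
```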

Data Encryption Cipher Configuration

Version 8.0 and later allow configuration of the data encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128, but it is now possible to choose between the following ciphers:

  • AES128 (default)

  • AES192

  • AES256

  • blowfish

The digest algorithm is set to SHA1 or SHA256 depending on the local OpenSSL options. We advise you not to modify the PkiDigest default setting. Please refer to the OpenSSL documentation to understand the pros and cons of these options.

FileDaemon {
    ...
    PkiCipher = AES256
}

Minor Enhancements

New Option Letter “M” for Accurate Directive in FileSet

Added in version 8.0.5, the new “M” option letter for the Accurate directive in the FileSet Options block allows comparing the modification time and/or creation time against the last backup timestamp. This is in contrast to the existing option letters “m” and/or “c” (mtime and ctime), which are checked against the stored catalog values, which can vary across different machines when using the BaseJob feature.

The advantage of the new “M” option letter for Jobs that refer to BaseJobs is that it will instruct Bacula to backup files based on the last backup time, which is more useful because the mtime/ctime timestamps may differ on various Clients, causing files to be needlessly backed up.

Job {
    Name = USR
    Level = Base
    FileSet = BaseFS
    ...
}
Job {
    Name = Full
    FileSet = FullFS
    Base = USR
    ...
}
FileSet {
    Name = BaseFS
    Include {
        Options {
            Signature = MD5
        }
        File = /usr
    }
}
FileSet {
    Name = FullFS
    Include {
        Options {
            Accurate = Ms
            Signature = MD5
        }
        File = /home
        File = /usr
    }
}

.api version 2

In version 8.0 and later, we introduced a new .api version to help external tools to parse various Bacula bconsole output.

The api_opts option can use the following arguments:

C Clear current options

tn Use a specific time format (1 ISO format, 2 Unix Timestamp, 3 Default Bacula time format)

sn Use a specific separator between items (new line by default).

Sn Use a specific separator between objects (new line by default).

o Convert all keywords to lowercase and convert all non-isalpha characters to _

.api 2 api_opts=t1s43S35
.status dir running
==================================
jobid=10
job=AJob
...
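As a sketch of how an external tool might consume this output, the hypothetical Python helper below (not part of Bacula) splits records into dictionaries, assuming api_opts requested chr(43) '+' between items and chr(35) '#' between objects (i.e. s43S35):

```python
def parse_api2(output, item_sep="+", obj_sep="#"):
    """Split bconsole .api 2 output into a list of dicts.

    Assumes the separators were set with api_opts=s43S35,
    i.e. '+' between items and '#' between objects."""
    records = []
    for obj in output.split(obj_sep):
        record = {}
        for item in obj.split(item_sep):
            key, sep, value = item.partition("=")
            if sep:  # keep only well-formed key=value items
                record[key.strip()] = value.strip()
        if record:
            records.append(record)
    return records
```

For example, `parse_api2("jobid=10+job=AJob#jobid=11+job=BJob")` yields one dictionary per job object.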

New Debug Options

In version 8.0 and later, we introduced a new options parameter for the setdebug bconsole command.

The following arguments to the new option parameter are available to control debug functions.

0 Clear debug flags

i Turn off, ignore bwrite() errors on restore on File Daemon

d Turn off decomp of BackupRead() streams on File Daemon

t Turn on timestamps in traces

T Turn off timestamps in traces

c Truncate trace file if trace file is activated

I Turn on recording events on P() and V()

p Turn on the display of the event ring when doing a backtrace

The following command will enable debugging for the File Daemon, truncate an existing trace file, and turn on timestamps when writing to the trace file.

* setdebug level=10 trace=1 options=ct fd

It is now possible to use a class of debug messages called tags to control the debug output of Bacula daemons.

all Display all debug messages

bvfs Display BVFS debug messages

sql Display SQL related debug messages

memory Display memory and poolmem allocation messages

scheduler Display scheduler related debug messages

* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs
# bacula-dir -t -d 200,bvfs,sql

The tags option is composed of a list of tags. Tags are separated by “,” or “+” or “-” or “!”. To disable a specific tag, use “-” or “!” in front of the tag. Note that more tags are planned for future versions.

Component   Tag         Debug Level   Comment
---------   ---------   -----------   ----------------------------------------
director    scheduler   100           information about job queue management
director    scheduler   20            information about resources in job queue
director    bvfs        10            information about bvfs
director    sql         15            information about bvfs queries
all         memory      40-60         information about smartalloc

Bacula Enterprise 6.6.0

Communication Line Compression

Version 6.6.0 and later include communication line compression. It is turned on by default: if the two communicating Bacula components (DIR, FD, SD, bconsole) are both version 6.6.0 or greater, communication line compression will be enabled. If for some reason you do not want communication line compression, you may disable it with the following directive:

Comm Compression = no

This directive can appear in the following resources:

  • bacula-dir.conf: Director resource

  • bacula-fd.conf: Client (or FileDaemon) resource

  • bacula-sd.conf: Storage resource

  • bconsole.conf: Console resource

  • bat.conf: Console resource

In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is enabled. In the case that the compression is not effective, Bacula turns it off on a record by record basis.

If you are backing up data that is already compressed, the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, Bacula reports None in the Job report.
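For example, to turn compression off for a single client, the directive could be placed in that client's File Daemon resource (the daemon name below is a placeholder):

```
# bacula-fd.conf
FileDaemon {
    Name = my-fd
    Comm Compression = no   # disable communication line compression
}
```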

Read Only Storage Devices

This version of Bacula allows you to define a Storage daemon device to be read-only. If the Read Only directive is specified and enabled, the drive can only be used for read operations. The Read Only directive can be defined in any bacula-sd.conf Device resource, and is most useful for reserving one or more drives for restores. An example is:

Read Only = yes
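In context, a restore-only drive definition might look like this (the device name, media type, and archive device below are placeholders):

```
# bacula-sd.conf
Device {
    Name = RestoreDrive
    Media Type = LTO-6
    Archive Device = /dev/nst1
    Read Only = yes   # reserve this drive for read (restore) operations
}
```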

Catalog Performance Improvements

There is a new Bacula database format (schema) in this version of Bacula that eliminates the FileName table by placing the Filename into the File record of the File table. This substantially improves performance, particularly for large (1GB or greater) databases.

The update_xxx_catalog script will automatically update the Bacula database format, but you should realize that for very large databases (greater than 1GB), it may take some time, and there are several different options for doing the update:

  1. Shut down the database and update it

  2. Update the database while production jobs are running.

See the Bacula Systems White Paper “Migration-to-6.6” on this subject.

This database format change can provide very significant improvements in the speed of metadata insertion into the database, and in some cases (backup of large email servers) can significantly reduce the size of the database.

Plugin Restore Options

This version of Bacula permits user configuration of Plugins at restore time. For example, it is now possible to choose the datastore where your VMware image will be restored, or to choose pg_restore options directly. See specific Plugin whitepapers for more information about new restore options.

The restore options, if implemented in a plugin, will be presented to you during initiation of a restore, either on the command line or, if available, in a GUI. For examples of the command line interface and the GUI interface, please see below:

*run restore jobid=11766
Run Restore job
JobName:
RestoreFiles
Bootstrap: /tmp/regress/working/my-dir.restore.1.bsr
Where: /tmp/regress/tmp/bacula-restores
...
Plugin Options: *None*
OK to run? (yes/mod/no): mod
Parameters to modify:
    1: Level
    ...
    13: Plugin Options
Select parameter to modify (1-13): 13

Automatically selected : vsphere: host=squeeze2
Plugin Restore Options
datastore:
*None*
restore_host:
*None*
new_hostname:
*None*
Use above plugin configuration? (yes/mod/no): mod
You have the following choices:
    1: datastore (Datastore to use for restore)
    2: restore_host (ESX host to use for restore)
    3: new_hostname (Restore host to specified name)
Select parameter to modify (1-3): 3
Please enter a value for new_hostname: test
Plugin Restore Options
datastore:
*None*
restore_host:
*None*
new_hostname:
test
Use above plugin configuration? (yes/mod/no): yes

Or via the restore interface (see figure).

Alldrives Plugin Improvements

The alldrives plugin simplifies the FileSet creation of Windows Clients by automatically generating a FileSet which includes all local drives.

The alldrives plugin now accepts the snapshot option that generates snapshots for all local Windows drives, but without explicitly adding them to the FileSet. It may be combined with the VSS plugin. For example:

FileSet {
    ...
    Include {
        Plugin = "vss:/@MSSQL/"
        Plugin = "alldrives: snapshot"    # should be placed after vss plugin
    }
}

New Truncate Command

We have added a new truncate command to bconsole which will truncate a volume if the volume is purged, and if the volume is also marked Action On Purge = Truncate. This feature was originally added in Bacula version 5.0.1, but the mechanism for actually doing the truncate required the user to enter a complicated command such as:

purge volume action=truncate storage=File pool=Default

The above command is now simplified to be:

truncate storage=File pool=Default
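As with the prune command shown elsewhere in this chapter, the truncate command could be scheduled from an Admin job via a RunScript Console directive. A sketch (the job name is a placeholder, and the trailing yes is assumed here to auto-confirm the command, as in the prune example):

```
Job {
    Name = TruncatePurged
    Type = Admin
    ...
    RunScript {
        Console = "truncate storage=File pool=Default yes"
        RunsWhen = Before
    }
}
```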

Bacula Enterprise 6.4.x

The following features were added during the 6.4.x life cycle.

SAP Plugin

The SAP Plugin is designed to implement the official SAP Backint interface to simplify the backup and restore procedure through your traditional SAP database tools. See SAP-Backint whitepaper for more information.

Oracle SBT Plugin

By default, the Oracle backup manager, RMAN, sends all backups to an operating system specific directory on disk. You can also configure RMAN to make backups to media such as tape using the SBT module. Bacula will act as Media Manager, and the data will be transferred directly from RMAN to Bacula. See the Oracle Plugin whitepaper for more information.

MySQL Plugin

The MySQL plugin is designed to simplify the backup and restore of your MySQL database; the backup administrator doesn’t need to know about the internals of MySQL backup techniques or how to write complex scripts. This plugin will automatically back up essential information such as configurations and user definitions. The MySQL plugin supports both dump (with support for Incremental backup) and binary backup techniques. See the MySQL Plugin whitepaper for more information.

Bacula Enterprise 6.4.0

Deduplication Optimized Volumes

This version of Bacula includes a new alternative (or additional) volume format that optimizes the placement of files so that an underlying deduplicating filesystem such as ZFS can optimally deduplicate the backup data that is written by Bacula. These are called Deduplication Optimized Volumes or Aligned Volumes for short. The details of how to use this feature and its considerations are in the Bacula Systems Deduplication Optimized Volumes whitepaper.

Migration/Copy/VirtualFull Performance Enhancements

The Bacula Storage daemon now permits multiple jobs to simultaneously read from the same disk volume which gives substantial performance enhancements when running Migration, Copy, or VirtualFull jobs that read disk volumes. Our testing shows that when running multiple simultaneous jobs, the jobs can finish up to ten times faster with this version of Bacula. This is built-in to the Storage daemon, so it happens automatically and transparently.

VirtualFull Backup Consolidation Enhancements

By default Bacula selects jobs automatically for a VirtualFull backup. However, you may want to create the virtual backup based on a particular backup (point in time) that exists.

For example, if you have the following backup Jobs in your catalog:

JobId   Name      Level   JobFiles   JobBytes    JobStatus
1       Vbackup   F       1754       50118554    T
2       Vbackup   I       1          4           T
3       Vbackup   I       1          4           T
4       Vbackup   D       2          8           T
5       Vbackup   I       1          6           T
6       Vbackup   I       10         60          T
7       Vbackup   I       11         65          T
8       Save      F       1758       50118564    T

and you want to consolidate only the first three jobs and create a virtual backup equivalent to Job 1 + Job 2 + Job 3, you will use jobid=3 in the run command; Bacula will then select the previous Full backup, the previous Differential (if any), and all subsequent Incremental jobs.

run job=Vbackup jobid=3 level=VirtualFull

If you want to consolidate a specific job list, you must specify the exact list of jobs to merge in the run command line. For example, to consolidate the last Differential and all subsequent Incrementals, you will use jobid=4,5,6,7 or jobid=4-7 on the run command line. Because one of the Jobs in the list is a Differential backup, Bacula will set the new job level to Differential. If the list is composed of only Incremental jobs, the new job will have its level set to Incremental.

run job=Vbackup jobid=4-7 level=VirtualFull

When using this feature, Bacula will automatically discard jobs that are not related to the current Job. For example, specifying jobid=7,8, Bacula will discard JobId 8 because it is not part of the same backup Job.

We do not recommend it, but if you really want to consolidate jobs that have different names (so probably different clients, filesets, etc…), you must use alljobid= keyword instead of jobid=.

run job=Vbackup alljobid=1-3,6-8 level=VirtualFull

New Prune “Expired” Volume Command

In 6.4, it is now possible to prune all volumes (from a pool, or globally) that are “expired”. This option can be scheduled after or before the backup of the catalog, and can be combined with the Truncate On Purge option. The prune expired volume command may be used instead of the manual_prune.pl script.

* prune expired volume

* prune expired volume pool=FullPool

To schedule this option automatically, it can be added to the Catalog backup job definition.

Job {
    Name = CatalogBackup
    ...
    RunScript {
        Console = "prune expired volume yes"
        RunsWhen = Before
    }
}

Bacula Enterprise 6.2.3

New Job Edit Codes %P %C

In various places such as RunScripts, you now have access to %P to get the current Bacula process ID (PID) and %C to know whether the current job is a cloned job.
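For instance, a RunScript could log both values (a sketch; the echo command stands in for a real script):

```
Job {
    ...
    RunScript {
        RunsWhen = After
        RunsOnClient = no
        # %P expands to the process ID, %C to the cloned-job flag
        Command = "/bin/echo pid=%P cloned=%C"
    }
}
```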

Bacula Enterprise 6.2.0

BWeb Bacula Configuration GUI

In Bacula Enterprise version 6.2, the BWeb Management Suite integrates a Bacula configuration GUI module which is designed to help you create and modify the Bacula configuration files such as bacula-dir.conf, bacula-sd.conf, bacula-fd.conf and bconsole.conf.

The BWeb Management Suite offers a number of Wizards which support the Administrator in his daily work. The wizards provide a step by step set of required actions that graphically guide the Administrator to perform quick and easy creation and modification of configuration files.

BWeb also provides diagnostic tools that enable the Administrator to check that the Catalog Database is well configured and installed properly.

The new Online help mode displays automatic help text suggestions when the user searches data types.

misc/images/bweb-config-screen.png

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Performance Improvements

Bacula Enterprise 6.2 has a number of new performance improvements:

  • An improved way of storing Bacula Resources (as defined in the .conf files). This new handling permits much faster loading or reloading of the conf files, and permits larger numbers of resources.

  • Improved performance when inserting large numbers of files in the DB catalog by breaking the insertion into smaller chunks, thus allowing better sharing when running multiple simultaneous jobs.

  • Performance enhancements in BVFS concerning eliminating duplicate path records.

  • Performance improvement when getting Pool records.

  • Pruning performance enhancements.

Enhanced Status and Error Messages

We have enhanced the Storage daemon status output to be more readable. This is important when there are a large number of devices. In addition to formatting changes, it also includes more details on which devices are reading and writing.

A number of error messages have been enhanced to have more specific data on what went wrong.

If a file changes size while being backed up the old and new size are reported.

WinBMR 3

The Windows BMR plugin enables you to do safe, reliable Disaster Recovery on Windows and allows you to get critical systems up and running again quickly. The Windows BMR is a toolkit that allows the Administrator to perform the restore of a complete operating system to the same or similar hardware without actually going through the operating system’s installation procedure.

The WinBMR 3 version is a major rewrite of the product that supports all x86 Windows versions and technologies, especially UEFI and Secure Boot systems. The WinBMR 3 File Daemon plugin is now part of the plugins included with the Bacula File Daemon package. The rescue CD or USB key is available separately.

Miscellaneous New Features

  • Allow unlimited line lengths in .conf files (previously limited to 2000 characters).

  • Allow /dev/null in ChangerCommand to indicate a Virtual Autochanger.

  • Add a --fileprune option to the manual_prune.pl script.

  • Add a -m option to make_catalog_backup.pl to do maintenance on the catalog.

  • Safer code that cleans up the working directory when starting the daemons. It limits what files can be deleted, hence enhances security.

  • Added a new .ls command in bconsole to permit browsing a client’s filesystem.

  • Fixed a number of bugs, including some obscure seg faults and a race condition that occurred infrequently when running Copy, Migration, or Virtual Full backups.

  • Included a new vSphere library version, which will hopefully fix some of the more obscure bugs.

  • Upgraded to a newer version of Qt4 for BAT. All indications are that this will improve BAT’s stability on Windows machines.

  • The Windows installers now detect and refuse to install on an OS that does not match the 32/64 bit value of the installer.

Bacula Enterprise 6.0.0

Incremental/Differential Block Level Difference Backup

The new Delta Plugin is able to compute and apply signature-based file differences. It can be used to back up only the changes in a big binary file such as an Outlook PST, VirtualBox/VMware images, or database files.

It supports both Incremental and Differential backups and stores its signature database in the File Daemon working directory. This plugin is available on all platforms, including Windows 32 and 64 bit.

The Accurate option should be turned on in the Job resource.

Job {
    Accurate = yes
    FileSet = DeltaFS
    ...
}
FileSet {
    Name = DeltaFS
    ...
    Include {
        # Specify one file
        Plugin = "delta:/home/eric/.VirtualBox/HardDisks/lenny-i386.vdi"
    }
}
FileSet {
    Name = DeltaFS-Include
    ...
    Include {
        Options {
            Compression = GZIP1
            Signature = MD5
            Plugin = delta
        }
        # Use the Options{} filtering and options
        File = /home/user/.VirtualBox
    }
}

Please contact Bacula Systems support to get Delta Plugin specific documentation.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

SAN Shared Tape Storage Plugin

The problem with backing up multiple servers at the same time to the same tape library (or autoloader) is that if both servers access the same tape drive at the same time, you will very likely get data corruption. This is where the Bacula Systems shared tape storage plugin comes into play. The plugin ensures that only one server at a time can connect to each device (tape drive) by using the SPC-3 SCSI reservation protocol. Please contact Bacula Systems support to get SAN Shared Storage Plugin specific documentation.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Advanced Autochanger Usage

The new Shared Storage Director directive is a feature that allows you to share volumes between different Storage resources. This directive should be used only if all Media Types are correctly set across all Devices.

The Shared Storage directive should be used when using the SAN Shared Storage plugin, or when the Director’s Storage resources access the Devices of an Autochanger directly.

When sharing volumes between different Storage resources, you will need also to use the reset-storageid script before using the update slots command. This script can be scheduled once a day in an Admin job.

$ /opt/bacula/scripts/reset-storageid MediaType StorageName
$ bconsole
* update slots storage=StorageName drive=0

Please contact Bacula Systems support to get help on this advanced configuration.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

The reset-storageid procedure is no longer required when using the appropriate Autochanger configuration on the Director side.

Enhancement of the NDMP Plugin

The previous NDMP Plugin 4.0 fully supported only NetApp hardware; the new NDMP Plugin should now be able to support all NAS vendors thanks to the volume_format plugin command option.

On some NDMP devices such as Celera or Blueray, the administrator can use arbitrary volume structure names, for example:

/dev/volume_home
/rootvolume/volume_tmp
/VG/volume_var

The NDMP plugin should be aware of the structure organization in order to detect if the administrator wants to restore in a new volume (where=/dev/vol_tmp) or inside a subdirectory of the targeted volume (where=/tmp).

FileSet {
    Name = NDMPFS
    ...
    Include {
        Plugin = "ndmp:host=nasbox user=root pass=root file=/dev/vol1 volume_format=/dev/"
    }
}

Please contact Bacula Systems support to get NDMP Plugin specific documentation.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Always Backup a File

When Accurate mode is turned on, you can decide to always back up a file by using the new A Accurate option in your FileSet. For example:

Job {
    Name = ...
    FileSet = FS_Example
    Accurate = yes
    ...
}
FileSet {
    Name = FS_Example
    Include {
        Options {
            Accurate = A
        }
    File = /file
    File = /file2
    }
...
}

This project was funded by Bacula Systems based on an idea of James Harper and is available with the Bacula Enterprise Edition.

Setting Accurate Mode at Runtime

You are now able to specify the Accurate mode on the run command and in the Schedule resource.

* run accurate=yes job=Test
Schedule {
    Name = WeeklyCycle
    Run = Full 1st sun at 23:05
    Run = Differential accurate=yes 2nd-5th sun at 23:05
    Run = Incremental accurate=no mon-sat at 23:05
}

It can allow you to save memory and CPU resources on the catalog server in some cases.

These advanced tuning options are available with the Bacula Enterprise Edition.

Additions to RunScript variables

You now have access to JobBytes, JobFiles, and the Director name using %b, %F, and %D in your RunScript command. The Client address is now available through %h.

RunAfterJob = "/bin/echo JobBytes=%b JobFiles=%F Director=%D Client=%h"

LZO Compression

LZO compression has been added to the Unix File Daemon. From the user’s point of view, it works like GZIP compression (just replace compression=GZIP with compression=LZO).

For example:

Include {
    Options { compression=LZO }
    File = /home
    File = /data
}

LZO provides much faster compression and decompression but a lower compression ratio than GZIP. It is a good option when you back up to disk. For tape, the built-in drive compression may be a better option.

LZO is a good alternative to GZIP1 when you don’t want to slow down your backup. On a modern CPU it should be able to run almost as fast as:

  • your client can read data from disk, unless you have very fast disks like SSDs or a large/fast RAID array;

  • the data transfers between the File Daemon and the Storage Daemon, even on a 1Gb/s link.

Note that Bacula only uses one compression level, LZO1X-1.

The code for this feature was contributed by Laurent Papier.

New Tray Monitor

Since the old integrated Windows tray monitor doesn’t work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.

[Screenshot: tray-monitor.png]
[Screenshot: tray-monitor1.png]

To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:

Console {
    Name = win2003-mon
    Password = "xxx"
    CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
    ClientACL = *all*
    # you can restrict to a specific host
    CatalogACL = *all*
    JobACL = *all*
    StorageACL = *all*
    ScheduleACL = *all*
    PoolACL = *all*
    FileSetACL = *all*
    WhereACL = *all*
}

This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Bacula Community Edition.

Purge Migration Job

The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated will be purged at the end of the Migration Job.

For example:

Job {
    Name = "migrate-job"
    Type = Migrate
    Level = Full
    Client = localhost-fd
    FileSet = "Full Set"
    Messages = Standard
    Storage = DiskChanger
    Pool = Default
    Selection Type = Job
    Selection Pattern = ".*Save"
    ...
    Purge Migration Job = yes
}

This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.

Changes in the Pruning Algorithm

We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. This should no longer be the case. In addition, Bacula will no longer automatically prune a Job if that particular Job is needed to restore data. Example:

JobId: 1 Level: Full
JobId: 2 Level: Incremental
JobId: 3 Level: Incremental
JobId: 4 Level: Differential
.. Other incrementals up to now

In this example, if the Job Retention defined in the Pool or in the Client resource would allow Jobs with JobIds 1 to 4 to be pruned, Bacula will detect that JobIds 1 and 4 are essential to restore data to the current state, and will prune only JobIds 2 and 3.

Important: this change affects only the automatic pruning step after a Job and the prune jobs bconsole command. If a volume expires after the VolumeRetention period, important jobs can still be pruned.
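The retention period driving this pruning is set in the Pool or Client resource. A minimal sketch, with illustrative names and values:

Client {
    Name = localhost-fd
    ...
    # Jobs older than this become candidates for pruning
    Job Retention = 30 days
    # Let Bacula apply the new pruning algorithm automatically
    AutoPrune = yes
}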

Ability to Verify any specified Job

You now have the ability to tell Bacula which Job it should verify, instead of it automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.

To verify a given Job, just specify its JobId as an argument when starting the verify Job.

*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName: VerifyVolume
Level: VolumeToCatalog
Client: 127.0.0.1-fd
FileSet: Full Set
Pool: Default (From Job resource)
Storage: File (From Job resource)
Verify Job: VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When: 2010-09-08 14:17:31
Priority: 10
OK to run? (yes/mod/no):