File Spooling
Bacula Enterprise Only
This solution is only available for Bacula Enterprise. For subscription inquiries, please reach out to sales@baculasystems.com.
In general, this plugin backs up two types of information:
Metadata
Files
Metadata is information associated with the objects, including the information represented by S3 bucket and object ACLs.
While metadata is streamed directly from the cloud source to the backup engine, files must be downloaded to the FD host before being stored. This allows the plugin to perform some checks and improves overall performance, since download operations can run in parallel. Each file is removed immediately after it has been completely downloaded and sent to the backup engine.
The path used for file spooling is configured with the ‘path’ plugin variable, which by default is set in the s3_backend configuration file to the value /opt/bacula/working. It may be adjusted as needed.
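For reference, a minimal sketch of the relevant entry in the s3_backend configuration file (the key=value syntax shown here is an assumption; check your installed configuration file for the exact format):

```
# Directory used for temporary file spooling during backup
path=/opt/bacula/working
```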
Under the path directory, a ‘spool’ subdirectory is created and used for the temporary download processes.
Therefore, at least enough free disk space must be available to hold the largest file in the backup session. If jobs run concurrently, or a single job uses concurrency (the default, since the ‘concurrent_threads’ parameter is set to 5), you need the size of the largest file multiplied by the number of operations that will run in parallel.
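The sizing rule above can be sketched as a small calculation. The helper name and example figures below are illustrative, not part of the plugin:

```python
# Estimate the minimum spool disk space needed for a backup session,
# given the largest file size and the number of parallel download
# operations (the 'concurrent_threads' parameter, 5 by default).

def min_spool_space(largest_file_bytes: int, concurrent_threads: int = 5) -> int:
    """Worst case: every thread is downloading a largest-size file at once."""
    return largest_file_bytes * concurrent_threads

# Example: with a 10 GiB largest object and the default 5 threads,
# the spool directory must hold at least 50 GiB.
largest = 10 * 1024**3
print(min_spool_space(largest))  # 53687091200 bytes (50 GiB)
```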