Performance
The performance of this plugin depends heavily on many external factors:

- Exchange latency and bandwidth
- Network infrastructure
- FD host hardware
- FD load
- The ratio between the number of elements and their size
- And many more.
In summary, it is not possible to give an exact estimate of how long a backup will take to complete.
As a reference, regarding the number of elements and their size:

- Many small objects to protect: more objects per second, but lower throughput (MB/s).
- Large files to protect: fewer objects per second, but higher throughput (MB/s).
It is recommended to benchmark your own environment against your particular requirements and needs.
The automatic parallelization mechanism (using concurrent_threads=x) should work well for most scenarios. However, fine-tuning is possible: for example, you can define one job per user and control how many of those jobs run in parallel, while also decreasing the concurrent_threads value, in order to avoid throttling or Exchange server capacity problems.
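As a sketch of that fine-tuning approach, the hypothetical bacula-dir.conf fragment below defines one job per user, lowers concurrent_threads, and caps how many of these jobs run at once. The plugin string, its parameter names (other than concurrent_threads), and all resource names are illustrative assumptions; check your plugin documentation for the exact syntax:

```
# Hypothetical sketch: one FileSet/Job pair per user.
# Plugin string and parameter names are illustrative only.
FileSet {
  Name = "fs-exchange-user1"
  Include {
    Options { Signature = MD5 }
    # Lower concurrent_threads to reduce pressure on the Exchange server
    Plugin = "exchange: user=user1@example.com concurrent_threads=2"
  }
}

Job {
  Name = "backup-exchange-user1"
  Type = Backup
  Client = exchange-fd
  FileSet = "fs-exchange-user1"
  Storage = File1
  Pool = Default
  Messages = Standard
  # Limit how many of these per-user jobs may run simultaneously
  Maximum Concurrent Jobs = 1
}
```

Repeating this pair per user lets you raise or lower overall parallelism by adjusting Maximum Concurrent Jobs on the Director, Client, or Job resources instead of (or together with) concurrent_threads.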
There are many possible strategies for using this plugin, so it is recommended to study which one best suits your needs before deploying jobs across your entire environment, so that you get the best possible results:
- You can have a job per user covering all services.
- You can have multiple entities and only some services inside a job.
- You can split your workload through a schedule, or try to run all your jobs together.
- You can run jobs in parallel, or take advantage of concurrent_threads and run fewer jobs in parallel.
- You can select which services to back up, or back them all up.
- You can back up all the data, or precisely select which elements (folders) you really need inside each service.
- And more.
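The schedule-splitting strategy above could look like the following hypothetical fragment, where two groups of per-user jobs start at different times so they do not all hit the Exchange server at once; all names and run times are illustrative assumptions:

```
# Hypothetical sketch: stagger per-user jobs across two schedules.
Schedule {
  Name = "sched-group-a"
  Run = Full daily at 21:00
}
Schedule {
  Name = "sched-group-b"
  Run = Full daily at 23:00
}

# Assign roughly half of the per-user jobs to each schedule, e.g.:
# Job { Name = "backup-exchange-user1"; ...; Schedule = "sched-group-a" }
# Job { Name = "backup-exchange-user2"; ...; Schedule = "sched-group-b" }
```

Staggering start times in this way is one simple option; alternatively, you can keep a single schedule and rely on Maximum Concurrent Jobs to queue the jobs.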
See also
Go back to Jobs Distribution
Go back to Concurrency
Go back to the Best Practices article.