Concurrency
When using the Amazon RDS APIs, several boundaries need to be considered. We highlight some of them below:
Amazon RDS limits: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html
Capabilities of the host serving the RDS Service
Service usage during the backup window
If a boundary is crossed, the corresponding request will usually fail. The Bacula Amazon RDS Plugin is prepared to wait for a certain amount of time and then retry the request, thus offering a degree of resiliency. However, it is crucial to plan an adequate strategy to back up all the elements without frequently approaching any boundaries. This entails managing the number of concurrent requests made during the backup window.
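The wait-and-retry behavior described above can be sketched as an exponential backoff loop. This is an illustrative example only, not the plugin's actual implementation; all names and parameters here are assumptions made for the sketch:

```python
import time

def call_with_backoff(request, max_attempts=5, base_delay=1.0,
                      retryable=(RuntimeError,), sleep=time.sleep):
    """Retry `request` when a boundary (e.g. API throttling) is hit,
    waiting longer between each attempt. Illustrative sketch only:
    the plugin manages its retry parameters internally."""
    for attempt in range(max_attempts):
        try:
            return request()
        except retryable:
            # Give up once the attempt budget is exhausted.
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... before retrying the failed call.
            sleep(base_delay * (2 ** attempt))
```

Note that backoff only absorbs occasional failures; if requests are throttled constantly, the backup strategy itself (concurrency and scheduling) must be adjusted.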
The recommended strategy for backing up a new environment is to plan a step-by-step testing scenario prior to deployment, where the number of instances and the concurrency of the jobs are increased progressively. Another important aspect is the timing schedule, as some boundaries are related to time-frames (i.e., the number of requests per time unit). If you detect that you are reaching boundaries when running all your backups during a single day of the week, try to increase the time window and distribute the load across it to improve results.
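Distributing the load across a longer window can be as simple as staggering job start times evenly. The following sketch (the function and job names are hypothetical, not part of Bacula or AWS) shows the idea:

```python
from datetime import datetime, timedelta

def stagger_jobs(jobs, window_start, window_hours):
    """Spread job start times evenly across a backup window so that
    the number of concurrent RDS API requests per time unit stays low.
    Returns a mapping of job name -> scheduled start time."""
    interval = timedelta(hours=window_hours) / max(len(jobs), 1)
    return {job: window_start + i * interval
            for i, job in enumerate(jobs)}

# Example: four instance backups spread over an 8-hour night window,
# starting 2 hours apart instead of all at once.
schedule = stagger_jobs(["db1", "db2", "db3", "db4"],
                        datetime(2024, 1, 1, 22, 0), 8)
```

In a real deployment you would express the same idea through Bacula job schedules rather than code; the point is that spacing the jobs out keeps each time-frame well below the per-time-unit request limits.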
Note that from an architectural and AWS service point of view, you can also consider the following:
Run your File Daemon directly in the cloud (if your SD is also in the cloud)
Run your Storage Daemon and File Daemon in the same host, so you skip one network hop in the process (recommended)
Use a dedicated AWS connection (https://aws.amazon.com/directconnect/)
Go back to: Best Practices.