Cloud Plugin: Status Storage and Cloud Statistics Explained
When a cloud storage is configured, information about the upload of cloud volume part files is available both in the job log and in the output of the bconsole “status storage” command.
Job log example
20-Apr 16:31 bacula-sd JobId 27: Cloud Upload transfers:
20-Apr 16:31 bacula-sd JobId 27: cloudvolume-Vol-0001/part.1 state=done size=401 B duration=8s
20-Apr 16:31 bacula-sd JobId 27: cloudvolume-Vol-0001/part.2 state=done size=9.999 MB duration=17s
20-Apr 16:31 bacula-sd JobId 27: cloudvolume-Vol-0001/part.3 state=done size=9.999 MB duration=17s
20-Apr 16:31 bacula-sd JobId 27: cloudvolume-Vol-0001/part.4 state=done size=9.999 MB duration=14s
**Example of a bconsole “status storage” command output**
Cloud transfer status:
Uploads (1.642 MB/s) (ETA 0 s) Queued=0 0 B, Waiting=0 0 B, Processing=0 0 B, Done=53 524.7 MB, Failed=0 0 B
Downloads (0 B/s) (ETA 0 s) Queued=0 0 B, Waiting=0 0 B, Processing=0 0 B, Done=0 0 B, Failed=0 0 B
The same information is also available in BWeb.
The duration value is the amount of time the cloud upload operation took. It can refer to a single part file or to several part files, since multiple part files can be uploaded to the cloud concurrently. Its main purpose is to help detect when an upload takes too long, which may indicate that another process is affecting the upload speed of the cloud volumes.
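For example, an approximate per-part upload rate can be derived from the size and duration printed in the job log. The short Python sketch below only illustrates that arithmetic with the values from the example above; it is not part of Bacula.

```python
# Rough per-part upload rate derived from the job log example above.
# Illustrative arithmetic only; size and duration are the values the
# Storage Daemon prints for each part file.

def part_rate_mb_s(size_mb: float, duration_s: float) -> float:
    """Effective upload rate of one part file in MB/s."""
    return size_mb / duration_s if duration_s else float("inf")

print(f"part.2: {part_rate_mb_s(9.999, 17):.2f} MB/s")  # ~0.59 MB/s
print(f"part.4: {part_rate_mb_s(9.999, 14):.2f} MB/s")  # ~0.71 MB/s
```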
The timestamp shown in the job log for each part file upload does not necessarily match the timestamp of the part file in the remote cloud, because the latter is the creation time of the part file in the bucket, and the two can differ.
The Uploads value (XXX KB/s) is the current upload rate. It is not computed over everything uploaded so far, but is the rate of the last uploaded part file, or of the last part files if several were uploaded concurrently. In other words, the Uploads value is neither an average nor a cumulative value.
On the other hand, the Queued, Waiting, and Processing values are computed over all part files being uploaded, including those from other jobs.
The Done and Failed values are cumulative since the Storage Daemon started. When the Storage Daemon is restarted, these values are lost and reset to zero.
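If the bconsole “status storage” output needs to be monitored from a script, these counters can be extracted from the Uploads line. The following Python sketch is only an assumption based on the example output shown above; the exact line format may vary between Bacula versions, and this is not a Bacula API.

```python
import re

# Parse the "Uploads" line of the Cloud transfer status shown above.
# The format is assumed from the example output; it may differ
# between Bacula versions.
line = ("Uploads (1.642 MB/s) (ETA 0 s) Queued=0 0 B, Waiting=0 0 B, "
        "Processing=0 0 B, Done=53 524.7 MB, Failed=0 0 B")

rate, eta = re.search(r"\(([^)]+)\) \(ETA ([^)]+)\)", line).groups()
counters = {m.group(1): (int(m.group(2)), m.group(3))
            for m in re.finditer(r"(\w+)=(\d+) ([\d.]+ \w?B)", line)}

print(rate)      # 1.642 MB/s  (instantaneous rate, see above)
print(eta)       # 0 s
print(counters)  # {'Queued': (0, '0 B'), ..., 'Done': (53, '524.7 MB'), ...}
```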
The global ETA is computed as ETA = data to transfer / global speed; the ETA of each individual transfer uses the same formula, applied to that transfer only. The ETAs are computed at a given moment, with the resources currently allocated to the cloud transfer and the current network capabilities, based on the size and duration of the last transferred part. They are therefore as close to an “instant” transfer rate as possible, and the values can change over time.
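As a concrete illustration of that formula (a sketch of the idea only, not the Storage Daemon’s actual implementation):

```python
# ETA = data still to transfer / current "instant" rate, where the
# instant rate is taken from the last transferred part (size / duration).
# Sketch of the idea only; not Bacula's actual code.

def instant_rate(last_part_bytes: int, last_part_seconds: float) -> float:
    """Bytes per second achieved by the most recently finished part."""
    return last_part_bytes / last_part_seconds

def eta_seconds(bytes_left: int, rate_bytes_per_s: float) -> float:
    """Estimated time to transfer the remaining data at the current rate."""
    return bytes_left / rate_bytes_per_s if rate_bytes_per_s else float("inf")

rate = instant_rate(10_000_000, 17)              # ~0.59 MB/s, roughly part.2 above
print(f"{eta_seconds(50_000_000, rate):.0f} s")  # ~85 s for 50 MB still to transfer
```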
See also
Go back to: Cloud Plugin.