Use Cases

This document explains how to use the Bacula Kubernetes Plugin, running in a container, to back up the resources of a Kubernetes Cluster.

The software versions used in this use case:

  • Rancher Kubernetes Engine (RKE): Provider: RKE2, Kubernetes Version: v1.30.4+rke2r1, Architecture: amd64

  • Bacula Enterprise 18.0.5: Bacula Client 18.0.5, and Bacula Kubernetes Plugin 18.0.5

The Bacula File Daemon and Kubernetes Plugin Deployment

Dockerfile

Use the following Dockerfile to build an image with the Bacula File Daemon and the Kubernetes Plugin installed.

# cat Dockerfile
FROM debian:bullseye
# Replace "@@customer@@" and "@@bee-version@@" with the customer download area and Bacula Enterprise version to use
ARG CUSTOMER_AREA="@@customer@@"
ARG BEE_VERSION="@@bee-version@@"
LABEL maintainer="Bacula Systems SA"
LABEL version="${BEE_VERSION}"
LABEL name="Bacula Enterprise Edition Client"
LABEL vendor="BACULA SYSTEMS SA"
LABEL summary="This is a Bacula File Daemon with the Kubernetes plugin"
LABEL description="This image contains a Bacula File Daemon which allows connections between this pod's resources and the Bacula Director."
# Update image
RUN apt-get update
RUN apt-get -y install curl
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://www.baculasystems.com/dl/${CUSTOMER_AREA}/BaculaSystems-Public-Signature-08-2017.asc -o /etc/apt/keyrings/bacula.asc
RUN chmod a+r /etc/apt/keyrings/bacula.asc
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/bacula.asc] https://www.baculasystems.com/dl/${CUSTOMER_AREA}/debs/bin/${BEE_VERSION}/bullseye-64/ bullseye main" > /etc/apt/sources.list.d/Bacula-Enterprise-Edition.list
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/bacula.asc] https://www.baculasystems.com/dl/${CUSTOMER_AREA}/debs/kubernetes/${BEE_VERSION}/bullseye-64/ bullseye kubernetes" > /etc/apt/sources.list.d/Bacula-Enterprise-Edition-kubernetes-plugin.list
RUN apt-get update
RUN apt-get install -y bacula-enterprise-client bacula-enterprise-kubernetes-plugin
RUN apt-get -y autoremove
# use the bacula-fd.conf file previously configured for a specific Bacula Director / configuration
RUN rm /opt/bacula/etc/bacula-fd.conf
COPY bacula-fd.conf /opt/bacula/etc/bacula-fd.conf
# copy the kubeconfig file
COPY config /opt/bacula/etc/config
#  expose bacula-fd port
EXPOSE 9102
USER root
# Start the Bacula File Daemon service
CMD ["/opt/bacula/bin/bacula-fd", "-f"]

Build the image using the Dockerfile, then tag it and push it to a local registry reachable from the Kubernetes Cluster.

docker build \
 -t <image>:<tag> .
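
For example, to tag the image and push it to a local registry (the registry name below is a placeholder):

docker tag <image>:<tag> <registry>/<image>:<tag>
docker push <registry>/<image>:<tag>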

Important Notes

  • It is possible to use any base image; Debian Bullseye is used in this use case, but dependencies and external program versions may differ for other base images.

  • Configure the bacula-fd.conf file in advance (for example, the Director the File Daemon will accept connections from), or provide a valid bacula-fd.conf file through a Kubernetes ConfigMap.

For example:

# kubectl create configmap bacula-fd-configmap --from-file=/path/to/bacula-fd.conf

And, in the bacula-fd deployment definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bacula-fd
  namespace: default
  labels:
    app.kubernetes.io/name: bacula-fd
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: bacula-fd
  template:
    metadata:
      labels:
        app.kubernetes.io/name: bacula-fd
      namespace: default
    spec:
      containers:
        - name: bacula-fd
          imagePullPolicy: Always
          image: <registry>/<image>:<tag>
          ports:
            - containerPort: 9102
              name: bacula-fd
              protocol: TCP
          volumeMounts:
            - name: bacula-fd-configmap-volume
              mountPath: /opt/bacula/etc/bacula-fd.conf
              subPath: bacula-fd.conf
      volumes:
        - name: bacula-fd-configmap-volume
          configMap:
            name: bacula-fd-configmap

Create the bacula-fd Deployment in the Kubernetes Cluster
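
Assuming the Deployment manifest shown above was saved as bacula-fd-deployment.yaml (a hypothetical file name), the deployment can be created with:

# kubectl apply -f bacula-fd-deployment.yaml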

After creating the bacula-fd deployment in the Kubernetes Cluster, it is important to provide external access so the Bacula File Daemon inside the cluster can communicate with both the Director and the Storage Daemon if those services run outside the Kubernetes Cluster.

In this use case, the EXTERNAL-IP configured for the bacula-fd service is the IP address of one of the Kubernetes Master nodes:

# kubectl get service -n default
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
bacula-fd       ClusterIP   10.43.94.186    10.0.97.201   9102/TCP,9104/TCP   15d

An External Load Balancer or Ingress can be used as well.

The Kubernetes Plugin uses port 9102 for the File Daemon and port 9104 for the bacula-backup proxy pod that backs up persistent volume data. Thus, you must add these two ports to the bacula-fd service:

[Image: the bacula-fd service exposing ports 9102/TCP and 9104/TCP]
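
A minimal sketch of such a Service definition (the selector matches the Deployment above; the port names and the externalIPs entry, which reuses the master node address from this use case, are assumptions to adapt to your environment):

apiVersion: v1
kind: Service
metadata:
  name: bacula-fd
  namespace: default
  labels:
    app.kubernetes.io/name: bacula-fd
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: bacula-fd
  externalIPs:
    - 10.0.97.201
  ports:
    - name: bacula-fd
      port: 9102
      targetPort: 9102
      protocol: TCP
    - name: bacula-backup
      port: 9104
      targetPort: 9104
      protocol: TCP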

Backup and Restore using the bacula-fd service in the Kubernetes Cluster

Using the Kubernetes Plugin, it is possible to back up all the resources in the Kubernetes cluster and also include the backup of persistent volume data.

For details about the Kubernetes Plugin backup configuration, refer to the Kubernetes Plugin page.

Backup and Restore of All the Resources in the Kubernetes Cluster

RBAC Configuration

To back up all the resources, in all namespaces, as well as non-namespaced objects, the recommended approach is to create a ClusterRole that grants the get and list verbs on all the Kubernetes cluster resources. The rules of the admin ClusterRole can be copied, keeping only the get and list verbs, which are sufficient for backup purposes.

In addition to the RBAC configuration used to back up all the resources in the Kubernetes cluster, pvcdata backups require the plugin to create/delete snapshots or clones of persistent volumes, and to create/delete the bacula-backup proxy pod.

This is the bacula-backup-default ClusterRole used in this use case. Its rules were copied from the admin ClusterRole, keeping only the get and list verbs, plus the additional verbs required for pvcdata backups:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bacula-backup-default
rules:
  - apiGroups:
      - cert-manager.io
    resources:
      - certificates
      - certificaterequests
      - issuers
      - challenges
      - orders
    verbs:
      - get
      - list
  - apiGroups:
      - longhorn.io
    resources:
      - volumes
      - engines
      - replicas
      - settings
      - engineimages
      - nodes
      - instancemanagers
      - sharemanagers
      - backingimages
      - backingimagemanagers
      - backingimagedatasources
      - backupbackingimages
      - backuptargets
      - backupvolumes
      - backups
      - recurringjobs
      - orphans
      - snapshots
      - supportbundles
      - systembackups
      - systemrestores
      - volumeattachments
    verbs:
      - get
      - list
  - apiGroups:
      - apps
    resources:
      - controllerrevisions
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
  - apiGroups:
      - ''
    resources:
      - namespaces
      - secrets
      - configmaps
      - events
      - replicationcontrollers
      - serviceaccounts
      - services
      - endpoints
      - persistentvolumes
      - bindings
      - limitranges
      - resourcequotas
    verbs:
      - get
      - list
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - ingresses
      - networkpolicies
      - replicasets
    verbs:
      - get
      - list
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - get
      - list
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - list
  - apiGroups:
      - authorization.k8s.io
    resources:
      - localsubjectaccessreviews
    verbs:
      - get
      - list
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - rolebindings
      - roles
    verbs:
      - get
      - list
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - patch
  - apiGroups:
      - ''
    resources:
      - persistentvolumeclaims
    verbs:
      - create
      - delete
      - get
      - list
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - create
      - delete
      - get
      - list
  - apiGroups:
      - ''
    resources:
      - persistentvolumeclaims/status
    verbs:
      - get
      - list
  - apiGroups:
      - ''
    resources:
      - pods/status
    verbs:
      - get
      - list

Note

The bacula-backup-default ClusterRole in this Use Case is an example. You may need a different set of permissions/rules to allow the backup and restore of other resources in your Kubernetes cluster.

See the Kubernetes documentation about using RBAC authorization for more details: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

And the bacula-backup-default-binding ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bacula-backup-default-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bacula-backup-default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
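
Assuming these manifests were saved as bacula-backup-default-clusterrole.yaml and bacula-backup-default-binding.yaml (hypothetical file names), they can be applied with:

# kubectl apply -f bacula-backup-default-clusterrole.yaml
# kubectl apply -f bacula-backup-default-binding.yaml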

FileSet Configuration

Below are the FileSet configurations used in this use case: one for the backup of all the resources in the Kubernetes Cluster, and another example to back up the persistent volume data in the testing-ns-0010-1 namespace:

Fileset {
  Name = "kubernetes-all-resources-incluster-fileset"
  Include {
   Plugin = "kubernetes: incluster"
  }
}
Fileset {
  Name = "kubernetes-pvcdata-testing-ns-0010-1-incluster-fileset"
  Include {
   Plugin = "kubernetes: incluster pluginhost=bacula-fd.default.svc.cluster.local namespace=testing-ns-0010-1 pvcdata baculaimage=harbor.supportlab.lan/library/bacula-backup:30Nov23"
  }
}

The value of the fdaddress or pluginhost option is the FQDN of the bacula-fd service, which is configured to listen on both ports 9102 and 9104:

# kubectl get svc -n default
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
bacula-fd       ClusterIP   10.43.94.186    10.0.97.201   9102/TCP,9104/TCP   14d
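
In the Director configuration, a Job can then reference one of these FileSets. A minimal sketch, where the Client, Storage, Pool, and Schedule names are assumptions for this environment:

Job {
  Name = "kubernetes-all-resources-incluster-backup"
  Type = Backup
  Level = Full
  Client = bacula-fd-k8s
  FileSet = "kubernetes-all-resources-incluster-fileset"
  Storage = File1
  Pool = Default
  Messages = Standard
  Schedule = "WeeklyCycle"
}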

For more details about pvcdata backup and restore, see the Persistent Volume Claim Backup and the Backup and Restore Plugin Parameters pages.

Go back to the main Kubernetes Plugin page.