Installing Heptio Ark (Velero) on GKE, storing backups in a GCP bucket.

Sachin Arote
3 min read · Mar 1, 2019


Heptio Ark gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.

Clone the repository that contains the Ark configuration files used in the steps below:

git clone https://github.com/sachinar/ark.git

Install the ark CLI on your local machine:

wget https://github.com/heptio/ark/releases/download/v0.10.0/ark-v0.10.0-linux-amd64.tar.gz
tar -xvf ark-v0.10.0-linux-amd64.tar.gz
mv ark /usr/bin/ark

Create GCS bucket
Heptio Ark requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the FAQ for more details). Create a GCS bucket, replacing the <YOUR_BUCKET> placeholder with the name of your bucket:

BUCKET=<YOUR_BUCKET>
gsutil mb gs://$BUCKET/

Create service account

To integrate Heptio Ark with GCP, create an Ark-specific Service Account:

View your current config settings:

gcloud config list

Store the project value from the results in the environment variable $PROJECT_ID.

PROJECT_ID=$(gcloud config get-value project)

Create a service account:

gcloud iam service-accounts create heptio-ark --display-name "Heptio Ark service account"

If you’ll be using Ark to back up multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default heptio-ark.

Then list all accounts and find the heptio-ark account you just created:

gcloud iam service-accounts list

Set the $SERVICE_ACCOUNT_EMAIL variable to match its email value.

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list --filter="displayName:Heptio Ark service account" --format 'value(email)')

Attach policies to give heptio-ark the necessary permissions to function:

ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
)

gcloud iam roles create heptio_ark.server \
    --project $PROJECT_ID \
    --title "Heptio Ark Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT_EMAIL --role projects/$PROJECT_ID/roles/heptio_ark.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
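The $(IFS=","; echo "${ROLE_PERMISSIONS[*]}") idiom in the role-creation command above is dense; in isolation, it simply joins the array into the single comma-separated string that the --permissions flag expects. A minimal sketch (with a shortened permission list for illustration):

```shell
#!/bin/bash
# A shortened version of the ROLE_PERMISSIONS array, just to show the join.
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
)

# Setting IFS inside the subshell makes "${ROLE_PERMISSIONS[*]}" expand
# with "," between elements instead of the default space.
joined="$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
echo "$joined"   # compute.disks.get,compute.disks.create,compute.disks.createSnapshot
```

Because IFS is changed only inside the command substitution's subshell, the rest of the script is unaffected.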

Create a service account key, specifying an output file (credentials-ark) in your local directory:

gcloud iam service-accounts keys create credentials-ark --iam-account $SERVICE_ACCOUNT_EMAIL

Credentials and configuration
If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects. See the GKE documentation for more information.

In the Ark directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See Run in custom namespace.

kubectl apply -f config/common/00-prereqs.yaml

Create a Secret:

In the directory of the credentials file you just created, run:

kubectl create secret generic cloud-credentials --namespace heptio-ark --from-file cloud=credentials-ark
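For reference, the kubectl create secret command above produces a Secret roughly equivalent to the following manifest (a sketch; the data value is the base64-encoded contents of the credentials-ark file, not a literal placeholder you should type):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: heptio-ark
type: Opaque
data:
  # base64-encoded contents of the credentials-ark key file
  cloud: <base64 of credentials-ark>
```

The Ark server pods reference this Secret by name to authenticate to GCP.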

Note: If you use a custom namespace, replace heptio-ark with the name of the custom namespace.

Specify the following values in the example files:

In file config/gcp/05-ark-backupstoragelocation.yaml:

Replace <YOUR_BUCKET>. See the BackupStorageLocation definition for details.

Replace <YOUR_STORAGE_CLASS_NAME> with standard. This is GCP’s default StorageClass name.
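As a sketch of what the filled-in file's key fields look like (assuming the ark.heptio.com/v1 schema shipped with the v0.10 release tarball; check your copy of config/gcp/05-ark-backupstoragelocation.yaml for the exact layout):

```yaml
apiVersion: ark.heptio.com/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: heptio-ark
spec:
  provider: gcp
  objectStorage:
    bucket: <YOUR_BUCKET>   # the bucket created earlier
```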
(Optional, use only if you need to specify multiple volume snapshot locations) In config/gcp/10-deployment.yaml:

Uncomment the --default-volume-snapshot-locations argument and replace the provider locations with the values for your environment.
Start the server
In the root of your Ark directory, run:

kubectl apply -f config/gcp/05-ark-backupstoragelocation.yaml
kubectl apply -f config/gcp/06-ark-volumesnapshotlocation.yaml
kubectl apply -f config/gcp/10-deployment.yaml
kubectl apply -f config/gcp/20-restic-daemonset.yaml

Backups will now be stored in the GCS bucket you created earlier (BUCKET=<YOUR_BUCKET>).

Taking a backup

Suppose a Deployment or StatefulSet is running in your Kubernetes cluster.

This example uses MongoDB running as a StatefulSet:

kubectl annotate pod/mongo-0 backup.ark.heptio.com/backup-volumes=datadir

Annotate the pods of a Kubernetes Deployment or StatefulSet using this command.

Here datadir is the volume in which the Deployment/StatefulSet stores its data.
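Annotating a running pod works, but the annotation is lost if the pod is recreated. An alternative (a sketch, assuming a typical mongo StatefulSet) is to put the annotation in the pod template, so every pod the StatefulSet creates carries it automatically:

```yaml
# Excerpt of a StatefulSet spec; only the pod-template metadata is shown.
spec:
  template:
    metadata:
      annotations:
        backup.ark.heptio.com/backup-volumes: datadir
```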

Now take a backup with:

ark backup create mongo-backup --selector app=mongo --snapshot-volumes

To check volume details:

ark backup describe mongo-backup --details

The status shown will be one of InProgress, Completed, or Failed.

To check the backup logs:

ark backup logs mongo-backup

Restoring a backup

Run:

ark restore create mongo-restore --from-backup mongo-backup

The --from-backup flag tells Ark which backup to restore from.

To check the restore status:

ark restore describe mongo-restore --details

To check the restore logs:

ark restore logs mongo-restore

