Knowledgebase

Case Study: Use a 3-2-1 Backup Strategy at Rcs


Introduction

With the complexity and moving pieces involved in today's applications and computing, failure is inevitable. The Cybersecurity and Infrastructure Security Agency (CISA) recommends that individuals and businesses use what is commonly referred to as a 3-2-1 strategy.

  • 3: Have one primary system backup and two copies of your data
  • 2: Store your backup two different ways or on two different sets of media
  • 1: Store one copy of the backed up data off-site

Deploy & Secure Ubuntu 20.04

This case study uses an Ubuntu 20.04 LTS server as the base operating system. Deploy a new Rcs Ubuntu 20.04 (x64) cloud server using the configuration that best suits the needs of the environment you support. Deploy the server in any datacenter except New York (NJ); this keeps the data and the backup files in two distinctly different locations. Also, check "Enable Auto Backups" in the server configuration. Throughout this document, the server name is apphost. After deploying apphost, update the server according to the Ubuntu best practices guide.
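The Ubuntu best practices guide covers updating and hardening in detail. As a minimal sketch, bringing a freshly deployed apphost up to date as root might look like this:

# apt update
# apt upgrade -y
# reboot        # only needed if a new kernel or core library was installed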

Add a Backup User

After deploying apphost, add a user that performs the 3-2-1 backup tasks. This document names the user beepbeep (as in backing up) because backup is often a reserved word in operating systems and languages and, therefore, should not be used. The user beepbeep should only perform automated tasks, so after creating it, lock the account to prevent it from logging in. To create the account and then lock it, run:

# useradd -m -s /bin/bash beepbeep
# usermod -L beepbeep
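To confirm the account was created and its password is locked, you can check its status; in the output of passwd -S, an L in the second field indicates a locked password (root can still switch to the account with su, which is how later steps use it):

# passwd -S beepbeep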

Create Supporting Directories

To perform the 3-2-1 backup, create three supporting directories. These directories are for:

  • Holding the application/backup script
  • Holding the actual backup files
  • Holding the log files associated with backup

To create the application/backup directory, run:

# mkdir /usr/local/bin/backup
# chown -R beepbeep:beepbeep /usr/local/bin/backup

To create the directory to hold the backups, run:

# mkdir /opt/backups
# chown -R beepbeep:beepbeep /opt/backups

To create the directory to hold the logs, run:

# mkdir /var/log/backupauto
# chown -R beepbeep:beepbeep /var/log/backupauto
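A quick, optional check confirms all three directories exist and are owned by beepbeep:

# ls -ld /usr/local/bin/backup /opt/backups /var/log/backupauto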

Set Up AWSCLI

This guide stores the off-site data in Rcs Object Storage, which uses the S3 protocol. This document uses awscli to communicate with the object storage and upload the off-site backup.

Install AWSCLI

To install awscli, run:

# curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# unzip awscliv2.zip
# sudo ./aws/install
# rm -fr aws
# rm awscliv2.zip

This downloads the package and supplemental files, unzips them, installs them, and removes the original files.
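To confirm the installation succeeded, print the version (the exact output varies by release):

# aws --version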

Set Up MySQL/MariaDB

MariaDB is one of the most common database platforms. This document uses a demonstration database running on MariaDB to highlight the 3-2-1 idea by backing up the database both locally and off-site. To install the latest version of MariaDB, add the MariaDB repository by running:

# curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash

After updating the repository, install MariaDB by running:

# sudo apt install mariadb-server mariadb-client -y

After installation, enable MariaDB to start on boot by running:

# sudo systemctl enable mariadb.service
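The apt package normally starts the service during installation. To confirm it is running before securing it, check the status and, only if it shows inactive, start it:

# sudo systemctl status mariadb.service --no-pager
# sudo systemctl start mariadb.service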

Now, secure the database. Run the built-in utility provided by MariaDB:

# sudo mysql_secure_installation

The utility asks a series of questions. To ensure an optimal and secure database, answer them as follows:

* Switch to unix_socket authentication [Y/n]: Y
* Change the root password? [Y/n]: Y
* Remove anonymous users? [Y/n]: Y
* Disallow root login remotely? [Y/n]: Y
* Remove test database and access to it? [Y/n]: Y
* Reload privilege tables now? [Y/n]: Y

After installing and configuring MariaDB, create and populate a sample database. The following steps download, extract, import, and then delete the file associated with the sample database:

# wget https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip
# unzip mysqlsampledatabase.zip
# mysql -u root < mysqlsampledatabase.sql
# rm mysqlsample*
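To confirm the sample database imported correctly, query it as root. The classicmodels schema and its customers table come from the sample download, so adjust the names if you use a different database:

# mysql -u root -e "SHOW DATABASES LIKE 'classicmodels';"
# mysql -u root -e "SELECT COUNT(*) FROM classicmodels.customers;"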

Now that there is a populated database, create a restricted user to complete the 3-2-1 backup. Lock this user down by granting only the permissions needed to back up the data, not alter it. To do this, access the MariaDB console, create the user, and grant it specific permissions. Also, restrict the account so it can only access the database from localhost. Carry out these tasks by running:

# mysql -u root
MariaDB [(none)]> GRANT SELECT, SHOW VIEW, TRIGGER ON classicmodels.* TO 'sqlbackuser'@'localhost' IDENTIFIED BY 'Password123!';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit;

Note that in the above, the user has a username of sqlbackuser and a password of Password123!. Best practice is to use a strong, random password in place of this example password; you can generate a long, secure password with a password manager or an online generator.
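To confirm the new user can read the data but not change it, run a quick test from the shell. The customers table comes from the sample database; the first command should return a row count, and the second should fail with an access-denied error:

# mysql -u sqlbackuser -p -e "SELECT COUNT(*) FROM classicmodels.customers;"
# mysql -u sqlbackuser -p -e "DELETE FROM classicmodels.customers;"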

Set Up Rcs Object Storage

To meet the third point of the 3-2-1 backup strategy, store files in a secondary, off-site location. To do this, create Rcs Object Storage. Label the object storage appropriately. After creating the object storage, take note of the Secret Key and Access Key.

Configure the Installation

Now that all the components are in place, create the configurations that support the solution. To do this, switch from the root account to the beepbeep account with the su command (the account cannot log in directly because it was locked at creation, but root can still switch to it). From the root prompt, run:

# su - beepbeep

To confirm a successful login as beepbeep, check that the command prompt reads:

beepbeep@apphost:~$ 

AWSCLI Configuration

To configure awscli, type aws configure at the prompt (logged in as beepbeep). It asks four questions:

AWS Access Key ID []:
    * Enter the Access Key found on the Manage Object Storage page
AWS Secret Access Key []:
    * Enter the Secret Key found on the Manage Object Storage page (you will need to click the eyeball first to reveal this key)
Default region name []:
    * Enter `us-east-1`
Default output format [None]:
    * Leave this blank

After you enter these four values, awscli creates multiple configuration files used by the program and the user.
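The values land in the standard awscli files, ~/.aws/credentials and ~/.aws/config. To confirm the credentials work against the object storage, list the buckets. The ewr1 endpoint below matches the URL used later in the backup script; adjust it if your object storage lives in a different location:

$ ls -l ~/.aws/
$ aws s3 ls --endpoint-url https://ewr1.vultrobjects.com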

MariaDB Configuration

The database backup script needs to know the username and password to use when it runs. These credentials live in a file called .my.cnf, located in the home directory of the user running the backup command. To create this file, first ensure you are in the beepbeep user's home directory and logged in as the beepbeep user. You can check the directory by typing pwd, which should return /home/beepbeep. You can check the user by typing whoami, which should return beepbeep. After confirming these two settings, create the file by typing:

$ nano ~/.my.cnf

After the editor opens, add the following lines to the file (changing the password to the one you used above):

[mysqldump]
user=sqlbackuser
password=Password123!
single-transaction
no-tablespaces

Save the file by pressing Ctrl+X and then pressing Y and Enter.

Now, secure the file by changing the permissions to only allow beepbeep to read it by running:

$ chmod 600 ~/.my.cnf
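To confirm mysqldump picks up the credentials from ~/.my.cnf, run a quick check; the first lines of output should be the dump header rather than an access-denied error:

$ mysqldump classicmodels | head -n 5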

Backup Script

Creating the Backup Script

Now that all the pieces are in place for the actual backup to occur, create the backup script. Store it in the directory created at the beginning. Still logged in as the beepbeep user, change the working directory by typing:

$ cd /usr/local/bin/backup

After changing to this directory, create the backup script called backup.sh by typing:

$ nano backup.sh

The file should contain:

#!/bin/bash

#############################################################
# Below here are variables that can be modified by the user #
# to alter the way the script behaves                       #
#############################################################
# S3 Bucket Name
BUCKET_NAME="sql-backup"
# location where the backups will be stored
DEST="/opt/backups"
# how many days to keep backups around locally
DAYS=3
# name of the database to backup
DBNAME="classicmodels"
# directory location of the log file
LOGLOC="/var/log/backupauto"
# GZip Compression Level
GZCOMP="-9"
# Vultr S3 URL
VULTR_S3_URL="https://ewr1.vultrobjects.com"

#########################################################
# Below here are generated variables used in the script #
#########################################################
# Used as the name for the sql backup
NOW="$(date +"%d-%m-%Y_%s")"
# Used as the filename for the backup log
TODAY="$(date +"%d-%m-%Y")"
# Actual name of the SQL file that gets written
FILE="$DEST/$DBNAME.$NOW.sql.gz"
# Log file to be written to
LOG="$LOGLOC/$TODAY.log"
# LTS = Log Time Stamp. This is inserted at the front of every echo log entry
# NOTE: There is an intentional space left at the end
LTS="[`date +%Y-%m-%d\ %H:%M:%S`] "

#####################################################
# Here is where the script actually begins and runs #
#####################################################
echo $LTS began backing up >> $LOG
mysqldump $DBNAME | gzip $GZCOMP > $FILE
find $DEST -type f -mtime +$DAYS -exec rm -f {} \;
if ! aws s3api head-bucket --bucket "$BUCKET_NAME" --endpoint-url $VULTR_S3_URL 2>/dev/null; then
    aws s3api create-bucket --bucket "$BUCKET_NAME" --endpoint-url $VULTR_S3_URL
    echo $LTS created $BUCKET_NAME >> $LOG
fi
aws s3 cp $FILE s3://$BUCKET_NAME --endpoint-url $VULTR_S3_URL >> $LOG
echo $LTS finished backup >> $LOG

The variables in the top section control the way the backup script works. Make sure to change the DBNAME to the name of the database used in production.

Allowing the Backup Script To Execute

After creating the script, it is not yet executable by the beepbeep user. Change the file permissions to allow the user to execute the script by running:

$ chmod u+x backup.sh
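Before automating the script, it helps to run it once by hand while still logged in as beepbeep and confirm each piece works end to end. A quick smoke test, assuming the paths and bucket name used above:

$ /usr/local/bin/backup/backup.sh
$ ls -lh /opt/backups
$ cat /var/log/backupauto/$(date +"%d-%m-%Y").log
$ aws s3 ls s3://sql-backup --endpoint-url https://ewr1.vultrobjects.com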

After changing the execution bit on the file, you can type exit to log out of the beepbeep user session and return to the root user's command prompt.

Automate the Backup Script

As root, create a Cron job:

# nano /etc/cron.d/beep_backup

Inside the file enter the following:

32 05 * * *     beepbeep     /usr/local/bin/backup/backup.sh > /dev/null 2>&1

The 32 05 denotes that the script runs daily at 05:32 (local time on the server), and the * * * denotes that it runs every day of every month, on every day of the week. The beepbeep entry is the user that runs the command, and the command itself comes at the end of the line. The line ends with > /dev/null 2>&1 to discard any output the script produces, since the script writes its own log.
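After the first scheduled run, confirm that cron launched the job and that a fresh backup file and log entry appeared. On Ubuntu 20.04, cron activity is logged to /var/log/syslog by default:

# grep CRON /var/log/syslog | grep beepbeep
# ls -lh /opt/backups
# tail /var/log/backupauto/$(date +"%d-%m-%Y").log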

Test and Restore

The worst thing to do is create a 3-2-1 backup strategy and never test it. All too often, backup strategies become "set it and forget it", and the day you need the backup, the last successful one does not contain what you need. Check the machine backup and ensure it restores successfully. Open the local archive files and make sure the backed-up data is inside them. Open the remote/off-site backup and make sure the data is in those files as well. The worst feeling is needing a backup and discovering it does not exist or that the files you need are missing.
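A restore test might look like the following sketch. Run the aws commands as beepbeep (where awscli is configured) and the mysql commands as root; the <timestamp> placeholder stands for the date string in the real file name, and classicmodels_restoretest is a hypothetical scratch database used only for verification:

$ aws s3 cp s3://sql-backup/classicmodels.<timestamp>.sql.gz /tmp/ --endpoint-url https://ewr1.vultrobjects.com
$ gunzip /tmp/classicmodels.<timestamp>.sql.gz
# mysql -u root -e "CREATE DATABASE classicmodels_restoretest;"
# mysql -u root classicmodels_restoretest < /tmp/classicmodels.<timestamp>.sql
# mysql -u root -e "SELECT COUNT(*) FROM classicmodels_restoretest.customers;"
# mysql -u root -e "DROP DATABASE classicmodels_restoretest;"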

Summary

By deploying a virtual host with backups enabled, you carry out the easiest part of the strategy; you can restore the system backup using the Rcs Console. The second part is backing up the files themselves, which a routine job running at set intervals accomplishes. Confirm it is successful by opening the backup files and finding the data you require. After the system and the files are backed up, store another copy of the files off-site with a technology like Rcs Object Storage, giving you a second copy at a second location should a catastrophic failure occur. This strategy, known as the 3-2-1 backup strategy, just might come in handy one day.

