Commit e85d434

update to aws-cli:v2

Leen15 committed Apr 9, 2024
1 parent 23b086f commit e85d434
Showing 5 changed files with 15 additions and 48 deletions.
20 changes: 5 additions & 15 deletions Dockerfile
@@ -1,35 +1,25 @@
-FROM mohamnag/aws-cli
-MAINTAINER Luca Mattivi <luca@smartdomotik.com>
+FROM amazon/aws-cli:2.15.35
+LABEL Author="Luca Mattivi <luca@smartdomotik.com>"

-# change these to fit your need
-RUN apt-get update -q && apt-get install cron --yes
-
-# m h dom mon dow
-ENV BACKUP_CRON_SCHEDULE="* * * * *"
+RUN yum update -y && yum install tar gzip -y

ENV BACKUP_TGT_DIR=/backup/
ENV BACKUP_SRC_DIR=/data/
-ENV BACKUP_FILE_NAME='host_volumes'
+ENV BACKUP_FILE_NAME='backup'

# bucket/path/to/place/
ENV BACKUP_S3_BUCKET=
ENV AWS_DEFAULT_REGION=
ENV AWS_ACCESS_KEY_ID=
ENV AWS_SECRET_ACCESS_KEY=

-ADD crontab /etc/cron.d/backup-cron
ADD backup.sh /opt/backup.sh
ADD restore.sh /opt/restore.sh
-ADD cron.sh /opt/cron.sh

-RUN chmod 0644 /etc/cron.d/backup-cron
-# Create the log file to be able to run tail
-RUN touch /var/log/cron.log
RUN chmod +x /opt/*.sh

VOLUME $BACKUP_TGT_DIR
VOLUME $BACKUP_SRC_DIR

WORKDIR /opt/

-CMD /opt/cron.sh
+ENTRYPOINT ["/opt/backup.sh"]
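
Since the image now sets an `ENTRYPOINT` instead of starting cron via `CMD`, a plain `docker run` performs a single backup and exits. A minimal sketch of building and running the new image locally (the `s3-dir-backup` tag is an assumption, not part of the repo):
```
$ docker build -t s3-dir-backup .
$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ s3-dir-backup
```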
26 changes: 8 additions & 18 deletions README.md
@@ -4,10 +4,6 @@ This avoids uploading multiple backups that are identical.

You can also exclude one or more directories from the backup by adding an empty file `exclude_dir_from_backup` inside each directory you want to skip (see the sketch after the variable list below).

-The image runs as a cron job, by default every minute. The period may be changed by tuning the `BACKUP_CRON_SCHEDULE` environment variable.
-
-It may also be run as a one-time backup job by using the `backup.sh` script as the command.
-
The following environment variables should be set for backup to work:
```
BACKUP_S3_BUCKET= // no trailing slash at the end!
@@ -27,7 +23,7 @@ The following environment variables can be set to change the functionality:
BACKUP_CRON_SCHEDULE=* * * * *
BACKUP_TGT_DIR=/backup/ // always with trailing slash at the end!
BACKUP_SRC_DIR=/data/ // always with trailing slash at the end!
-BACKUP_FILE_NAME=host_volumes
+BACKUP_FILE_NAME=backup
```
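
To exclude a directory, create the marker file inside it before the backup runs; a minimal sketch (the `cache/` subdirectory is illustrative):
```
$ touch /dir/to/be/backedup/cache/exclude_dir_from_backup
```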
## Usage
### Backup
@@ -38,14 +34,9 @@ If you want to store files on S3 under a subdirectory, just add it to the `BACKUP_S3_BUCKET`.

#### Examples

-Mount the dir you want to be backed up on `BACKUP_SRC_DIR` and run image as daemon for periodic backup:
-```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ mohamnag/s3-dir-backup
-```
-
-or for one time backup (using default values and not keeping the backup archive):
+Mount the dir you want to be backed up on `BACKUP_SRC_DIR` and run the image for a one-time backup (using default values and not keeping the backup archive):
```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ mohamnag/s3-dir-backup /opt/backup.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ leen15/docker-s3-dir-backup
```

### Restore
@@ -65,25 +56,24 @@ Works exactly like auto restore, but the container will stop after restoring and there will be no future backups.
If you know the file path of a backup (relative to `BACKUP_S3_BUCKET`), you can use this functionality to restore that specific state. The container will stop after restoring and there will be no future backups.

#### Examples
To run any of the restore tasks, the proper environment variables must be set and `/opt/restore.sh` must be run as the entrypoint.

Restore a specific backup and exit:
```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
```

Restore latest backup and exit:
```
-$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
```

Restore a specific backup and start scheduled backups:
```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_FILE_PATH=2016-02-23/2016-02-23-12-00-01.tar.gz -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
```

Restore the latest backup and start scheduled backups:
```
-$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ mohamnag/s3-dir-backup /opt/restore.sh
+$ docker run -d -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -e RESTORE_RESUME_BACKUP=1 -v /dir/to/be/restored/:/data/ --entrypoint /opt/restore.sh leen15/docker-s3-dir-backup
```
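
Since the image no longer ships an internal cron (`cron.sh` and `crontab` are removed in this commit), periodic backups must now be scheduled from outside the container. A minimal sketch using a host crontab entry (the schedule and all values are illustrative assumptions):
```
# m h dom mon dow: back up at minute 0 of every hour
0 * * * * docker run --rm -e BACKUP_S3_BUCKET=bucket/directory -e AWS_DEFAULT_REGION=aws-region -e AWS_ACCESS_KEY_ID=awsid -e AWS_SECRET_ACCESS_KEY=awskey -v /dir/to/be/backedup/:/data/ leen15/docker-s3-dir-backup
```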

4 changes: 2 additions & 2 deletions backup.sh
@@ -13,7 +13,7 @@ eval "export COMPARE_DST_FULL_PATH=${COMPARE_DIR}${BACKUP_FILE_NAME}.tar.gz"
BACKUP_DST_DIR=$(dirname "${BACKUP_DST_FULL_PATH}")

mkdir -p ${COMPARE_DIR}
echo "Gzipping ${BACKUP_SRC_DIR} into ${COMPARE_DST_FULL_PATH}"
tar -czf ${COMPARE_DST_FULL_PATH} --exclude-tag-all=exclude_dir_from_backup -C ${BACKUP_SRC_DIR} .

if cmp -s -i 8 "$BACKUP_DST_FULL_PATH" "$COMPARE_DST_FULL_PATH"
@@ -24,7 +24,7 @@ else
mkdir -p ${BACKUP_DST_DIR}
mv "$COMPARE_DST_FULL_PATH" "$BACKUP_DST_FULL_PATH"
#echo "archive created, uploading..."
-/usr/bin/aws s3 sync ${BACKUP_TGT_DIR} s3://${BACKUP_S3_BUCKET} --region ${AWS_DEFAULT_REGION}
+/usr/local/bin/aws s3 sync ${BACKUP_TGT_DIR} s3://${BACKUP_S3_BUCKET} --region ${AWS_DEFAULT_REGION}
fi
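
The `cmp -s -i 8` above is what deduplicates backups: a gzip header stores a 4-byte modification timestamp at bytes 4-7, so two runs of gzip over identical tar output (same gzip build and settings) differ only in those bytes, and skipping the first 8 bytes lets `cmp` report the archives as equal. A standalone sketch of the same idea, with illustrative paths:
```
# Re-archive the source and compare against the previous archive,
# skipping the first 8 gzip header bytes (magic, method, flags, mtime).
tar -czf /tmp/candidate.tar.gz -C /data/ .
if cmp -s -i 8 /backup/backup.tar.gz /tmp/candidate.tar.gz; then
    echo "Content unchanged since last backup; skipping upload"
    rm /tmp/candidate.tar.gz
else
    mv /tmp/candidate.tar.gz /backup/backup.tar.gz
    aws s3 sync /backup/ "s3://${BACKUP_S3_BUCKET}" --region "${AWS_DEFAULT_REGION}"
fi
```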


11 changes: 0 additions & 11 deletions cron.sh

This file was deleted.

2 changes: 0 additions & 2 deletions crontab

This file was deleted.
