backup.sql file, but what if you have multiple massive projects inside Appwrite? Your disk storage may not be able to hold a backup of all Appwrite volumes. Additionally, you most likely need to back up periodically and keep multiple versions of the backup. So, what are your options?

"Over one exabyte of data storage under management, and counting." - Backblaze website
Bucket and press Create a Bucket. Give it some name such as appwrite and click Create a Bucket again. You should see a newly created bucket on your list. Make sure to note your Bucket Name, as we will need it later.

Under App Keys, you can Add a New Application Key. For simplicity, we will skip that and use our master key, which has all permissions. Simply click Generate a New Master Application Key, and this should give you a keyID and an applicationKey.

To run the Backblaze B2 CLI on the server, we will use the Docker image mtronix/b2-cli:0.0.1. This is the only image I could find that was actually working. First, let's make sure the credentials work by listing the files in our (still empty) bucket:

docker run \
--rm \
mtronix/b2-cli:0.0.1 \
bash -c \
"b2 authorize-account <KEY_ID> <APPLICATION_KEY> && b2 list-file-names <BUCKET_NAME>"
{
"files": [],
"nextFileName": null
}
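Typing the full docker invocation for every b2 command gets tedious. Below is a minimal sketch of a wrapper function; b2_run, B2_KEY_ID, and B2_APP_KEY are names I made up for this example, and DOCKER is parameterized so the wrapper can be dry-run without touching Backblaze.

```shell
#!/bin/sh
# Sketch of a wrapper around the b2-cli Docker image.
# B2_KEY_ID / B2_APP_KEY are placeholders - fill in your own credentials.
# DOCKER defaults to "docker"; set DOCKER=echo for a dry run.
B2_KEY_ID="<KEY_ID>"
B2_APP_KEY="<APPLICATION_KEY>"

b2_run() {
    # "$*" is the b2 command to run after authorization,
    # e.g. b2_run "b2 list-file-names my-bucket"
    "${DOCKER:-docker}" run \
        --rm \
        -v "$PWD:/b2" \
        mtronix/b2-cli:0.0.1 \
        bash -c "b2 authorize-account $B2_KEY_ID $B2_APP_KEY && $*"
}

# Real usage (would contact Backblaze):
# b2_run "b2 list-file-names <BUCKET_NAME>"
```

The credentials still travel on the command line inside the container, so this is a convenience sketch, not a hardening measure.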
For testing, I have a local file called cat.png. To upload the file, I run:

docker run \
--rm \
-v $PWD:/b2 \
mtronix/b2-cli:0.0.1 \
bash -c \
"b2 authorize-account <KEY_ID> <APPLICATION_KEY> && b2 upload-file <BUCKET_NAME> cat.png cat.png"
{
"action": "upload",
"fileId": "4_z78da5cd2a05db73574a90515_f11841831f8e91ca6_d20210717_m092603_c002_v0001140_t0056",
"fileName": "cat.png",
"size": 136021,
"uploadTimestamp": 1626513963000
}
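If you want to script against this output, for example to record which file version was just uploaded, you can pull fileId out of the response with sed. This is a quick-and-dirty sketch using the sample response from above; for anything serious, a real JSON parser such as jq is a better choice.

```shell
#!/bin/sh
# Extract "fileId" from the upload response with sed.
# The JSON below is the sample response shown above; in a real script you
# would capture the output of the b2 upload-file command instead.
response='{
    "action": "upload",
    "fileId": "4_z78da5cd2a05db73574a90515_f11841831f8e91ca6_d20210717_m092603_c002_v0001140_t0056",
    "fileName": "cat.png",
    "size": 136021,
    "uploadTimestamp": 1626513963000
}'

# Print only the captured value between the quotes after "fileId":
file_id=$(printf '%s' "$response" | sed -n 's/.*"fileId": "\([^"]*\)".*/\1/p')
echo "$file_id"
```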
Now run the list-file-names command from above again, and you should see your file in the array of files:

{
"files": [
{
"accountId": "8ac20d754955",
"action": "upload",
"bucketId": "78da5cd2a05db73574a90515",
"contentLength": 136021,
...
With the bucket working, let's back up the Appwrite database. Appwrite stores its data in MariaDB, and we can dump all databases into a single file:

docker-compose exec mariadb sh -c 'exec mysqldump --all-databases --add-drop-database -uroot -p"$MYSQL_ROOT_PASSWORD"' > ./dump.sql
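Before shipping the dump anywhere, a cheap sanity check is worthwhile (my own addition, not part of the original setup): an empty or missing dump.sql usually means the credentials or the service name were wrong. A sketch, using a stand-in file so it is self-contained:

```shell
#!/bin/sh
# Fail fast if the dump is missing or empty before uploading it.
# dump.sql would be the file produced by the mysqldump command above;
# here we create a stand-in so the check can run on its own.
echo "-- MariaDB dump" > dump.sql

if [ -s dump.sql ]; then
    echo "dump ok: $(wc -c < dump.sql) bytes"
else
    echo "dump.sql is missing or empty, aborting" >&2
    exit 1
fi
```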
The mysqldump command produces a dump.sql file with all databases. To automate the whole flow, we chain three steps into one command:

1. Create dump.sql
2. Upload dump.sql to Backblaze
3. Remove dump.sql to free up the space on the machine

docker-compose exec \
mariadb \
sh -c \
'exec mysqldump --all-databases --add-drop-database -uroot -p"$MYSQL_ROOT_PASSWORD"' > ./dump.sql ; \
docker run \
--rm \
-v $PWD:/b2 \
mtronix/b2-cli:0.0.1 \
bash -c \
"b2 authorize-account <KEY_ID> <APPLICATION_KEY> && b2 upload-file <BUCKET_NAME> dump.sql dump.sql" ; \
rm dump.sql
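One refinement worth considering (my own suggestion): the intro mentioned needing multiple versions of the backup, and while Backblaze B2 keeps old file versions by default, a timestamped remote name makes individual runs much easier to browse and restore. A sketch of the naming logic:

```shell
#!/bin/sh
# Give each backup a timestamped remote name such as dump-2021-07-17-0926.sql
# so every run appears as a distinct object in the bucket.
# The local file can stay dump.sql.
stamp=$(date +%Y-%m-%d-%H%M)
remote_name="dump-$stamp.sql"
echo "$remote_name"

# This name would then replace the second "dump.sql" in the upload command:
# b2 upload-file <BUCKET_NAME> dump.sql "$remote_name"
```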
Next, we back up the Appwrite volumes. The uploads and functions volumes can be archived into tar files like this:

mkdir -p backup && docker run --rm --volumes-from "$(docker-compose ps -q appwrite)" -v $PWD/backup:/backup ubuntu bash -c "cd /storage/uploads && tar cvf /backup/uploads.tar ."
mkdir -p backup && docker run --rm --volumes-from "$(docker-compose ps -q appwrite)" -v $PWD/backup:/backup ubuntu bash -c "cd /storage/functions && tar cvf /backup/functions.tar ."
As with the database dump, we chain the steps: create backup/uploads.tar, upload it to Backblaze, and remove it afterwards:

mkdir -p backup && docker run \
--rm \
--volumes-from "$(docker-compose ps -q appwrite)" \
-v $PWD/backup:/backup \
ubuntu \
bash -c \
"cd /storage/uploads && tar cvf /backup/uploads.tar ." ; \
docker run \
--rm \
-v $PWD:/b2 \
mtronix/b2-cli:0.0.1 \
bash -c \
"b2 authorize-account <KEY_ID> <APPLICATION_KEY> && b2 upload-file <BUCKET_NAME> backup/uploads.tar upload_backup.tar" ; \
rm backup/uploads.tar
After running this, you should see upload_backup.tar in your Backblaze bucket. Let's do the same for the functions volume:

mkdir -p backup && docker run \
--rm \
--volumes-from "$(docker-compose ps -q appwrite)" \
-v $PWD/backup:/backup \
ubuntu \
bash -c \
"cd /storage/functions && tar cvf /backup/functions.tar ." ; \
docker run \
--rm \
-v $PWD:/b2 \
mtronix/b2-cli:0.0.1 \
bash -c \
"b2 authorize-account <KEY_ID> <APPLICATION_KEY> && b2 upload-file <BUCKET_NAME> backup/functions.tar upload_functions.tar" ; \
rm backup/functions.tar
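To run all of this periodically, you can collect the three backups into one script and schedule it with cron. The sketch below is my own suggestion: backup.sh and the step_* functions are stubs standing in for the real commands above, so the control flow can be tried without Docker.

```shell
#!/bin/sh
# backup.sh - sketch of a periodic backup driver.
# The step_* functions are stubs for the commands shown above
# (database dump, uploads volume, functions volume).
step_database()  { echo "database backed up"; }
step_uploads()   { echo "uploads volume backed up"; }
step_functions() { echo "functions volume backed up"; }

run_backup() {
    # Stop at the first failing step so a broken dump is never "finished".
    step_database && step_uploads && step_functions && echo "backup finished"
}

run_backup

# Schedule it daily at 03:00 with cron, e.g.:
# 0 3 * * * cd /path/to/appwrite && ./backup.sh >> backup.log 2>&1
```

With a timestamped remote name from the earlier refinement, each nightly run then becomes one more version sitting safely in your Backblaze bucket.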