[Tutorial] Step by Step guide to build the Poor Man’s Backup System

LOL… Very cool. Thanks

I am working away at the revision for the Poor Man's Backup v2 and have finally gotten over some of the major hurdles. Been working on it steadily since this morning. By sometime late tonight I should have it working on one of my test servers to verify everything.

If I am unable to figure out the GitHub thing, then I will certainly take you up on the offer tomorrow. I still want to try it myself first.

BKM

So, I think I have worked out how to use GitHub. I created a place for the files once I get everything working on my test servers. Getting the new additions to the backup system to work has been a struggle. I keep seeing the script run past commands before they finish, so only partial files get copied, etc. Still working on that part. Using 'tar' seems to take longer than using gzip, but gzip doesn't handle multiple files, so when executing tar commands the script blasts past the command once it has started and executes the next command before tar finishes.
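For what it's worth, the usual shell fix for that race is to run tar in the foreground (where the script blocks until it finishes) or, if it was backgrounded with '&', to wait on it explicitly. A minimal sketch with hypothetical paths:

#!/bin/sh
# Hypothetical source and output paths; the point is the synchronization
SRC=/home/frappe/frappe-bench/sites
OUT=/backups/site_files.tar.gz

# Option 1: run tar in the foreground - the next line will not execute
# until the archive is completely written
tar -czf ${OUT} ${SRC}

# Option 2: if tar is started in the background, wait on its PID before
# touching the archive
tar -czf ${OUT} ${SRC} &
wait $!
echo "Archive complete, safe to copy ${OUT}"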

I will get it figured out eventually. Once I do, it will get published. I will post a thread here, but on GitHub I will have the updatable version of the text doc as well as separate files for the bash scripts. That should make it easier. Oh yeah, the repo is called ERPNext_Guides.

More to be added once I get past some of the new script errors.

BKM

Well, here it is…

BKM’s ERPNext Guides

I also put a copy of the newest version of the Poor Man's Backup System (PMBS) on the forum here:

Since the forum locks me out of editing the text after a few weeks, your idea of putting it on GitHub was the right way to go. It just took me several days to figure out GitHub enough to get it all published.

Thanks for the push. :grin:

BKM

If you go the route of having a second server somewhere, you would neither need backups four times an hour nor risk losing even a few minutes of user work if you opted for master-slave replication.

Concerning brownouts: whenever there is a network failure, any updates on the master are not reflected on the slave. When the connection resumes, the slave quickly catches up to the right position in the log file and everything is good again.
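If you want to try it, here is a minimal sketch of pointing a MariaDB slave at a master. The hostname and credentials are hypothetical, and it assumes binary logging and a dedicated replication user already exist on the master:

#!/bin/sh
# Run on the slave server; hypothetical master address and credentials
MasterHost=erp-master.example.com
ReplUser=repl
ReplPass=repl_password

mysql -u root -p <<EOF
CHANGE MASTER TO
  MASTER_HOST='${MasterHost}',
  MASTER_USER='${ReplUser}',
  MASTER_PASSWORD='${ReplPass}',
  MASTER_USE_GTID=slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G
EOF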

Interesting tutorial. I might reuse some of the code from here, like only holding the last backup instead of all of them.

Currently what I have done is set cron jobs on my local Linux NAS device to SSH into the VPS server, run bench backup --with-files, and pull the files created during the hour when the backup was taken.

This is done on a daily basis, while another cron job removes all the backups older than 7 days (see the sketch after the script below).

#!/bin/sh

# Set Variables
#################################
UsernameSite=Server1              # Local storage directory assigned to this server's backups
Username=server1                  # SSH user on the remote VPS
Sitename=erp.server1.com          # ERPNext site name on the remote bench
Ip=123.1.1.123                    # Remote server IP address
Baklocation=/home/nasbox/backups  # Path to the local backup directory
# Set current date/time stamps
CurrentDateTime=$(date +"%Y-%m-%d-%H")
BackupDateTime=$(date +"%Y%m%d_%H")
#################################

# Add key to server - one-time command
#ssh-copy-id -i /home/$(whoami)/.ssh/id_rsa.pub ${Username}@${Ip}

# Make sure the local target directory exists before logging into it
mkdir -p ${Baklocation}/${UsernameSite}

# Generate a new backup with files (the single quotes keep $(whoami) from
# expanding locally, so it resolves to the remote user on the VPS)
ssh ${Username}@${Ip} 'cd /home/$(whoami)/frappe-bench/; /usr/local/bin/bench backup --with-files' > ${Baklocation}/${UsernameSite}/bak-${CurrentDateTime}.log

# Sync command - not useful for multi-bench servers
rsync -azzv -e ssh ${Username}@${Ip}:/home/${Username}/frappe-bench/sites/${Sitename}/private/backups/${BackupDateTime}*  ${Baklocation}/${UsernameSite}/${CurrentDateTime} >> ${Baklocation}/${UsernameSite}/bak-${CurrentDateTime}.log
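For completeness, a minimal sketch of the daily schedule and the 7-day cleanup job mentioned above. The script names and times are hypothetical; the paths reuse the variables from the pull script:

# Hypothetical crontab entries on the NAS box: pull a fresh backup at
# 01:00 every day, prune old ones at 02:00
# 0 1 * * * /home/nasbox/bin/pull_backup.sh
# 0 2 * * * /home/nasbox/bin/prune_backups.sh

#!/bin/sh
# prune_backups.sh - delete local backups older than 7 days
Baklocation=/home/nasbox/backups  # same local backup directory as above
UsernameSite=Server1

# Remove backup files and logs older than 7 days, then clean up any
# directories left empty by the deletion
find ${Baklocation}/${UsernameSite} -type f -mtime +7 -delete
find ${Baklocation}/${UsernameSite} -type d -empty -delete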

I will try to adjust these lines with a more sophisticated way of creating a new backup and pulling it locally.

Great to see others continue to innovate the concept further!

BTW… if you want the same thing with all the required support files, I created an updated version of this here:

~BKM

@bkm I was able to use the script and modified it to back up to an S3 bucket instead of a failover server. However, sometimes the backup does not upload at the full file size. I don't know why this is. BTW, I used the V2 tutorial.
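For context, a rough sketch of that kind of modification, assuming the AWS CLI is installed and using a hypothetical bucket name (the variables reuse the names from the script earlier in the thread):

#!/bin/sh
# Hypothetical bucket and paths; pushes the pulled backup directory to S3
Baklocation=/home/nasbox/backups
UsernameSite=Server1
CurrentDateTime=$(date +"%Y-%m-%d-%H")

aws s3 cp ${Baklocation}/${UsernameSite}/${CurrentDateTime}/ \
    s3://my-erpnext-backups/${UsernameSite}/${CurrentDateTime}/ --recursive

One thing worth checking: if the upload starts while the backup file is still being written, only a partial file lands in the bucket, which may explain the size mismatch.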

Hey Felix,

Did you use the "scp" command to execute the large file transfer, or some other file-moving tool?

The reason I ask is that I have found "scp" will wait almost forever through all kinds of communication timeouts to complete the transfer. Most other tools give up and time out.

Also, if you are using the "incron" tool to wait for a file to drop, it is important to use it with the exact syntax listed in the tutorial; otherwise it will leave more than half the file behind and the result will be useless. You must use the IN_CLOSE_WRITE event with incron to make it wait until the complete file is finished before moving it.
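For reference, a minimal sketch of such an incrontab entry. The watched directory and handler script are hypothetical; $@ expands to the watched path and $# to the file name:

# Fires only once the file has been completely written and closed
/home/frappe/backup-drop IN_CLOSE_WRITE /usr/local/bin/move_backup.sh $@/$#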

Hope that helps. :sunglasses:

BKM

@bkm The database backup is taking a lot of time. For example, my compressed DB size is 2.3GB and it takes close to 20 minutes to create the DB backup (*.sql.gz) file. Is there any way to reduce this time?

I'm afraid this backup time is going to increase with the DB size.

If we plan to do frequent backups, how do we reduce the backup time, and how do we mitigate any performance bottlenecks resulting from this?

We are running on 4 cores and 16 GB RAM, with both the app and DB on the same server.

Thanks,
Saravana

Related to DB backup:

Try the mariabackup command.

At 18:40 in the video you will find a slide on performance.
At 19:35 it gives you an idea of the performance: with a DB size of 64GB, it takes from 44s to 219s.
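A minimal sketch of using it, with hypothetical paths and credentials. Unlike mysqldump, mariabackup copies the data files directly instead of dumping SQL, which is why it is so much faster on large databases:

#!/bin/sh
# Hypothetical target directory and credentials
TargetDir=/home/nasbox/mariabackup/$(date +"%Y%m%d_%H")

# Take a physical backup of the running server
mariabackup --backup --target-dir=${TargetDir} --user=root --password=secret

# The backup must be "prepared" before it can be restored
mariabackup --prepare --target-dir=${TargetDir}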

For public/private files: How to backup with restic to S3 compatible storage

I am currently using the Poor Man's Backup v2 (different forum thread from this one) configuration on most of my servers, and the backup of a 7.3GB database takes about 2 minutes. This time is based on my VPS server with 8 CPU cores and 16GB of memory. I had similar performance (about two and a half minutes) with a 4.8GB ERPNext database on a server with 4 CPU cores and 8GB of memory as well.

It may be time to evaluate the performance of your VPS. I have never had a backup time of more than 3 to 5 minutes on 5+GB databases using the mysqldump command.

Additionally, you may be able to improve MariaDB performance by tuning the number and size of the buffers and the number of worker threads. Search the forum on this and you will find several threads related to MariaDB tuning; a rough sketch of a starting point follows below.
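As a rough illustration only (the values are hypothetical and depend entirely on your RAM and workload, and the drop-in config path is the Debian/Ubuntu default):

#!/bin/sh
# Hypothetical tuning values - size these to your own server
cat <<'EOF' | sudo tee /etc/mysql/mariadb.conf.d/zz-tuning.cnf
[mysqld]
# Often sized to roughly 50-70% of RAM on a host dedicated to the database
innodb_buffer_pool_size = 8G
# Larger redo logs smooth out heavy write bursts
innodb_log_file_size = 1G
EOF
sudo systemctl restart mariadb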

Hope this helps.

BKM