MySQL backup server / staging server

dooogen (WF Premium Member)
I'm about to begin coding a SaaS project that has to do with client management for small businesses. It's just a hobby project for me right now, which is why I'm allowing myself to code it instead of paying a developer to do it for me. My experience with PHP and server back ends is intermediate at best.

I've been reading up on MySQL replication a bit so I can have a backup DB that stays up to date, ready in case something should happen to my main server.

I already plan on having a staging server that will be identical to the main server: all files, crons, etc. The only difference is that no real customer data will be stored in its MySQL DB. It will be used for testing all updates before they go live.

My questions:

Theoretically, my main server goes down... I change "localhost" to the IP of my MySQL replication slave in the DB config file on my staging server, and point the domain to the staging server. The website should then be back up and working, correct?
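
For example, assuming a dead-simple config file like the one below (file name, constants, and credentials are all made up for illustration), the failover would just be swapping one define:

    <?php
    // config.php - DB connection settings (all names here are hypothetical).

    // Normal operation: the app talks to MySQL on the main server.
    define('DB_HOST', 'localhost');

    // During failover I'd point it at the replication slave instead:
    // define('DB_HOST', '203.0.113.50');

    define('DB_USER', 'app_user');
    define('DB_PASS', 'secret');
    define('DB_NAME', 'saas_app');

    $db = new mysqli(DB_HOST, DB_USER, DB_PASS, DB_NAME);
    if ($db->connect_error) {
        die('DB connection failed: ' . $db->connect_error);
    }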

If that's correct, what happens when the main server comes back up and the slave MySQL server starts trying to pull data from the master again, which will be missing any new data that went in during the downtime?

And lastly, what kind of specs (relative to the main server) does the slave MySQL server need? I've never had a server that was purely used for a MySQL DB and nothing else. Would it be a terrible idea to put it on a small VPS to start?


Hopefully these questions made some sense haha.
 


IMO, dev, staging and production should all have completely duplicated components. That means completely separate databases, so pointing production at the staging database wouldn't make sense.

If you want a slave server as part of your setup, that's fine; just have one for each environment. I believe there are setups where a slave can become the master if the master fails.

If you're thinking this sounds like too much hardware / cost, check out Vagrant; you can run multiple VMs to test all of this locally. It's very powerful combined with Puppet or Chef, so you can automatically build your environment from scratch.

Regarding specs: if the slave might become the master, I'd give it the same specs as the master.

Also, please tell me you are using revision control software, like git :) That way, you can have, at a minimum, dev, staging and production branches, which you can push to their respective environments.
 
Put your main DB on a separate server from your web app. In production SaaS apps, you basically want all major architectural pieces on different server instances, both for simplicity and for easily monitoring what the fuck goes wrong when things fucking go wrong.
 
I think database replication is overkill for what you're doing, but anyway...

The master server writes every change to a binary log, which the slave servers pull from and replay. When you promote a slave to master, the new master starts writing its own binary log. When the original master comes back up, you can make it a slave of the new master so it catches up from that binary log before you switch back.

MySQL :: MySQL 5.0 Reference Manual :: 16.3.6 Switching Masters During Failover
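
The steps in that manual section boil down to a couple of SQL statements run against the slave being promoted. A rough sketch in PHP (host and credentials are placeholders; in practice you'd probably just do this from the mysql command line):

    <?php
    // promote_slave.php - sketch of the "switching masters" steps from the
    // MySQL 5.0 manual. Host and credentials below are placeholders.
    $slave = new mysqli('203.0.113.50', 'root', 'secret');

    // Once the slave has finished applying everything in its relay log,
    // stop replication and reset its binary log so it starts fresh as
    // the new master.
    $slave->query('STOP SLAVE');
    $slave->query('RESET MASTER');

    // Any other slaves then get re-pointed at the new master:
    // CHANGE MASTER TO MASTER_HOST='203.0.113.50'; START SLAVE;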

Please note, database replication isn't quite meant for this type of thing. It's meant more for high-load operations (e.g. separating reads from resource-intensive writes) and/or security reasons.
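
For what it's worth, that read/write split is usually just two connections at the application level, something like this (hosts and credentials made up):

    <?php
    // Hypothetical read/write split: writes go to the master,
    // heavy reads get served from the replication slave.
    $master = new mysqli('10.0.0.1', 'app_user', 'secret', 'saas_app');
    $slave  = new mysqli('10.0.0.2', 'app_user', 'secret', 'saas_app');

    // Writes always hit the master...
    $master->query("INSERT INTO clients (name) VALUES ('Acme Inc')");

    // ...while expensive reads go to the slave. Note the slave can lag
    // slightly behind the master, so these reads may be a touch stale.
    $result = $slave->query('SELECT * FROM clients ORDER BY name');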

FreeBSD is a common choice for DB servers, and it sounds as though a small VPS will be just fine for what you're doing.
 
To add on to what Kiopa Matt said, for most projects, you can get away with hourly database dumps that are saved to S3 or somewhere else safe and away from your site. If shit ever catches on fire, you can just grab the latest back up and be pretty good to go, though you might have lost up to an hour of data...obviously, implement this as intelligently as possible (IE don't fuck with transaction database data or something else stupid).