I don't know if this post is meant to educate so much as to allow me to feel smug and self-satisfied about setting up a nice automated deployment scenario, but maybe it will give some of you some ideas if you're not doing this sort of thing already. I've set myself up a really cool continuous integration/automated deployment system, using free tools and some custom ones. Even though I'm developing in .NET and deploying on Windows servers, you can still do this sort of thing with your platform-of-choice equivalents.
Basically, I have three websites that are different facets of the same application. One is the public-facing website with the members' area and so forth. The next is a pure REST API, hosted on its own subdomain. The third is my own personal admin dashboard for monitoring and maintaining the system on the go, hosted under yet another domain name. On top of that I have a primary background worker process which is designed to self-update when I have a new version to deploy, and there's also a Windows service that detects crashed background processes and restarts them if necessary, though hopefully that's an extreme rarity. Finally, there's the updater application, which performs the actual background worker update and also runs an update when configuration files have changed.
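The watchdog part of that is conceptually tiny. Roughly speaking (with an illustrative process name and path rather than my real ones), the periodic check inside that Windows service looks something like this:

```csharp
using System.Diagnostics;
using System.Linq;

// Sketch of the watchdog check (process name and path are illustrative, not my real ones).
// In the real service this runs on a timer every few seconds.
class WorkerWatchdog
{
    public void RestartWorkerIfCrashed()
    {
        bool alive = Process.GetProcessesByName("BackgroundWorker").Any();
        if (!alive)
        {
            // The worker has died: relaunch it from its install folder.
            Process.Start(@"C:\apps\worker\BackgroundWorker.exe");
        }
    }
}
```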
All of this automatically builds and deploys with a single push to my master git branch. Because my worker processes are designed to scale out across multiple EC2 servers en masse, I deploy prebuilt updates, packaged as zip files, to S3, where the workers and updaters on each EC2 instance can look for them. Usually they don't have to, though, because each one is subscribed to a Redis channel, so I can push out an update notification the moment a build/deploy completes and they know to shut down, run the updater application and then restart.
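To give a rough idea of the subscription side, here's roughly what the worker's listener looks like, using StackExchange.Redis. The host, channel name and the shutdown helper are illustrative stand-ins rather than my exact code:

```csharp
using System;
using StackExchange.Redis;

// Worker-side subscription sketch. Host, channel name and the helper below
// are illustrative stand-ins, not the actual implementation.
var redis = ConnectionMultiplexer.Connect("my-redis-host:6379");
var subscriber = redis.GetSubscriber();

subscriber.Subscribe("deployments", (channel, message) =>
{
    // A new package has landed in S3: finish in-flight work, hand off to the
    // updater and exit (the updater relaunches the newly-updated worker).
    Console.WriteLine($"Update notification received: {message}");
    BeginGracefulShutdownAndUpdate(message.ToString());
});

Console.ReadLine();   // keep the process alive while subscribed (the real worker has its own loop)

// Hypothetical helper: drain current jobs, start the updater process, then quit.
static void BeginGracefulShutdownAndUpdate(string packageName)
{
    // drain work, Process.Start the updater, Environment.Exit(0)
}
```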
The process flows like this:
1) I push to my master branch in my private GitHub repository.
2) My build server is running TeamCity (a freemium continuous integration tool), which picks up my changes from the git repo, then rebuilds my web apps and background apps to a build directory and updates config files with production settings and so forth.
3) I have a custom tool I whipped up that I invoke as part of TeamCity's build process. It zips up each web app and background app that I need to deploy, appends a timestamp to the filename of each package, uploads them to an S3 bucket and then publishes a notification to the Redis channel I mentioned earlier (a rough sketch of this tool follows the list below).
4) My worker processes, running on an arbitrary number of servers, each independently detect the new version (via the Redis notification or by an occasional automated check of the S3 bucket's contents), allow internal processes to finish cleanly, then download the latest version of the updater, which they then execute and quit. The updater (again, independently on each EC2 instance) then updates the worker binaries, relaunches the newly-updated worker process and quits, completing the background worker self-update process.
5) On my web server (again, an EC2 instance) I have a simple node.js-driven service running which listens for (authenticated, of course) HTTP requests telling it to redeploy a given web application (specified in the request). Each production app has alternating A and B folders, so when a deploy is requested, the service downloads and unzips the specified package into whichever of those two folders is not currently in use, then points the web server at the new folder for that application. This means I can deploy a new version of the website without ever interrupting any users on the site (the folder-swap idea is sketched after this list). So the TeamCity build process, having finished the build and upload steps above, uses cURL to call this web deploy service once for each of the three web applications.
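For the curious, the guts of the custom packaging tool from step 3 boil down to something like the following, using the AWS SDK for .NET and StackExchange.Redis. The bucket, region, channel and paths are placeholders rather than my real values:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using StackExchange.Redis;

// Packaging sketch: zip one app's build output, timestamp the filename,
// push it to S3, then announce it on the Redis channel.
// Bucket, region, channel and paths are placeholders, not my real values.
string buildDir = args[0];                                   // build output folder for one app
string appName  = new DirectoryInfo(buildDir).Name;
string zipName  = $"{appName}-{DateTime.UtcNow:yyyyMMddHHmmss}.zip";
string zipPath  = Path.Combine(Path.GetTempPath(), zipName);

ZipFile.CreateFromDirectory(buildDir, zipPath);

var s3 = new AmazonS3Client(RegionEndpoint.USEast1);
await new TransferUtility(s3).UploadAsync(zipPath, "my-deploy-bucket", zipName);

var redis = await ConnectionMultiplexer.ConnectAsync("my-redis-host:6379");
await redis.GetSubscriber().PublishAsync("deployments", zipName);
```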
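And for the A/B switch in step 5: my deploy listener is actually node.js, but the folder-swap idea itself is language-agnostic. Expressed in C# just to keep these examples in one language, and assuming the sites are hosted in IIS (via Microsoft.Web.Administration, with made-up site names and paths), the pointer flip amounts to:

```csharp
using Microsoft.Web.Administration;

// Point an IIS application at whichever of the A/B folders just received the new build.
// Site name and folder path are illustrative placeholders.
using var manager = new ServerManager();
var app = manager.Sites["PublicSite"].Applications["/"];
app.VirtualDirectories["/"].PhysicalPath = @"C:\sites\public\B";   // the folder not currently serving traffic
manager.CommitChanges();   // IIS starts serving new requests from the swapped-in folder
```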
As a result of all of this, a single push to my master branch in my git repository causes any number of background processes to automatically shut down cleanly, update and relaunch, and each of my web applications to update seamlessly without interrupting anyone using them.
Is good, yes?
(Now expecting someone to come along and make this seem like child's play. Mahzkrieg? dchuk?)