Pushing to multiple EC2 instances on a load balancer
I am attempting to figure out a good way to push out a new commit to a group of EC2 server instances behind an ELB (load balancer). Each instance is running Nginx and PHP-FPM.
I would like to perform the following workflow, but I am unsure of a good way to push out a new version to all instances behind the load balancer.
- Dev is done on a local machine
- Once changes are ready, I perform a “git push origin master” to push
the changes to BitBucket (where I host all my git repos)
- After being pushed to bitbucket, I would like to have the new version
pushed out to all EC2 instances simultaneously.
- I would like to do this without having to SSH in to each instance
Is there a way to configure the remote servers to accept a remote push? Is there a better way to do this?
5 Answers
Yes, I do this all of the time (with the same application stack, actually).
Use a base AMI from a trusted source, such as the default “Amazon Linux” ones, or roll your own.
As part of the launch configuration, use the “user data” field to bootstrap a provisioning process on boot. This can be as simple as a shell script that runs yum install nginx php-fpm -y and then copies files down from an S3 bucket, or does a pull from your repo. The Amazon-authored AMIs also include support for cloud-init scripts if you need a bit more flexibility. If you need even greater power, you can use a change management and orchestration tool like Puppet, Chef, or Salt (my personal favorite).
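A minimal user-data bootstrap script along those lines might look like this (the S3 bucket name, repo URL, and web root are assumptions for illustration):

```shell
#!/bin/bash
# Runs once at first boot via the EC2 user-data mechanism.
yum install -y nginx php-fpm

# Fetch the application code: either copy a release from S3...
# (bucket name is a placeholder)
aws s3 cp s3://my-app-releases/current.tar.gz /tmp/current.tar.gz
mkdir -p /var/www/html
tar -xzf /tmp/current.tar.gz -C /var/www/html

# ...or pull straight from the repo instead (URL is a placeholder):
# git clone git@bitbucket.org:me/my-app.git /var/www/html

service php-fpm start
service nginx start
```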
As far as updating code on existing instances: there are two schools of thought:
- Make full use of the cloud and just spin up an entirely new fleet of instances that grab the new code at boot. Then you flip the load balancer to point at the new fleet. It’s instantaneous and gives you a really quick way to revert to the old fleet if something goes wrong. Hours (or days) later, you then spin down the old instances.
- You can use a tool like Fabric or Capistrano to do a parallel “push” deployment to all the instances at once. This is generally just re-executing the same script that the servers ran at boot. Salt and Puppet’s MCollective also provide similar functionality that meshes with their basic “pull” provisioning.
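The parallel-push idea can be sketched in plain shell as well. Host names, the repo path, and the deploy command below are assumptions; the command runner is a parameter so the same loop can be dry-run with echo or run for real with ssh:

```shell
# Deploy command assumed to match the boot-time provisioning script.
DEPLOY_CMD='cd /var/www/html && git pull origin master'

deploy_all() {
    runner="$1"; shift
    for host in "$@"; do
        "$runner" "$host" "$DEPLOY_CMD" &   # run every host concurrently
    done
    wait                                    # block until all jobs finish
}

# Dry run: prints the command each host would run
deploy_all echo web1.example.com web2.example.com
# Real deploy: deploy_all ssh web1.example.com web2.example.com
```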
- Push it to one machine.
- Have a git hook created on it http://git-scm.com/book/en/Customizing-Git-Git-Hooks.
- Make hook run pull on other machines.
The only problem: you’ll have to maintain a list of machines to run the update on.
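The steps above could be wired up with a post-receive hook on the machine you push to; a minimal sketch (the machine list and repo path are assumptions):

```shell
#!/bin/sh
# .git/hooks/post-receive on the "hub" machine
MACHINES="web1.example.com web2.example.com"   # the list you have to maintain
for m in $MACHINES; do
    ssh "$m" "cd /var/www/html && git pull origin master"
done
```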
Have a cron job on each instance pull from your Bitbucket account on a regular basis.
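A sketch of what that crontab entry might look like (the five-minute schedule, repo path, and log file are assumptions):

```
# crontab -e on each instance: pull every 5 minutes
*/5 * * * * cd /var/www/html && git pull origin master >> /var/log/git-pull.log 2>&1
```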
The tool for this job is Capistrano.
I use an awesome gem called capistrano-ec2group in order to map capistrano roles with EC2 security groups.
This means that you only need to apply an EC2 security group (eg. app-web or app-db) to your instances in order for capistrano to know what to deploy to them.
This means you do not have to maintain a list of server IPs in your app.
The change to your workflow would be that instead of focusing on automating the deploy on pushing to bitbucket, you would push and then execute cap deploy.
If you really don’t want to do two steps, make an alias 😀
alias shipit='git push origin master && cap deploy'
This solution builds on E_p’s idea. E_p says the problem is you’d need to maintain a server list somewhere in order to tell each server to pull the new update. If it was me, I’d just use tags in ec2 to help identify a group of servers (like “Role=WebServer” for example). That way you can just use the ec2 command line interface to list the instances and run the pull command on each of them.
for i in $(ec2din --filter "tag-value=WebServer" --region us-east-1 \
    | grep "running" \
    | cut -f17); do
    ssh $i "cd /var/www/html && git pull origin"
done
Note: I’ve tested the code that fetches the IP addresses of all tagged instances and connects to them via SSH, but not the specific git pull command.
You need the amazon cli tools installed wherever you want this to run, as well as the ssh keys installed for the servers you’re trying to update. Not sure what bitbucket’s capabilities are but I’m guessing this code won’t be able to run there. You’ll either need to do as E_p suggests and push your updates to a separate management instance, and include this code in your post-commit hook, OR if you want to save the headache you could do as I’ve done and just install the CLI tools on your local machine and run it manually when you want to deploy the updates.
Credit to AdamK for his response to another question which made it easy to extract the ip address from the
ec2din output and iterate over the results: How can I kill all my EC2 instances from the command line?
EC2 CLI Tools Reference: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/Welcome.html
Your best bet might be to actually use AMIs for deployments.
Personally, I typically have a staging instance where I can pull any repo changes into. Once I have confirmed it is operating the way I want, I create an AMI from that instance.
For deployment, I use an autoscaling group behind the load balancer (it doesn’t need to be dynamically scaling or anything). Take a simple setup with a fixed number of servers in the autoscale group, say 10 instances: I would change the AMI associated with the autoscale group to the new AMI, then start terminating a few instances at a time in the group. So, say I have 10 instances and I terminate two to take it down to 8. The autoscale group is configured to have a minimum of 10 instances, so it will automatically start up two new instances with the new AMI. You can then keep removing instances at whatever rate makes sense for your level of load, so as not to impact the performance of your fleet.
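With the AWS CLI, that rotation could look roughly like this (the group name, launch configuration name, AMI ID, instance type, and instance ID are all placeholders):

```shell
# Create a launch configuration that uses the new AMI, and point the
# autoscale group at it
aws autoscaling create-launch-configuration \
    --launch-configuration-name app-v2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.small

aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name app-asg \
    --launch-configuration-name app-v2

# Terminate an old instance without lowering desired capacity; the group
# replaces it with a fresh instance built from the new AMI
aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id i-aaaa1111 --no-should-decrement-desired-capacity
```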
You can obviously do this manually, even without an autoscale group by directly adding/removing instances from the ELB as well.
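Done by hand against a classic ELB, that amounts to deregistering an instance, updating or replacing it, and registering it again (load balancer name and instance ID are placeholders):

```shell
# Take an instance out of rotation
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name app-elb --instances i-aaaa1111
# ...update or replace the instance here...
# Put it back in rotation
aws elb register-instances-with-load-balancer \
    --load-balancer-name app-elb --instances i-aaaa1111
```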
If you are looking to make this fully automated (i.e. continuous deployment), then you might want to look at using a build system such as Jenkins, which would allow a commit to kick off a build and then run the necessary AWS commands to make AMIs and deploy them.