Web-enabled S3 bucket migration using OctopusDeploy and PowerShell

I recently had to move a fairly large, fairly heavily trafficked web-enabled S3 bucket between two different AWS accounts. This turned out to be ever so slightly more than a simple drag-and-drop or copy operation.

Why? Well, because it was web-enabled, mostly. A web-enabled S3 bucket has to be named after the hostname it serves, and bucket names are globally unique across all of AWS. So you can’t just create a new webenabledbucket.com.au bucket in account B while the original still exists in account A, copy the data across, flip the DNS and delete the bucket from account A. AWS won’t let you.

You have to have an intermediate stage where your files can live while the old bucket is deleted and the new one is provisioned. Which is where OctopusDeploy and the mighty Robot Army came to the rescue.

After declaring a 24-hour change freeze on the bucket, I created a new OctopusDeploy project and called it AdCentre Migration. To this I added three steps. Step one pulls all the files down from the old S3 bucket using the Read-S3Object cmdlet from the AWS Tools for PowerShell, and step two writes a /ping endpoint for the AWS ELB healthchecks to hit (a sketch of both is below). The third is called “null step”; it runs on our psengine node (actually a Tentacle on the Octopus Server itself) and does little more than write “Hello World” to the log. Why on earth would I do that? You’ll see in a second.
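
The sketch below uses placeholder values for the bucket name, credential profile and web root, not the real ones:

```powershell
# Step one (sketch): pull the whole bucket down onto the local web root.
# Bucket name, profile and folder are placeholders.
Import-Module AWSPowerShell

Read-S3Object -BucketName "webenabledbucket.com.au" `
              -KeyPrefix "/" `
              -Folder "C:\inetpub\wwwroot" `
              -ProfileName "source-account"

# Step two (sketch): write a /ping file for the ELB healthcheck to hit.
Set-Content -Path "C:\inetpub\wwwroot\ping" -Value "OK"
```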

[Image: The three steps]

With the project all set to go, I deployed it into ‘dead space’.

[Image: Deploying into Dead Space]

Finally, I stood up a minimal-spec Robot Army Microcluster pointed at the new project.

Robot Army servers, when provisioned, call out to OctopusDeploy using the API, and find the latest successful deployment for their role and environment. Octopus then triggers the deployment and they’re done. This is the reason I deployed into dead space – so I had a successful deployment job sitting there ready for the servers to ask for when they came online.
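
The lookup itself is just a call to the Octopus REST API. A rough sketch of the kind of query involved, with a placeholder server URL, API key, project ID and environment ID:

```powershell
# Sketch: find the most recent deployment for a given project and environment.
# Server URL, API key, project and environment IDs are placeholders.
$octopusUrl = "https://octopus.example.com"
$headers    = @{ "X-Octopus-ApiKey" = "API-XXXXXXXXXXXXXXXX" }

$deployments = Invoke-RestMethod -Headers $headers `
    -Uri "$octopusUrl/api/deployments?projects=Projects-42&environments=Environments-1"

# Take the newest one; the real bootstrap also checks its task actually succeeded.
$latest = $deployments.Items |
    Sort-Object { [datetime]$_.Created } -Descending |
    Select-Object -First 1

$latest.Id
```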

So, at this point, I had a new HA web cluster with all the files from the original bucket ready to serve content while I did the migration itself.

Then all I needed to do was flip the DNS from the old web-enabled bucket onto the new Robot Army cluster.
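
If your zone happens to live in Route 53, that flip can be scripted with the same AWS PowerShell tools. Everything in the sketch below (zone ID, record name, TTL and ELB hostname) is a placeholder:

```powershell
# Hypothetical Route 53 flip: repoint the site's CNAME at the microcluster's ELB.
# Zone ID, record name and ELB hostname are placeholders.
$rr = New-Object Amazon.Route53.Model.ResourceRecord
$rr.Value = "microcluster-elb-123456.ap-southeast-2.elb.amazonaws.com"

$rrset = New-Object Amazon.Route53.Model.ResourceRecordSet
$rrset.Name = "www.webenabledbucket.com.au"
$rrset.Type = [Amazon.Route53.RRType]::CNAME
$rrset.TTL  = 300
$rrset.ResourceRecords.Add($rr)

$change = New-Object Amazon.Route53.Model.Change
$change.Action = [Amazon.Route53.ChangeAction]::UPSERT
$change.ResourceRecordSet = $rrset

Edit-R53ResourceRecordSet -HostedZoneId "ZXXXXXXXXXXXX" -ChangeBatch_Change $change
```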

I gave the DNS a bit of time to propagate globally and kept an eye on the performance of the servers. They were performing admirably, despite the several thousand requests per minute coming in.

Back in my target account I created a new, temporary bucket called “migration_temp” and pushed all the files over to it (sketched below). I then altered the OctopusDeploy “Pull S3 Bucket” step to point at the temporary bucket and pushed a fresh deployment to one of the servers, so that autoscale events or self-healing would grab the correct content when a new server came up. If I’d omitted that step and a node died for any reason – or if load had suddenly increased, triggering a scale event – the new robot would have had no content to deploy. A small but important stage.
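
The temporary-bucket shuffle is nothing fancy; something along these lines, with placeholder names, paths and profiles:

```powershell
# Sketch: create the temporary bucket in the target account and push the
# local copy of the content into it. Names, paths and profiles are placeholders.
New-S3Bucket -BucketName "migration_temp" -Region "ap-southeast-2" -ProfileName "target-account"

Write-S3Object -BucketName "migration_temp" `
               -Folder "C:\migration\content" `
               -KeyPrefix "/" `
               -Recurse `
               -ProfileName "target-account"
```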

I then deleted the old bucket, and went away to eat a rather nice lunch.
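
The delete itself is a one-liner with the AWS tools (bucket name and profile are placeholders):

```powershell
# Sketch: empty and delete the old bucket in the source account.
Remove-S3Bucket -BucketName "webenabledbucket.com.au" -DeleteBucketContent -Force -ProfileName "source-account"
```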

It took a while for S3 to completely release the old bucket name, so I checked back in occasionally. When it finally went away, I recreated the bucket in my target account, set it to be web-enabled, and pushed all the files in from the temporary bucket, remembering to make everything public-readable.
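
In sketch form, with placeholder names, paths and profiles, that rebuild looks something like this:

```powershell
# Sketch: recreate the bucket in the target account once the name is free again,
# turn on website hosting, and push the content in world-readable.
New-S3Bucket -BucketName "webenabledbucket.com.au" -Region "ap-southeast-2" -ProfileName "target-account"

Write-S3BucketWebsite -BucketName "webenabledbucket.com.au" `
                      -WebsiteConfiguration_IndexDocumentSuffix "index.html" `
                      -WebsiteConfiguration_ErrorDocument "error.html" `
                      -ProfileName "target-account"

Write-S3Object -BucketName "webenabledbucket.com.au" `
               -Folder "C:\migration\content" `
               -KeyPrefix "/" `
               -Recurse `
               -CannedACLName public-read `
               -ProfileName "target-account"
```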

I then ran a quickie script to check that all the files were there as expected, using Get-S3Object and comparing file hashes (sketched below). Once that was confirmed, all that remained was the second DNS flip, transferring load from my microcluster to the new S3 bucket.
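
Bucket names and profile in the sketch are placeholders, and bear in mind that S3’s ETag is only a straight MD5 hash for non-multipart uploads:

```powershell
# Quickie verification sketch: compare keys and ETags between the temporary
# bucket and the rebuilt one. Bucket names and profile are placeholders.
$source = Get-S3Object -BucketName "migration_temp" -ProfileName "target-account"
$target = Get-S3Object -BucketName "webenabledbucket.com.au" -ProfileName "target-account"

$targetByKey = @{}
foreach ($obj in $target) { $targetByKey[$obj.Key] = $obj.ETag }

foreach ($obj in $source) {
    if (-not $targetByKey.ContainsKey($obj.Key)) {
        Write-Warning "Missing in target: $($obj.Key)"
    }
    elseif ($targetByKey[$obj.Key] -ne $obj.ETag) {
        Write-Warning "Hash mismatch: $($obj.Key)"
    }
}
```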

Then I went home and gave DNS time to propagate globally.

Bright and early the next morning, I logged in, checked that the microcluster was no longer receiving web requests, and deleted it. The robots had done their job, as had OctopusDeploy. Zero downtime, no alarms (and no surprises): the bucket was moved out of the old account and into the new, and nobody noticed I’d done anything.

It only remained to make sure that the content editors who update the files had their shiny new access keys correctly set up, and we were done.
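
If your editors drive S3 from PowerShell too, cutting and storing the new keys is about this much work (the IAM user and profile names are placeholders):

```powershell
# Sketch: issue new keys for a content editor in the target account...
$key = New-IAMAccessKey -UserName "content-editor" -ProfileName "target-account"

# ...then, on the editor's machine, store them as a named credential profile.
Set-AWSCredentials -AccessKey $key.AccessKeyId -SecretKey $key.SecretAccessKey -StoreAs "target-account"
```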

You could do this yourself quite easily, and you don’t even necessarily need to have an HA cluster in the middle, or OctopusDeploy in your stack. You could just use a simple EC2 instance, or an Azure VM, or (heaven forfend) a physical server. Whatever you like. These technologies happen to be pretty cool, available close at hand and actually very good at what they do, and they gave me a chance to blog about the sneaky “Deploy into Dead Space” trick that other OctopusDeploy users may not have stumbled onto yet.

 
