Author Archives: Jason

Introducing Takofukku

I’ve long held the opinion that Octopus Deploy is a fantastic tool for at-scale ops tasks. My motto for a couple of years now has been “If you have a deployment target, you also have a management target”. Octopus’s amazingly flexible and nicely secured Server/Tentacle model means you’ve got a distributed task runner par excellence right there in your organisation, ready to use.

What it’s tended to lack is an easy, off-the-shelf automated trigger.

Let me explain:

The classic mode of operation for Octopus goes like this: you push code, a build server picks it up, builds and packages it, and hands the package to Octopus, which deploys it to your targets.

However, for most infrastructure-as-code applications, which are written in interpreted languages, that build server step isn’t really needed. You could argue “well, tests should run there”, but let’s not forget, Octopus is a distributed task runner already. Why not run tests in Octopus? It understands environments, so it understands branches, so you can easily separate dev/test/prod, or whatever process you use. If you’re smart, you can even get it to spin up arbitrary environments.

So back when I was at Domain Group, I set about building a little webhook server using PoSHServer to take git commits and fire Octopus, so the ops team could drive things like Robot Army in Octopus from a simple commit/push. It mapped known branches to known Octopus environments. No build server in the middle needed. A little custom code, but not much.

That was all very well, but it wasn’t very usable. You had to delve into PowerShell code every time you wanted to trigger Octopus for a new project. If you’re a Bash person, that’s no good. If you’re a C# person, it may be no good. It also wasn’t very generic or well-defined.

So I started building Takofukku in my head.

Takofukku is a simple Anglo-Japanese portmanteau word. Tako (たこ) is “octopus” – you may have had takoyaki at a Japanese restaurant. “Fukku” (フック) is “hook”. So Takofukku is an Octopus Hook – which is what this little app does.

I dithered over this app on personal time while working at Domain, wrote some PowerShell for it, scrapped it, messed around with AWS Lambda for a bit, then moved to Azure Functions, started a PowerShell project, scrapped it, tried again in C#, scrapped that, and generally made no concrete progress. I struggled with free time and attention span, to be frank. When you’ve been coding all day, coding when you get home is far less attractive than the Xbox.

Then I joined Octopus Deploy as a Cloud Architect. And still dithered over it.

Then a few weeks ago I got a Saturday entirely to myself with nothing to divert my attention. I’d been learning F# for a bit, and was inspired by the language, so I scrapped all the previous work and rewrote the whole thing in F#.

And so, Takofukku became a thing.

Takofukku is, in essence, a webhook endpoint that understands GitHub push events. It defines a simple file format, the takofile, which lives in the root folder of a GitHub repository and defines how to trigger Octopus when you push. It’s not unlike, say, AppVeyor or Travis in that respect.

So when you commit and push, Takofukku is triggered: it grabs your takofile, figures out how to invoke Octopus, and does it, using an API key you provide privately.
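The real takofile schema is documented in the Takofukku readme, but purely as an illustration – every key here is hypothetical – a takofile mapping branches to Octopus environments might look something like this:

```yaml
# hypothetical takofile – check the Takofukku readme for the real schema
octopus:
  server: https://octopus.example.com
  project: RobotArmy
branches:
  master: Production
  develop: Development
```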

Of course, I wanted this app for ops tasks, but what it actually enables is a lot more. Using Takofukku, you could use Octopus as a hillbilly CI server, letting Octopus run your builds, your tests, your packaging and your deployment. Very good for a cheapskate shop such as, say, me. Sure, Octopus isn’t designed as a build server, but when you think about it… it sort of kind of is. You could easily roll an Octopus project that spins up, say, a Docker container after a commit, then uses that container to run tests or builds, then drops a built artifact into the Octopus NuGet feed, which in turn triggers a deploy. This stuff is pretty much right there out of the box. Conversely, I’ve got Octopus projects that just send me a Slack message when a push happens on a known branch.

So, long story short: today I made Takofukku open to the public, and the instructions for using it are in the GitHub readme.

Please do feel free to play with it. It’s open source, it’s free to use – will ALWAYS be free for public, open repos – and I’m open to feedback.

Do hit me up on Twitter – I’m @cloudyopspoet.

Cheers all!



Introducing StatusCakeDSC

I recently published a new open source PowerShell project for creating StatusCake checks and contact groups, called – logically enough – StatusCakeDSC.

If you use StatusCake, and you do Infrastructure as Code, then you may be thankful for a nice declarative way to define and manage your checks and groups. For PowerShell users like myself, that declarative syntax is generally DSC.

I happen to use StatusCake for my personal projects, which I’ve been modernising a little of late, and I figured I should just build and publish a nice little DSC resource to do what I need done. The syntax of it is pretty simple, it’s like pretty much every other DSC resource you’ve ever seen. Look
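Here’s a rough sketch of what that looks like – the resource and property names below are illustrative, so check the repo’s examples for the real schema:

```powershell
Configuration MyStatusCake
{
    Import-DscResource -ModuleName StatusCakeDSC

    Node 'localhost'
    {
        StatusCakeContactGroup OnCallOps
        {
            Ensure    = 'Present'
            GroupName = 'OnCallOps'
            Email     = @('ops@example.com')
        }

        StatusCakeTest HomePage
        {
            Ensure       = 'Present'
            Name         = 'example.com homepage'
            URL          = 'https://www.example.com/'
            CheckRate    = 300
            ContactGroup = 'OnCallOps'
        }
    }
}
```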

That example creates a test and a contact group. There are more examples over at the repo. There’s also a simple install script, and a little further down the line there’ll be an Octopus Deploy step template and maybe some other goodies.

It’s not production-perfect, but it certainly does the basics. I’ll probably declare it ready in a week or two, at which point I’ll publish it as a package and maybe even wrap it in Chocolatey or PowerShellGet to make it even easier to install. Do feel free to let me know if it’s useful, and drop in PRs if you spot bugs or gaps.

Turn your PowerShell scripts into a Web API – three options

So, you’ve got a load of PowerShell code that does useful stuff, and because you’re doing Modern Windows Ops now, you want to port that code into some kind of web-based API, so you can scale it out and make your epic PowerShell code accessible from more devices and environments.

You don’t want to rewrite it all in C# as an ASP.NET Web API – for an ops engineer that seems like terrible overkill. Besides, who’s got time for that nonsense? Modern ops teams are busy as hell, even though they’ve got automation everywhere. You could get devs to do it, but then you have to manage requirements and specs and Jira tickets oh god not Jira tickets. Please, anything other than Jira tickets NOOOOOOOOO!

Ahem *cough* excuse me. Where was I?

Oh yes.

If only there was a way you could take your existing PowerShell code and turn it into an HTTP API quickly and easily.

Well, it turns out there are lots of options now. Check below the fold for three I’ve used recently.


Blog Update 17 May 2017. Job stuff, WannaCry and MVA

So, what’s been going on? I’ve been a bit lax in blogging here of late, which I hope to fix in the near future. So what’s the news?

Well, news item number one is that I’m about to move on from Domain Group, where I’ve been Windows DevOps Wonk for the last three years, and head to Octopus Deploy, where I’ll be doing some really exciting cloud architecture work and generally trying to reach as many people as possible with the good news of Modern Windows Ops.

So that’s the first thing out of the way. What else has been going on?

Oh yeah, there’s that whole WannaCry thing which went by. At Domain we were entirely unaffected. Why?

Well, most of us are running Windows 10 on our laptops, which was immune to the specific vulnerability. That was a major factor. But I don’t manage the client OS fleet. I manage the servers.

Good solid firewall practice was a major factor. SMB would never be open to the internet, and we have periodic security scanning that checks our Cloud environments against an exhaustive set of rules. We absolutely don’t allow SMB shares on our fleet – that was common practice at one time, but rapidly deemed anticloud because it does nothing except enable bad deployment practice.

However, an interesting wrinkle on the subsequent community debate: At Domain, we turn off Windows Update on our Robot Army Windows Server fleet.

“WHAAAAAT?” you say. “WHYYYY?”

There’s a specific reason for this on those instances. We found quite early on that Windows Update would occasionally fire off on instances and push CPU usage to 100%, triggering scale-up events. In some cases, we’d end up with alerts and minor outages as a result of this behaviour. It also skewed the metrics we collect on resource usage by causing spikes at odd times, and was known to delay deployments.

So we made a considered, reasoned decision to disable Windows Update on the autoscaling fleet. That’s a few hundred boxes right there.

As threat mitigation, we don’t allow RDP access to those boxes, we run Windows Server Core Edition with a minimal set of features enabled, and we closely monitor and correct changes to things like Security Group rules, service state and installed Windows features. All boxes are disposable at a moment’s notice, and we renew our AMI images on a regular basis – sometimes several times a week. Those images are fully patched and tested – with a suite of Pester tests – at creation time. We also maintain a small number of more “traditional” servers, which do have updates enabled, but none of these run customer-facing workloads.

Make no mistake, Troy Hunt is absolutely right that no client OS should have updates disabled. But a modern server workload may have a case for it, as long as other measures are taken to protect the OS from threats. Your mileage may vary. Treat all advice with caution. I am not a doctor.

Lastly, here’s a new bit from Microsoft Virtual Academy (published 15 May 2017) which I think does a decent job of explaining modern DevOps practices to the curious or confused. The video and I certainly differ on some specific points of dogma, but the big picture is good – automate, tighten your feedback loops, throw away the old stuff, treat servers as software objects, move fast, apply laziness early on, build often, deploy often, etc. Worth a look even if you’re a grizzled veteran, I’d say.

You might be paying too much for S3

Actually… you almost certainly are. There’s a hoarder’s instinct with S3 and related cloud storage: you keep things in there that you might not need any more, because it can be hard to identify what’s still needed. However, that’s not what I’m talking about.

I’m talking about aborted multipart uploads.

S3 offers multipart uploads for large files. They make uploads more reliable, by breaking them up into smaller chunks, and resumable, by restarting from a failed chunk rather than from scratch. A lot of S3 clients have multipart support built in, so you might well be doing multipart uploads without even knowing it.

However, when a multipart upload is aborted or never completes, the parts already uploaded can be held open indefinitely – there is literally no default timeout – and they’re billable objects in your account.

Luckily, AWS provides ways to deal with this. You just have to search them out.

If you’re using PowerShell, as I am, you can use the Remove-S3MultipartUpload cmdlet. If you’re using, say, Node.js, you can use s3-upload-cleaner. Whatever your chosen SDK, there’s a way to clean these up – you just need to know about it and act on it.
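A minimal sketch of the PowerShell route, assuming the AWS Tools for PowerShell module is installed and credentials are configured (bucket name is a placeholder):

```powershell
# Abort (and stop paying for) every multipart upload in the bucket
# that was initiated more than a week ago and never completed
Remove-S3MultipartUpload -BucketName 'my-bucket' -InitiatedDate (Get-Date).AddDays(-7)
```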

There’s even a way to do this with an S3 bucket lifecycle policy, as explained by Jeff Barr here.
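The lifecycle route looks roughly like this – a rule with an AbortIncompleteMultipartUpload action, which tells S3 to clean up any upload still incomplete after a set number of days:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```

You can apply that with the AWS CLI’s put-bucket-lifecycle-configuration command, or through the bucket’s Management tab in the console.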

Now go forth, and stop paying too much for S3. Also, clean out the attic while you’re in there. You don’t need most of those files, do you? Hoarder.

The PowerShell Pipeline, explained

So, my previous post on PowerShell has prompted some responses, internally and externally. Sufficient that I did actually re-word some parts of it, and sufficient that I feel a need to be positive and offer something to take away the burn.

So let’s have a go at explaining the pipeline, shall we?

To do this, I’m going to give an example of doing something without the pipeline. I hope that by the end of this post, the value of showing the other way first will be clear. But I’ll say up front, if you have written code like I’m about to show, don’t fret. It still works. There’s just a better way.

The example I’ve chosen is this:

You’re deploying an IIS Web Application using PowerShell, and as part of your deployment process, you want to delete the previous web site(s) from IIS.

So, let’s dig in. I’m going to be quite thorough, and it’s fine to follow along in the PowerShell prompt. You will, of course, need IIS installed if you do, but don’t worry, at the end there’s an example or two that should work for everyone.


You don’t know PowerShell

We’ve been doing a lot of interviewing at work of late. You see, we’re looking for good Windows DevOps candidates, and the local market is… well… let’s just say that next-gen Windows guys are a little thin on the ground around here.

This is a problem. Because while next-gen Windows candidates are thin on the ground, resumés claiming next-gen Windows skills are emphatically not. In fact, I have actually – literally – lost count of the number of resumés I’ve seen where PowerShell is touted as a key skill, only for the candidate to fail some very simple PowerShell questions in the initial screening. So let’s just run through a few basic principles so that we don’t all waste each other’s time.


HOWTO: Whitelist Pingdom Probe IPs into AWS Security groups

This is something I’ve been meaning to write about for a while.

If you use Pingdom for your monitoring, and you have a requirement to lock down your endpoints to a specific set of clients, you may have a painful job on your hands.

Some engineers I’ve spoken to have implemented a kind of proxy to forward pingdom requests through to their locked-down endpoints. Others rely on User-Agent detection to allow Pingdom probes through while denying other traffic.

In my case, I’ve implemented a PowerShell script that runs at intervals, checking Pingdom’s published probe IP list and syncing it to my target Security Group. Here’s how it’s done.

The very first thing you’ll need to do, if you haven’t already, is contact AWS Support and get your rules-per-group limit increased. By default, you get 50 (at the time of writing), and that’s not enough for this.

Then the code.

First up, you need a list of the IPs you want to whitelist other than Pingdom’s. Not much use only opening your endpoint to the monitoring service, is it?
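Something like this – the addresses here are documentation-range placeholders, so substitute your own office and VPN ranges:

```powershell
# Static whitelist: ranges that should always be allowed (example addresses)
$whitelist = @(
    '203.0.113.0/24',   # office
    '198.51.100.17/32'  # VPN gateway
)
```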

And so on. You may want to store this differently, but for me it’s just straight in the script. For now.

When you have those, you need to grab Pingdom’s probe IPs from their API.
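At the time, Pingdom published its probe addresses as a simple feed, and a sketch along these lines worked – though the exact URL here is an assumption and may have moved, so check Pingdom’s documentation:

```powershell
# Fetch Pingdom's published probe list: plain text, one IPv4 address per line
$pingdomIPs = (Invoke-RestMethod -Uri 'https://my.pingdom.com/probes/ipv4') -split "`n" |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_ }
```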

Excellent. Now, the Pingdom addresses aren’t in CIDR format, so you need to convert them to CIDR and add them to the $whitelist array you set up earlier. For that, you need a function that accepts pipeline input.
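A sketch of such a function – the name is mine, not a built-in. It takes bare IPv4 addresses from the pipeline and emits /32 CIDR blocks, passing anything already in CIDR form straight through:

```powershell
function ConvertTo-Cidr
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$IPAddress
    )
    process
    {
        # A bare address becomes a /32; an existing CIDR block passes through
        if ($IPAddress -match '/\d+$') { $IPAddress }
        else { "$IPAddress/32" }
    }
}
```

Then you can pipe the probe addresses through it and append the results to $whitelist.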

And then you just stick that in your pipeline and get back an array of all the IP ranges that are meant to be in your security group.

And there you have a list of all the CIDR ranges that are meant to be in your security group’s ingress rule.

My rule literally only opens one port – 443 – so if you have multiple ports, you may want to do this differently. It also does nothing to try and compress down multiple adjacent addresses into a single CIDR, so if you need that, you’re going to need to do a little extra work.

Now, we compare the security group’s existing rules with the array we just obtained, like so
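In sketch form, assuming the AWS Tools for PowerShell module – the permission plumbing here is approximate, so check the Grant-/Revoke-EC2SecurityGroupIngress docs before trusting it as-is:

```powershell
$groupId = 'sg-12345678'   # your target security group

# What the ingress rule for 443 currently contains
$group   = Get-EC2SecurityGroup -GroupId $groupId
$current = ($group.IpPermissions | Where-Object { $_.FromPort -eq 443 }).Ipv4Ranges.CidrIp

# Anything only in $whitelist gets granted; anything only in the group gets revoked
Compare-Object -ReferenceObject $current -DifferenceObject $whitelist | ForEach-Object {
    $permission = @{
        IpProtocol = 'tcp'
        FromPort   = 443
        ToPort     = 443
        Ipv4Ranges = @( @{ CidrIp = $_.InputObject } )
    }
    if ($_.SideIndicator -eq '=>') {
        Grant-EC2SecurityGroupIngress -GroupId $groupId -IpPermission $permission
    }
    else {
        Revoke-EC2SecurityGroupIngress -GroupId $groupId -IpPermission $permission
    }
}
```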

As you can see, we use Compare-Object to determine what needs to be added and what needs to be removed, and push just those rules up to – or rip them out of – the Security Group.

This technique can be used to whitelist any service that publishes its IPs in an API – in fact, if you’re whitelisting a client, you could get your client to publish their IP list to you and literally just put a script like this in place. Why do this crap manually? Let a script do it for you.

Getting Docker up and running on Windows (the long way round)

OK, so Docker Native is available for Windows, and in the wake of NDC Sydney I figured it was about time I got properly started with it. But getting it up and running wasn’t easy.

The installation, actually, was a breeze, with a reboot or two to get Hyper-V working on my laptop. But come time to run hello-world, there were problems.

Network timed out while trying to connect to You may want to check your internet connection or if you are behind a proxy.

OK, that’s bad. It suggests networking is broken. So, after a cursory inspection of my Hyper-V settings (everything looked OK), it was off to the internet. I found this article, which talks through setting up the beta version. Pretty good, but no dice.


Extending Pester for fun and profit

Of late, I’ve been working on a little side project to test a rather complex Akamai Property. We wanted to be confident, after making changes, that the important bits were still working as we expected them to, and for some reason there was no easy, automated solution to test this.

Obviously I decided I’d write my testing project in Pester, and so it was that I began writing a whole bunch of tests to see what URLs returned what status code, which ones redirected, which ones were cache hits and cache misses and what headers were coming back.

First up, I wrote a generic function called “Invoke-AkamaiRequest”. This function would know whether we were testing against Staging or production, and would catch and correct PowerShell’s error behaviour – which I found undesirable – and allow us to send optional Akamai pragma headers (I’ll share this function in a later post).

With that up and running, I could start writing my tests. Here’s a set of simple examples
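These use the Invoke-AkamaiRequest helper mentioned above; the paths and parameter names are illustrative:

```powershell
Describe 'Akamai property' {

    It 'serves the homepage with a 200' {
        (Invoke-AkamaiRequest -Path '/').StatusCode | Should Be 200
    }

    It 'returns a 404 for nonsense URLs' {
        (Invoke-AkamaiRequest -Path '/no-such-page').StatusCode | Should Be 404
    }

    It 'permanently redirects the old landing page' {
        (Invoke-AkamaiRequest -Path '/old-landing-page').StatusCode | Should Be 301
    }
}
```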

Now, that last one, testing a 301, is interesting. Not only do you need to test that a 301 or 302 status code is coming back, you also need to test where the redirect is sending you. So I started to write tests like this
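Something along these lines – one assertion for the status, one for the Location header (URLs illustrative):

```powershell
It 'redirects the old landing page to the new one' {
    $response = Invoke-AkamaiRequest -Path '/old-landing-page'
    $response.StatusCode | Should Be 301
    $response.Headers.Location | Should Be 'https://www.example.com/landing-page'
}
```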

And this worked fine. But it was a bit clunky. If only Pester had a RedirectTo assertion I could just throw in there, like so
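That is, collapsing the whole thing into a single assertion:

```powershell
It 'redirects the old landing page to the new one' {
    Invoke-AkamaiRequest -Path '/old-landing-page' |
        Should RedirectTo 'https://www.example.com/landing-page'
}
```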

If. Only.

Oh, but it can!

Yes, you can write custom assertions for Pester. They’re pretty easy to do, too. What you need is a trio of functions describing the logic of the test and what to return when it fails in some way. They are named PesterAssertion, PesterAssertionFailureMessage and NotPesterAssertionFailureMessage, where “Assertion” is replaced by the assertion name – in my case, “RedirectTo”.

For my particular case, the logic was to take in an HTTP response object, and check that the status was 301 (or 302), and match the Location: header to a specified value. Pretty simple really. Here’s the basic code:
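A sketch of the trio, assuming the response object exposes a StatusCode property and a Headers collection with a Location entry:

```powershell
function PesterRedirectTo($value, $expected)
{
    # Pass when the response is a 301/302 pointing at the expected location
    ($value.StatusCode -in 301, 302) -and ($value.Headers.Location -eq $expected)
}

function PesterRedirectToFailureMessage($value, $expected)
{
    "Expected a 301/302 redirect to $expected, but got status $($value.StatusCode) with location $($value.Headers.Location)"
}

function NotPesterRedirectToFailureMessage($value, $expected)
{
    "Expected the response not to redirect to $expected, but it did"
}
```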

I put these into my supporting module (not into the Pester module) and ran my tests. Happy happy days, it worked perfectly. Throwing different URLs at it resulted in exactly the behaviour I wanted.

All that remained was to make the failure messages a little smarter and make the Not assertion more useful, but I figured before I did that I should write this little post with the nice clean code before the extra logic goes in and makes everything unreadable.

You can probably think of several ways you could streamline your tests with assertions right now. I’ve also written RedirectPermanently and ReturnStatus assertions, and I’m looking into HaveHeaders and BeCompressed. I may even release these as an add-on module at some point soon.

You can probably think of things that should go right back into the Pester codebase, too. And there are a number of other ways you can customise and extend Pester to fit your own use cases.

To summarise: Pester is not just a flexible and powerful BDD framework for PowerShell. It’s also easily extensible, adding even more power to your PowerShell toolbox.

Now get out there and test stuff.