Category Archives: Snippets

Little chunks of code

HOWTO: Whitelist Pingdom Probe IPs into AWS Security groups

This is something I’ve been meaning to write about for a while.

If you use Pingdom for your monitoring, and you have a requirement to lock down your endpoints to a specific set of clients, you may have a painful job on your hands.

Some engineers I’ve spoken to have implemented a kind of proxy to forward pingdom requests through to their locked-down endpoints. Others rely on User-Agent detection to allow Pingdom probes through while denying other traffic.

In my case, I’ve implemented a PowerShell script that runs at intervals, checking Pingdom’s published Probe IP list and syncing it to my target Security Group. Here’s how it’s done.

The very first thing you’ll need to do, if you haven’t already, is contact AWS Support and get your rules-per-group limit increased. By default, you get 50 (at the time of writing), and that’s not enough for this.

Then the code.

First up, you need a list of the IPs you want to whitelist other than Pingdom. Not much use only opening your endpoint to the monitoring service, is it?
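Something like this, with documentation-range addresses standing in for the real office and client ranges:

```powershell
# Hypothetical whitelist - substitute your own CIDR ranges
$whitelist = @(
    "203.0.113.0/24",   # head office
    "198.51.100.17/32", # client VPN endpoint
    "192.0.2.0/28"      # partner network
)
```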

And so on. You may want to store this differently, but for me it’s just straight in the script. For now.

When you have those, you need to grab Pingdom’s probe IPs from their API.
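A quick way to do that, assuming the plain-text probe list endpoint Pingdom currently publishes (check their docs if it’s moved):

```powershell
# Pingdom publishes probe IPv4 addresses as a newline-separated plain-text list
$pingdomIPs = (Invoke-RestMethod -Uri "https://my.pingdom.com/probes/ipv4") -split "`n" |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_ -ne "" }
```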

Excellent. Now, the Pingdom addresses aren’t in CIDR format, so you need to convert them to CIDR and add them to the $whitelist array you set up earlier. For that, you need a function that accepts pipeline input.
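A minimal sketch of such a function (the ConvertTo-Cidr name is my own invention):

```powershell
# Converts bare IPv4 addresses to /32 CIDR notation, accepting pipeline input
function ConvertTo-Cidr {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$IPAddress
    )
    process {
        # Pass through anything that's already in CIDR form
        if ($IPAddress -match "/") { $IPAddress } else { "$IPAddress/32" }
    }
}

# Convert the probe IPs and combine them with the whitelist from earlier
$whitelist += ($pingdomIPs | ConvertTo-Cidr)
```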

And then you just stick that in your pipeline, and there you have an array of all the CIDR ranges that are meant to be in your security group’s ingress rule.

My rule literally only opens one port – 443 – so if you have multiple ports, you may want to do this differently. It also does nothing to try and compress down multiple adjacent addresses into a single CIDR, so if you need that, you’re going to need to do a little extra work.

Now, we compare the sec group’s existing rules, and the array we just obtained, like so
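A sketch of that compare-and-sync step. The group ID is a placeholder, the Ipv4Ranges/IpRange types match recent versions of the AWS Tools for PowerShell (older releases took plain CIDR strings in IpRanges), and it assumes the group already has at least one rule on the port, since Compare-Object won’t accept an empty collection:

```powershell
# Assumed values - substitute your own group ID and the $whitelist built earlier
$groupId = "sg-0123456789abcdef0"
$port    = 443

# What the security group currently allows on this port
$group    = Get-EC2SecurityGroup -GroupId $groupId
$existing = $group.IpPermissions |
    Where-Object { $_.FromPort -eq $port } |
    Select-Object -ExpandProperty Ipv4Ranges |
    Select-Object -ExpandProperty CidrIp

# SideIndicator "<=" means in our list only; "=>" means in the group only
$diff = Compare-Object -ReferenceObject $whitelist -DifferenceObject $existing

foreach ($entry in $diff) {
    $permission = New-Object Amazon.EC2.Model.IpPermission
    $permission.IpProtocol = "tcp"
    $permission.FromPort   = $port
    $permission.ToPort     = $port

    $range = New-Object Amazon.EC2.Model.IpRange
    $range.CidrIp = $entry.InputObject
    $permission.Ipv4Ranges.Add($range)

    if ($entry.SideIndicator -eq "<=") {
        # In our list but not the group: add it
        Grant-EC2SecurityGroupIngress -GroupId $groupId -IpPermission $permission
    }
    else {
        # In the group but not our list: remove it
        Revoke-EC2SecurityGroupIngress -GroupId $groupId -IpPermission $permission
    }
}
```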

As you can see, we use Compare-Object to determine what needs to be added and what needs to be removed, and push just those rules up to the Security Group, or rip them out of it.

This technique can be used to whitelist any service that publishes its IPs in an API – in fact, if you’re whitelisting a client, you could get your client to publish their IP list to you and literally just put a script like this in place. Why do this crap manually? Let a script do it for you.

Reliable File Downloads with BITS

Every so often, one of my favourite cycle training video vendors releases a new video or two. These videos are generally multi-gigabyte files and downloading them through a browser, especially over a possibly-flaky wireless network, can be an exercise in frustration. Browser crashes happen, network blips happen, sometimes you even exit the browser session without thinking and terminate a nearly-complete download. That’s why I generally use BITS to download them, in PowerShell. How? Pretty simple, really. Just use the Start-BitsTransfer cmdlet, specifying source and destination, and you’re away.
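Something like this, with a made-up URL standing in for the real download link:

```powershell
# Hypothetical URL and destination - substitute the real video link
Start-BitsTransfer -Source "https://example.com/videos/sufferfest.mp4" `
                   -Destination "$env:USERPROFILE\Downloads\sufferfest.mp4"
```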

Running that will start your download, fire up a progress bar and some time later, you’ll have a usable file in your downloads folder. Of course, doing it this way will take over your PowerShell session for the duration of the download. Which is rubbish. Who wants to clutter up their desktop session with PowerShell windows? That’s why I do it asynchronously.
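The same transfer, made asynchronous with a single switch (URL and path again illustrative):

```powershell
# -Asynchronous hands the job off to the BITS service and returns immediately
Start-BitsTransfer -Source "https://example.com/videos/sufferfest.mp4" `
                   -Destination "$env:USERPROFILE\Downloads\sufferfest.mp4" `
                   -Asynchronous
```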

Which is great. I can carry on using my PowerShell session in the foreground, or even close it, without interrupting the download process. I can even fire up another download next to the first one and just let them run in the background.

But how do I check on how the download is going?

I can use Get-BITSTransfer in any PowerShell session, and the BITS service will report the status of any currently running BITS jobs, like so
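For example:

```powershell
# Lists every BITS job the service knows about, with transfer progress
Get-BitsTransfer | Select-Object DisplayName, JobState, BytesTransferred, BytesTotal
```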

You could even pick out the BytesTransferred and BytesTotal properties and do some quick math on them to see the percentage of download complete. There’s a whole load of stuff you can do with BITS to make your downloads complete more reliably.
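A rough version of that percentage calculation might look like this:

```powershell
# Percentage complete for each running BITS job
Get-BitsTransfer | ForEach-Object {
    [pscustomobject]@{
        Name    = $_.DisplayName
        Percent = [math]::Round(($_.BytesTransferred / $_.BytesTotal) * 100, 1)
    }
}
```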

Once you see your downloads are done, use the Complete-BitsTransfer cmdlet to save the file from its temporary location to your target.
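For instance, to complete every finished job in one go:

```powershell
# Finished jobs sit in the "Transferred" state until completed
Get-BitsTransfer | Where-Object { $_.JobState -eq "Transferred" } | Complete-BitsTransfer
```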

I’d recommend checking out the Get-Help and Get-Command output for these cmdlets to find out more if you want to get more advanced, or I might do a future blog post with some more advanced stuff like changing priorities, or downloading a list of files from a CSV or database. You can even use this system to do reliable uploads. It’s really a very handy set of cmdlets.

 

Quickie: opening all powershell scripts in a repo

At my workplace, I sometimes have to switch rapidly from working on one repository to another – for instance if I’m working on Robot Army and I get a request to change something in Sleepytime or Grapnel.

Well, I got sick of hunting down the specific files I needed in a given repo, and instead wrote a quick throwaway function in my $profile
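It looks roughly like this; the function name is whatever you fancy:

```powershell
# Throwaway $profile function: open every script and module under the
# current path in the PowerShell ISE
function Open-AllScripts {
    Get-ChildItem -Path . -Include *.ps1, *.psm1 -Recurse |
        ForEach-Object { ise $_.FullName }
}
```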

Dead simple. Finds all powershell scripts and modules in the current working path, recursively, and opens them in the ISE.

Much easier than messing around hunting the right file in the right subdirectory.

Of course, if you have hundreds of powershell files, YMMV. But it works for me.

Learning To Love The Splat

As with all good scripting languages, there is more than one way to do things in PowerShell. The guidelines tend towards the conservative, encouraging you to eschew aliases, use full parameter names and use common idioms when writing scripts.

But hey, that’s no fun. And sometimes it’s downright verbose. And there’s only so much time in the day.

Besides, the shortcuts are there for a reason, right?

Right.

So on to splatting.

Ever had to call a cmdlet several times in a row, perhaps at the shell, perhaps in a script, or perhaps in a Pester test? Ever got sick of typing the parameters multiple times? Then splatting is for you.
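The short version: put the common parameters in a hashtable once, then pass it with @ instead of $. The server name and cmdlet choice here are just illustrative:

```powershell
# Gather credentials once (hypothetical remote server "web01")
$cred = Get-Credential

# Common parameters defined once in a hashtable...
$params = @{
    ComputerName = "web01"
    Credential   = $cred
    ErrorAction  = "Stop"
}

# ...then splatted into each call with @ in place of $
Invoke-Command @params -ScriptBlock { Get-Service }
Invoke-Command @params -ScriptBlock { Get-Process }
```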

Using Pester to save yourself from leaked API keys

I’m here at PowerShell Conference Asia and enjoying some superb content and insightful discussion. One thing that just came up was the idea that Pester doesn’t have to be solely for testing code: you can also test things related to your code, metadata for instance.

The example I just mentioned on the hashtag is that I have a Pester test which scans the entire repository for things that look like API keys – in my case for Octopus Deploy and AWS.

The code isn’t too tricky, to be honest. Just recurse over your files, open them up and test them against a regex. Here’s the code in question
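A sketch of that test, in Pester 5 syntax. The regexes match the AWS and Octopus key formats as I understand them today:

```powershell
Describe "Repository hygiene" {
    It "contains no strings that look like AWS or Octopus API keys" {
        # AKIA... matches AWS access key IDs; API-... matches Octopus API keys.
        # Both patterns reflect the current formats, which the vendors may change.
        $pattern = "(AKIA[0-9A-Z]{16})|(API-[A-Z0-9]{25,})"
        $hits = Get-ChildItem -Path . -Recurse -File |
            Select-String -Pattern $pattern
        $hits | Should -BeNullOrEmpty
    }
}
```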

This does come with caveats – AWS make no guarantee that their API key format won’t change. This certainly works right now, but might not work next week. Same with Octopus, as far as I’m aware. But it’ll protect the keys you have now from being exposed on github, potentially costing you thousands.

Notes to self: How do you know if a Redis cache is contactable?

I stood up a new Elasticache Redis cluster today for a colleague, and he was having trouble connecting. Often in AWS this means there’s a screwed up security group, but after checking the groups, he was still unable to connect.

So I logged into the staging server in question, raised my fingers to the keyboard and…

Realised I had no idea how to talk to Redis.


Filtering resources by tag in AWS PowerShell

If you’ve worked with AWS PowerShell for any length of time, you’re probably well used to filtering resources based on attributes. For instance, grabbing any Autoscaling groups with a name that matches a given filter, like this.
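For example (the name filter here is made up):

```powershell
# Grab every Auto Scaling group whose name contains "web"
Get-ASAutoScalingGroup | Where-Object { $_.AutoScalingGroupName -like "*web*" }
```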

Easy, isn’t it? It just uses the Where-Object cmdlet, with the filter script block doing a simple -like match.

And that’s about as far as many people go with Where-Object. Simple, first level matching. However when you’re dealing with AWS tags, you’ve got to do a bit more work. Tags are not exposed as first-level properties on your object. Instead, the Tags[] object is a first-level property, and the tags themselves are objects, with Key and Value properties. So you have to have a filter in your filter so you can filter while you filter.

With EC2, you can use the -Filter parameter on Get-EC2Instance, but Get-ASAutoScalingGroup doesn’t have this parameter. So you have to get smarter with Where-Object.

Luckily, the filter you pass into Where-Object is in fact a script block. You can do as much work as you like in there. It’s much more versatile than a simple string match. Let’s look, for example, at filtering AutoScaling Groups based on a tag named “Sleepytime” with value of “Yes”. I’ve expanded the properties a bit and added some formatting, to make it easier to read:
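A sketch of that expanded filter, using the tag name and value from the example above:

```powershell
# Outer Where-Object filters groups; inner Where-Object filters each group's tags
Get-ASAutoScalingGroup |
    Where-Object {
        $_.Tags |
            Where-Object {
                $_.Key -eq "Sleepytime" -and
                $_.Value -eq "Yes"
            }
    }
```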

Or, as I’d have it in my own script
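Which shortens to a one-liner with the ? alias for Where-Object:

```powershell
Get-ASAutoScalingGroup | ? { $_.Tags | ? { $_.Key -eq "Sleepytime" -and $_.Value -eq "Yes" } }
```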

Taking this to its logical extent, you could take a huge object structure and zoom right in to a property many branches deep into the object tree, with a relatively readable filter structure. If you’ve read a big XML or JSON document into memory, for instance, this will allow you to filter by attributes buried far into the tree.

Of course, if your objects are extremely complex, there may be better, faster ways to query them, but in the case of AWS tags, this is a quick, simple and effective way of getting it done.

Friday, Friday. Gotta get down on Friday

A few of you might have spotted this via the Octopus Deploy June newsletter.

Friday

You might be wondering how it’s done, as at least one person has requested via Twitter. Well wait no longer. Here’s how it’s done.

What you’re looking at is a Slack notification from Octopus Deploy, featuring a little image that’s been doing the rounds on Twitter. It exists already as a Capistrano template, and I figured it wouldn’t be too hard to do in Octopus.

On Warmup Scripts and HTTP Methods

So I’m rewriting our venerable GuyGarvie IIS warmup script at the moment. This script started life as a way to warm up a very big .NET application on IIS6 using a list of URLs. Guy would simply hit each URL in turn with a GET request, checking the status thereof and returning an error if a significant number returned a non-200 status.
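The general technique looks something like this; the URL list, file name and failure threshold here are illustrative rather than Guy’s actual values:

```powershell
# Warm up an IIS app by GETting each URL and counting failures
$urls = Get-Content ".\warmup-urls.json" | ConvertFrom-Json   # hypothetical file
$failures = 0
foreach ($url in $urls) {
    try {
        $response = Invoke-WebRequest -Uri $url -Method Get -UseBasicParsing
        if ($response.StatusCode -ne 200) { $failures++ }
    }
    catch { $failures++ }   # Invoke-WebRequest throws on 4xx/5xx responses
}
if ($failures -gt ($urls.Count * 0.1)) {
    Write-Error "Warmup failed: $failures of $($urls.Count) URLs returned errors"
}
```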

But Guy is terribly slow. And his list of URLs is full of legacy pages, broken links, duplications and general horror. And the application he warms up has moved on.

For this reason, I’m in the throes of rewriting him, converting his URL list from XML to JSON, cleaning it up and trying to tweak his speed, maybe parallelising him a bit too. And I have called him LiamFray, for reasons explained in the footnote.