Category Archives: Snippets

Little chunks of code

Pester Mocking, ParameterFilters and Write-Output

I just spent an annoyed 45 minutes or so puzzling over a Pester test I was writing. I needed to test an if/elseif/else branching block of PowerShell code, written by someone else, which installed, uninstalled or reinstalled PowerShell modules – or did nothing – depending on some evaluated conditions and a list of modules and their versions.

The final else condition was, in essence, doing nothing other than writing to the output stream to say “I’m not doing anything at this point”, so obviously I figured I could use Assert-MockCalled on Write-Output, and I’d know which path the code was going down, and I’d be sorted.
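In other words, the kind of test I expected to be able to write – just a sketch, with a made-up message string – was something like this:

Mock Write-Output { }

# ...invoke the code under test, then check which branch ran:
Assert-MockCalled Write-Output -ParameterFilter {
    $InputObject -like "*not doing anything*"   # hypothetical message text
}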

Anything but. Continue reading →

CloudFormation’s Update-CfnStack and “No Updates Are To Be Performed”

If you use CloudFormation with PowerShell – and I do – you’ve probably run into this error message more than once.

If you’re like me, you’ll have tried to suppress it in several ways. You’ve tried to check your own template for changes, avoiding the call if there are none. And you’ve probably fallen foul of tiny, insignificant differences that CloudFormation doesn’t care about. You’ve probably done the same with your parameters and hit similar problems.

You might even have taken to adding a timestamp to a harmless field of your template so that at least something has updated. At an unnamed previous company, we’d just add a comment to the end of a LaunchConfiguration’s PowerShell script, which was enough to force an update and suppress the error. Or you might be fine with just ignoring the rain of red text.

In my current project, we can’t just ignore it, as we’re driving our templates out of Octopus Deploy. “No updates are to be performed” throws and aborts our pipeline, which is annoying. But we want to catch other, genuine error messages from Update-CfnStack.

So what to do?

Well, turns out this is an object lesson in correct use of try/catch.
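The shape of the fix – a sketch, assuming your stack name, template body and parameters are already in $stackName, $template and $params – is to catch everything, then inspect what you caught:

try
{
    Update-CfnStack -StackName $stackName -TemplateBody $template -Parameter $params
}
catch
{
    # swallow only the harmless "nothing changed" message; rethrow anything genuine
    if($_.Exception.Message -like "*No updates are to be performed*")
    {
        Write-Host "No updates to perform on $stackName"
    }
    else
    {
        throw
    }
}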

Continue reading →

HOWTO: Whitelist Pingdom Probe IPs into AWS Security groups

This is something I’ve been meaning to write about for a while.

If you use Pingdom for your monitoring, and you have a requirement to lock down your endpoints to a specific set of clients, you may have a painful job on your hands.

Some engineers I’ve spoken to have implemented a kind of proxy to forward pingdom requests through to their locked-down endpoints. Others rely on User-Agent detection to allow Pingdom probes through while denying other traffic.

In my case, I’ve implemented a PowerShell script that runs at intervals, checking Pingdom’s published probe IP list and syncing it to my target Security Group. Here’s how it’s done.

The very first thing you’ll need to do, if you haven’t already, is contact AWS Support and get your rules-per-group limit increased. By default, you get 50 (at the time of writing), and that’s not enough for this.

Then the code.

First up, you need a list of the IPs you want to whitelist other than Pingdom’s. Not much use only opening your endpoint to the monitoring service, is it?

$whitelist = @(
   "123.123.123.123/32",
   "124.124.124.124/32",
   "52.52.52.0/24"
)

And so on. You may want to store this differently, but for me it’s just straight in the script. For now.

When you have those, you need to grab Pingdom’s probe IPs from their API

$probeIPs = Invoke-RestMethod https://my.pingdom.com/probes/feed

Excellent. Now, the Pingdom addresses aren’t in CIDR format, so you need to convert them to CIDR and add them to the $whitelist array you set up earlier. For that, you need a function that accepts pipeline input.

Function Join-IPs # pipeline function to append a suffix to each incoming string
{
    param
    (
        [Parameter(ValueFromPipeline=$true)]
        [string]
        $In,
        [string]
        $JoinTo = "/32"
    )
    process
    {
        # emit each pipeline item with the suffix appended
        return ($In + $JoinTo)
    }
}

And then you just stick that in your pipeline.

$ranges = $whitelist + ($probeIPs | select -expand ip | Join-IPs -JoinTo "/32")

And there you have a list of all the CIDR ranges that are meant to be in your security group’s ingress rule.

My rule literally only opens one port – 443 – so if you have multiple ports, you may want to do this differently. It also does nothing to try and compress down multiple adjacent addresses into a single CIDR, so if you need that, you’re going to need to do a little extra work.

Now, we compare the security group’s existing rules with the array we just obtained, like so

$targetGroup = Get-EC2SecurityGroup -Region $region |
               ? {$_.GroupName -eq "s-fictional-svc-WebElbGroup-H97BD3LE36JI"}

$currentRanges = $targetGroup.IpPermissions |
               ? {$_.FromPort -eq 443} | select -expand IpRanges
$groupID = $targetGroup.GroupId

$diff = Compare-Object $currentRanges $ranges

$diff | % {
    # If the side indicator is =>, we add it
    # if the side indicator is <=, we remove it
    if($_.SideIndicator -eq "=>")
    {
        Write-Host "Granting Ingress perms to" $_.InputObject 
        Grant-EC2SecurityGroupIngress -GroupId $groupID `
                        -IpPermission @{
                                 FromPort = 443; 
                                 IpProtocol = "tcp"; 
                                 IPRanges = $_.InputObject; 
                                 ToPort = 443
                         }
    }

    if($_.SideIndicator -eq "<=")
    {
        Write-Host "Revoking Ingress perms from" $_.InputObject 
        Revoke-EC2SecurityGroupIngress -GroupId $groupID `
                        -IpPermission @{
                                 FromPort = 443; 
                                 IpProtocol = "tcp"; 
                                 IPRanges = $_.InputObject; 
                                 ToPort = 443
                         }
    } 
}

As you can see, we use Compare-Object to determine what needs to be added and what needs to be removed, and push just that rule up to – or rip it out of – the Security Group.

This technique can be used to whitelist any service that publishes its IPs in an API – in fact, if you’re whitelisting a client, you could get your client to publish their IP list to you and literally just put a script like this in place. Why do this crap manually? Let a script do it for you.

Reliable File Downloads with BITS

Every so often, one of my favourite cycle training video vendors releases a new video or two. These videos are generally multi-gigabyte files, and downloading them through a browser, especially over a possibly-flaky wireless network, can be an exercise in frustration. Browser crashes happen, network blips happen, sometimes you even exit the browser session without thinking and terminate a nearly-complete download. That’s why I generally use BITS, in PowerShell, to download them. How? Pretty simple, really. Just use the Start-BitsTransfer cmdlet, specifying source and destination, and you’re away.

Start-BitsTransfer http://www.myawesomevideosite.com/files/somebigfile $home\Downloads\somebigfile.zip

Running that will start your download, fire up a progress bar, and some time later you’ll have a usable file in your downloads folder. Of course, doing it this way will take over your PowerShell session for the duration of the download. Which is rubbish. Who wants to clutter up their desktop session with PowerShell windows? That’s why I do it asynchronously

Start-BitsTransfer -Source http://www.myawesomevideosite.com/files/somebigfile -Destination $home\Downloads\somebigfile.zip -Asynchronous

Which is great. I can carry on using my PowerShell session in the foreground, or even close it, without interrupting the download process. I can even fire up another download next to the first one and just let them run in the background.

But how do I check on how the download is going?

I can use Get-BITSTransfer in any PowerShell session, and the BITS service will report the status of any currently running BITS jobs, like so

C:\> Get-BitsTransfer | Format-List

JobId               : d3c1a9a0-68f0-4831-939b-95ab0122476c
DisplayName         : BITS Transfer
TransferType        : Download
JobState            : Transferring
OwnerAccount        : DOMAIN\jason.brown
Priority            : Foreground
TransferPolicy      : Always
FilesTransferred    : 0
FilesTotal          : 1
BytesTransferred    : 208207360
BytesTotal          : 2430734370
CreationTime        : 27/10/2015 12:56:17 PM
ModificationTime    : 27/10/2015 1:09:08 PM
MinimumRetryDelay   :
NoProgressTimeout   :
TransientErrorCount : 1
ProxyUsage          : SystemDefault
ProxyList           :
ProxyBypassList     :

JobId               : 1d0a4b78-7b9c-4977-9b32-b962c754e8f6
DisplayName         : BITS Transfer
TransferType        : Download
JobState            : Transferring
OwnerAccount        : DOMAIN\jason.brown
Priority            : Foreground
TransferPolicy      : Always
FilesTransferred    : 0
FilesTotal          : 1
BytesTransferred    : 15883778
BytesTotal          : 2394848910
CreationTime        : 27/10/2015 1:08:02 PM
ModificationTime    : 27/10/2015 1:09:08 PM
MinimumRetryDelay   :
NoProgressTimeout   :
TransientErrorCount : 1
ProxyUsage          : SystemDefault
ProxyList           :
ProxyBypassList     :

You could even pick out the BytesTransferred and BytesTotal properties and do some quick math on them to see what percentage of the download is complete. There’s a whole load of stuff you can do with BITS to make your downloads complete more reliably.
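Something like this quick sketch, for instance:

# rough percentage complete for each running job
Get-BitsTransfer |
    Select-Object JobId, @{ Name = "PctComplete"; Expression = { [math]::Round(($_.BytesTransferred / $_.BytesTotal) * 100, 1) } }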

Once you see your downloads are done, use the Complete-BitsTransfer cmdlet to save the file from its temporary location to your target.

Get-BitsTransfer | Complete-BitsTransfer

I’d recommend checking out the Get-Help and Get-Command output for these cmdlets to find out more if you want to get more advanced, or I might do a future blog post with some more advanced stuff like changing priorities, or downloading a list of files from a CSV or database. You can even use this system to do reliable uploads. It’s really a very handy set of cmdlets.


Quickie: opening all PowerShell scripts in a repo

At my workplace, I sometimes have to switch rapidly from working on one repository to another – for instance if I’m working on Robot Army and I get a request to change something in Sleepytime or Grapnel.

Well, I got sick of hunting down the specific files I needed in a given repo, and instead wrote a quick throwaway function in my $profile

Function Open-AllPowerShell
{
    gci *.ps*1 -Recurse -File | % { ise $_.FullName }
}

Dead simple. Finds all PowerShell scripts and modules in the current working path, recursively, and opens them in the ISE.

Much easier than messing around hunting the right file in the right subdirectory.

Of course, if you have hundreds of powershell files, YMMV. But it works for me.

Learning To Love The Splat

As with all good scripting languages, there is more than one way to do things in PowerShell. The guidelines tend towards the conservative, encouraging you to eschew aliases, use full parameter names and use common idioms when writing scripts.

But hey, that’s no fun. And sometimes it’s downright verbose. And there’s only so much time in the day.

Besides, the shortcuts are there for a reason, right?

Right.

So on to splatting.

Ever had to call a cmdlet several times in a row, perhaps at the shell, perhaps in a script, or perhaps in a Pester test? Ever got sick of typing the parameters multiple times? Then splatting is for you.
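As a taster – the values here are made up – you build the parameters once as a hashtable, then splat it onto each call with @ instead of $:

$commonParams = @{
    Region      = "ap-southeast-2"
    ProfileName = "deploy"
}

# @commonParams expands to -Region ap-southeast-2 -ProfileName deploy on every call
Get-ASAutoScalingGroup @commonParams
Get-EC2SecurityGroup @commonParams

Continue reading →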

Using Pester to save yourself from leaked API keys

I’m here at PowerShell Conference Asia and enjoying some superb content and insightful discussion. One thing that just came up was the idea that Pester doesn’t have to be solely for testing code – you can also test things related to your code – metadata, for instance.

The example I just mentioned on the hashtag is that I have a Pester test which scans the entire repository for things that look like API keys – in my case for Octopus Deploy and AWS.

The code isn’t too tricky, to be honest. Just recurse over your files, open them up and test them against a regex. Here’s the code in question

Describe "Overview Tests" {
    Context "Checking Repo integrity" {   
         It "The repo does not include anything that looks like an Octopus API key" {
            # Octopus API keys are 31 chars and start with API-

            $ok = $true

            $regex = "\bAPI-\w{27}\b" 
            gci -recurse -File | % {
                $filecontent = gc $_.FullName -raw
                if($filecontent -match $regex)
                {
                    $ok = $false
                    Write-Host $_.FullName "has an Octopus API key warning"
                }
            }
            $ok | Should Be $true
        }

        It "Doesn't contain anything that looks like an AWS Key or secret" {
            $ok = $true

            $regex = "\b(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])\b" 
            $secretregex = "\b(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])\b"

            gci -recurse -File | % {
                $filecontent = gc $_.FullName -raw
                if($filecontent -match $regex -or $filecontent -match $secretregex)
                {
                    $ok = $false
                    Write-Host $_.FullName "has an AWS API key warning"
                }
            }
            $ok | Should Be $true
        }
    }
}

This does come with caveats – AWS make no guarantee that their API key format won’t change. This certainly works right now, but might not work next week. Same with Octopus, as far as I’m aware. But it’ll protect the keys you have now from being exposed on GitHub, potentially costing you thousands.

Notes to self: How do you know if a Redis cache is contactable?

I stood up a new ElastiCache Redis cluster today for a colleague, and he was having trouble connecting. Often in AWS this means there’s a screwed-up security group, but after checking the groups, he was still unable to connect.

So I logged into the staging server in question, raised my fingers to the keyboard and…

Realised I had no idea how to talk to Redis.
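The simplest first check – just a sketch, assuming your cluster endpoint is in $redisEndpoint – is whether anything answers on the default Redis port at all:

# checks raw TCP reachability; 6379 is the default Redis port
Test-NetConnection -ComputerName $redisEndpoint -Port 6379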

Continue reading →

Filtering resources by tag in AWS PowerShell

If you’ve worked with AWS PowerShell for any length of time, you’re probably well used to filtering resources based on attributes. For instance, grabbing any Autoscaling groups with a name that matches a given filter, like this.

Get-ASAutoScalingGroup | ? { $_.AutoScalingGroupName -like "production-*" }

Easy, isn’t it? It just uses the Where-Object cmdlet, with the filter parameter set to a simple -like match.

And that’s about as far as many people go with Where-Object. Simple, first-level matching. However, when you’re dealing with AWS tags, you’ve got to do a bit more work. Tags are not exposed as first-level properties on your object. Instead, the Tags[] object is a first-level property, and the tags themselves are objects with Key and Value properties. So you have to have a filter in your filter so you can filter while you filter.

With EC2, you can use the -Filter parameter on Get-EC2Instance, but Get-ASAutoScalingGroup doesn’t have this parameter. So you have to get smarter with Where-Object.
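For comparison, the EC2 version can be done server-side in one line – a sketch using the same example tag as below:

# -Filter pushes the tag match to the API instead of filtering client-side
Get-EC2Instance -Filter @{ Name = "tag:Sleepytime"; Values = "Yes" }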

Luckily, the filter you pass into Where-Object is in fact a script block. You can do as much work as you like in there. It’s much more versatile than a simple string match. Let’s look, for example, at filtering AutoScaling Groups based on a tag named “Sleepytime” with a value of “Yes”. I’ve expanded the properties a bit and added some formatting, to make it easier to read:

Get-ASAutoScalingGroup | Where-Object -FilterScript {
    $_.Tags | Where-Object {
        $_.Key -eq "Sleepytime" -and $_.Value -eq "Yes" 
    }
}

Or, as I’d have it in my own script

Get-ASAutoScalingGroup | ? { $_.Tags | ? { $_.Key -eq "Sleepytime" -and $_.Value -eq "Yes" }}

Taking this to its logical extent, you could take a huge object structure and zoom right in on a property many branches deep in the object tree, with a relatively readable filter structure. If you’ve read a big XML or JSON document into memory, for instance, this will allow you to filter by attributes buried far into the tree.
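For instance – an entirely made-up JSON structure, just to show the shape:

# hypothetical file: { "services": [ { "name": "...", "endpoints": [ { "protocol": "..." } ] } ] }
$doc = Get-Content .\services.json -Raw | ConvertFrom-Json

# keep only the services exposing at least one plain-http endpoint
$doc.services | ? { $_.endpoints | ? { $_.protocol -eq "http" } }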

Of course, if your objects are extremely complex, there may be better, faster ways to query them, but in the case of AWS tags, this is a quick, simple and effective way of getting it done.