
Reliable File Downloads with BITS

Every so often, one of my favourite cycle training video vendors releases a new video or two. These videos are generally multi-gigabyte files, and downloading them through a browser, especially over a possibly-flaky wireless network, can be an exercise in frustration. Browser crashes happen, network blips happen, sometimes you even exit the browser session without thinking and terminate a nearly-complete download. That’s why I generally use BITS to download them, in PowerShell. How? Pretty simple, really. Just use the Start-BitsTransfer cmdlet, specifying source and destination, and you’re away.

Start-BITSTransfer http://www.myawesomevideosite.com/files/somebigfile $home\Downloads\somebigfile.zip

Running that will start your download, fire up a progress bar and, some time later, you’ll have a usable file in your downloads folder. Of course, doing it this way will take over your PowerShell session for the duration of the download. Which is rubbish. Who wants to clutter up their desktop session with PowerShell windows? That’s why I do it asynchronously:

Start-BitsTransfer -Source http://www.myawesomevideosite.com/files/somebigfile -Destination $home\Downloads\somebigfile.zip -Asynchronous

Which is great. I can carry on using my PowerShell session in the foreground, or even close it, without interrupting the download process. I can even fire up another download next to the first one and just let them run in the background.

But how do I check on how the download is going?

I can use Get-BITSTransfer in any PowerShell session, and the BITS service will report the status of any currently running BITS jobs, like so

C:\> Get-BitsTransfer | Format-List

JobId               : d3c1a9a0-68f0-4831-939b-95ab0122476c
DisplayName         : BITS Transfer
TransferType        : Download
JobState            : Transferring
OwnerAccount        : DOMAIN\jason.brown
Priority            : Foreground
TransferPolicy      : Always
FilesTransferred    : 0
FilesTotal          : 1
BytesTransferred    : 208207360
BytesTotal          : 2430734370
CreationTime        : 27/10/2015 12:56:17 PM
ModificationTime    : 27/10/2015 1:09:08 PM
MinimumRetryDelay   :
NoProgressTimeout   :
TransientErrorCount : 1
ProxyUsage          : SystemDefault
ProxyList           :
ProxyBypassList     :

JobId               : 1d0a4b78-7b9c-4977-9b32-b962c754e8f6
DisplayName         : BITS Transfer
TransferType        : Download
JobState            : Transferring
OwnerAccount        : DOMAIN\jason.brown
Priority            : Foreground
TransferPolicy      : Always
FilesTransferred    : 0
FilesTotal          : 1
BytesTransferred    : 15883778
BytesTotal          : 2394848910
CreationTime        : 27/10/2015 1:08:02 PM
ModificationTime    : 27/10/2015 1:09:08 PM
MinimumRetryDelay   :
NoProgressTimeout   :
TransientErrorCount : 1
ProxyUsage          : SystemDefault
ProxyList           :
ProxyBypassList     :

You could even pick out the BytesTransferred and BytesTotal properties and do some quick math on them to see the percentage of download complete. There’s a whole load of stuff you can do with BITS to make your downloads complete more reliably.
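That quick math might look something like this – a sketch using only the BytesTransferred and BytesTotal properties shown in the output above:

```powershell
# Show percentage complete for every current BITS job
Get-BitsTransfer | ForEach-Object {
    $percent = [math]::Round(($_.BytesTransferred / $_.BytesTotal) * 100, 1)
    "{0} : {1}%" -f $_.JobId, $percent
}
```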

Once you see your downloads are done, use the Complete-BitsTransfer cmdlet to save the file from its temporary location to your target.

Get-BitsTransfer | Complete-BitsTransfer

I’d recommend checking out the Get-Help and Get-Command output for these cmdlets to find out more, or I might do a future blog post with some more advanced stuff like changing priorities, or downloading a list of files from a CSV or database. You can even use this system to do reliable uploads. It’s really a very handy set of cmdlets.
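As a taste of the CSV idea, here’s a minimal sketch – the downloads.csv filename and its Source and Destination columns are made up for illustration:

```powershell
# Queue an asynchronous BITS download for each row of a CSV
# (downloads.csv and its Source/Destination columns are hypothetical)
Import-Csv .\downloads.csv | ForEach-Object {
    Start-BitsTransfer -Source $_.Source -Destination $_.Destination -Asynchronous
}
```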


Rightsizing Your AWS Cloud Infrastructure: A Rumination

I am currently engaged in a mid-to-long-term project to rightsize the Cloud infrastructure at work. During our rapid change phase, shifting from a co-located infrastructure to public cloud, our priority was to get things done and minimise disruption while shifting services into the cloud quickly. Consider the things that cloud infrastructure should be:

  • Cheap
  • Fast
  • Scalable
  • Resilient
  • Reliable*

During our migration phase, we weren’t too bothered about cheap. We wanted everything else, but the price ticket was… flexible, within limits. Now, some months later, we’re considering a number of our core services to be stable and mature, and therefore they’re prime candidates for aggressive cost optimisation. Cost optimisation in this case can imply a few different actions. Continue reading →

Unit Testing Functions that return random values with pester

Pester testing – and unit testing in general – is interesting, and some scenarios are trickier than others.

Unit testing Functions which are designed to return a random value is one of the tricky ones. Take, for example, a Function I knocked up a little while ago that’s meant to return a random date and time during working hours in the following week.

Function Get-RandomDate
{
    [CmdletBinding()]
    param()
    # weekday, in the coming week, during business hours

    $now = Get-Date -Hour ((9..17) | Get-Random) -Minute ((0..59) | Get-Random)
    $now = $now.AddDays(7) # move it into next week
    $now = $now.AddDays( ((1..5) | Get-Random) - $now.DayOfWeek.value__)  # randomise the day    
    return $now 
}

Now, I am not 100% sure as I write this blog whether or not I’ve screwed up this function completely. Luckily, I’m using Pester, so I can test it. But because it returns a random value, this makes things a bit… tricky. Your test run may happen to get a regressed-to-the-mean middle result, while out in the wild the function returns an outlier and is suddenly causing all manner of screw-ups.
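Without giving the whole game away before the jump: one approach is to call the function a lot of times and assert invariants – weekday, business hours – rather than exact values. A sketch, in the same Pester 3 syntax used elsewhere on this blog:

```powershell
Describe "Get-RandomDate" {
    It "always returns a weekday during business hours" {
        1..100 | ForEach-Object {
            $d = Get-RandomDate
            # Sunday = 0, Saturday = 6; business hours are 9am to 5pm
            $d.DayOfWeek.value__ | Should BeGreaterThan 0
            $d.DayOfWeek.value__ | Should BeLessThan 6
            $d.Hour | Should BeGreaterThan 8
            $d.Hour | Should BeLessThan 18
        }
    }
}
```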

Continue reading →

Video interviews from #PSConfAsia have started appearing

PowerShell Magazine have started publishing interviews with speakers and experts from PowerShell.asia. For instance, here’s one with Jaap Brasser, and here’s another with Ben Hodge.

There should be one with your humble author that’ll drop at some point (update: here it is on YouTube and here it is on PowerShell Magazine). I really do recommend checking out some of the content from the conference when you can.

While I’m on the subject, the Sydney PowerShell User Group now has a venue – here at Domain’s workspace in Pyrmont – and will be having IRL meetups from November onwards. Ben will no doubt be calling for submissions shortly, and I’ll be working up some content on topics such as Pester, Octopus Deploy, WMF 5.0 and DevOps in general. If you’re in the area and you use PowerShell – or would like to start using PowerShell – I really recommend you come along.

Learning To Love The Splat

As with all good scripting languages, there is more than one way to do things in PowerShell. The guidelines tend towards the conservative, encouraging you to eschew aliases, use full parameter names and use common idioms when writing scripts.

But hey, that’s no fun. And sometimes it’s downright verbose. And there’s only so much time in the day.

Besides, the shortcuts are there for a reason, right?

Right.

So on to splatting.

Ever had to call a cmdlet several times in a row, perhaps at the shell, perhaps in a script, or perhaps in a Pester test? Ever got sick of typing the parameters multiple times? Then splatting is for you. Continue reading →
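As a taster before the jump, here’s the idea in miniature, reusing the BITS download from earlier in this archive:

```powershell
# Build the common parameters once, as a hashtable...
$bitsParams = @{
    Source       = 'http://www.myawesomevideosite.com/files/somebigfile'
    Destination  = "$home\Downloads\somebigfile.zip"
    Asynchronous = $true
}
# ...then splat the whole lot into the cmdlet with @ instead of $
Start-BitsTransfer @bitsParams
```

Next time you call the cmdlet, you tweak one key in the hashtable instead of retyping the whole parameter list.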

PowerShell.asia Roundup

It’s Sunday and I’m sitting at a tiny craft beer bar in Penang Lane, Singapore. Last night was the conclusion of the first PowerShell.asia summit, held over two days at Microsoft’s premises in Singapore, overlooking Marina Bay and the Formula One racetrack.

It’s a brand new conference, and as such there were varied expectations. However the conference delivered in spades. Continue reading →

Invoking Pester Tests on commit with client-side git hooks

So I’m sitting here at PowerShell.asia and thought I’d best blog a cool nugget from the day, lifted from Ravikanth Chaganti’s session on “Infrastructure as Code with Desired State Configuration (DSC)”

Using git local hooks, you can have a poor man’s CI on your PowerShell scripts. What do I mean? Well, let’s imagine your PowerShell script has Pester tests rolled up with it in your repo. And let’s imagine you commit some bad code without having fired Invoke-Pester.

Manual steps like that are muda – a way of introducing waste via defects, and wasted work via extra meaningless typing. We hate manual steps here in DevOps land.

Now, those of us with the luxury of CI or CD pipelines can integrate our Pester tests there. Indeed at Domain, we have a box that runs tests on behalf of Octopus Deploy, and we have the option of using TeamCity or Bamboo to run Pester tests. But lots of people don’t have the luxury of spare environments and perhaps don’t need the complexity.

Ravi’s recommendation was to use git client-side hooks to automatically trigger pester tests on your local machine. Which is great. So I had a quick look.

Turns out there is a gotcha in there. It’s not sufficient to just drop in a post-commit.ps1 and hope for that to run. git won’t run .ps1 files by default. Being a bit linux-centric, it expects a bash script, or perhaps perl or python in an executable script, with no file extension.

The trick is to use bash to fire posh.

I found the solution over here. Take that bash script, put it into <repository>\.git\hooks with filename “post-commit”. Change it slightly so it points to your <repository>\.git\hooks\post-commit.ps1 script, and you’re pretty much done.
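In case that link ever rots, the wrapper boils down to something like this – a sketch, remembering that the hook filename must be exactly post-commit, with no extension:

```shell
#!/bin/sh
# .git/hooks/post-commit -- hand off to the PowerShell hook script
echo
powershell.exe -NoProfile -ExecutionPolicy Bypass -File ".git/hooks/post-commit.ps1"
```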

I configured it up, changed a readme line and committed.

Pester fired up. Yay!

Pester failed. Booo!

Turns out, I had a step which checks that all exported functions in my module have a valid “SYNOPSIS” in their Get-Help text. And I’d spelled “Synopsis” wrong. Twice.

Fixed that, and I was up and flying.

Incidentally, the Pester script that checks for Documentation looks a little like this, as a bonus:

        It "Has Documentation on every exported Function" {
            $valid = $true
            $exportedCommands = (gmo Kraken).ExportedCommands
            $exportedCommands.GetEnumerator() | % {
                $functionName = $_.Key
                $help = Get-Help $functionName
                if($help.Synopsis -match $functionName)
                {
                    # a synopsis containing the function name means the
                    # help has been auto-generated, not written
                    Write-Host $functionName "has no valid help"
                    $valid = $false
                }
            }
            $valid | Should Be $true
        }

HOWEVER if you want to use a pre-commit hook, and abort a commit if your tests fail, this method will not work because of a bug and because of the way Pester works by default.

First of all, to get Pester to return a non-zero status on failure, you need to add the -EnableExit parameter. This basically causes Pester to exit with an integer equal to the number of failed tests – zero for a good run, 1 or greater for a bad one.

Adding that is not enough. You need to invoke powershell.exe with a -command, not a -file, because of the bug I mentioned above.

Then, you need a shell script file that looks like this

#!/bin/sh
echo 
powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "Write-Host 'Invoking Pester' -fore DarkYellow; Invoke-Pester -EnableExit"
exit $?

This makes a complex pre-commit command a little trickier to write, but no massive biggie. But it certainly aborts a commit if your tests fail – and THAT will make your repo cleaner and meaner immediately.

PoSHServer for Fun and Profit (and open-source props)

At my workplace, we use a lot of PowerShell. I mean a lot. I just did a quick script to count lines of PowerShell in my local git repos – admittedly very quick and dirty – and came up with going on for a quarter of a million lines. 221668, to be exact.

$x=0; gci -recurse -Filter "*.ps*1" | % { $x += (gc $_.FullName).count}; $x

And, of course, we’re a company that’s heavily bought into the API-first philosophy. Or at least the “while we’re rearchitecting, we’re doing the APIs first” philosophy. We deal with incoming and outgoing hooks and API calls a lot.

So it was kind of inevitable that eventually, the DevOps team would buy in to PoSHServer. Continue reading →

My new favourite feature

As our cloud server fleet grows, the DevOps team is increasingly nickel-and-dimed with small requests and troubleshooting tasks, for which we sometimes needed to RDP into instances. Indeed it’s for this reason that my Connect-RobotArmyv2Group and Connect-EC2Instance scripts were put together. But it’s far from ideal. For one thing, RDPing into individual instances is very Anticloud indeed.

But, we think we’ve got it under control now, and as per usual, it’s down to our trusty multi-limbed friend, OctopusDeploy.

Continue reading →

Capturing The Penguin

I recently observed, in one of my more lucid moments, that sometimes doing DevOps feels like hurtling down a railway, building the track as you go. Things get faster and faster and you’re constantly getting everything in a row just-in-time.

Later I realised that Wallace and Gromit – featuring those automation-obsessed northerners – is an end-to-end DevOps metaphor. Observe.