Continuous Delivery on a Spark Core

by Mike Linnen 15. February 2015 23:11

I have always wanted to remotely update my IoT devices when they need a software change.  It is a real pain to have to disassemble a dedicated device just to update the software embedded in it.  It has been enough of a pain that I have gone as far as making the device as dumb as possible and leaving the smarts in a service on a computer with an OS that supports remote access.  Doing so makes the device just a remote I/O extension rather than an actual smart device. 

Continuous delivery takes the remote update one step further.  If I have a source control repository that I commit some new code to, then my automated process should be smart enough to pick up that change, compile, test, and deploy the new bits to their final destination.  Wow, wouldn't that be cool!

In order to do this, the device needs to be smart enough to support a remote update process.  That means the device has to be able to check for updates and automatically apply them.  You could write your own remote update process, and I have done this many times for desktop applications, but doing it on an embedded device is a little more tricky.  Sometimes it requires you to actually change the device firmware, especially if the device doesn't support dynamically loaded software.  Basically the process requires you to do the following (a rough sketch follows the list):

  • Detect if an update exists on some remotely accessible resource
  • Download the update and unpack it
  • Unload the current version of the software
  • Load the new version of the software
  • Start executing the new version
  • Do this reliably without bricking the device
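To make those steps a little more concrete, here is a minimal C# sketch of a device-side update check.  This is just an illustration; the update server URL, file names, and the hand-off to a bootloader are all made up, and it is not how the Spark platform implements its updates.

// Sketch only: a naive self-update check.  The server URL and file names are hypothetical.
using System;
using System.IO;
using System.Net;

class UpdateCheck
{
    const string Server = "http://updates.example.com";   // hypothetical update server

    static void Main()
    {
        using (var web = new WebClient())
        {
            // 1. Detect if an update exists on a remotely accessible resource
            string latest = web.DownloadString(Server + "/latest-version.txt").Trim();
            string current = File.Exists("version.txt") ? File.ReadAllText("version.txt").Trim() : "none";

            if (latest == current) return;   // nothing to do

            // 2. Download the update (unpacking omitted)
            web.DownloadFile(Server + "/firmware-" + latest + ".bin", "update.bin");
        }

        // 3-6. Unloading the old version, loading the new one, starting it, and doing all of
        //      that without bricking the device are firmware/bootloader specific and not shown.
        Console.WriteLine("Update downloaded; hand off to the device's bootloader here.");
    }
}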

Fortunately the Spark.io platform does all of this for you.  The platform even supports an open source backend that will take source code, compile it, and deploy it.  That makes it very possible to do continuous delivery for IoT devices.  The rest of this blog post will explain how easy it is to set this up.

There are several things you will need to have in order to accomplish this:

  1. A Spark.io account and a device that is connected/registered with that account.
  2. A source control repository that is supported by your CI tool of choice (I am using GitHub)
  3. A build script that can either call RESTful APIs or shell out to curl.exe (I am using psake with curl.exe).
  4. A CI tool that watches the source control for commits and kicks off the build process (I am using Team City on a Windows Azure VM).

You can see my source repository over at SparkContinuousDeliveryExample. Feel free to fork it and use it if you like.  I used psake as my build scripting language of choice as it is supported by Team City and I find it very easy to understand. 

Build script

My build script is called default.ps1.

properties { 
  $deviceId = $env:SparkDeviceId 
  $token = $env:SparkToken 
  $base_dir = resolve-path . 
  $curl = "$base_dir\lib\curl\curl.exe" 
}

task default -depends Deploy

task Clean { 
}

task Deploy { 
  # Spark cloud REST endpoint that compiles the source and flashes it to the device
  $url = "https://api.spark.io/v1/devices/" + $deviceId + "?access_token=" + $token 
  exec { 
     # PUT the source file to the Spark cloud (-k skips certificate validation)
     .$curl -X PUT -F file=@src\helloworld.ino "$url" -k 
  }
} -depends Clean

task ? -Description "Helper to display task info" { 
}

The first thing you will notice is that the script has 4 properties that are defaulted but can also be overridden by Team City.  $deviceId is the device ID, which can be found on the Spark Build website after you have logged in and picked one of your devices.  $token is your access token, which can be found on the Spark Build website under the settings menu option.  You really only have one access token per account, but you can have multiple device IDs per account.  The other two properties are only used for executing the curl.exe program that interfaces with the Spark platform.  Since the repository contains curl.exe you won't have to worry about installing it on your build server, but if you don't like putting EXEs in your repository you can override these properties to launch the curl.exe that is installed on your build agent.

The Deploy task is the only task in this build script that does anything.  Basically it launches curl.exe, passing in the device ID, the token, and the file name to upload to the Spark platform for compilation and deployment.  In this case the program source file is helloworld.ino.
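For context, the curl call is just a plain REST request.  Here is a rough C# equivalent of that request (my own sketch, not part of the original build, using the same api.spark.io endpoint and placeholder credentials):

// Sketch only: the same compile-and-flash request the build script makes with curl.exe.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class FlashExample
{
    static async Task Main()
    {
        string deviceId = "YourDeviceId";   // placeholder values
        string token = "YourToken";
        string url = "https://api.spark.io/v1/devices/" + deviceId + "?access_token=" + token;

        using (var client = new HttpClient())
        using (var form = new MultipartFormDataContent())
        {
            // Attach the source file the same way curl's -F file=@src\helloworld.ino does
            form.Add(new ByteArrayContent(File.ReadAllBytes(@"src\helloworld.ino")), "file", "helloworld.ino");

            HttpResponseMessage response = await client.PutAsync(url, form);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}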

Testing the build script locally

You can test the psake build script on your local machine by first setting the $env:SparkDeviceId and $env:SparkToken environment variables from a PowerShell prompt, for example $env:SparkDeviceId = "YourDeviceId" and $env:SparkToken = "YourToken".


Of course those are not real values, so make sure you set them using your own Device ID and Token.

Make sure you are in the directory where you cloned the repository, then import the psake module and use Invoke-psake to kick off the build: Import-Module .\lib\psake\psake.psm1 followed by Invoke-psake .\default.ps1.


Your code will be uploaded to the Spark cloud, compiled, and then deployed to your device.  The psake output will show whether the process completed successfully.

Setting up Team City

You will have to follow the instructions on the Team City web site to install Team City.  It is very easy to do, but I won't detail it here.  I am also going to skip the step of setting up Team City to connect to your GitHub repository.  Assuming you have a project created in Team City and have assigned the VCS root to it, you need to add a build step to the project.  Make sure you select PowerShell as the runner type.


Make sure you select Source Code as the script type and then type in the PowerShell script below, replacing the device ID and token placeholders with your own values.


Here is the script so you can copy and paste it more easily.

import-module .\lib\psake\psake.psm1
invoke-psake .\default.ps1 -properties @{"deviceId"="YourID";"token"="YourToken"}
if ($psake.build_success -eq $false) {exit 1} else { exit 0}

Here are the rest of the options to finish off the build step:


And that is all there is to it.  You will want to add a Trigger to your build project so that it fires off a build when source is committed to the repository.  Also make sure the VCS Root is pointed at the “master” branch.  Now whenever you commit changes to the master branch, the build will pick them up and send them off to the Spark cloud for compile and deploy.

There are a few limitations with the current setup:
  • Works on single file source solutions at this time.
  • If a compile error exists the build does not fail (but the device does not update).
  • If you edit the readme or any other non-source files and commit them, the device still gets updated even though no source changes were made.

Here are some things I would like to do next:
  • Get this working on a free hosted build platform
  • Use a Raspberry Pi as a build server
  • Remove the dependency on curl.exe and use PowerShell to make the REST API calls
  • Add in multi-file support
  • If the Spark platform is down, fail the build
  • Make sure a compile error fails the build


That is all I wanted to cover in this blog post.  Of course there are a million other ways you could set this up, and I would love to hear how others might do it.  I have always dreaded pulling devices apart just to upgrade them, and therefore I tended not to update them as frequently as I would like.  Now I just need to get the hardware nailed down on some projects so that I can get them installed and hooked up to a build.

Raleigh Code Camp 2013 Netduino Azure Session

by Mike Linnen 30. October 2013 21:31

I am presenting two sessions in the Raleigh Code Camp 2013 Builder Faire track on November 9th.  The first session is called Building a cloud enabled home security system Part 1 of 2 (the presentation).  The second session is Building a cloud enabled home security system Part 2 of 2 (the lab).  You really need to come to both sessions as the first session explains what you will be building in the second session.  Yes I said that right, if you attend the second session you will be building a Netduino based security system that connects to Windows Azure.  Check out the project website for more details at Cloud Home Security.

I hope to see you there!!

Carolina Code Camp 2013 Netduino Azure Session

by Mike Linnen 3. May 2013 21:39

I am presenting 2 sessions in the Carolina Code Camp 2013 Builder Faire track.  The first session is called Building a Home Security System – The Introduction.  The second session is Building a Home Security System – The Lab.  You really need to come to both sessions as the first session explains what you will be building in the second session.  Yes I said that right, if you attend the second session you will be building a Netduino based security system that connects to Windows Azure.  Check out the project website for more details at Cloud Home Security.    

Demo connecting 11 Netduinos to Windows Azure Service

by Mike Linnen 14. February 2013 23:28

I put together a talk that includes a lab on building a security/home automation system using 11 Netduinos communicating over MQTT with a broker located in Windows Azure.  The attendees of this talk will walk through the lab and build out various components of a security system.

Here is a video demonstrating the various components of the system.  

The source for the project can be found on GitHub.

The Security System website is hosted on a Web Role and it contains all the documentation for the lab.

Announcing: Hands-on Hacknight connecting Netduinos to an Azure cloud service

by Mike Linnen 27. October 2012 00:00

In December I will be presenting this talk for the Charlotte Alt.Net users group. This talk is less about presenting and more about actually coding up a device that connects to the cloud.

This is not a sit-back-and-watch-the-speaker meeting.  As a participant in this project you will be building the devices that complete a Simulated Home Security system.  There will be some basic code written for you, but for the most part it will be your job to complete the code and make the device functional.  The cloud service that connects all the devices via a message bus will already be completed and deployed to Windows Azure for you to use.  Your device will publish and subscribe to messages on the bus.

Come out to the event and learn how to connect a Netduino Plus to Windows Azure.

Head on over to the meeting invite and sign up now.

Getting Really Small Message Broker running in Azure

by Mike Linnen 1. June 2012 23:34

In my previous post I talked about changing my home automation messaging infrastructure over to MQTT.  One of my goals was also to be able to remotely control devices in my house from my phone while I am away from home.  The good news is that this is easily done by setting up a bridge between two brokers.  My on-premise broker is configured to connect to the off-premise broker as a bridge.  This allows me to publish and subscribe to topics on the off-premise broker, which in turn get relayed to the on-premise broker.  Well, we need to host the off-premise broker somewhere, and that somewhere can be an Azure worker role.

Really Small Message Broker (RSMB) for Windows is simply a console application that can be launched in a Worker Role.  In this blog post I will show you how to do just that.  One thing to note: make sure you read the RSMB license agreement before you use it for your own purposes.

Of course, to actually publish this to Azure you will need an Azure account, but it will also run under the emulator.  If you don't have the tools to build Windows Azure applications, head on over to the Windows Azure Developer portal and check out the .NET section to get the SDK bits.  The following instructions also assume you have downloaded RSMB and installed it onto your Windows machine.

Create a new Windows Azure Cloud project


Once you press the Ok button you will be asked what types of roles you want in the new project.  Just select a Worker Role and add it to the solution.


To make things easier, rename the role to RSMBWorkerRole as I have done below.


After selecting the Ok button you need to set up an endpoint for the worker role that will be exposed through the load balancer for clients to connect to.  Select the worker role and view the properties of the role.  Select the Endpoints tab and add a new endpoint with the following settings:

  • Name: WorkerIn
  • Type: Input
  • Protocol: tcp
  • Public Port: 1883


Add a new folder under the RSMBWorkerRole project called rsmb


Copy the following RSMB files to the new folder and add them to the RSMBWorkerRole project with Copy to Output Directory set to Copy Always

  • rsmb_1.2.0\windows\broker.exe
  • rsmb_1.2.0\windows\mqttv3c.dll
  • rsmb_1.2.0\windows\mqttv3c.lib
  • rsmb_1.2.0\messages\Messages.1.2.0


Add a class-level declaration to the worker role class as follows:

Process _program = new Process();

Make sure you have using statements for System.Diagnostics (needed for Process) and System.IO (needed for Path) at the top of the class.

using System.Diagnostics;
using System.IO;

Add code to the OnStart Method as follows:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // Locate the rsmb folder that was deployed with the role
    string rsbroot = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot") + @"\", @"approot\rsmb");

    // The port clients reach through the load balancer (1883 on the WorkerIn endpoint,
    // which is also the default MQTT port RSMB listens on)
    int port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WorkerIn"].IPEndpoint.Port;

    // Launch broker.exe as a child process of the worker role
    ProcessStartInfo pInfo = new ProcessStartInfo(Path.Combine(rsbroot, @"broker.exe"))
    {
        UseShellExecute = false,
        WorkingDirectory = rsbroot,
        ErrorDialog = false,
        CreateNoWindow = true,
    };

    _program.StartInfo = pInfo;
    _program.Start();

    return true;
}

You should be able to launch the project under the Azure emulator and then use an MQTT client to subscribe to a topic like $SYS/#; the client should connect without error and start receiving notifications for the published messages.  If you need to set up some additional broker configuration, such as a broker.cfg, just add it to the project under the rsmb folder and make sure it is also set to copy to the output directory always.  You might want to enhance the code in the OnStart method to redirect the output of the RSMB console to Azure diagnostics to make troubleshooting easier.  You also need to set up the on-premise broker to connect to the remote broker as a bridge.  The instructions to set up the local broker as a bridge can be found in the README.htm where you installed RSMB.
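As a rough illustration of that output-redirection idea (my own sketch, not code from the original post), a helper like the following could replace the _program.Start() call at the end of OnStart.  It assumes the standard Azure diagnostics trace listener is configured for the role.

// Sketch only: forward the RSMB console output to Azure diagnostics via Trace.
private void StartBrokerWithLogging(ProcessStartInfo pInfo)
{
    pInfo.RedirectStandardOutput = true;   // requires UseShellExecute = false (already set above)
    pInfo.RedirectStandardError = true;

    _program.StartInfo = pInfo;
    _program.OutputDataReceived += (sender, e) => { if (e.Data != null) Trace.TraceInformation("RSMB: " + e.Data); };
    _program.ErrorDataReceived += (sender, e) => { if (e.Data != null) Trace.TraceError("RSMB: " + e.Data); };

    _program.Start();
    _program.BeginOutputReadLine();   // begin async reads so the events above fire
    _program.BeginErrorReadLine();
}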

Lawn Sprinkler the Data Flow Part 3

by Mike Linnen 4. September 2011 16:57

This is part 3 of a multipart blog series that shows how the commands that control my sprinkler flow through the infrastructure and reach their final destination, the Netduino Plus.  This video covers the components used in the system to connect a Windows Phone 7 to the Netduino based Lawn Sprinkler, as well as how a weather service can send data to the Netduino.  So we are connecting devices to devices and services to devices, all using the Azure AppFabric Service Bus.

Here is the video:

Lawn Sprinkler the Demo Part 2

by Mike Linnen 21. August 2011 18:34

As mentioned in a previous post I am building a Home Automation project that consists of replacing my Lawn Sprinkler.  This is part 2 of the blog series but if you want to look at other posts related to this then here are some links to them as well:

Here is a video that demonstrates how the Lawn Sprinkler system works.

The code for the project is located on bitbucket at

I will be doing a presentation on this project at the Charlotte Alt.Net meeting on August 24th 2011.

The PowerPoint slides for the presentation are also posted in the docs folder of the source repository.

Lawn Sprinkler the Introduction Part 1

by Mike Linnen 2. July 2011 09:00


The new craze for Home Automation is to use technology to Go Green.  One aspect of Going Green is about managing resources in a more efficient way.  I have seen a number of other hobbyists build projects that manage the amount of electricity or gas that they use within their home.  In this project I am going to manage the amount of water I use for watering my lawn.  In part 1 of this series I am going to cover the big picture of what I am attempting to do.

Since this is a multipart post I am including the links to the other parts here as well:


Of course I needed a few requirements to define the scope of what I am attempting to do.

  • Support for up to 4 zones
  • Be able to manually turn on 1 or more zones (max 4) and have them run for a period of time
  • Be able to schedule 1 or more zones (max 4) to come on daily at a specific time of the day multiple times a day.
  • Be able to schedule 1 or more zones (max 4) to come on every Mon, Wed and Friday at a specific time of the day multiple times a day.
  • Be able to schedule 1 or more zones (max 4) to come on every Tuesday and Thursday at a specific time of the day multiple times a day.
  • Be able to turn off the system so that the scheduled or manual zones will immediately turn off or not turn on at their scheduled time.
  • Be able to do any of the above requirements remotely.
  • Do not turn on the sprinkler if rain is in the forecast (Go Green)
  • Do not turn on the sprinkler if the ground is already moist enough (Go Green)
  • Be able to automatically set the clock when daylight savings time changes.

At first I was going to make the sprinkler system a completely stand-alone device where I could set up the schedule by using a keypad and an LCD.  This would allow me to completely control the device without remotely connecting to it.  But since I wanted to control the device remotely anyway, and the cost of hardware and development effort would be higher for a stand-alone device, I decided to abandon the “Stand Alone” capabilities.  I did want the ability to turn off the sprinkler system without remotely connecting to it, and I also wanted a quick way to know whether the device was off or not.  A push button switch can be used to turn the sprinkler off immediately.  A couple of LEDs can be used to indicate what mode the sprinkler is in.

The Sprinkler

I am using a Netduino Plus as the microcontroller that operates my sprinkler heads.  I chose this device because it uses the .NET Micro Framework and it has an onboard Ethernet controller, which makes connecting it to my network a really easy task.  You could very easily use another device to control the sprinklers as long as it could handle the HTTP messages and had enough I/O to interface to the rest of the needed hardware. 

This device is responsible for the following:

  • Monitor the schedule and turn on the sprinklers if it is time to do so
    • 4 Digital Outputs
    • Onboard clock to know when to run the scheduled time
  • Watch for HTTP JSON requests that originate from the Windows Phone
    • The onboard Ethernet works well for this
  • Watch for HTTP JSON requests that originate from the weather service telling the sprinkler the chance of rain
    • The onboard Ethernet works well for this
  • Watch for HTTP JSON requests that originate from the time service telling the sprinkler to change its onboard clock
    • The onboard Ethernet works well for this
  • On power up ask the time service for the correct time
    • The onboard Ethernet works well for this
  • Monitor the Off pushbutton and cycle the mode of the sprinkler through the 3 states: Off/Manual/Scheduled
    • 1 Digital Input
  • Yellow LED goes on when in the Manual state
    • 1 Digital Output
  • Green LED goes on when in the Schedule state
    • 1 Digital Output
  • Monitor the ground moisture (Note: I haven’t done much research on how these sensors work so this might change)
    • 1 Analog Input
  • Persist the Manual and Scheduled programs so that a power cycle won't wipe out these values

The sprinkler modes need a little more discussion.  When in the Off mode the sprinkler heads will not turn on, but the board will still be powered up, listening for HTTP requests and monitoring the push button.  When cycling to the Off mode from any other mode, the sprinklers will turn off if they were on.  When cycling to the Manual mode from any other mode, the sprinkler will immediately run the manual schedule, turning on the appropriate zones for the appropriate length of time.  If no Manual schedule exists then the sprinkler does nothing.  In Scheduled mode the sprinkler waits for the programmed day and time to turn on the appropriate zones for the appropriate length of time, unless the ground is already wet or rain is in the forecast.
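As a rough sketch of the mode handling described above (the names are illustrative and this is not the actual sprinkler code), the three states and the pushbutton cycling could be modeled like this:

// Sketch only: the three sprinkler modes and how a pushbutton press cycles through them.
public enum SprinklerMode
{
    Off,        // zones never run; HTTP requests and the button are still monitored
    Manual,     // immediately run the stored manual schedule
    Scheduled   // wait for the programmed day/time, skipped if rain is forecast or the ground is wet
}

public class ModeController
{
    public SprinklerMode Mode { get; private set; }

    // Called when the pushbutton is pressed: Off -> Manual -> Scheduled -> Off ...
    public void OnButtonPressed()
    {
        Mode = (SprinklerMode)(((int)Mode + 1) % 3);
    }
}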

The Remote Control

The remote control is the only way to program the sprinkler since it doesn’t have any UI for this task.  There can be many different devices that serve as the remote control but I intend to use my Samsung Focus Windows Phone 7 for this purpose. 

The application on this device just needs to send HTTP GET and POST requests.  Depending on the type of request, a JSON message might be required in the body of the request (i.e. sending data to the sprinkler).  Also, depending on the type of request, the response may contain JSON (i.e. returning data from the sprinkler).

I chose to use HTTP and JSON as the communication mechanism between the remote control and the sprinkler so that I could remain platform independent.       
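For illustration only, here is what such a request could look like from a .NET client; the route and JSON payload below are hypothetical and not taken from the actual sprinkler API.

// Sketch only: posting a JSON command to the sprinkler over HTTP.  URL and payload are made up.
using System;
using System.Net;

class RemoteControlExample
{
    static void Main()
    {
        string json = "{ \"zone\": 1, \"minutes\": 10 }";   // hypothetical manual-run command

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";

            // POST the command; the response body may also contain JSON
            string response = client.UploadString("http://sprinkler.local/manual", "POST", json);
            Console.WriteLine(response);
        }
    }
}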

Connecting the Remote to the Sprinkler

The Netduino sprinkler sits behind my home firewall.  If I want to talk to the sprinkler with a device that is not behind the firewall then things start to get a little painful.  I would basically have the following options:

  • Don’t expose the sprinkler to the outside world (kind of limiting).
  • The sprinkler microcontroller would have to poll some server on the internet for any new messages that it should process (lots of busy work for the controller).
  • Punch a hole in my firewall so I can get through it from the internet (can you please hack me).
  • Use the Windows Azure Service Bus (no brainer).

The Service Bus allows me to make outbound connections to the Windows Azure cloud infrastructure, and it keeps that connection open so that any external device can make remote procedure calls to the endpoint behind the firewall.  I have decided to use the v1.0 release of the Service Bus for now, but in the future I could see this changing to where I would use more of a publish/subscribe messaging infrastructure (which is in a future release of the Service Bus) rather than remote procedure calls.

To leverage the Service Bus you must have a host that sits behind the firewall and makes the connection to the Azure cloud platform.  For the purpose of this post I am calling this service the Home Connector.  The responsibility of this service is to connect to the Service Bus as a host so that it can accept remote procedure calls from a client.  The client in this case I call the Remote Connector.

The Home Connector

The Home Connector is a Windows service that runs on one of my Windows machines behind my firewall.  When a remote procedure call comes in, it is converted to an HTTP GET or POST JSON request that is sent to the Netduino sprinkler.  The response from the Netduino is then parsed and returned back to the RPC caller.  This routing of Service Bus messages to devices behind my firewall is built with the mindset that more than one Netduino microcontroller will be servicing RPC calls from a remote device over the internet, so this architecture is not limited to just the sprinkler system.  I intend to add more microcontrollers in the same manner and register them with the Home Connector so that they too can service RPC requests. 
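To give a feel for what "connecting to the Service Bus as a host" looks like, here is a heavily simplified sketch using the WCF relay bindings from the Service Bus SDK of that era; the namespace, credentials, and service contract are all placeholders and this is not the actual Home Connector code.

// Sketch only: a console host that exposes an RPC endpoint through the Service Bus relay.
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface ISprinklerService
{
    [OperationContract]
    string Send(string json);   // hypothetical RPC operation carrying a JSON command
}

public class SprinklerService : ISprinklerService
{
    public string Send(string json)
    {
        // Forward the command to the Netduino over HTTP and return its response (omitted here)
        return "{}";
    }
}

class Program
{
    static void Main()
    {
        // The relay keeps an outbound connection open, so no firewall holes are needed
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "sprinkler");

        var host = new ServiceHost(typeof(SprinklerService));
        host.AddServiceEndpoint(typeof(ISprinklerService), new NetTcpRelayBinding(), address)
            .Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "issuerKey")
            });

        host.Open();
        Console.WriteLine("Home Connector is listening on the Service Bus.  Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}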

The Remote Connector

I could have skipped this layer between the phone and the sprinkler.  Since the phone would not be able to use the Service Bus DLLs directly, I could have used the Service Bus WebHttpRelayBinding, which would allow me to submit messages to the bus over a REST-style API directly from the phone.  But I wanted another layer between the phone and the sprinkler so that I could cache some of the requests to prevent my sprinkler from getting bombarded with messages.  I needed a lightweight web framework that would make handling HTTP GET/POST JSON messages easy.

I chose to use the NancyFX framework because it seemed to fit the bill of being quick and easy to get up and running.  That sure was the case when I pulled it down and started building out the first HTTP GET handler.  I simply created an empty web site and used NuGet to install NancyFX into the existing blank site.  After that I created a module class, defined my routes and the handlers for those routes, and I was running with my first GET request in about 15 minutes.  The NancyFX framework also handled processing my JSON messages with very little effort on my part.  All I really needed to do was have a model that represented the JSON message and perform a bind operation on it, and the model ended up fully populated.  I haven't tried to play around with caching the responses yet, but I don't think that will be too hard.
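A Nancy module of the style described above might look something like the following sketch; the routes and the model are made-up examples, not the actual sprinkler API.

// Sketch only: a NancyFX module with a GET handler and a POST handler that binds a JSON body.
using Nancy;
using Nancy.ModelBinding;

public class ManualProgram
{
    public int Zone { get; set; }
    public int Minutes { get; set; }
}

public class SprinklerModule : NancyModule
{
    public SprinklerModule()
    {
        // Handle a GET request and return JSON
        Get["/status"] = parameters => Response.AsJson(new { Mode = "Scheduled" });

        // Bind the JSON body of a POST request straight into a model
        Post["/manual"] = parameters =>
        {
            var program = this.Bind<ManualProgram>();
            // forward the program on to the Home Connector / sprinkler here
            return HttpStatusCode.OK;
        };
    }
}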

It is important to understand that this remote connector does not have to be on an Azure web role to work.  I could easily deploy this web site to another hosting provider that might be a little cheaper to use.


The Netduino, the Service Bus, and the NancyFX web framework all turned out to be pretty easy to use for connecting devices in my home to my phone.  At the time of this post I haven't finished the sprinkler system, but I have an end-to-end example of using the Windows Phone to control my Netduino behind my firewall without punching any holes in my router.  I spent more time working out the JSON parsing issues across multiple devices than actually getting the infrastructure in place.

This opens up a whole new world of possibilities for me of connecting multiple home devices to my phone and other services.  Before I go to a multiple device household I will most likely move away from the RPC calls and introduce a more publish/subscribe model of passing messages around.  That way I can decouple the message producers from the message consumers.  I will probably wait for the newer Azure Service Bus bits before I tackle that problem though. 

One thing that I started to think about while doing this project is how much smarts (code) I should be placing in the Netduino device.  Right now I have a considerable amount of code that performs all the scheduling functionality in the Netduino.  So once the Netduino receives its pre-programmed schedule, it can basically run without any other communication from the outside world (as long as the power doesn't cycle).  However, the scheduling functionality that is built into my sprinkler code is kind of limiting.  If I wanted to add more features to the scheduling functionality, it would require me to build a lot of the logic into the Netduino sprinkler code.  This also means I need to deploy more bits to my sprinkler device.  As you can imagine, this could develop into a deployment nightmare if a lot of customers were using this product.  There are ways to solve that kind of deployment issue by automating the update process, but another solution is to remove the scheduling smarts from the sprinkler device itself and place that logic into a cloud service.  Basically the sprinkler device would know nothing about a schedule; it would simply be told when to turn on and how long the zones should run.  This would eliminate a lot of code on the device and make it easier to add new features to the service.  Of course that means the sprinkler device has to be connected to the internet at all times in order to work, but that's doable.  I don't intend to move in that direction yet, but I think once I finish out the original design I will explore building out a Home Automation as a Service (HAAS) model.

Keep a watch on my blog for the future posts where I will be diving deeper into each layer of the system and showing some code.  Also I will be posting the source code to the project at some point for others to see.

About the author

Mike Linnen

Software Engineer specializing in Microsoft Technologies
