Garage door and the Spark Core on github

by Mike Linnen 25. December 2014 21:11

I just posted the initial code for my garage door project using the Spark Core on GitHub.  This is a work in progress and will evolve over time.

Garage door and the Spark Core

by Mike Linnen 20. December 2014 01:21

I recently moved to a new home, and I now have 2 garage doors to control instead of one.  So I decided to revamp my garage door home automation project using the Spark Core.  This is a fascinating device, as it is designed to connect to the Spark.IO cloud service without a lot of coding to maintain the connectivity to the cloud.  The default firmware in the device allows you to remotely connect to it and invoke functions, expose variables to the cloud service, and perform pub/sub between devices.  Make sure you check out their website to gain a better understanding of all the capabilities of this small packaged IoT controller.

My goals for this project were to achieve the following:

  • know when either garage door is opened or closed.
  • remotely close or open the garage door.
  • have the door automatically close when I go to bed.
  • keep track of how long it takes to open/close the garage door.
    • if the door starts to take longer to open and close this might be a sign that it needs maintenance.
  • automatically open the garage door when I arrive home.
  • automatically close the garage door when I leave my home.
  • notify me when the door needs maintenance because it has been opened/closed so many times.
  • monitor the temperature and humidity in the garage.
  • monitor the garage for motion when the security system is on.
  • notify me that I left the door open after a specific time of night.

That is certainly a large number of goals, and I don’t intend to complete all of them initially, but you get the idea of what the possibilities are.

Initially I intended to do all of this for 2 garage doors with one Spark Core, but after thinking about it a bit it made more sense to use at least two Spark Cores.

One thing I decided to do right off the bat was to make sure I have enough sensors to determine when the door was opened and when it was closed.  I have seen other remote garage door projects that simply have one sensor that detects whether or not the door is closed.  I wanted more inputs so that I could time how long it took for a door to complete its open or close command.  I want to keep track of this in order to determine if the door will need maintenance when it starts to take longer to open or close.  I can also gather a little more analytics around the timing of the door command and the temperature in the garage.  I don’t know if I will use this more detailed information for anything, but I thought it would be fun to play around with.
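Since each open/close cycle gets timed anyway, spotting a door that may need maintenance can be as simple as comparing the latest cycle against a rolling average of past cycles. Here is a minimal Python sketch of that idea; the class name, the 10-sample window, and the 20% slowdown threshold are my own assumptions, not values from the project:

```python
from collections import deque

class DoorTimingMonitor:
    """Hypothetical sketch: track door travel times and flag slow cycles."""

    def __init__(self, window=10, slow_factor=1.2):
        # Keep only the most recent cycles so old behavior ages out
        self.times = deque(maxlen=window)
        self.slow_factor = slow_factor

    def record(self, seconds):
        self.times.append(seconds)

    def needs_maintenance(self, seconds):
        # Compare the latest cycle against the rolling average;
        # require a few samples before drawing any conclusion
        if len(self.times) < 3:
            return False
        avg = sum(self.times) / len(self.times)
        return seconds > avg * self.slow_factor

monitor = DoorTimingMonitor()
for t in (12.0, 12.4, 11.8, 12.1):
    monitor.record(t)
print(monitor.needs_maintenance(15.0))  # True: noticeably slower than average
```

A real version would persist the history across device restarts, but the comparison logic would be the same.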

So I will have 2 magnetic reed switches that I plan on placing on the door track to determine if the door is closed or opened.  The door will be in one of 5 states: opened, closed, opening, closing, and unknown.  The unknown state occurs only when neither of the sensors is triggered and the device doesn’t know if the door was previously opened or closed.  I have most of the code written to handle the basic door operations, and I will be sharing that code in a future post.
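The two-switch state logic above can be sketched in a few lines. This is a hypothetical Python model of the five states, not the actual Spark Core firmware (which would be written in the Core's Wiring/C++ dialect):

```python
from enum import Enum

class DoorState(Enum):
    OPENED = "opened"
    CLOSED = "closed"
    OPENING = "opening"
    CLOSING = "closing"
    UNKNOWN = "unknown"

def door_state(open_switch, closed_switch, last_state):
    """Derive the door state from the two reed switches.

    open_switch: True when the reed switch at the open position is triggered
    closed_switch: True when the reed switch at the closed position is triggered
    last_state: previously known state, used while the door is traveling
    """
    if open_switch:
        return DoorState.OPENED
    if closed_switch:
        return DoorState.CLOSED
    # Neither switch triggered: the door is somewhere in between,
    # so infer direction from where it last was
    if last_state == DoorState.OPENED:
        return DoorState.CLOSING
    if last_state == DoorState.CLOSED:
        return DoorState.OPENING
    if last_state in (DoorState.OPENING, DoorState.CLOSING):
        return last_state
    return DoorState.UNKNOWN

print(door_state(False, False, DoorState.CLOSED))  # DoorState.OPENING
```

Sampling the switches in a loop and feeding each reading through this function also gives natural points to start and stop the travel timer.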

So stay tuned on future posts on this topic as I move forward with it.  Please feel free to give me feedback or ask questions on items I haven’t clarified very well.  I am very interested in anyone’s thoughts on the Spark Core as well as home automation in general.

BuilderFaire track at Raleigh Code Camp 2014

by Mike Linnen 8. November 2014 10:04

I gave a talk at the Raleigh Code Camp called "What can I do with a Raspberry PI". The slide deck is attached to this post for those of you that attended.

Raspberry PIv2.pptx (6.67 mb)

Raleigh Code Camp 2013 Netduino Azure Session

by Mike Linnen 30. October 2013 21:31

I am presenting two sessions in the Raleigh Code Camp 2013 Builder Faire track on November 9th.  The first session is called Building a cloud enabled home security system Part 1 of 2 (the presentation).  The second session is Building a cloud enabled home security system Part 2 of 2 (the lab).  You really need to come to both sessions, as the first session explains what you will be building in the second session.  Yes, I said that right: if you attend the second session, you will be building a Netduino based security system that connects to Windows Azure.  Check out the project website for more details at Cloud Home Security.

I hope to see you there!!

Carolina Code Camp 2013 Netduino Azure Session

by Mike Linnen 3. May 2013 21:39

I am presenting 2 sessions in the Carolina Code Camp 2013 Builder Faire track.  The first session is called Building a Home Security System – The Introduction.  The second session is Building a Home Security System – The Lab.  You really need to come to both sessions, as the first session explains what you will be building in the second session.  Yes, I said that right: if you attend the second session, you will be building a Netduino based security system that connects to Windows Azure.  Check out the project website for more details at Cloud Home Security.

Demo connecting 11 Netduinos to Windows Azure Service

by Mike Linnen 14. February 2013 23:28

I put together a talk that includes a lab on building a security/home automation system using 11 Netduinos communicating over MQTT with a broker located in Windows Azure.  The attendees of this talk will walk through the lab and build out various components of a security system.

Here is a video demonstrating the various components of the system.  

The source for the project can be found on GitHub:

The Security System website is hosted on a Web Role and it contains all the documentation for the lab.

Getting Really Small Message Broker running in Azure

by Mike Linnen 1. June 2012 23:34

In my previous post I talked about changing my home automation messaging infrastructure over to MQTT.  One of my goals was to also be able to remotely control devices in my house from my phone while I am not at home.  The good news is that this is easily done by setting up a bridge between two brokers.  My on-premise broker is configured to connect to the off-premise broker as a bridge.  This allows me to publish and subscribe to topics on the off-premise broker, which in turn get relayed to the on-premise broker.  We need to host the off-premise broker somewhere, and that somewhere can be an Azure worker role.

Really Small Message Broker (RSMB) for Windows is simply a console application that can be launched in a Worker Role.  In this blog post I will show you how to do just that.  One thing to note: make sure you read the license agreement of RSMB before you use this application for your purposes.

Of course, to actually publish this to Azure you will need an Azure account, but this will also run under the emulator.  If you don’t have the tools to build Windows Azure applications, head on over to the Windows Azure Developer portal and check out the .NET section to get the SDK bits.  The following instructions also assume you have downloaded RSMB and installed it onto your Windows machine.

Create a new Cloud Windows Azure Project


Once you press the OK button you will be asked what types of roles you want in the new project.  Just select a Worker Role and add it to the solution.


To make things easier, rename the role as I have done below.


After selecting the OK button, you need to set up an endpoint for the worker role that will be exposed through the load balancer for clients to connect to.  Select the worker role and view the properties of the role.  Select the Endpoints tab and add a new endpoint with the following settings:

  • Name: WorkerIn
  • Type:
  • Protocol: tcp
  • Public Port: 1883


Add a new folder under the RSMBWorkerRole project called rsmb


Copy the following RSMB files to the new folder and add them to the RSMBWorkerRole project with Copy to Output Directory set to Copy Always

  • rsmb_1.2.0\windows\broker.exe
  • rsmb_1.2.0\windows\mqttv3c.dll
  • rsmb_1.2.0\windows\mqttv3c.lib
  • rsmb_1.2.0\messages\Messages.1.2.0


Add a class level declaration as follows:

Process _program = new Process();

Make sure you have using statements for System.Diagnostics (for Process) and System.IO (for Path) at the top of the class.

using System.Diagnostics;
using System.IO;

Add code to the OnStart Method as follows:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // RoleRoot points at the root of the deployed role; the rsmb folder
    // is copied under approot because of the Copy Always setting
    string rsbroot = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot") + @"\", @"approot\rsmb");
    int port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WorkerIn"].IPEndpoint.Port;

    ProcessStartInfo pInfo = new ProcessStartInfo(Path.Combine(rsbroot, "broker.exe"))
    {
        UseShellExecute = false,
        WorkingDirectory = rsbroot,
        ErrorDialog = false,
        CreateNoWindow = true
    };
    _program.StartInfo = pInfo;

    // Launch the broker console application
    _program.Start();

    return true;
}

You should be able to launch the project under the Azure emulator and then use an MQTT client to subscribe to a topic like $SYS/#; the client should connect without error and start receiving notifications for the published messages.  If you need additional broker configuration, such as a broker.cfg file, just add it to the project under the rsmb folder and make sure it is set to copy to the output directory always.  You might want to enhance the code in the OnStart method to redirect the output of the RSMB console to Azure diagnostics to make troubleshooting easier.  You also need to set up the on-premise broker to connect to the remote broker as a bridge.  The instructions to set up the local broker as a bridge can be found in the README.htm where you installed RSMB.

Switching out my home automation messaging infrastructure for MQTT

by Mike Linnen 1. June 2012 21:09

I am re-vamping my home automation strategy from a home grown publish/subscribe messaging system to use MQTT instead.  I was using Azure Service Bus to connect remote devices such as my phone with devices in my home such as my lawn irrigation system.  This worked well as a messaging infrastructure for remote devices but I wanted to have a more standard messaging infrastructure that could work in my home network without connectivity to the outside world. 

A few reasons why I switched to MQTT:

  • Light weight
  • Many Clients already exist for many platforms and languages
  • Support for on-premise message broker
  • Support for off-premise message broker
  • Support for bridging brokers (on-premise to off-premise)
  • Fast
  • Used by companies like COSM (formerly Pachube) and GitHub
  • Simple topic subscription model that is also powerful
  • I don’t want to write a message broker

For the most part I am moving toward having devices in the home that are relatively dumb and having services running on a home server that add the smarts behind the devices.  This will give me the flexibility to change the behavior of the system a lot quicker, without the hassle of tearing apart a device to upgrade the software on it.  This means I need my services to be available all the time.  Placing these services in the cloud for mission critical things would leave me with devices in the home that cannot function while my internet connectivity is down.  This was the biggest reason I moved to an off-the-shelf pub/sub infrastructure like MQTT.

Like most messaging protocols, MQTT works on the notion of a topic and a message.  The topic is just a unique way of addressing a message.  I struggled a lot, and probably will continue to struggle, with what my topic structure for home automation should look like.  One thing I wanted to do is make the topics readable so that troubleshooting message problems would be easier.  Here are a few standards I am trying to settle on:

  • When a device or service changes state and wishes to notify interested parties the topic will end with /event 
  • When a device or service wants to know the status of another device or service the topic will end with /getstatus
  • When a device or service receives a topic /getstatus the response topic it generates will end with /status
  • When a device or service needs to set the state of another device or service the topic will end with /set
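One nice property of a readable topic tree like this is that MQTT's standard wildcards ("+" matches exactly one level, "#" matches the remainder) can subscribe to whole families of topics at once. Here is a small Python sketch of the standard matching rules; the helper function is hypothetical, not code from my system:

```python
def topic_matches(subscription, topic):
    """Return True if an MQTT subscription filter matches a topic.

    Standard MQTT rules: '+' matches exactly one topic level,
    '#' matches the remainder of the topic (including nothing).
    """
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, sub in enumerate(sub_levels):
        if sub == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(top_levels):
            return False         # filter is longer than the topic
        if sub != "+" and sub != top_levels[i]:
            return False         # literal level must match exactly
    # All levels matched; the lengths must agree for a full match
    return len(sub_levels) == len(top_levels)

print(topic_matches("irrigation/+/event", "irrigation/zone/event"))   # True
print(topic_matches("irrigation/#", "irrigation/schedule/days/set"))  # True
print(topic_matches("irrigation/zone/event", "irrigation/zone/set"))  # False
```

With the /event, /set, and /status suffixes above, a monitoring service could subscribe to something like irrigation/# and log every message flowing through the system.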

Here are a few examples of topics and messages for my irrigation system:

  • Zone 1 turned on.  Topic: irrigation/zone/event, Message: z1 On
  • Request the current schedule from the irrigation service (the service responds by publishing the various status topics).  Topic: irrigation/schedule/getstatus, Message: (none)
  • Set the time that the irrigation service should start watering.  Topic: irrigation/schedule/starttime/set, Message: 09:00 AM
  • Status of the schedule start time in response to the getstatus request.  Topic: irrigation/schedule/starttime/status, Message: 09:00 AM
  • Set the days of the week that the irrigation system will water.  Topic: irrigation/schedule/days/set, Message: MON WED FRI
  • Status of the scheduled days of the week in response to the getstatus request.  Topic: irrigation/schedule/days/status, Message: MON WED FRI
  • Set the zones that the irrigation system will run and how long.  Topic: irrigation/schedule/zones/set, Message: z1 10 z2 8 z3 10
  • Status of the scheduled zones in response to the getstatus request.  Topic: irrigation/schedule/zones/status, Message: z1 10 z2 8 z3 10
  • Set the zones to run on the irrigation device and how long.  Topic: irrigation/zones/run, Message: z1 10 z2 8 z3 10

MQTT does have a concept of publishing messages with a retain bit.  This tells the broker to hang onto the last message for a topic, and when a new subscription arrives the client immediately receives that last message.  I could have used this concept instead of the /getstatus standard that I have shown above.  I might change over to using the retain bit, but for now /getstatus works for me.  I am also making my messages a little verbose, as they tend to contain multiple values that could have been broken down into more granular topics.
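The retain behavior can be illustrated with a toy in-memory broker. This is a hypothetical Python sketch (exact-match subscriptions only), not a real MQTT implementation:

```python
class TinyBroker:
    """Toy sketch of MQTT's retain bit; not a real broker."""

    def __init__(self):
        self.retained = {}      # topic -> last retained payload
        self.subscribers = []   # (topic, callback) pairs, exact match only

    def publish(self, topic, payload, retain=False):
        if retain:
            # The broker hangs onto the last retained message per topic
            self.retained[topic] = payload
        for sub_topic, callback in self.subscribers:
            if sub_topic == topic:
                callback(topic, payload)

    def subscribe(self, topic, callback):
        self.subscribers.append((topic, callback))
        # Replay the retained message so a new subscriber learns the
        # current state without needing a /getstatus round trip
        if topic in self.retained:
            callback(topic, self.retained[topic])

broker = TinyBroker()
broker.publish("irrigation/schedule/starttime/status", "09:00 AM", retain=True)
received = []
broker.subscribe("irrigation/schedule/starttime/status",
                 lambda t, m: received.append(m))
print(received)  # ['09:00 AM']
```

The subscriber here gets the start time immediately even though it subscribed after the message was published, which is exactly what the /getstatus convention achieves by other means.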

Overall I really like how simple MQTT is, and it is very easy to get a device like the Netduino to understand MQTT messages.  I am sure I will make modifications to how I define my topics and message bodies over time as I develop more and more devices and services that do useful stuff in my home.

Presenting “Getting Started with Microsoft Robotics Developer Studio 4 and the Kinect”

by Mike Linnen 6. May 2012 14:16

I am excited about presenting on this topic for the Charlotte Alt.Net users group on May 8th in Charlotte. Head on over to the event posting and sign up to attend.

Here are the details about the talk:

The most recent release of Microsoft Robotics Developer Studio 4 (RDS4) has introduced two very exciting concepts that make building robotic applications a reality for all developers: the Kinect and the Reference Platform Design specification.  The Kinect is the hot device that gives a new perspective on sensing your surroundings.  RDS 4 fully supports the Kinect and opens up all kinds of opportunities for awesome applications.  Do you want skeletal tracking in a robotics application? RDS 4 gives you that.  Do you want to perform obstacle avoidance with the Kinect's depth sensor? RDS 4 gives you that.  Do you want to simulate a Kinect in a virtual environment to test out your high level code? RDS 4 gives you that.  The Reference Platform gives vendors a common design specification for building a working robot that includes sensors, motors, and low level control.  This allows a developer with little hardware experience to get up and running fast.  In this session I will introduce you to RDS 4 using the Kinect and an Eddie robot.

Eddie Robot

Microsoft Robotics Developer Studio

Using StudioShell to automate repetitive tasks

by Mike Linnen 12. February 2012 19:44

I have been building and releasing software for over 20 years now.  One thing I have learned over the years is that if you have a lot of manual steps in your release process then you will end up making mistakes.  There are many tools available that can help reduce those mistakes.  StudioShell is one of those tools and it has some unique characteristics that make it stand out over the rest of the tools I have used for automating release processes.

I have been maintaining software that manages the FIRST Robotics Competition for the last 5 years.  The Team Foundation Server build process for the software has been modified to package up the bits into an MSI installer.  The installer and a manifest file are deployed to a web server.  This MSI and manifest make up an auto-update process for all the events that are scattered across the US.  The problem I have had over the years is that maintaining the Wix files, manifest, and assembly info meant I had to edit the version number for the next release in multiple places.  As you can imagine, this was a recipe for mistakes that just do not need to exist.

Well, I could solve this problem by modifying the build process to check out the files that needed the new version number.  This would certainly remove the manual process of doing this by hand before the build executes.  However, I am not a fan of having the build process modify source files (Wix, manifest, and assembly info).  So I was left with automating the manual process outside the build.  This is where StudioShell shines.

First I created a StudioShell Solution Module, only because I wanted to automatically change to the directory that the solution is located in once the StudioShell view is opened inside of Visual Studio.  This will allow me to easily launch other PowerShell scripts and use relative paths within those scripts.

Here is what is in the Solution Module:

function Set-Folder
{
    $file = (get-item (get-item dte:\solution).FullName);
    cd $file.DirectoryName
}

"Loading Solution Module" | out-outputpane;

# Change to the solution directory as soon as the module loads
Set-Folder

$m = $MyInvocation.MyCommand.ScriptBlock.Module;
$m.OnRemove = {"Unloading Solution Module" | out-outputpane;}


As you can see, the script uses the DTE drive to get the full path to the solution and simply changes to that directory.  Now any scripts executed in the StudioShell host can use relative paths.

Next I needed a new PowerShell script that performs all the manual operations as follows:

  • Prompt for the version number
  • Change the AssemblyFileVersion in the VersionInfo.cs file to have the new version
  • Change the manifest files to have the new version
  • Change the Wix files to have the new version number

Here is what is in the SetVersion.ps1 file:

function Set-Version
{
    Param ([string]$newVersion)
    $tmp = '"' + $newVersion + '.0"'
    (get-item dte:\solution\projects\\versioninfo.cs\codemodel\assemblyfileversion) | set-itemproperty -name value -value $tmp

    Set-ManifestVersion $newVersion "Full"
    Set-ManifestVersion $newVersion "Delta"
    Set-Wix $newVersion "Full" "Server"
    Set-Wix $newVersion "Full" "App"
    Set-Wix $newVersion "Delta" "Server"
    Set-Wix $newVersion "Delta" "App"
    Set-Wix $newVersion "" "Light"
}

function Get-Version
{
    return (get-item dte:\solution\projects\\versioninfo.cs\codemodel\assemblyfileversion).Value
}

function Set-ManifestVersion
{
    Param ([string]$newVersion,[string]$installType)
    $fileName = ((get-item dte:\solution\projects\\$installType\manifest.xml).FileName)
    $xml = [xml](get-content $fileName)
    $appsNode = $xml.Manifest.SelectSingleNode("./Versions/ManifestItem").Clone()
    $serverNode = $xml.Manifest.SelectSingleNode("./Versions/ManifestItem").Clone()

    # Remove any existing entries for this version so we don't create duplicates
    $node = $xml.Manifest.SelectNodes("./Versions/ManifestItem[Version='$newVersion']")
    $node | ForEach-Object {$xml.Manifest.Versions.RemoveChild($_)}

    $appsNode.Version = "$newVersion"
    $appsNode.FileName = "FMSApps$installType.msi"
    $appsNode.InstallerType = "Apps"
    $serverNode.Version = "$newVersion"
    $serverNode.FileName = "FMSServer$installType.msi"
    $serverNode.InstallerType = "Server"

    [void]$xml.Manifest.Versions.AppendChild($appsNode)
    [void]$xml.Manifest.Versions.AppendChild($serverNode)
    $xml.Save($fileName)
}

function Set-Wix
{
    Param ([string]$newVersion,[string]$installType,[string]$targetType)
    $location = get-location
    $fileName = $location.Path + "\Source\fms.Installer\$targetType\Setup$installType.wxs"
    $xml = [xml](get-content $fileName)
    $node = $xml.Wix.Product
    $node.Version = "$newVersion"
    $nodes = $xml.Wix.Product.Upgrade.UpgradeVersion
    foreach ($node in $nodes)
    {
        if ($node.Property -eq "NEWPRODUCTFOUND")
        {
            $node.Minimum = "$newVersion"
        }
        if ($node.Property -eq "UPGRADEFOUND")
        {
            $node.Maximum = "$newVersion"
        }
    }
    $xml.Save($fileName)
}

$current = Get-Version
Write-Host "The current version is:" $current
$newVersion = read-host "What is the new version you want"
Set-Version $newVersion

As you can see the script is not all that complex.  The DTE drive makes finding resources in the solution and accessing properties of the resource such as file name very easy.  Since the solution is bound to source control, modifying the VersionInfo.cs file automatically checks it out of source control.  I was even able to use the codemodel to easily find the AssemblyFileVersion and set its value to the next version number.

To release the next version of my software I simply launch the StudioShell view and type .\SetVersion in the shell, and the script will read the current version from the VersionInfo.cs file and prompt me for the next version number.  After I enter the new version number and hit return, the necessary files are updated and saved.  All I have to do is check in the changes and kick off the build.

In conclusion, I was able to use the powerful features of StudioShell to take a very manual, multi-step process and reduce it to only a couple of steps.



About the author

Mike Linnen

Software Engineer specializing in Microsoft Technologies
