Lawn Sprinkler the Demo Part 2

by Mike Linnen 21. August 2011 18:34

As mentioned in a previous post, I am building a home automation project that consists of replacing my lawn sprinkler system.  This is part 2 of the blog series; if you want to look at the other posts related to this project, here are links to them as well:

Here is a video that demonstrates how the Lawn Sprinkler system works.

The code for the project is located on bitbucket at https://bitbucket.org/mlinnen/lawnsprinkler.

I will be doing a presentation on this project at the Charlotte Alt.Net meeting on August 24th, 2011.

The PowerPoint slides for the presentation are also posted in the Docs folder of the source repository https://bitbucket.org/mlinnen/lawnsprinkler/src/f5cd6cda9501/Docs/.

Lawn Sprinkler the Introduction Part 1

by Mike Linnen 2. July 2011 09:00

Overview

The new craze in home automation is to use technology to go green.  One aspect of going green is managing resources more efficiently.  I have seen a number of other hobbyists build projects that manage the amount of electricity or gas they use within their home.  In this project I am going to manage the amount of water I use for watering my lawn.  In part 1 of this series I am going to cover the big picture of what I am attempting to do.

Since this is a multipart post I am including the links to the other parts here as well:

Requirements 

Of course I needed a few requirements to define the scope of what I am attempting to do.

  • Support for up to 4 zones
  • Be able to manually turn on 1 or more zones (max 4) and have them run for a period of time
  • Be able to schedule 1 or more zones (max 4) to come on daily at one or more specific times of day.
  • Be able to schedule 1 or more zones (max 4) to come on every Monday, Wednesday, and Friday at one or more specific times of day.
  • Be able to schedule 1 or more zones (max 4) to come on every Tuesday and Thursday at one or more specific times of day.
  • Be able to turn off the system so that the scheduled or manual zones will immediately turn off or not turn on at their scheduled time.
  • Be able to do any of the above requirements remotely.
  • Do not turn on the sprinkler if rain is in the forecast (Go Green)
  • Do not turn on the sprinkler if the ground is already moist enough (Go Green)
  • Be able to automatically adjust the clock when daylight saving time changes.

At first I was going to make the sprinkler system a completely stand-alone device where I could set up the schedule using a keypad and an LCD.  This would allow me to control the device without connecting to it remotely.  But since I wanted to control the device remotely anyway, and the hardware cost and development effort would be higher for a stand-alone device, I decided to abandon the stand-alone capabilities.  I did want the ability to turn off the sprinkler system without remotely connecting to it, and I also wanted a quick way to know whether the device was off.  A push button switch can be used to turn the sprinkler off immediately, and a couple of LEDs can be used to show what mode the sprinkler is in.

The Sprinkler

I am using a Netduino Plus as the microcontroller that operates my sprinkler heads.  I chose this device because it runs the .Net Micro Framework and has an onboard Ethernet controller, which makes connecting it to my network a really easy task.  You could very easily use another device to control the sprinklers as long as it could handle the HTTP messages and had enough I/O to interface with the rest of the needed hardware.

This device is responsible for the following:

  • Monitor the schedule and turn on the sprinklers if it is time to do so
    • 4 Digital Outputs
    • Onboard clock to know when to run the scheduled time
  • Watch for HTTP JSON requests that originate from the Windows Phone
    • The onboard Ethernet works well for this
  • Watch for HTTP JSON requests that originate from the weather service telling the sprinkler the chance of rain
    • The onboard Ethernet works well for this
  • Watch for HTTP JSON requests that originate from the time service telling the sprinkler to change its onboard clock
    • The onboard Ethernet works well for this
  • On power up ask the time service for the correct time
    • The onboard Ethernet works well for this
  • Monitor the Off pushbutton and cycle the mode of the sprinkler through the 3 states: Off/Manual/Scheduled
    • 1 Digital Input
  • Yellow LED goes on when in the Manual state
    • 1 Digital Output
  • Green LED goes on when in the Schedule state
    • 1 Digital Output
  • Monitor the ground moisture (Note: I haven’t done much research on how these sensors work so this might change)
    • 1 Analog Input
  • Persist the Manual and Scheduled programs so that a power cycle won't lose these values

The sprinkler modes need a little more discussion.  When in the Off mode the sprinkler heads will not turn on, but the board remains powered, listens for HTTP requests, and monitors the push button.  When cycling to the Off mode from any other mode, the sprinklers will turn off if they were on.  When cycling to the Manual mode from any other mode, the sprinkler will immediately run the manual schedule, turning on the appropriate zones for the appropriate length of time.  If no Manual schedule exists then the sprinkler does nothing.  In Scheduled mode the sprinkler waits for the programmed day and time to turn on the appropriate zones for the appropriate length of time, unless the ground is already wet or rain is in the forecast.
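To make the mode handling concrete, here is a minimal C# sketch of how the push button could cycle through the three states.  The type and member names are my own illustration, not the actual project code.

public enum SprinklerMode
{
    Off,
    Manual,
    Scheduled
}

public class ModeController
{
    public SprinklerMode Mode { get; private set; }

    public ModeController()
    {
        Mode = SprinklerMode.Off;
    }

    // Called from the push button input handler.
    public void OnButtonPressed()
    {
        switch (Mode)
        {
            case SprinklerMode.Off:
                Mode = SprinklerMode.Manual;      // yellow LED on, run the manual program
                break;
            case SprinklerMode.Manual:
                Mode = SprinklerMode.Scheduled;   // green LED on, wait for scheduled times
                break;
            default:
                Mode = SprinklerMode.Off;         // all zones forced off immediately
                break;
        }
    }
}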

The Remote Control

The remote control is the only way to program the sprinkler since it doesn’t have any UI for this task.  There can be many different devices that serve as the remote control but I intend to use my Samsung Focus Windows Phone 7 for this purpose. 

The application on this device just needs to send HTTP GET and POST requests.  Depending on the type of request, a JSON message might be required in the body of the request (i.e. when sending data to the sprinkler).  Likewise, depending on the type of request, the response may contain JSON (i.e. when returning data from the sprinkler).
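For example, a command to the sprinkler might look something like the following sketch.  The URL and JSON shape are placeholders I made up for illustration; on Windows Phone 7 the request would be issued with the asynchronous HttpWebRequest pattern, but the overall shape is the same.

using System;
using System.IO;
using System.Net;
using System.Text;

class RemoteControlExample
{
    static void Main()
    {
        // Hypothetical endpoint and payload -- the real routes and fields may differ.
        var request = (HttpWebRequest)WebRequest.Create("http://sprinkler.local/zones/run");
        request.Method = "POST";
        request.ContentType = "application/json";

        byte[] body = Encoding.UTF8.GetBytes("{\"zone\":1,\"minutes\":15}");
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());  // e.g. a JSON status document
        }
    }
}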

I chose to use HTTP and JSON as the communication mechanism between the remote control and the sprinkler so that I could remain platform independent.       

Connecting the Remote to the Sprinkler

The Netduino sprinkler sits behind my home firewall.  If I want to talk to the sprinkler with a device that is not behind the firewall then things start to get a little painful.  I would basically have the following options:

  • Don’t expose the sprinkler to the outside world (kind of limiting).
  • The sprinkler microcontroller would have to poll some server on the internet for any new messages that it should process (lots of busy work for the controller).
  • Punch a hole in my firewall so I can get through it from the internet (can you please hack me).
  • Use Windows Azure Service Bus(no brainer).

The Service Bus allows me to make outbound connections to Windows Azure cloud infrastructure and it keeps that connection open so that any external device can make remote procedure calls to the endpoint behind the firewall. I have decided to use the v 1.0 release of service bus for now, but in the future I could see this changing where I would use more of a publish/subscribe messaging infrastructure (which is in a future release of service bus) rather than a remote procedure call.

To leverage the Service Bus you must have a host that sits behind the firewall and makes the connection to the Azure cloud platform.  For the purpose of this post I am calling this service the Home Connector.  The responsibility of this service is to connect to the Service Bus as a host so that it can accept remote procedure calls from a client.  The client in this case is what I call the Remote Connector.

The Home Connector

The Home Connector is a Windows service that runs on one of my Windows machines behind my firewall.  When a remote procedure call comes in, it is converted to an HTTP GET or POST JSON request that is sent to the Netduino sprinkler.  The response from the Netduino is then parsed and returned to the RPC caller.  This routing of Service Bus messages to devices behind my firewall is built with the mindset that more than one Netduino microcontroller will be servicing RPC calls from a remote device over the internet, so this architecture is not limited to just the sprinkler system.  I intend to add more microcontrollers in the same manner and register them with the Home Connector so that they too can service RPC requests.

The Remote Connector

I could have skipped this layer between the phone and the sprinkler.  Since the phone is not able to use the Service Bus DLLs directly, I could have used the Service Bus WebHttpRelayBinding, which would allow me to submit messages to the bus over a REST-style API directly from the phone.  But I wanted another layer between the phone and the sprinkler so that I could cache some of the requests and prevent my sprinkler from getting bombarded with messages.  I needed a lightweight web framework that would make handling HTTP GET/POST JSON messages easy.

I chose the NancyFX framework because it seemed to fit the bill of being quick and easy to get up and running.  That sure was the case when I pulled it down and started building out the first HTTP GET handler.  I simply created an empty web site and used NuGet to install NancyFX into the existing blank site.  After that I created a module class, defined my routes and their handlers, and had my first GET request running in about 15 minutes.  NancyFX also handled processing my JSON messages with very little effort on my part.  All I really needed to do was create a model that represented the JSON message and perform a bind operation on it, and the model ended up fully populated.  I haven't tried caching the responses yet, but I don't think that will be too hard.
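A minimal sketch of what such a module might look like, using the classic Nancy routing syntax from that era (the routes and the ZoneCommand model are placeholders of mine, not the actual project code):

using Nancy;
using Nancy.ModelBinding;

// Hypothetical model representing the JSON body of a zone command.
public class ZoneCommand
{
    public int Zone { get; set; }
    public int Minutes { get; set; }
}

public class SprinklerModule : NancyModule
{
    public SprinklerModule()
    {
        // GET handler: return the current status as JSON.
        Get["/status"] = _ => Response.AsJson(new { Mode = "Scheduled" });

        // POST handler: bind the JSON body to the model, then forward it on.
        Post["/zones/run"] = _ =>
        {
            var command = this.Bind<ZoneCommand>();
            // ...relay the command through the Home Connector to the sprinkler...
            return HttpStatusCode.OK;
        };
    }
}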

It is important to understand that this remote connector does not have to be on an Azure web role to work.  I could easily deploy this web site to another hosting provider that might be a little cheaper to use.

Conclusion

The Netduino, Service Bus, and NancyFX web framework all made it pretty easy to get going on connecting devices in my home to my phone.  At the time of this post I haven't finished the sprinkler system, but I do have an end-to-end example of using the Windows Phone to control my Netduino behind my firewall without punching any holes in my router.  I spent more time working out JSON parsing issues across multiple devices than actually getting the infrastructure in place.

This opens up a whole new world of possibilities for me of connecting multiple home devices to my phone and other services.  Before I go to a multiple device household I will most likely move away from the RPC calls and introduce a more publish/subscribe model of passing messages around.  That way I can decouple the message producers from the message consumers.  I will probably wait for the newer Azure Service Bus bits before I tackle that problem though. 

One thing that I started to think about while doing this project is how much smarts (code) I should be placing in the Netduino device.  Right now I have a considerable amount of code that performs all the scheduling functionality on the Netduino.  So once the Netduino receives its pre-programmed schedule it can basically run without any other communication from the outside world (as long as the power doesn't cycle).  However, the scheduling functionality that is built into my sprinkler code is kind of limiting.  If I wanted to add more scheduling features it would require building a lot of the logic into the Netduino sprinkler code, which also means deploying more bits to the sprinkler device.  As you can imagine, this could develop into a deployment nightmare if a lot of customers were using this product.

There are ways to solve that kind of deployment issue by automating the update process, but another solution is to remove the scheduling smarts from the sprinkler device itself and place that logic in a cloud service.  Basically the sprinkler device would know nothing about a schedule; it would simply be told when to turn on and how long each zone should run.  This would eliminate a lot of code on the device and make it easier to add new features to the service.  Of course, that means the sprinkler device has to be connected to the internet at all times in order to work, but that's doable.  I don't intend to move in that direction yet, but once I finish the original design I will explore building out a Home Automation as a Service (HAAS) model.

Keep a watch on my blog for the future posts where I will be diving deeper into each layer of the system and showing some code.  Also I will be posting the source code to the project at some point for others to see.

Unit Testing Netduino code

by Mike Linnen 20. March 2011 20:19

I really enjoy being able to write C# code and deploy/debug it on my Netduino device.  However there are many cases where I would like to do a little Test Driven Development to flush out a coding problem without deploying to the actual hardware.  This becomes a little difficult since the .Net Micro Framework doesn’t have an easily available testing framework.  There are some options that you have in order to write and execute tests:

Well I have another option that works with the full blown NUnit framework if you follow a few conventions when writing your Netduino code.  This approach does not use the emulator so you need to be able to break up your application into two different types of classes:

  • Classes that use .Net Micro or Netduino specific libraries
  • Classes that do not use .Net Micro or Netduino specific libraries.

This seems a little strange, but another way to look at the organization of your classes is that if any code does IO then it belongs in the non-testable classes.  Any other code that performs logical decisions or calculations belongs in the testable classes.  This is a common approach in a lot of systems, and the IO classes are usually categorized as a hardware abstraction layer.  You can put interfaces in front of the hardware abstraction layer so that during unit testing you can fake or mock out the hardware and simulate conditions that exercise the decision-making code.

Enough talking about approaches to unit testing; let's get going on an example project that shows how this works.  For this example I am creating a Netduino application that reads an analog light sensor and turns an LED on or off based on the light intensity.

Here is how the solution is set up so that I can do unit testing.

The solution consists of two projects:

  • Netduino.SampleApplication - a Netduino Application built against the .Net Micro Framework
  • Netduino.SampleApplication.UnitTests – a .Net 4.0 Class Library

The Netduino.SampleApplication.UnitTests project references the following:

[Screenshot: references of the Netduino.SampleApplication.UnitTests project]

Notice that this unit test project does not reference the assembly that it will be targeting for testing.  This is done on purpose because a .Net 4.0 Assembly cannot reference an assembly built against the .Net Micro Framework.  The project does reference the NUnit testing framework.  

Now let's talk about the class that we are going to write tests against.  Since analog sensors can sometimes be a little noisy, I wanted to take multiple samples of the sensor and average the results so that any noisy readings are smoothed out.  This class accepts sensor readings and provides an average of the last N readings.

Here is the AnalogSmoother class

[Screenshot: the AnalogSmoother class]
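Since the screenshot isn't reproduced here, below is a rough reconstruction of what such a class might look like based on the description (my own sketch, not the original code):

public class AnalogSmoother
{
    private readonly int[] _readings;   // circular buffer of the last N readings
    private readonly int _size;
    private int _count;                 // how many slots have been filled so far
    private int _next;                  // index of the slot to overwrite next

    public AnalogSmoother(int size)
    {
        _size = size;
        _readings = new int[size];
    }

    // Record a new sensor reading, overwriting the oldest one once the buffer is full.
    public void Add(int reading)
    {
        _readings[_next] = reading;
        _next = (_next + 1) % _size;
        if (_count < _size)
        {
            _count++;
        }
    }

    // Average of the readings added so far (at most the last N).
    public double Average
    {
        get
        {
            if (_count == 0)
            {
                return 0;
            }
            int sum = 0;
            for (int i = 0; i < _count; i++)
            {
                sum += _readings[i];
            }
            return (double)sum / _count;
        }
    }
}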

This is a pretty simple class that exposes one operation called Add and one property called Average.  One thing to notice is that I have removed any using statements (Microsoft.SPOT) that would make this class .Net Micro specific or Netduino specific. 

To test this we need to use a cool Visual Studio feature called “Add as Link” where you can add an existing class to another project by linking to the original file.  If you change the original file, the project that has the linked file will also see the change.  To add the linked file you simply right click on the Netduino.SampleApplication.UnitTests project, select Add –> Existing Item, navigate to the AnalogSmoother.cs file, and select the down arrow on the Add button to choose Add As Link.

[Screenshot: the Add As Link option in the Add Existing Item dialog]

So now you have a single file that is compiled in the Netduino project and the Unit Test project.  This makes it very easy to create a test fixture class in the unit test project that exercises the linked class. 

Here is the test fixture class:

[Screenshot: the AnalogSmoother test fixture]
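Again sketched rather than copied from the original, an NUnit fixture for the class above might look like this:

using NUnit.Framework;

[TestFixture]
public class AnalogSmootherTests
{
    [Test]
    public void Average_Is_Computed_Over_The_Added_Readings()
    {
        var smoother = new AnalogSmoother(5);

        smoother.Add(10);
        smoother.Add(20);
        smoother.Add(30);

        Assert.AreEqual(20, smoother.Average);
    }

    [Test]
    public void Oldest_Reading_Is_Dropped_When_Capacity_Is_Exceeded()
    {
        var smoother = new AnalogSmoother(2);

        smoother.Add(10);
        smoother.Add(20);
        smoother.Add(40);   // overwrites the reading of 10

        Assert.AreEqual(30, smoother.Average);
    }
}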

So I was able to test this class without starting up an emulator or deploying to the Netduino.  This is great for classes that do not need to perform any IO but eventually you are going to run into a case where you need to access the specific hardware of the Netduino.  This is where the hardware abstraction layer comes into play. 

In this sample application I created the following interface:

[Screenshot: the IHardwareLayer interface]

Here is the class that implements the interface and does all the actual IO:

[Screenshot: the hardware layer class that performs the actual IO]

Here is the class that uses the IHardwareLayer interface; it contains some additional logic that can be tested using the same approach of adding the file as a link to the unit test project.

[Screenshot: the class that uses IHardwareLayer]
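To make the pattern concrete, here is a rough sketch of how those pieces could fit together.  The member names, threshold value, and LED behavior are my own guesses based on the description; the real hardware implementation would use the Netduino analog input and output port classes and live only in the Netduino project.

// Hardware abstraction layer: the only code that touches Netduino-specific IO.
public interface IHardwareLayer
{
    int ReadLight();        // raw analog reading from the light sensor
    void SetLed(bool on);   // turn the LED on or off
}

// Logic class: no Microsoft.SPOT or Netduino references, so it can be unit tested.
public class LightController
{
    private readonly IHardwareLayer _hardware;
    private readonly int _threshold;

    public LightController(IHardwareLayer hardware, int threshold)
    {
        _hardware = hardware;
        _threshold = threshold;
    }

    // Read the sensor and decide whether the LED should be on.
    public void Check()
    {
        int light = _hardware.ReadLight();
        _hardware.SetLed(light < _threshold);   // LED on when it is dark
    }
}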

This class has to be tested a little differently though, because it actually expects the IHardwareLayer to return values when calling ReadLight.  We can simulate the hardware returning known values by providing a fake implementation of the IHardwareLayer interface.  This can be done easily by creating a FakeHardwareLayer that implements IHardwareLayer and returns the expected values, or you can use a mocking framework such as Moq to do the work for you.

[Screenshot: the Moq-based test for the LED logic]

The Moq mocking framework allows you to Setup specific scenarios and Verify that those scenarios are working.  The above test verifies that the LED does turn on and off for specific values of Light Readings.
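A sketch of what such a Moq test could look like, written against the LightController sketch above:

using Moq;
using NUnit.Framework;

[TestFixture]
public class LightControllerTests
{
    [Test]
    public void Led_Turns_On_When_The_Light_Reading_Is_Below_The_Threshold()
    {
        var hardware = new Mock<IHardwareLayer>();
        hardware.Setup(h => h.ReadLight()).Returns(10);    // simulate a dark reading

        var controller = new LightController(hardware.Object, 100);
        controller.Check();

        hardware.Verify(h => h.SetLed(true), Times.Once());
    }

    [Test]
    public void Led_Turns_Off_When_The_Light_Reading_Is_Above_The_Threshold()
    {
        var hardware = new Mock<IHardwareLayer>();
        hardware.Setup(h => h.ReadLight()).Returns(500);   // simulate a bright reading

        var controller = new LightController(hardware.Object, 100);
        controller.Check();

        hardware.Verify(h => h.SetLed(false), Times.Once());
    }
}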

Conclusion

I have shown you that unit testing is doable for Netduino projects if you follow a couple of design patterns, and you don't have to wait for a testing framework to become available for the .Net Micro Framework.

UPDATE: I made a couple small tweaks to the code and posted it on my NetduinoExamples repository under the UnitTestingExample subfolder.

Twitter Feed format for FIRST FRC 2010 Season

by Mike Linnen 20. January 2010 21:43

UPDATE: The feed changed a little bit from the first time I published the format

I made changes to the twitter feed format to match the game for the FIRST FRC 2010 Season.  You can follow the tweets for this season at http://twitter.com/Frcfms 

The new format is as follows:

#FRCABC - where ABC is the Event Code. Each event has a unique code.
TY X - where X is P for Practice, Q for Qualification, or E for Elimination
MC X - where X is the match number
RF XXX - where XXX is the Red Final Score
BF XXX - where XXX is the Blue Final Score
RE XXXX YYYY ZZZZ - where XXXX is red team 1 number, YYYY is red team 2 number, ZZZZ is red team 3 number
BL XXXX YYYY ZZZZ - where XXXX is blue team 1 number, YYYY is blue team 2 number, ZZZZ is blue team 3 number
RB X - where X is the Bonus the Referee gave to Red
BB X - where X is the Bonus the Referee gave to Blue
RP X - where X are the Penalties the Referee gave to Red
BP X - where X are the Penalties the Referee gave to Blue
RG X - where X is the Goals scored by Red
BG X - where X is the Goals scored by Blue
RGP X - where X is the Goal Penalties by Red
BGP X - where X is the Goal Penalties by Blue

Example tweet in text:

#FRCTEST TY Q MC 2 RF 5 BF 3 RE 3224 2119 547 BL 587 2420 342 RB 1 BB 1 RP 0 BP 0 RG 0 BG 5 RGP 2 BGP 1
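For anyone who wants to consume the feed, here is a rough sketch of how a tweet in this format might be split into fields in C#.  This is just an illustration of mine, not an official parser, and it assumes the field order shown above.

using System;
using System.Collections.Generic;

class FrcTweetParserExample
{
    static void Main()
    {
        string tweet = "#FRCTEST TY Q MC 2 RF 5 BF 3 RE 3224 2119 547 " +
                       "BL 587 2420 342 RB 1 BB 1 RP 0 BP 0 RG 0 BG 5 RGP 2 BGP 1";

        string[] tokens = tweet.Split(' ');
        string eventCode = tokens[0].Substring(4);        // strip the "#FRC" prefix

        var fields = new Dictionary<string, string>();    // single-valued fields by tag
        var redTeams = new List<string>();
        var blueTeams = new List<string>();

        for (int i = 1; i < tokens.Length; )
        {
            string tag = tokens[i++];
            if (tag == "RE" || tag == "BL")
            {
                List<string> teams = tag == "RE" ? redTeams : blueTeams;
                for (int t = 0; t < 3; t++)
                {
                    teams.Add(tokens[i++]);
                }
            }
            else
            {
                fields[tag] = tokens[i++];
            }
        }

        Console.WriteLine("Event {0}, match {1}: red {2} - blue {3}",
            eventCode, fields["MC"], fields["RF"], fields["BF"]);
    }
}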

I sure would like to know if anyone builds anything that parses these tweets.

Monitoring a BBQ Smoker with a Pub/Sub Message System

by Mike Linnen 3. January 2010 20:43

At one of the Charlotte Alt.Net meetings I gave a presentation on how I utilized a simple Pub/Sub messaging architecture to allow for several applications to communicate across machine boundaries.  This blog post won’t be about that system I demoed at the meeting, but it will be about a suggested example that will get the Pub/Sub concept across in a simple way.

Requirements 

The system that is being built to demonstrate the Pub/Sub concept will fulfill the following requirements:

  • Simulate a BBQ Smoker
  • Monitor the temperature of a BBQ Smoker
  • Provide feedback on the smoker's current temperature.
  • Provide visual Alarm indicators that identify three states of the temperature
    • Low
    • Normal
    • High
  • Ability to have multiple smokers
  • Ability to have multiple monitors monitoring a single smoker

Pub/Sub Infrastructure

The Pub/Sub Infrastructure was taken from a previous MSDN article written by Juval Lowy.  The infrastructure uses WCF as the communications mechanism. The source code I have in my sample project might be a little different than what was in the original MSDN article so I will explain how it is organized in the Visual Studio Solution. 

First a little background on what makes up a Pub/Sub Messaging system.  A Pub/Sub Messaging system has three components: Broker, Publisher and Subscriber.

Broker

The broker is the enabler.  The broker’s job is to connect publishers with subscribers. The broker contains a list of subscribers and what messages they are interested in.  The broker exposes endpoints that allow for subscribers to subscribe to messages and a publisher to publish interesting messages.  In my example solution the broker is a WCF service that is hosted by a console application (Broker.ConsoleHost).  Since this is a WCF service it can also be hosted under IIS or a Windows Service just as easily.

The WCF Contract for messages that the broker accepts for the BBQ Smoker is as follows:

[Screenshot: the WCF contract for the smoker messages]
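The contract itself isn't reproduced here, but based on the description it would look roughly like the following.  The interface name IMessage comes from later in the post; the operation signatures are my own guesses.

using System.ServiceModel;

// Messages the smoker publishes; the same contract serves as the subscriber callback.
[ServiceContract]
public interface IMessage
{
    [OperationContract(IsOneWay = true)]
    void TemperatureChanged(int temperature);

    [OperationContract(IsOneWay = true)]
    void SmokerAlarm(string alarmState);
}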

Since the broker also manages what the subscribers want to subscribe to, a contract exists for subscription management as well:

[Screenshot: the WCF subscription contract]
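A subscription contract in this style of framework generally looks something like the sketch below; the names and signatures here are my assumptions, not the exact contract from the article.

using System.ServiceModel;

// Lets a subscriber tell the broker which event operations it is interested in.
[ServiceContract]
public interface ISubscriptionService
{
    [OperationContract]
    void Subscribe(string eventOperation);

    [OperationContract]
    void Unsubscribe(string eventOperation);
}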

Publisher

The only thing the publisher knows is that when it has anything to publish it simply sends the message to the broker.  The publisher has no idea of the final destination of the message, or whether there even is a final destination.  This promotes a very decoupled system in that publishers know nothing about their subscribers.  In my example solution the publisher is the Smoker device, and it publishes the Smoker Alarm and Temperature Changed messages.  A Smoker Alarm message is published when the smoker reaches a temperature that is too low or too high.  A Temperature Changed message is published whenever the temperature of the smoker changes at all.

Since the smoker temperature is simulated by a slider on the UI, here is the event that fires when the slider changes and publishes the Alarm and Temperature Changed messages:

[Screenshot: the slider changed event handler that publishes the messages]
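A sketch of what that handler could look like (the control names, threshold values, and alarm strings are placeholders of mine; _publish is a WCF client proxy for the IMessage contract sketched earlier):

// Inside the Smoker (publisher) form.
private void temperatureSlider_ValueChanged(object sender, EventArgs e)
{
    int temperature = temperatureSlider.Value;

    // Every change is published as a Temperature Changed message.
    _publish.TemperatureChanged(temperature);

    // Publish an alarm message when the temperature leaves the normal range.
    if (temperature < 200)
    {
        _publish.SmokerAlarm("Low");
    }
    else if (temperature > 250)
    {
        _publish.SmokerAlarm("High");
    }
    else
    {
        _publish.SmokerAlarm("Normal");
    }
}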

The _publish object in the above code snippet is simply a web service proxy that calls the broker.

Subscriber

A subscriber communicates with the broker to tell it what published messages it is interested in.  The subscriber in my example solution is the Smoker Monitor.  The job of the subscriber is to listen for Smoker Alarm and Temperature Changed messages and display information to the user when these messages arrive.  The Smoker Alarm message will turn the display Yellow when the temperature is too low and Red when the temperature is too high.  The temperature changed message will update the screen with the actual temperature that came from the smoker.

The subscriber needs to register the class that implements the callback interface (IMessage) that gets executed when either of the messages is received.  In my example solution this is done in the constructor of the Form1 class in the Smoker Monitor project, as shown below:

[Screenshot: the Form1 constructor registering the subscriber with the broker]

Additional logic exists in the SmokerAlarm (callback) method that sets the UI components to the proper color and text based on the alarm state as shown below:

[Screenshot: the SmokerAlarm callback updating the UI]
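The callback itself is ordinary WinForms code; here is a sketch of what it might do (colors, control names, and alarm strings are my own):

// Callback invoked by the broker when a Smoker Alarm message is published.
public void SmokerAlarm(string alarmState)
{
    // The callback may arrive on a non-UI thread, so marshal to the UI thread.
    if (InvokeRequired)
    {
        Invoke(new Action<string>(SmokerAlarm), alarmState);
        return;
    }

    if (alarmState == "Low")
    {
        statusPanel.BackColor = Color.Yellow;
        statusLabel.Text = "Temperature too low";
    }
    else if (alarmState == "High")
    {
        statusPanel.BackColor = Color.Red;
        statusLabel.Text = "Temperature too high";
    }
    else
    {
        statusPanel.BackColor = Color.Green;
        statusLabel.Text = "Temperature OK";
    }
}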

Conclusion

Since the broker is capable of hooking up multiple subscribers to a single publisher you can run multiple monitors across multiple machines all monitoring the same smoker.  This is very powerful because the monitors could be providing the same feedback to the user just in a different location of the house or the monitor could be providing feedback via a different means. For example another monitor could be created that sent a text message or a twitter message when the alarm is published.  These new monitors can be added without changing the existing contract.  Even a third party could create a monitor that did some special thing on published messages and you could run their monitor without changing the broker or smoker.      

A zip that contains the entire code for the BBQ Smoker Monitor via Pub/Sub messaging can be found here.  Feel free to download the code and look at it in more detail.  Also take a look at the ReadMe.txt for additional information on how to run the application.

I hope to expand on this in a later project that will allow me to create an actual embedded device that can detect the temperature of a smoker instead of using a simulated Smoker Device.  This embedded device will have to be able to publish the messages to the broker so that a PC does not have to be connected to the smoker.


Using Powershell to manage application configuration

by Mike Linnen 1. June 2009 20:03

Doing agile development means deploying your application very frequently into many environments.  You might have several environments that all have different configuration settings, and changing these settings by hand results in time-consuming mistakes.  I have built a couple of different console deployment tools over the years that handled this.  Usually you would run the console tool with a command-line argument specifying the environment you wanted to configure, and the tool would look up all the settings in a property file and make the changes in the app.config or web.config file.  I thought it would be fun to do something similar using PowerShell.

So what do I want this script to do? 

  • Keep environment specific settings in a property file
  • Support having 1 to N property files (e.g. Dev, Test, Build, etc.) for a project, with the desired file accepted as a parameter to the script
  • The web.config or app.config file is modified with the values that come from the property file
  • Support changing connection strings and application settings in version 1 of this script

The first step in creating this script is to have a set of functions that can easily be used to set the entries in a config file.  I would use this script in all my projects that need the ability to set connection strings and application settings.  This library of useful functions will be called Xml-Config.ps1:

function Set-ConnectionString([string]$fileName, [string]$outFileName, [string]$name, [string]$value)
{
    # Load the config file up in memory
    [xml]$a = get-content $fileName

    # Find the connection string to change
    $a.configuration.connectionStrings.SelectSingleNode("add[@name='" + $name + "']").connectionString = $value

    # Write it out to the new file
    Format-XML $a | out-file $outFileName
}
function Set-ApplicationSetting([string]$fileName, [string]$outFileName, [string]$name, [string]$value)
{
    # Load the config file up in memory
    [xml]$a = get-content $fileName

    # Find the app settings item to change
    $a.configuration.appSettings.SelectSingleNode("add[@key='" + $name + "']").value = $value

    # Write it out to the new file
    Format-XML $a | out-file $outFileName
}
function Format-XML ([xml]$xml, $indent=2) 
{ 
    $StringWriter = New-Object System.IO.StringWriter 
    $XmlWriter = New-Object System.XMl.XmlTextWriter $StringWriter 
    $xmlWriter.Formatting = "indented" 
    $xmlWriter.Indentation = $Indent 
    $xml.WriteContentTo($XmlWriter) 
    $XmlWriter.Flush() 
    $StringWriter.Flush() 
    Write-Output $StringWriter.ToString() 
}

Next I needed a script file that was specific to the software project.  I wouldn’t be re-using this script from project to project as it has very specific details that only apply to one project.  For example a software project might have a web.config for the web application but an app.config for a windows service.  It would be the job of this second script to know where these configs are located and tie the property values to the functions above.  In the following example the web.config has two settings that I want to change at deployment time (connection string and application setting).  These settings will be different for Test, Build and Development environments.  This script will be called dev.ps1.

dev.ps1

param([string]$propertyFile)

$workDir = Get-Location
. $workDir\Xml-Config.ps1
. $workDir\$propertyFile.ps1

# Change the connection string (input and output file are both web.config)
Set-ConnectionString "web.config" "web.config" "FMS_DB" $connectionString

# Change the app setting for the path to the backups
Set-ApplicationSetting "web.config" "web.config" "DatabaseBackupRoot" $backupPath

Next I needed to create the property files that represented each environment. The following property file was called Test.ps1.

[string]$connectionString = "Data Source=(local); Database=FMS_TST; Integrated Security=true;"
[string]$backupPath = "c:\data\test"

So now the dev.ps1 script can be called passing in the environment that is being deployed.  In the following example the Test environment is being deployed. 

./dev.ps1 -propertyFile Test

Conclusion

I have shown you a simple way to use powershell to easily configure your application by using property files.  This technique can also be used in the automated build process to set the configuration before running integration tests.  I plan on expanding on this technique and exposing functions to set other frequently used configuration settings (logging details, WCF Endpoints, Other XML configurations).  Powershell is very powerful and makes automating complex tasks easier with less code.  In my command line console applications that performed the same function it would take a lot more lines of code to achieve the same results. 

Tools for Agile Development presentation materials

by Mike Linnen 17. May 2009 18:19

In this post you will find the PowerPoint and source code I used for my presentation at the Charlotte Alt.Net May meeting.  I had a good time presenting this to the group even though it was very broad and shallow.  I covered the basics of why you want to leverage tools and practices in a Lean-Agile environment.  I got into topics like source control, unit testing, mocking, continuous integration, and automated UI testing.  Each of these topics could have been an entire one-hour presentation on its own.

Here are the links to the tools that I talked about in the presentation:

Power Point “Tools for Agile Development: A Developer’s Perspective” http://www.protosystem.net/downloads/ToolsForAgileCharlotteAlt.Net/ToolsForAgile.ppt

NerdDinner solution with MSTests re-written as NUnit Tests and WatiN Automated UI Tests http://www.protosystem.net/downloads/ToolsForAgileCharlotteAlt.Net/NerdDinner_ToolsForAgileDev.zip

CI Factory modified solution that I used for creating a CI Build from scratch http://www.protosystem.net/downloads/ToolsForAgileCharlotteAlt.Net/CIFactory_ToolsForAgileDev.zip

Presenting at Charlotte Alt.Net user’s group

by Mike Linnen 4. May 2009 20:41

I will be presenting “Tools for Agile Development: A Developer’s Perspective” at the Charlotte Alt.Net User’s Group May 7th.  Get the details here http://www.charlottealtnet.org/.  Also there will be a second presentation after mine called “PowerShell as a Tools Platform”. 

I better get moving on cleaning up my presentation :)

Using PowerShell in the build process

by Mike Linnen 8. April 2009 15:37

I have used NAnt and MSBuild for years on many projects, and I always thought there had to be a better way to script build processes.  Well, I took a look at PowerShell and psake and so far I like it.  psake is a PowerShell script that makes breaking your build script up into target tasks very easy.  These tasks can also have dependencies on other tasks.  This allows you to call into the build script requesting a specific task and have the dependent tasks executed first.  This concept is not anything new to build frameworks, but it is a great starting point for using the goodness of PowerShell in a build environment.

You can get psake from the Google Code repository.  I first tried the download link for v0.21 but I had some problems getting it to run my tasks so I went directly to the source and grabbed the tip version (version r27 at the time) of psake.ps1 and my problems went away. 

You can start off by using the default.ps1 script as a basis for your build process.  For a simple build process that I wanted to have for some of my small projects I wanted to be able to do the following:

  • “Clean” the local directory
  • “Compile” the VS 2008 Solution
  • “Test” the nunit tests
  • “Package” the results into a zip for easy xcopy deployment

Here is what I ended up with as a starting point for my default.ps1.

[Screenshot: the starting default.ps1 with path properties and empty target tasks]

This really doesn't do anything so far except set up some properties for paths and define the target tasks I want to support.  The psake.ps1 script assumes your build script is named default.ps1 unless you pass in another script name as an argument.  Also, since a default task is defined in my build script, if I don't pass in a target task then the default task is executed, which I have pointed at Test.

Build invoked without any target task:

[Screenshot: build output when invoked without a target task]

Build invoked with the target task Package specified:

[Screenshot: build output when invoked with the Package target task]

So now I have the shell of my build, so let's get it to compile my Visual Studio 2008 solution.  All I have to do is add code to the Compile task to launch VS 2008, passing in some command line options.

[Screenshot: the Compile task launching VS 2008 with command line options]

And here it is in action:

[Screenshot: build output from the Compile task]

Notice I had to pipe the result of the command line call to “out-null”.  If I didn't do this, the call to VS 2008 would run in the background and control would be passed back to my PowerShell script immediately.  I want to be sure that my build script waits for the compile to complete before it continues on.

What if the compile fails?  As it is right now the build script does not detect whether the compile completed successfully.  VS 2008 (and previous versions of VS) returns an exit code that indicates whether the compile was successful; if the exit code is 0 you can assume it was.  So all we need to do is test the exit code after the call to VS 2008.  PowerShell makes this easy with the $LastExitCode variable.  Throwing an exception in a task is detected by the psake.ps1 script and stops the build for you.

[Screenshot: the Compile task checking $LastExitCode and throwing on failure]

I placed a syntax error in a source file and when I call the Test target the Compile target fails and the Test target never executes:

[Screenshot: build output showing the failed Compile task stopping the build before Test runs]

Now I want to add the ability to get the latest code from my source control repository.  Here is where I wanted the ability to support multiple solutions for different source control repositories like Subversion or SourceGear Vault.  But let's get it working first with Subversion and then refactor it later to support other repositories.  For starters, let's simply add the support for getting the latest code to the Compile task.

[Screenshot: the Compile task updated to get the latest code from Subversion]

As you can see, right now this is very procedural and could certainly use some refactoring, but let's get it working first and then worry about refactoring.  Here it is in action:

[Screenshot: build output including the Subversion get-latest step]

As I mentioned before, I want to be able to support multiple source control solutions.  The idea here is something similar to what CI Factory uses.  In CI Factory you have what is known as a Package.  A Package is nothing more than a build script implementation of a particular problem for a given product.  For example you might have a source control package that uses Subversion and another source control package that uses SourceGear Vault.  You simply include the package for the source control product that you are using.  psake also allows you to include external scripts in your build process.  Here is how we would change what we have right now to support multiple source control solutions.

So I created a Packages folder under the folder where my psake.ps1 script resides.  I then created a file called SourceControl.SVN.ps1 in the Packages folder that looks like the following:

[Screenshot: the SourceControl.SVN.ps1 package containing the GetLatest function]

In the default.ps1 script's Compile task I replaced the inline get-latest source control code (told you I was going to refactor it) with a call to the SourceControl.GetLatest function (Line #20).  I also added a call to the psake include function (Line #10), passing in “Packages\SourceControl.SVN.ps1”.  Here is what the default.ps1 looks like now:

[Screenshot: the updated default.ps1 with the include call and the SourceControl.GetLatest call]

So if I wanted to support SourceGear Vault I would simply create another package file called SourceControl.Vault.ps1 and place the implementation inside the GetLatest function and change the include statement in the default.ps1 script to reference the vault version of source control.  I plan on adding in support for my Unit Tests the same way I did the source control, that way I can easily have support for multiple unit testing frameworks.

Conclusion

As you can see it is pretty easy to use PowerShell to replace your build process.  This post was just a short introduction on how you might get started and end that crazy XML syntax that has been used for so long in build scripting.  I have a lot more to do on this to make it actually usable for some of my small projects but hopefully I can evolve it into something that will be easy to maintain and reliable.  All in all I think PowerShell has some pretty cool ways of scripting some nice build solutions. 

Charlotte Lean-Agile Open - Tools for Agile development

by Mike Linnen 24. March 2009 09:05

I had a lot of fun presenting at the Lean-Agile Open.  It was a good turnout too; I think there were at least 15 in my track.  I wanted to give a shout out and thank Guy Beaver from Netobjectives and Ettain Group for inviting me to present and making it all happen.  I wish I would have been able to attend Guy's presentation, but I was a little bit busy.  Also, Steve Collins of Richland County Information Technology gave a powerful presentation on his experiences with Agile.  You could tell he is very passionate about Lean-Agile approaches.

 

If any of you reading this attended my session, I welcome feedback both good and bad; just send it along in an email to mlinnen@protosystem.net.  Attached to the end of this post is the slide deck I used in the presentation.

Here are some links to the tools that I have used and that I spoke about in the presentation:

SourceGear Vault - Source control used by the developers and the build

NUnit - Unit Testing Framework used by the developers for TDD and Unit Tests. Also used in the build.

Test Driven .Net - Visual Studio Add-in used to make doing TDD easier and just launching tests for debug or code coverage purposes.

NCover - Code coverage of unit tests used by the developers and the build.

Rhino Mocks - Mocking our dependencies to make unit testing easier.

CI Factory - This was used to get our build up and running fast.  It includes build script solutions for many different build problems that you might want to solve.  It uses CruiseControl.Net (CCNet) under the hood. 

NAnt - Used to script the build.  If you are using CI Factory or CCNet, NAnt is already packaged with these products, so there is no need to download it.  The web site is a great resource when it comes time to alter the build.

WatiN - Used for UI Automated testing of web pages.  WatiN was used both by the developers and the build.

Sandcastle - Used by the build to create documentation of our code.

NDepend - Used by the build for static analysis of the code base and dependency graphs.   

IE Developer Tool bar - Internet Explorer add in to analyze web pages. 

Firebug - Firefox addon to analyze web pages.

ScrewTurn Wiki - For documenting release notes and provide a place for customer feedback.

Google Docs - Sprint backlog and burn down charts.

 

As I stated in my session, the above tools are what I used and had good success with, but my needs may not be the same as yours, so you owe it to your team to evaluate tools based on your own needs.

ToolsForAgile.ppt (1.88 mb)
