Monitoring a BBQ Smoker with a Pub/Sub Message System

by Mike Linnen 3. January 2010 20:43

At one of the Charlotte Alt.Net meetings I gave a presentation on how I utilized a simple Pub/Sub messaging architecture to let several applications communicate across machine boundaries.  This blog post won't be about the system I demoed at the meeting; instead it walks through a suggested example that gets the Pub/Sub concept across in a simple way.


The system that is being built to demonstrate the Pub/Sub concept will fulfill the following requirements:

  • Simulate a BBQ Smoker
  • Monitor the temperature of a BBQ Smoker
  • Provide feedback on what temperature the Smoker is currently at
  • Provide visual Alarm indicators that identify three states of the temperature
    • Low
    • Normal
    • High
  • Ability to have multiple smokers
  • Ability to have multiple monitors monitoring a single smoker

Pub/Sub Infrastructure

The Pub/Sub Infrastructure was taken from a previous MSDN article written by Juval Lowy.  The infrastructure uses WCF as the communications mechanism. The source code in my sample project may differ a little from what was in the original MSDN article, so I will explain how it is organized in the Visual Studio Solution.

First a little background on what makes up a Pub/Sub Messaging system.  A Pub/Sub Messaging system has three components: Broker, Publisher and Subscriber.


The broker is the enabler.  The broker’s job is to connect publishers with subscribers. The broker contains a list of subscribers and what messages they are interested in.  The broker exposes endpoints that allow for subscribers to subscribe to messages and a publisher to publish interesting messages.  In my example solution the broker is a WCF service that is hosted by a console application (Broker.ConsoleHost).  Since this is a WCF service it can also be hosted under IIS or a Windows Service just as easily.

The WCF Contract for messages that the broker accepts for the BBQ Smoker is as follows:
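The contract listing was a screenshot in the original post and did not survive extraction. Here is a hedged reconstruction: the interface name IMessage and the two operations come from the post itself, but the parameter lists and the AlarmState enum are my assumptions.

```csharp
using System.ServiceModel;

// Hedged sketch of the broker's message contract; parameters and the
// AlarmState enum are guesses, not the post's actual code.
[ServiceContract]
public interface IMessage
{
    [OperationContract(IsOneWay = true)]
    void SmokerAlarm(string smokerId, AlarmState state);

    [OperationContract(IsOneWay = true)]
    void TemperatureChanged(string smokerId, int temperature);
}

public enum AlarmState { Low, Normal, High }
```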


Since the broker also manages what subscribers want to subscribe to a contract also exists for subscribers as follows:
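That snippet was also an image; below is a hedged sketch of the subscription contract, patterned after Juval Lowy's MSDN pub/sub framework. The exact names in the sample solution may differ.

```csharp
using System.ServiceModel;

// Sketch of the subscription contract; the duplex callback is the
// message contract itself, so subscribers receive published messages
// back over the same channel. Names here are my guesses.
[ServiceContract(CallbackContract = typeof(IMessage))]
public interface ISubscriptionService
{
    [OperationContract]
    void Subscribe(string eventOperation);

    [OperationContract]
    void Unsubscribe(string eventOperation);
}
```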



The only thing the publisher knows is that when it has anything to publish it simply sends the message to the Broker.  The publisher has no idea of the final destination of the message, or whether there even is a final destination.  This promotes a very decoupled system in that the publisher knows nothing about its subscribers.  In my example solution the publisher is the Smoker device, and it publishes the Smoker Alarm and Temperature Changed messages.  A Smoker Alarm message is published when the smoker reaches a temperature that is too low or too high.  A Temperature Changed message is published when the temperature of the smoker has changed at all.

Since the Smoker Temperature is simulated by a slider on the UI here is the event that fires when the slider changes and publishes the Alarm and Temperature changed messages:
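The handler was shown as a screenshot in the original post; this is a hedged reconstruction. The control name, the thresholds, and the AlarmState values are assumptions.

```csharp
// Hypothetical slider handler on the Smoker device form; _publish is
// the proxy to the broker, and the threshold names are made up.
private void trackBarTemperature_ValueChanged(object sender, EventArgs e)
{
    int temperature = trackBarTemperature.Value;

    // Publish the new reading on every change
    _publish.TemperatureChanged(_smokerId, temperature);

    // Publish an alarm message reflecting the current state
    if (temperature < LowThreshold)
        _publish.SmokerAlarm(_smokerId, AlarmState.Low);
    else if (temperature > HighThreshold)
        _publish.SmokerAlarm(_smokerId, AlarmState.High);
    else
        _publish.SmokerAlarm(_smokerId, AlarmState.Normal);
}
```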


The _publish object in the above code snippet is simply a web service proxy that calls the broker.


A subscriber communicates with the broker to tell it what published messages it is interested in.  The subscriber in my example solution is the Smoker Monitor.  The job of the subscriber is to listen for Smoker Alarm and Temperature Changed messages and display information to the user when these messages arrive.  The Smoker Alarm message will turn the display Yellow when the temperature is too low and Red when the temperature is too high.  The temperature changed message will update the screen with the actual temperature that came from the smoker.

The subscriber needs to register the class that implements the interface (IMessage) that will serve as the callback executed when either of these messages is received.  In my example solution this is done in the constructor of the Form1 class in the Smoker Monitor project as shown below:
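The constructor was shown as a screenshot; here is a hedged sketch of what the hookup likely looked like. The endpoint configuration name and the empty-string "subscribe to everything" convention are assumptions based on Juval Lowy's framework.

```csharp
// Sketch of the subscriber hookup; Form1 implements IMessage so it can
// receive the callbacks from the broker.
public partial class Form1 : Form, IMessage
{
    private readonly ISubscriptionService _subscription;

    public Form1()
    {
        InitializeComponent();

        // Register this form as the callback object for incoming messages
        var context = new InstanceContext(this);
        var factory = new DuplexChannelFactory<ISubscriptionService>(
            context, "SubscriptionEndpoint");
        _subscription = factory.CreateChannel();
        _subscription.Subscribe(string.Empty);
    }

    // The SmokerAlarm and TemperatureChanged callbacks (the IMessage
    // implementation) are described later in the post
}
```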


Additional logic exists in the SmokerAlarm (callback) method that sets the UI components to the proper color and text based on the alarm state as shown below:
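The listing itself is missing, so this is a hedged sketch of that callback. The Yellow/Red color mapping comes from the post; the control name, the Normal-state color, and the label text are my assumptions. (Marshaling the call onto the UI thread is omitted for brevity.)

```csharp
// Sketch of the SmokerAlarm callback in the Smoker Monitor.
public void SmokerAlarm(string smokerId, AlarmState state)
{
    switch (state)
    {
        case AlarmState.Low:
            lblTemperature.BackColor = Color.Yellow;
            lblTemperature.Text = "Temperature too low";
            break;
        case AlarmState.High:
            lblTemperature.BackColor = Color.Red;
            lblTemperature.Text = "Temperature too high";
            break;
        default:
            lblTemperature.BackColor = SystemColors.Control;
            lblTemperature.Text = "Temperature normal";
            break;
    }
}
```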



Since the broker is capable of hooking up multiple subscribers to a single publisher, you can run multiple monitors across multiple machines all monitoring the same smoker.  This is very powerful because the monitors could be providing the same feedback to the user just in a different location of the house, or a monitor could be providing feedback via a different means. For example another monitor could be created that sent a text message or a Twitter message when the alarm is published.  These new monitors can be added without changing the existing contract.  Even a third party could create a monitor that did something special with published messages, and you could run their monitor without changing the broker or smoker.

A zip that contains the entire code for the BBQ Smoker Monitor via Pub/Sub messaging can be found here.  Feel free to download the code and look at it in more detail.  Also take a look at the ReadMe.txt for additional information on how to run the application.

I hope to expand on this in a later project that will allow me to create an actual embedded device that can detect the temperature of a smoker instead of using a simulated Smoker Device.  This embedded device will have to be able to publish the messages to the broker so that a PC does not have to be connected to the smoker.


Using Powershell to manage application configuration

by Mike Linnen 1. June 2009 20:03

Doing agile development means deploying your application very frequently into many environments.  You might have several environments that all have different configuration settings.  Changing these settings by hand results in time-consuming mistakes.  I have built a couple of different console deployment tools over the years that handled this.  Usually you would run the console tool with a command line argument that specified the environment you wanted to configure, and the tool would look up all the settings in a property file and make the changes in the app.config or web.config file.  I thought it would be fun to do something similar using PowerShell.

So what do I want this script to do? 

  • Keep environment specific settings in a property file
  • Support having 1 to N property files (i.e. Dev, Test, Build etc) for a project that is accepted as a parameter into the script
  • The web.config or app.config file is modified with the values that come from the property file
  • Support changing connection strings and application settings in version 1 of this script

The first step in creating this script is to have a set of functions that can easily be used to set the entries in a config file.  I would use this script in all my projects that needed the ability to set connection and application settings.  This library of useful functions will be called Xml-Config.ps1.

function Set-ConnectionString([string]$fileName, [string]$outFileName, [string]$name, [string]$value)
{
    # Load the config file up in memory
    [xml]$a = get-content $fileName

    # Find the connection string to change
    $a.configuration.connectionStrings.SelectSingleNode("add[@name='" + $name + "']").connectionString = $value

    # Write it out to the new file
    Format-XML $a | out-file $outFileName
}

function Set-ApplicationSetting([string]$fileName, [string]$outFileName, [string]$name, [string]$value)
{
    # Load the config file up in memory
    [xml]$a = get-content $fileName

    # Find the app settings item to change
    $a.configuration.appSettings.SelectSingleNode("add[@key='" + $name + "']").value = $value

    # Write it out to the new file
    Format-XML $a | out-file $outFileName
}

function Format-XML([xml]$xml, $indent=2)
{
    $StringWriter = New-Object System.IO.StringWriter
    $XmlWriter = New-Object System.Xml.XmlTextWriter $StringWriter
    $XmlWriter.Formatting = "indented"
    $XmlWriter.Indentation = $indent
    $xml.WriteTo($XmlWriter)
    $XmlWriter.Flush()
    Write-Output $StringWriter.ToString()
}

Next I needed a script file that was specific to the software project.  I wouldn’t be re-using this script from project to project as it has very specific details that only apply to one project.  For example a software project might have a web.config for the web application but an app.config for a windows service.  It would be the job of this second script to know where these configs are located and tie the property values to the functions above.  In the following example the web.config has two settings that I want to change at deployment time (connection string and application setting).  These settings will be different for Test, Build and Development environments.  This script will be called dev.ps1.



param([string]$propertyFile)

$workDir = Get-Location
. $workDir\Xml-Config.ps1
. $workDir\$propertyFile.ps1

# Change the connection string (writing the result back to web.config)
Set-ConnectionString "web.config" "web.config" "FMS_DB" $connectionString

# Change the app setting for the path to the backups
Set-ApplicationSetting "web.config" "web.config" "DatabaseBackupRoot" $backupPath

Next I needed to create the property files that represented each environment. The following property file was called Test.ps1.

[string]$connectionString = "Data Source=(local); Database=FMS_TST; Integrated Security=true;"
[string]$backupPath = "c:\data\test"

So now the dev.ps1 script can be called passing in the environment that is being deployed.  In the following example the Test environment is being deployed. 

./dev.ps1 -propertyFile Test


I have shown you a simple way to use PowerShell to easily configure your application by using property files.  This technique can also be used in the automated build process to set the configuration before running integration tests.  I plan on expanding on this technique and exposing functions to set other frequently used configuration settings (logging details, WCF endpoints, other XML configurations).  PowerShell is very powerful and makes automating complex tasks easier with less code.  The command line console applications I built that performed the same function took a lot more lines of code to achieve the same results.

Tools for Agile Development presentation materials

by Mike Linnen 17. May 2009 18:19

In this post you will find the PowerPoint deck and source code I used for my presentation at the Charlotte Alt.Net May meeting.  I had a good time presenting this to the group even though it was very broad and shallow.  I covered the basics of why you want to leverage tools and practices in a Lean Agile environment.  I got into topics like Source Control, Unit Testing, Mocking, Continuous Integration and Automated UI Testing.  Each of these topics could have been an entire 1 hour presentation on its own.

Here are the links to the tools that I talked about in the presentation:

Power Point “Tools for Agile Development: A Developer’s Perspective”

NerdDinner solution with MSTests re-written as NUnit Tests and WatiN Automated UI Tests

CI Factory modified solution that I used for creating a CI Build from scratch

Presenting at Charlotte Alt.Net user’s group

by Mike Linnen 4. May 2009 20:41

I will be presenting “Tools for Agile Development: A Developer’s Perspective” at the Charlotte Alt.Net User’s Group May 7th.  Get the details here.  Also there will be a second presentation after mine called “PowerShell as a Tools Platform”.

I better get moving on cleaning up my presentation :)

Using PowerShell in the build process

by Mike Linnen 8. April 2009 15:37

I have used NAnt and MSBuild for years on many projects and I always thought there had to be a better way to script build processes.  Well I took a look at PowerShell and psake and so far I like it.  psake is a PowerShell script that makes breaking up your build script into target tasks very easy.  These tasks can also have dependencies on other tasks.  This allows you to call into the build script requesting a specific task to be built and have the dependent tasks get executed first.  This concept is not anything new to build frameworks but it is a great starting point to use the goodness of PowerShell in a build environment.

You can get psake from the Google Code repository.  I first tried the download link for v0.21 but I had some problems getting it to run my tasks so I went directly to the source and grabbed the tip version (version r27 at the time) of psake.ps1 and my problems went away. 

You can start off by using the default.ps1 script as a basis for your build process.  For a simple build process that I wanted to have for some of my small projects I wanted to be able to do the following:

  • “Clean” the local directory
  • “Compile” the VS 2008 Solution
  • “Test” the NUnit tests
  • “Package” the results into a zip for easy xcopy deployment

Here is what I ended up with as a starting point for my default.ps1.


This really doesn't do anything so far except set up some properties for paths and define the target tasks I want to support.  The psake.ps1 script assumes your build script is named default.ps1 unless you pass in another script name as an argument.  Also, since a task named default is defined in my build script, if I don't pass in a target task then the default task is executed, which I have pointed at Test.

Build invoked without any target task:


Build invoked with the target task Package specified:


So now I have the shell of my build, so let's get it to compile my Visual Studio 2008 solution.  All I have to do is add the code to the Compile task to launch VS 2008 passing in some command line options.
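The snippet is missing here, but the change is small; something along these lines, with devenv.exe assumed to be at the usual VS 2008 install location:

```powershell
task Compile -depends Clean {
  # Launch VS 2008 from the command line; piping to out-null forces the
  # script to wait for the compile to finish
  & "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" `
    $solution /Rebuild Release | out-null
}
```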


And here it is in action:


Notice I had to pipe the result of the command line call to “out-null”.  If I didn't do this the call to VS 2008 would run in the background and control would be passed back to my PowerShell script immediately.  I want to be sure that my build script would wait for the compile to complete before it would continue on. 

What about if the compile fails?  As it is right now the build script does not detect that the compile completed successfully or not.  VS 2008 (and previous versions of VS) return an exit code that defines if the compile was successful or not.  If the exit code = 0 then you can assume it was successful.  So all we need to do is test the exit code after the call to VS 2008.  PowerShell makes this easy with the $LastExitCode variable.  Throwing an exception in a task is detected by the psake.ps1 script and stops the build for you.
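A sketch of the Compile task with that exit code check added (the devenv.exe path is still a placeholder):

```powershell
task Compile -depends Clean {
  & "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" `
    $solution /Rebuild Release | out-null

  # devenv returns 0 on success; anything else fails the build, and
  # psake treats the thrown exception as a build failure
  if ($LastExitCode -ne 0) {
    throw "Compile failed with exit code $LastExitCode"
  }
}
```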


I placed a syntax error in a source file and when I call the Test target the Compile target fails and the Test target never executes:


Now I want to add in the ability to get the latest code from my source control repository.  Here is where I wanted the ability to support multiple solutions for different source control repositories like Subversion or SourceGear Vault.  But let's get it to work first with Subversion and then later refactor it to support other repositories.  For starters let's simply add the support for getting the latest code in the Compile task.
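The procedural first cut might look like this, with svn.exe assumed to be on the path and $source_dir a property pointing at the working copy:

```powershell
task Compile -depends Clean {
  # Get the latest code straight from Subversion before compiling
  & svn update $source_dir | out-null
  if ($LastExitCode -ne 0) {
    throw "svn update failed with exit code $LastExitCode"
  }

  & "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" `
    $solution /Rebuild Release | out-null
  if ($LastExitCode -ne 0) {
    throw "Compile failed with exit code $LastExitCode"
  }
}
```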


As you can see right now this is very procedural and could certainly use some refactoring, but let's get it to work first and then worry about refactoring.  Here it is in action:


As I mentioned before I want to be able to support multiple source control solutions.  The idea here is something similar to what CI Factory uses.  In CI Factory you have what is known as a Package.  A Package is nothing more than a build script implementation of a given problem for a given product.  For example you might have a source control package that uses Subversion and another source control package that uses SourceGear Vault.  You simply include the package for the source control product that you are using.  Psake also allows you to include external scripts in your build process.  Here is how we would change what we have right now to support multiple source control solutions.

So I created a Packages folder under the current folder that my psake.ps1 script resides. I then created a file called SourceControl.SVN.ps1 in the Packages folder that looked like the following:
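The listing is missing, but based on the SourceControl.GetLatest call described in the post, it likely looked something like this:

```powershell
# Packages\SourceControl.SVN.ps1 -- Subversion implementation of the
# source control package; the parameter is an assumption
function SourceControl.GetLatest([string]$workingDir)
{
  & svn update $workingDir | out-null
  if ($LastExitCode -ne 0) {
    throw "svn update failed with exit code $LastExitCode"
  }
}
```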


In the default.ps1 script Compile task I replaced the source control get-latest code (told you I was going to refactor it) with a call to the SourceControl.GetLatest function.  I also added a call to the psake include function, passing in “Packages\SourceControl.SVN.ps1”.  Here is what the default.ps1 looks like now:
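A sketch of the refactored default.ps1 (properties and paths remain placeholders):

```powershell
include "Packages\SourceControl.SVN.ps1"

properties {
  $base_dir = Get-Location
  $solution = "$base_dir\MyProject.sln"
  $source_dir = "$base_dir\src"
}

task default -depends Test

task Compile -depends Clean {
  # Getting latest is now one call into the source control package
  SourceControl.GetLatest $source_dir

  & "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" `
    $solution /Rebuild Release | out-null
  if ($LastExitCode -ne 0) {
    throw "Compile failed with exit code $LastExitCode"
  }
}
```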


So if I wanted to support SourceGear Vault I would simply create another package file called SourceControl.Vault.ps1 and place the implementation inside the GetLatest function and change the include statement in the default.ps1 script to reference the vault version of source control.  I plan on adding in support for my Unit Tests the same way I did the source control, that way I can easily have support for multiple unit testing frameworks.


As you can see it is pretty easy to use PowerShell to replace your build process.  This post was just a short introduction on how you might get started and end that crazy XML syntax that has been used for so long in build scripting.  I have a lot more to do on this to make it actually usable for some of my small projects but hopefully I can evolve it into something that will be easy to maintain and reliable.  All in all I think PowerShell has some pretty cool ways of scripting some nice build solutions. 

Charlotte Lean-Agile Open - Tools for Agile development

by Mike Linnen 24. March 2009 09:05

I had a lot of fun presenting at the Lean-Agile Open.  It was a good turnout also.  I think there were at least 15 in my track.  I wanted to shout out and thank Guy Beaver from Netobjectives and Ettain group for inviting me to present and making it all happen.  I wish I would have been able to attend Guy's presentation but I was a little bit busy.  Also Steve Collins of Richland County Information Technology gave a powerful presentation on his experiences with Agile.  You could tell he was very passionate about Lean-Agile approaches.


If any of you reading this attended my session, I welcome feedback both good and bad; just send it along in an email to .  Attached to the end of this post is the slide deck I used in the presentation.

Here are some links to the tools that I have used and that I spoke about in the presentation:

SourceGear Vault - Source control used by the developers and the build

NUnit - Unit Testing Framework used by the developers for TDD and Unit Tests. Also used in the build.

Test Driven .Net - Visual Studio Add-in used to make doing TDD easier and just launching tests for debug or code coverage purposes.

NCover - Code coverage of unit tests used by the developers and the build.

Rhino Mocks - Mocking our dependencies to make unit testing easier.

CI Factory - This was used to get our build up and running fast.  It includes build script solutions for many different build problems that you might want to solve.  It uses CruiseControl.Net (CCNet) under the hood. 

NAnt - Used to script the build.  If you are using CI Factory or CCNet NAnt is already packaged with these products so there is no need to download it.  The web site is a great resource when it comes time to alter the build.

WatiN - Used for UI Automated testing of web pages.  WatiN was used both by the developers and the build.

Sandcastle - Used by the build to create documentation of our code.

NDepend - Used by the build for static analysis of the code base and dependency graphs.   

IE Developer Tool bar - Internet Explorer add in to analyze web pages. 

Firebug - Firefox addon to analyze web pages.

ScrewTurn Wiki - For documenting release notes and provide a place for customer feedback.

Google Docs - Sprint backlog and burn down charts.


As I stated in my session the above tools are what I used and had good success with but my needs may not be the same as your needs so you owe it to your team to evaluate your own tools based on your own needs.

ToolsForAgile.ppt (1.88 mb)



Adding Twitter Notifications to your build

by Mike Linnen 10. March 2009 21:01

I added twitter posts to the FIRST FRC Field Management System, so that interested parties could get the results of a match in near real time.  Since twitter is focused around sending small messages I thought it would be a great mechanism to notify team members when the status of the build changes.  Most build solutions have a way already to do this, but they come in the form of an email or a custom program that sits in your tray waiting to notify you.  Twitter messages on the other hand can be consumed many different ways (web, twitter client, cell phone etc).  This gives great flexibility on how each team member decides on how he/she want to monitor the build process.  In this blog post I will show you how you can add twitter build notifications to a build process.

First you should get a twitter account so you can tell your team members what account they should follow to get the notifications.  You might want to set up your twitter account as private so you can manage who is allowed to follow.  Also this brings up a good point as you should not send any sensitive data in your build message tweet because the messages are sent across the wire and anyone can intercept them.

Next go get the Yedda Twitter C# Library.  This is a C# wrapper around the Twitter API.  It is very easy to use.  You can use the binary from the project or use the Twitter class that is part of the project.

All build processes that I have used (TFS, NANT, CCNET, and MSBUILD) allow for command line applications to be called from the build script.  So we will use the Twitter.cs class found in the Yedda C# Library in a console application to expose its capability of sending twitter updates.  Go ahead and create the Console application and add the Twitter.cs class to it.  Then in the Program.cs Main method write some code to parse a few command line options to pass along to the Twitter.Update method.
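The Program.cs listing is missing here; this is a hedged sketch. The argument names follow the example call below, and the exact Update overload on the Yedda Twitter class is an assumption.

```csharp
using System;
using Yedda;

class Program
{
    static void Main(string[] args)
    {
        string user = null, password = null, message = null;

        // Walk the arguments in option/value pairs
        for (int i = 0; i < args.Length - 1; i++)
        {
            switch (args[i].ToLower())
            {
                case "-user": user = args[++i]; break;
                case "-password": password = args[++i]; break;
                case "-message": message = args[++i]; break;
            }
        }

        if (user == null || password == null || message == null)
        {
            Console.WriteLine(
                "Usage: tc -user name -password pwd -message \"text\"");
            Environment.Exit(1);
        }

        // Post the status update via the Yedda wrapper
        new Twitter().Update(user, password, message);
    }
}
```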


Example command line call to the executable:

tc -user twitterUserName -password twitterPassword -message "Build Failed to compile"

Example tweet generated from the above command line:


Now all you have to do is put the new console executable in a place on your build box that is accessible by the automated build and change your build script to call it with the right message.  You can make the tweet a little more informative as to why the build failed, or you can have the build tweet at certain key points of the process so you know exactly what step the build is on.  Be creative but don't send too many messages or the team members will soon ignore all build tweets as they end up being annoying.

Possibilities for improvement

  • You could make a twitter client that monitors the build tweets, parses the message from the build, and reacts differently based on whether the build failed on a compiler error or a unit test.  Maybe some static analysis failed but it isn't severe enough to warrant immediate attention.  The client might try harder to grab the team members' attention if the severity of the message is high enough.
  • What about a twitter client that parses the messages and controls a traffic red light?  Green is build passed.  Red is build failed.  Yellow is unit tests failed.

New Twitter feed for FIRST FRC 2009 Field Management System

by Mike Linnen 24. February 2009 21:09

I have blogged several times before about my involvement in building the Field Management System that runs the FIRST FRC events.  Each year I have worked very hard with 2 other engineers on trying to build the best possible experience for the volunteers that run the event, teams that participate in the event, and the audience that attends the event.  This year we wanted to extend the experience to beyond those that actually attend the event.  We wanted to have a way to announce the results of the matches as they are happening on the field.  This has been done in the past by updating an HTML web page that gets posted on the FIRST web site.  But we wanted something more that could be used by the teams in their quest for knowledge on what is happening during each event on their device of choice.

So I am very pleased to say that this year's events will have Twitter updates for each match as they are completed on the field.  All you have to do is follow the FRCFMS twitter account in order to get match updates from all events.  The tweets that are posted follow a specific format that should allow the teams to build really cool applications on top of the twitter data.  Here is an example tweet from our test event:


As you can see in the tweet it is a little hard to read, as we are jamming a bunch of information into the 140 character limitation, but it should be very easy to parse with a bot of some sort.

The format is defined as follows:

#FRCABC - where ABC is the Event Code.  Each event has a unique code.

TYP X - where X is P for Practice, Q for Qualification, E for Elimination

MCH X - where X is the match number

ST X - where X is A for Autonomous, T for Teleoperated, C for Complete

TIM XXX - where XXX is the time left

RFIN XXX - where XXX is the Red Final Score

BFIN XXX - where XXX is the Blue Final Score

RED XXX YYY ZZZ - where XXX is red team 1 number, YYY is red team 2 number, ZZZ is red team 3 number

BLUE XXX YYY ZZZ - where XXX is blue team 1 number, YYY is blue team 2 number, ZZZ is blue team 3 number

RCEL X - where X is the red Super cell count

BCEL X - where X is the blue Super cell count

RROC X - where X is the red rock and red Empty Cell count

BROC X - where X is the blue rock and blue Empty Cell count
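As a quick illustration of how parseable the format is, here is a hedged C# sketch that pulls the final scores out of a match tweet. The sample tweet is made up; only the token names come from the format above.

```csharp
using System;
using System.Text.RegularExpressions;

class TweetScores
{
    static void Main()
    {
        // Hypothetical tweet following the format above
        string tweet = "#FRCTEST TYP Q MCH 12 ST C TIM 0 RFIN 52 BFIN 48 " +
                       "RED 111 222 333 BLUE 444 555 666 " +
                       "RCEL 1 BCEL 0 RROC 2 BROC 3";

        // Grab the red and blue final scores by their token names
        Match final = Regex.Match(tweet, @"RFIN (\d+) BFIN (\d+)");
        if (final.Success)
        {
            int red = int.Parse(final.Groups[1].Value);
            int blue = int.Parse(final.Groups[2].Value);
            Console.WriteLine("Red {0} - Blue {1}", red, blue);
        }
    }
}
```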

There are some cool ways you can use twitter to get the information you want for a specific event.  Hop on over to and enter in the following #FRCTEST TYP Q and you will get a list of all qualifying matches for the TEST event.  When the events start this weekend you can substitute the TEST code with the event code of your choice.  The FIRST FRC Team update has a list of all the valid event codes.

You can also use the with your favorite RSS reader to get updates in RSS format.

If other tweeple are tweeting about the event and using the same hashcode that the Field Management System uses then you can hop on over to #hashtags and enter in the hash code for the event and see all tweets for that event.  For example try navigating to and you will see all the tweets for the #frctest event that we have been running to test the Field Management System.

Although for week one the match tweets will only be sent at the end of each match, for week 2 we are thinking about upping the frequency of these tweets so that you get more of them while the match is in play.  This will make it very difficult for a human to read the tweets on a small device because there will be too many of them coming.  I would like to hear anyone's thoughts on what the frequency of tweets should be and whether they expect to be reading the tweets rather than parsing them with another tool.  Of course if you intend to read the tweets and you are only interested in the final match result, you could use the advanced search capabilities to only view tweets that have the status of complete.  That search would look something like this:


It will be really cool to see how the information we are posting is going to be used!


Robotics | Software

TFS Build reports partial fail even when all tests passed

by Mike Linnen 8. January 2009 09:33

I ran into a strange issue today when I was trying to figure out why my TFS Build was reporting that the build was partially successful even though every test was passing.  The normal build report really did not give any good reason why it was partially successful other than the fact that it was something related to the unit tests (I am using MS Test in this case).  So I cracked open the build log and peeled through the entries.  I noticed that when the code coverage was attempting to instrument the assemblies it reported that several of the assemblies could not be located.  Then I remembered I did some refactoring and I renamed and consolidated some assemblies. 

Well that should be an easy fix all I had to do was remove these assemblies from the test run config in the Code Coverage section.  So I opened up the LocalTestRun.testrunconfig file in Visual Studio 2008 and selected the Code Coverage section to make my changes.  As soon as I did this the config editor closed down (crashed).  Wow that was weird I never saw that before.  Hmmm I wonder what it could be.  Well here is what I did to try and locate the issue.

  1. Perhaps the Test Run Config file needs to be checked out of source control for write access.  Nope that wasn't it.
  2. Well if I can't edit it in VS 2008 then I might as well try notepad.  I removed the offending assemblies using notepad in the LocalTestRun.testrunconfig.  However once I opened up the Test Run Config editor and selected the Code Coverage section the editor still crashed.
  3. Perhaps I malformed the Test Run Config xml file.  So I opened it up again in notepad and the XML looked fine.  Besides, if this XML was malformed I don't think the Test Run Config editor would have opened at all.
  4. Consult almighty search engine.  Wow look what I found and it was only reported 2 days ago.

So to be sure that I got the Test Run Config file right I removed my Database project from my solution made my edits for Code Coverage in the Test Run Config editor, then added the database project back into the solution. 

After fixing the Test Run Config file my build ran successfully.

Vista Media Center

by Mike Linnen 28. December 2008 02:34

I have wanted to set up a Media Center PC for a long time now and I finally got a chance to do just that this weekend.  I have to say that Vista Media Center has really impressed me.  The flexibility of having a PC that manages many media elements such as pictures, music, and movies and being able to stream that content to multiple devices in the house is awesome.  At this point I don't have my media center PC hooked up to my current broadcasting provider as I do not have a tuner that is capable of receiving the signal.  Not having a tuner is probably the one thing that has kept me so long from setting this up.  However currently I am not disappointed in missing the tuner because there is so much that can be done with media center.

In my house we currently have 4 PCs and an XBOX 360.  I have all of them networked together to gain access to the Internet.  Beyond a small amount of file and printer sharing the network served as an Internet provider.  Well now all that has changed.  With a central machine in place acting as the media center all other Vista PCs have direct access to the same content.  This content is currently music, pictures and recorded video.  It is such a nice thing to be able to stream this content to the XBOX 360. 

I used to think that having a large coax video distribution network throughout the house was a requirement in order to get video from one room to the next.  However video cable routing to each room can be pretty expensive.  Then you have to have the means to be able to control the video source when you are in another room.  Making this solution all Ethernet based is a real nice alternative, especially when you have PC's all over the house anyway.  And the benefits of not limiting the content to video only is really cool too.

We have a pretty large DVD collection as well.  With kids in the house the DVDs always get some abuse.  They soon start skipping or won't work at all.  So I have started saving our collection to the PC in order to preserve the original DVDs.  This works really well with a couple added applications.  First of all you need My Movies 2 in order to manage the collection and extend VMC to make it easy to view the movie on any PC in the house.  My Movies 2 comes as a server and a client component.  You only need to install the server part on the master media center; all other PCs get the client part.  My Movies 2 has a really slick install that walks you through modifying the VMC menu options.  Next you need DVDfab in order to rip your collection.  I installed DVDfab on the master Media Center as that is where I will be managing my collection anyway.  Lastly, if you don't convert the ripped video files to a well known format for the XBOX 360, you will need another application called Transcode 360.  This product will take the video files and convert them on the fly to a compatible format for the XBOX 360.

I think my next step in getting a great audio visual experience in my house is to get one of the Media Center Extenders and place it in our living room.  I think the extender would provide me with all I need for living room entertainment. 

Beyond that the only thing I would be missing is a way to get my DirecTV recorded programs accessible from VMC.  As I see it there are 2 options for this.  Option 1 would be to get a DirecTV tuner that would go into my PC.  However this currently is not available.  Option 2 is to get my existing DirecTV DVRs on the network so that I can expose the programs to VMC. 

About the author

Mike Linnen

Software Engineer specializing in Microsoft Technologies
