Presenting “Getting Started with Microsoft Robotics Developer Studio 4 and the Kinect”

by Mike Linnen 6. May 2012 14:16

I am excited about presenting on this topic for the Charlotte Alt.Net users group on May 8th in Charlotte. Head on over to the event posting and sign up to attend.

Here are the details about the talk:

The most recent release of Microsoft Robotics Developer Studio 4 (RDS 4) has introduced two very exciting concepts that make building robotic applications a reality for all developers: the Kinect and the Reference Platform Design specification.  The Kinect is the hot device that gives a new perspective on sensing your surroundings.  RDS 4 fully supports the Kinect and opens up all kinds of opportunities for awesome applications.  Do you want skeletal tracking in a robotics application? RDS 4 gives you that.  Do you want to perform obstacle avoidance with the Kinect's depth sensor? RDS 4 gives you that. Do you want to simulate a Kinect in a virtual environment to test out your high-level code? RDS 4 gives you that.  The Reference Platform gives vendors a common design specification for building a working robot that includes sensors, motors, and low-level control. This allows a developer with little hardware experience to get up and running fast.  In this session I will introduce you to RDS 4 using the Kinect and an Eddie robot.

Eddie Robot http://www.parallax.com/eddie

Microsoft Robotics Developer Studio http://www.microsoft.com/robotics/

Unit Testing Netduino code

by Mike Linnen 20. March 2011 20:19

I really enjoy being able to write C# code and deploy/debug it on my Netduino device.  However, there are many cases where I would like to do a little Test Driven Development to flush out a coding problem without deploying to the actual hardware.  This is a little difficult since the .Net Micro Framework doesn't have an easily available testing framework, though there are a few options for writing and executing tests.

Well, I have another option that works with the full-blown NUnit framework if you follow a few conventions when writing your Netduino code.  This approach does not use the emulator, so you need to be able to break up your application into two different types of classes:

  • Classes that use .Net Micro or Netduino specific libraries
  • Classes that do not use .Net Micro or Netduino specific libraries.

This may seem a little strange, but another way to look at the organization of your classes is this: if any code does IO, it belongs in the non-testable classes; any other code that performs logical decisions or calculations belongs in the testable classes.  This is a common approach in a lot of systems, and the IO classes are usually categorized as a hardware abstraction layer.  You can put interfaces in front of the hardware abstraction layer so that during unit testing you can fake or mock out the hardware and simulate conditions that exercise the decision-making code.

Enough talking about approaches to unit testing; let's get going on an example project that shows how this works.  For this example I am creating a Netduino application that reads an analog sensor measuring the intensity of light and turns an LED on or off.

Here is how the solution is set up so that I can do unit testing.

The solution consists of two projects:

  • Netduino.SampleApplication - a Netduino Application built against the .Net Micro Framework
  • Netduino.SampleApplication.UnitTests – a .Net 4.0 Class Library

The Netduino.SampleApplication.UnitTests project references the following:

[image: references of the Netduino.SampleApplication.UnitTests project]

Notice that this unit test project does not reference the assembly that it will be targeting for testing.  This is done on purpose because a .Net 4.0 assembly cannot reference an assembly built against the .Net Micro Framework.  The project does reference the NUnit testing framework.

Now let's talk about the class that we are going to write tests against. Since analog sensors can sometimes be a little noisy, I wanted to take multiple samples of the sensor and average the results so that any noisy readings are smoothed out.  This class accepts sensor readings and provides an average of the last N readings.

Here is the AnalogSmoother class:

[image: the AnalogSmoother class]

This is a pretty simple class that exposes one operation called Add and one property called Average.  One thing to notice is that I have removed any using statements (such as Microsoft.SPOT) that would make this class .Net Micro-specific or Netduino-specific.
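Since the class itself appeared only as a screenshot, here is a minimal sketch of what a class like AnalogSmoother could look like. The circular buffer and the member names are my assumptions, not the original code; the key point is that it uses no Microsoft.SPOT or Netduino namespaces, so it compiles under both the .Net Micro Framework and .Net 4.0:

```csharp
// Hypothetical reconstruction of the AnalogSmoother described above.
// It keeps the last N readings in a circular buffer and averages them.
public class AnalogSmoother
{
    private readonly int[] _readings;
    private int _count;   // how many slots are filled so far
    private int _next;    // next slot to overwrite

    public AnalogSmoother(int size)
    {
        _readings = new int[size];
    }

    public void Add(int reading)
    {
        _readings[_next] = reading;
        _next = (_next + 1) % _readings.Length;
        if (_count < _readings.Length)
            _count++;
    }

    public int Average
    {
        get
        {
            if (_count == 0)
                return 0;
            int sum = 0;
            for (int i = 0; i < _count; i++)
                sum += _readings[i];
            return sum / _count;
        }
    }
}
```

Because nothing here touches hardware, an NUnit fixture in the .Net 4.0 project can exercise it directly.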

To test this we need a cool Visual Studio feature called “Add as Link”, which lets you add an existing class to another project by linking to the original file.  If you change the original file, the project with the linked file also sees the change.  To add the linked file, right-click the Netduino.SampleApplication.UnitTests project, select Add –> Existing Item, navigate to the AnalogSmoother.cs file, and select the down arrow on the Add button.

[image: the Add as Link option on the Add button]

So now you have a single file that is compiled in the Netduino project and the Unit Test project.  This makes it very easy to create a test fixture class in the unit test project that exercises the linked class. 
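Under the hood, “Add as Link” records the file in the unit test project's .csproj as a Compile item whose Link metadata gives the in-project name. The relative path below is only an illustration, not the actual layout of the sample solution:

```xml
<Compile Include="..\Netduino.SampleApplication\AnalogSmoother.cs">
  <Link>AnalogSmoother.cs</Link>
</Compile>
```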

Here is the test fixture class:

[image: the test fixture class]

So I was able to test this class without starting up an emulator or deploying to the Netduino.  This is great for classes that do not need to perform any IO but eventually you are going to run into a case where you need to access the specific hardware of the Netduino.  This is where the hardware abstraction layer comes into play. 

In this sample application I created the following interface:

[image: the IHardwareLayer interface]

Here is the class that implements the interface and does all the actual IO:

[image: the hardware layer implementation]

Here is the class that uses the IHardwareLayer interface.  It has some more logic that can be tested using the same approach of adding the linked file to the unit test project:

[image: the class that uses IHardwareLayer]

This class has to be tested a little differently though, because it actually expects the IHardwareLayer to return values when calling ReadLight.  We can simulate the hardware returning correct values by providing a fake implementation of the IHardwareLayer interface.  This can be done easily by creating a FakeHardwareLayer that implements IHardwareLayer and returns the expected values.  Or you can use a mocking framework such as Moq to do the work for you.

[image: the Moq-based test]

The Moq mocking framework allows you to Setup specific scenarios and Verify that those scenarios are working.  The above test verifies that the LED turns on and off for specific light reading values.
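To make the shape of this concrete, here is a dependency-free sketch of the approach using a hand-rolled fake instead of Moq. The interface members, the LightController name, and the threshold logic are all my guesses at what the screenshotted code looked like, not the original source:

```csharp
// Hypothetical hardware abstraction interface: the only code that
// touches Netduino IO lives behind it.
public interface IHardwareLayer
{
    int ReadLight();        // read the analog light sensor
    void SetLed(bool on);   // switch the LED
}

// Decision-making class: testable because it only talks to the interface.
public class LightController
{
    private readonly IHardwareLayer _hardware;
    private readonly int _threshold;

    public LightController(IHardwareLayer hardware, int threshold)
    {
        _hardware = hardware;
        _threshold = threshold;
    }

    // Turn the LED on when it is dark (reading below the threshold).
    public void Check()
    {
        _hardware.SetLed(_hardware.ReadLight() < _threshold);
    }
}

// Fake used only by the unit tests: simulates the sensor and records
// what happened to the LED.
public class FakeHardwareLayer : IHardwareLayer
{
    public int LightValue;
    public bool LedOn;

    public int ReadLight() { return LightValue; }
    public void SetLed(bool on) { LedOn = on; }
}
```

A test can then set LightValue, call Check, and assert on LedOn; a Moq-based test does the same thing with Setup/Verify instead of the fake class.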

Conclusion

I have shown that unit testing is doable for Netduino projects if you follow a couple of design patterns, and you don't have to wait for a testing framework to be available for the .Net Micro Framework.

UPDATE: I made a couple small tweaks to the code and posted it on my NetduinoExamples repository under the UnitTestingExample subfolder.

Twitter Feed format for FIRST FRC 2010 Season

by Mike Linnen 20. January 2010 21:43

UPDATE: The feed changed a little bit from the first time I published the format

I made changes to the twitter feed format to match the game for the FIRST FRC 2010 Season.  You can follow the tweets for this season at http://twitter.com/Frcfms 

The new format is as follows:

#FRCABC - where ABC is the Event Code. Each event has a unique code.
TY X - where X is P for Practice, Q for Qualification, E for Elimination
MC X - where X is the match number
RF XXX - where XXX is the Red Final Score
BF XXX - where XXX is the Blue Final Score
RE XXXX YYYY ZZZZ - where XXXX is red team 1 number, YYYY is red team 2 number, ZZZZ is red team 3 number
BL XXXX YYYY ZZZZ - where XXXX is blue team 1 number, YYYY is blue team 2 number, ZZZZ is blue team 3 number
RB X - where X is the Bonus the Referee gave to Red
BB X - where X is the Bonus the Referee gave to Blue
RP X - where X are the Penalties the Referee gave to Red
BP X - where X are the Penalties the Referee gave to Blue
RG X - where X is the Goals scored by Red
BG X - where X is the Goals scored by Blue
RGP X - where X is the Goal Penalties by Red
BGP X - where X is the Goal Penalties by Blue

Example tweet in text:

#FRCTEST TY Q MC 2 RF 5 BF 3 RE 3224 2119 547 BL 587 2420 342 RB 1 BB 1 RP 0 BP 0 RG 0 BG 5 RGP 2 BGP 1
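For anyone tempted to build one, here is a sketch of how a parser for this format might start. The tag list mirrors the format above; the dictionary-of-token-arrays design is just one option I chose for illustration, not an official FMS tool:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical parser for the 2010 tweet format described above.
// It walks the tokens, treating known tags as keys and everything
// after a tag (until the next tag) as that tag's values, so RE/BL
// keep all three team numbers.
public static class FrcTweetParser
{
    public static Dictionary<string, string[]> Parse(string tweet)
    {
        var knownTags = new HashSet<string>
        {
            "TY", "MC", "RF", "BF", "RE", "BL",
            "RB", "BB", "RP", "BP", "RG", "BG", "RGP", "BGP"
        };

        var fields = new Dictionary<string, string[]>();
        var tokens = tweet.Split(new[] { ' ' },
            StringSplitOptions.RemoveEmptyEntries);

        string currentTag = null;
        var values = new List<string>();

        foreach (var token in tokens)
        {
            if (token.StartsWith("#FRC"))
            {
                // #FRCABC carries the event code
                fields["Event"] = new[] { token.Substring(4) };
            }
            else if (knownTags.Contains(token))
            {
                if (currentTag != null) fields[currentTag] = values.ToArray();
                currentTag = token;
                values = new List<string>();
            }
            else
            {
                values.Add(token);
            }
        }
        if (currentTag != null) fields[currentTag] = values.ToArray();

        return fields;
    }
}
```

Feeding it the example tweet above would yield "TEST" for the event, "Q" under TY, and the three red team numbers under RE.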

I sure would like to know if anyone builds anything that parses these tweets.

New Twitter feed for FIRST FRC 2009 Field Management System

by Mike Linnen 24. February 2009 21:09

I have blogged several times before about my involvement in building the Field Management System that runs the FIRST FRC events.  Each year I have worked very hard with 2 other engineers on trying to build the best possible experience for the volunteers that run the event, teams that participate in the event, and the audience that attends the event.  This year we wanted to extend the experience to beyond those that actually attend the event.  We wanted to have a way to announce the results of the matches as they are happening on the field.  This has been done in the past by updating an HTML web page that gets posted on the FIRST web site.  But we wanted something more that could be used by the teams in their quest for knowledge on what is happening during each event on their device of choice.

So I am very pleased to say that this year's events will have Twitter updates for each match as they are completed on the field.  All you have to do is follow the FRCFMS Twitter account in order to get match updates from all events.  The tweets that are posted follow a specific format that should allow the teams to build really cool applications on top of the Twitter data.  Here is an example tweet from our test event:

[image: example tweet from the test event]

As you can see, the tweet is a little hard to read since we are jamming a bunch of information into the 140-character limit, but it should be very easy to parse with a bot of some sort.

The format is defined as follows:

#FRCABC - where ABC is the Event Code.  Each event has a unique code.

TYP X - where X is P for Practice, Q for Qualification, E for Elimination

MCH X - where X is the match number

ST X - where X is A for Autonomous, T for Teleoperated, C for Complete

TIM XXX - where XXX is the time left

RFIN XXX - where XXX is the Red Final Score

BFIN XXX - where XXX is the Blue Final Score

RED XXX YYY ZZZ - where XXX is red team 1 number, YYY is red team 2 number, ZZZ is red team 3 number

BLUE XXX YYY ZZZ - where XXX is blue team 1 number, YYY is blue team 2 number, ZZZ is blue team 3 number

RCEL X - where X is the red Super cell count

BCEL X - where X is the blue Super cell count

RROC X - where X is the red rock and red Empty Cell count

BROC X - where X is the blue rock and blue Empty Cell count

There are some cool ways you can use Twitter to get the information you want for a specific event.  Hop on over to search.twitter.com and enter the following: #FRCTEST TYP Q.  You will get a list of all qualifying matches for the TEST event.  When the events start this weekend you can substitute the TEST code with the event code of your choice.  The FIRST FRC Team Update has a list of all the valid event codes.

You can also use the search.twitter.com with your favorite RSS reader to get updates in RSS format.

If other tweeple are tweeting about the event and using the same hashtag that the Field Management System uses, then you can hop on over to #hashtags and enter the hashtag for the event to see all tweets for that event.  For example, try navigating to http://www.hashtags.org/tag/frctest and you will see all the tweets for the #frctest event that we have been running to test the Field Management System.

For week one the match tweets will only come at the end of each match, but for week 2 we are thinking about upping the frequency so that you get more of them while the match is in play.  This will make it very difficult for a human to read the tweets on a small device, because there will be too many of them coming in.  I would like to hear anyone's thoughts on what the frequency of tweets should be and whether they expect to read the tweets rather than parse them with another tool.  Of course, if you intend to read the tweets and you are only interested in the final match result, you could use the search.twitter.com advanced search capabilities to only view tweets that have the status of complete.  That search would look something like this:

[image: advanced search query]

It will be really cool to see how the information we are posting is going to be used!


Using Microsoft Technologies for the FIRST 2007 robotics competition

by Mike Linnen 20. December 2007 23:20

I have been meaning to blog about one of the coolest projects I have been involved with for a while now, but I have been too busy to do so.  Back in June of 2006 I got involved with Bob Pitzer (Botbash), Chris Harriman, and Joel Meine in providing a software/hardware solution for the FIRST Robotics Competition's upcoming 2007 season.  FIRST is a non-profit organization dedicated to teaching young minds about science and technology through several fun-filled robotics competitions.  Make sure you check out their web site to see how you might be able to get involved in this great program.  Also check out the 2007 video archive to see how exciting these events can be.  Pay attention to the computer graphics that are superimposed over the live video, because that is what we built!

The software and hardware we built was used to manage multiple 3-day events over a month's time frame.  This ended up being around 40 events across the US in roughly 30 days.  Each event had anywhere from 30 to 70 teams competing.  Each day of the event had to be executed in an efficient manner in order to complete the tournament-style competition.  At a high level, the system was designed to do the following:

  • Lead an event coordinator through the steps of managing a tournament
  • Maintain a schedule of matches over a 3 day period
  • Inform the audience of match scoring in real time
  • Broadcast live video mixed with real time match score information to the web
  • Inform the other competitors in the Pit area of upcoming matches, match results, and real time team ranking details as each match completes.
  • Control the field of play from a central location
  • Gather scoring details from judges located on the field
  • Provide periodic event reports for online viewing as the tournament progresses

All of this was done using Microsoft .Net along with various open source .Net projects to speed up the development process and provide a robust system that was easy to use by various volunteers. 

I used various pre-built components to build this system.  Since the application needed to interface with multiple hardware components and provide a rich user interface, a Windows Forms application was going to be required.  I chose the Patterns and Practices Smart Client Software Factory (SCSF) as the basis for the Windows Forms framework.  I had used this framework in a previous project, and I knew it would give me the modular design I needed to accomplish the functional goals.  I wanted a solution that allowed me to do a lot of unit testing, as I was going to be the only developer doing the work and the amount of QA testing was going to be very small.  Since SCSF uses a Model View Presenter pattern, I knew that testing would not be an issue.  SCSF also uses a dependency injection pattern that lends itself very well to unit testing.  Another benefit of dependency injection was that I could mock out some of the hardware interfaces so that I did not have to have a fully functional robot arena in my home office!  I actually developed the hardware interface without ever connecting to the hardware on my development machine.  This was done by establishing a good interface and using a mock implementation of it to complete all the business logic without having any hardware.  At a later date we implemented the real hardware layer, and even to this day I use the mock implementation for all development work since I do not have an arena in my office.

For the data access layer I chose SubSonic, as it provided a very fast way to generate the data access layer from a database schema.  Using SubSonic gave me the flexibility to grow the data model really fast as the solution emerged over time.  The database back end was SQL Server Express 2005.  Since the solution only required a small set of clients and it had to be disconnected from the Internet, SQL Server Express was right for the job.

Deployment of the application was done with ClickOnce in a full trust environment.  This enabled me to make changes to the application throughout the tournament while the software on each playing field computer remained at the most recent version.  The ClickOnce deployment also managed the upgrade process for any database changes.

The central control of the field of play was handled by a single .Net Win Forms application that interfaced to Programmable Logic Controllers (PLCs) via a third-party managed library.  This library allowed me to set PLC memory locations as well as monitor them without having to worry about the TCP/IP communications protocol.  This was a great time saver, as I could concentrate on high-level business value rather than low-level communications, and I did not have to spend a lot of time debugging hardware/software integration problems.

Another key area of integration was a hardware UI, consisting of LCDs and buttons, that enabled a field operator to manage the match process without using the computer keyboard or mouse.  This was done using a serial port communicating with a BX24 from Netmedia.  The user would actuate buttons on the hardware UI to start or stop the match, as well as perform many other tournament-related functions.  The hardware UI would lead the operator to the next step by flashing the most appropriate button for the current point of the match.  This made the operator's job a lot easier.

The audience needed to be informed about what was going on during the event.  An announcer was always present, but the audience also needed visual cues that made it apparent what was happening.  So I created a Win Forms audience display application that provided the detail the audience needed.  This detail was not only displayed to the live audience but also broadcast over the Internet to individuals who were not able to attend physically.  The audience display showed live video as well as match statistics mixed together on one screen (you can see this in action in the video links I mentioned above).  This screen was projected onto a huge screen so all audience members could see with great ease.  The live video mixing was done using a green screen technique, the one often used by the weatherman on local news stations: a green color is used in a color keying process to superimpose graphics and live video wherever the green appears.  The screen snapshots below give you an idea of the type of information that was presented to the audience as well as the web broadcast.

 

[images: audience display screenshots, including an alliance pairing sample]

Well, I could go on and on about the details of the application we wrote to make the 2007 FIRST FRC season a great success, but I think I will save it for a set of later posts.  I have also been working on the software for the 2008 season, which uses WPF for an even richer user experience (can anyone guess I used some animations!).

Hero robot is coming back!

by Mike Linnen 16. December 2007 09:21

Heathkit was an awesome company that supplied electronic kits for educational purposes back in the 80's and 90's.  Their products were a bit on the pricey side, but where else could you get a TV in kit form that you had to build yourself?  I bought an oscilloscope from them and put it all together over several weekends.  I also had a single board computer sold by Heathkit that I did not actually build, but I used it to teach myself how to program in machine (assembly) language.

Well, Heathkit is back in action and one of the best products they offered is also back.  The Hero robot of the 80's is now called HE-RObot.  Back in the 80's you could get this robot in kit form or fully assembled.  I was never able to purchase one, but I worked for a company repairing electronic equipment and the owner's son ended up getting one.  It was one of the coolest things I saw, and it was probably one of the reasons I became so interested in robotics in the first place.  I don't remember all the specifics of the original robot, but from what I remember it had sonar ranging, optical wheel encoders, light sensors, current sensors, and sound sensors.

Well, the new Hero is a partnership between White Box Robotics and Heathkit.  The new HE-RObot comes with an onboard PC running the XP operating system and Microsoft Robotics Studio as the programming environment.  Finally a product is on the market that combines both of my passions: robotics and Microsoft .Net.  This is a very powerful robot, but I do not see many details on what sensors will be offered.  From the web site it looks like it will include IR, a web camera, and audio.  I sure would like to see a few more details on what other capabilities it will have as far as sensors go.

The web camera is going to be really powerful as a sensor.  I was fortunate enough to evaluate an ER1 robot from Evolution Robotics, and I wrote an article about the experience called 30 Days of ER1 back in 2003.  The live video pattern recognition routines give a whole new meaning to navigating your environment.  I am pretty sure White Box Robotics has licensed the pattern recognition software from Evolution Robotics, so the HE-RObot should have the same capabilities.

How to build a Maze Robot

by Mike Linnen 15. December 2007 12:48

Overview

The following article was originally posted by me on the Phoenix Area Robotics eXperimenters web site.  I moved the article to my blog since I no longer belong to the robotics group.  You can find the original article on the PAReX site.

Building a Maze Robot

My maze robot DR X took first place in BotBash's 2000 Autonomous Maze competition. The competition consisted of three mazes with different configurations. The robot that completed all three mazes in the shortest time wins the event. Each robot had five chances to complete all three mazes. The shortest three times were summed up for the final score. DR X was the only robot able to complete all three mazes in the allotted time frame. This article was written as an attempt to explain the techniques that took DR X to first place.

There are several techniques that can be used in solving mazes:

  • Random
  • Wall Following
  • Mapping

Random navigation does not seem like a very elegant way to master a maze so my choices were mapping or wall following algorithms. Mapping a maze can be very difficult to do and this competition did not really reward such a task. So that leaves wall following as the best bet to complete the maze.

Wall following can be best explained by imagining yourself in a maze with your eyes closed. If you could place one hand on a wall and never let the hand leave the wall you will eventually find the end of the maze as long as the finish is not an island in the middle of the maze. It is very important to follow only one wall until you reach the end.
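The one-hand rule boils down to a small decision loop. Here is a sketch of a right-wall follower's decision rule (written in C# for clarity; DR X itself was not built this way, and the sensor values and thresholds are made up for illustration):

```csharp
public enum Action { Forward, TurnLeft, TurnRight }

// Hypothetical right-wall-following rule: hug the right wall,
// turn into openings on the right, turn away from walls ahead.
public static class WallFollower
{
    // Distances are range readings (larger = farther away).
    // wallNear: "too close to something ahead" threshold
    // wallFar:  "the right wall has disappeared" threshold
    public static Action Decide(int frontDistance, int rightDistance,
                                int wallNear, int wallFar)
    {
        if (frontDistance < wallNear)
            return Action.TurnLeft;   // wall ahead: turn away from it
        if (rightDistance > wallFar)
            return Action.TurnRight;  // right wall lost: turn into the opening
        return Action.Forward;        // keep hugging the right wall
    }
}
```

Mirroring the two turn cases gives a left-wall follower, which is how a bumper tap before the start can select which wall to follow.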

The following drawings show right and left wall following paths for a given maze.

wallfollow

Notice that in some cases it is better to choose one wall to follow over another. Here the shortest path from start (S) to finish (F) is via the right wall. So it is good practice to be able to command your robot to follow one wall over another before it is set in the start box. This can be accomplished by using the left and right bumper switches. Tapping the left or right switch before the start commands the robot to follow the left or right wall.

So I set out to build and program a maze robot to follow one wall. I chose to use a differential drive system on a round body. This would allow me to control the robot rather easily and prevent it from getting hung up on maze walls. I mounted two GP2D02 IR sensors on a single shaft on top of a servomotor. The sensors were positioned 90 degrees apart. The servomotor allowed the robot to look straight ahead and at the left or right wall at the same time.

DR X First Prototype

drxPrototype

In order to tell if the robot was getting closer or further away from a wall a minimum of two sensor readings would have to be taken over a period of time while the robot was moving. I had some difficulty in fine-tuning the reactions needed to prevent the robot from touching the walls. I quickly realized that this sensor arrangement had some shortcomings. I needed to be able to look at a wall and determine if the robot was parallel to it without moving forward. If I could achieve this, the robot would always start off parallel to a given wall. So I made some sensor placement changes that would not require the robot to be moving in order to determine if it was parallel or not.

DR X Second Prototype

drxPrototype2

I found some other advantages of this sensor arrangement. While the robot was following a wall and approached a doorway in the maze, the first sensor would detect the opening (doorway) very easily. Once the second sensor detected the doorway, I knew the robot was directly in front of the entranceway. A 90 degree turn towards the entranceway would position the robot perfectly for passage through the door. Passage through the door was also easily detected: as the robot moved forward, the door jamb could be detected by both sensors. The robot could successfully determine when a door was found and navigate through it rather easily.

The following drawings show the robot navigating through a doorway.

The robot approaches the doorway

door1

The robot passes the doorway

door2

The robot turns left 90 degrees

door3

The robot moves forward into the doorway

door4

The robot is almost through the doorway.

door5

The robot is through the doorway.

door6

DR X Front View

drx_front

DR X Side View

drx_side

Improvements

Well, this solution certainly has room for improvement, and it is not the only way to solve a maze. One major enhancement I saw was that DR X needed a sensor that could look in front of the robot while it was following a wall. This would have prevented the robot from having to collide with a wall before realizing it needed to stop and turn.

Conclusions

Well this project sure was a gratifying experience. To watch my little creation navigate the maze was a great thrill. A lot of last minute hard work went into this robot but come event day it all paid off.

BX24 and PowerShell for managing a build process

by Mike Linnen 18. July 2006 21:01

I have been doing some BX24 development again lately.  I have also been reading a lot about the new shell support that Microsoft has pre-released, called PowerShell (formerly known as Monad).  Well, since I have been using the same batch files and VBScript files to manage my build process for BasicX source since 2001, I thought it might be time to look at an alternative.

I need to be able to do the following:
  • Perform command line compiles of the BX24 project
  • Allow for the source to reside anywhere on the hard drive and still be able to compile.
  • Initiate a compile of all BX24 projects so I do not have to do them one at a time
  • Parse the BasicX.err file to determine if the compiler found errors
  • Launch an editor that shows the BasicX.err file only when an error exists
  • Be able to manage some registry entries specific to the BasicX IDE
  • Have a limited set of scripts that do not require any changes to support the build process
  • Allow for multiple project files to co-exist in the same folder. This means I need to save the BasicX.err file off to another file if I want to preserve the results of each compile.

After reading a bit about PowerShell it was very apparent that it would support everything I needed to do.  The main hurdle I needed to overcome was learning PowerShell's syntax.  Fortunately it is based on the .Net Framework, so the majority of it was fairly easy to adjust to.

Since I already had a VBScript file that did most of the above tasks, I started by dissecting what it did.  The last time I touched that script was in 2001.  It handled changing the registry entries and launching the compiler, but it had no support for parsing the error file or managing many project files.  Here is the PowerShell script that I ended up with:

param ([string]$WorkingDirectory)

# Define some script variables
$chip_type = "BX24"

# Save the current directory so we can return to it
Push-Location

# If a working directory was passed in, change to it
If ($WorkingDirectory) { Set-Location $WorkingDirectory }

# Get the project files to process
$projectFiles = Get-ChildItem *.bxp

foreach ($project in $projectFiles)
{
    $project_file = $project.Name.Split(".")[0]

    # Use the current directory as the working directory
    $work_dir = $project.DirectoryName

    # Set some registry entries for the BasicX IDE
    $configEntry = "hkcu:\software\vb and vba Program Settings\basicx\config"
    Set-ItemProperty ($configEntry) -Name Chip_Type -Value $chip_type
    Set-ItemProperty ($configEntry) -Name Work_Dir -Value $work_dir

    # Determine from the registry where the BasicX executable is installed
    $program_dir = Get-ItemProperty ($configEntry) -Name Install_Directory

    # Map the P drive to the BasicX install directory for convenience
    if (Test-Path p:) {} else { subst P: $program_dir.Install_Directory }

    # Remove the error file if it exists
    if (Test-Path basicx.err) { del basicx.err }
    if (Test-Path ($project_file + ".err")) { del ($project_file + ".err") }

    # Launch the compiler
    P:\basicx.exe $project_file /c

    # Wait for the compiler to finish
    $processToWatch = Get-Process basicx
    $processToWatch.WaitForExit()

    # Unmap the P: drive
    if (Test-Path p:) { subst P: /d }

    # Check for errors and launch the error file if any exist
    $CompileResult = Get-Content basicx.err
    If (($CompileResult -match "Error in module").Length -gt 0) { notepad basicx.err }

    # Copy the error file off so it does not get overwritten when multiple
    # projects are being compiled in a single directory
    Copy-Item basicx.err -Destination ($project_file + ".err")
}

# Restore the original location
Pop-Location

Well, that was pretty painless.  I now had a script that processed all BasicX project files in a given folder.  Next I needed another script that found all the project folders under a given folder, which also meant processing projects in subfolders.  This higher-level script launches the script above to do the compile.  I ended up with the following:

# Save the current directory so we can return to it
Push-Location
Set-Location ..\

# Get a list of all projects
$project_Files = Get-ChildItem -Recurse -Include *.bxp | sort $_.DirectoryName
$lastDir = ""

foreach ($project in $project_Files)
{
    # Since we can have multiple projects in a folder and we send the
    # working folder to the build script, we want to skip folders we
    # already processed
    if ($lastDir -ne $project.DirectoryName)
    {
        ./tools/build $project.DirectoryName
        $lastDir = $project.DirectoryName
    }
}

Pop-Location

Well, that too was pretty easy.  I am beginning to really respect the power of PowerShell.  I can do so much more than I could with VBScript, and do it more easily.  Later I will put together a sample BX24 project showing how I use these scripts and the folder structure I place them in.

Using an alternative editor for the BasicX IDE

by Mike Linnen 16. July 2006 16:13

Although the BasicX Integrated Development Environment works for writing, compiling and downloading source code on small, quick projects, once you start using it heavily for writing source code it tends to lack features.  However, the folks at Netmedia were nice enough to allow command line execution of their IDE to compile and download code.  This opens up the opportunity to use your favorite editor to write source code and launch the IDE via the command line to compile it.  I have been using these command line options since 2001 to make my development environment a little more to my liking.  In this blog post I will talk about how I manage the process and the tools I use.

First, a couple of notes about things that might trip you up when using the command line options.  The BasicX IDE wants to know the base directory where your projects live.  This is fine if you are willing to change this directory in the IDE every time you switch to another folder, or if you only have one project.  However, if you are like me, you have many projects and you don't want to load up the IDE to change the base directory every time you work on one of them.  Next, the chip setting (BX24 or BX01) is also set from the IDE and is needed for the command line compile.  I bounce back and forth between projects that use one chip or the other, so my IDE could be set to either one at any given time.

Neither the base directory nor the chip setting is offered as a command line option.  However, Netmedia does store both items in the Windows Registry, so an external program can modify them before launching the command line compile.
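As a rough sketch of the idea, a script can set both registry items before invoking the compiler.  Note that the registry path and value names below are hypothetical placeholders; check regedit on your own machine for the actual key and value names the BasicX IDE uses.

```powershell
# Hypothetical registry location and value names - verify with regedit,
# as the actual names used by the BasicX IDE may differ.
$bxKey = "HKCU:\Software\NetMedia\BasicX"

# Point the IDE at the current project's base directory
Set-ItemProperty -Path $bxKey -Name "ProjectDir" -Value "C:\Projects\MyBot"

# Select the target chip (BX24 or BX01) before the command line compile
Set-ItemProperty -Path $bxKey -Name "Chip" -Value "BX24"
```

With those two values set, the command line compile picks up the right project folder and chip without ever opening the IDE.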

So I created a vbscript that sets the IDE options and then calls the compiler.  The script accepts 3 parameters:
 1 (Required) - Project file
 2 (Optional) - /c
 3 (Optional) - /d

The script supports drag and drop, so you can use it in one of two ways:
1 - Drag the project file onto the script
2 - Call the script from your editor or a batch file

I usually create a build.bat file that I call from the text editor.  The build batch file is specific to the project I am working on, so I generally keep it in the main folder of my project.
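For illustration, such a project-specific build.bat can be little more than a call to the vbscript with the project file and the compile switch.  The script path and project file name below are hypothetical; adjust them to match your own folder layout.

```bat
@echo off
rem Hypothetical build.bat - adjust the script path and project name to your layout
rem Calls the vbscript, which sets the IDE options and runs the command line compile
cscript //nologo ..\tools\bxbuild.vbs MyProject.bxp /c
```

Because the batch file lives in the project folder, every project can carry its own one-line build command that the text editor invokes.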

I have included the script files along with a sample BX24 project so you can look at what I did and maybe make use of it for your own BasicX projects.

I also use a shareware text editor called TextPad.  This editor supports syntax highlighting and multiple open documents.  The nice thing about it is that I can pass TextPad the project file and it will load all the source modules associated with the project.  I often use a batch file to launch the TextPad editor and open all the source code for the project.  I have also included this batch file in the bx24.zip so you can see how it is done.
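A batch file like the one in bx24.zip could look roughly like this; the TextPad install path and the module names are hypothetical, so substitute your own.

```bat
@echo off
rem Hypothetical edit.bat - opens the project and its source modules in TextPad
rem Adjust the TextPad path and the file list for your project
start "" "C:\Program Files\TextPad\TextPad.exe" MyProject.bxp Main.bas Motors.bas
```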

Tags:

Robotics

Completed tutorial 1 and 2

by Mike Linnen 22. June 2006 00:49

I just completed the MyTutorial1 and MyTutorial2 tutorials for the Microsoft Robotics Studio.  I used the Lego RCX 2.0 hardware for the tutorials.  I noticed a couple of things that stumped me for a short time.

Problem #1
When you launch the tutorials from the Visual Studio IDE, the services never seem to communicate with the RCX.  I was able to get the service to work by running the DSSHOST executable and passing in the manifest file.  So I looked at the parameters that the IDE passes to DSSHOST when you debug, and it was using the contract command line option instead of the manifest option.  I changed it to use the manifest and the service ran fine.

So change the command line debug argument from:

-contract:"http://schemas.tempuri.org/2006/06/mytutorial1.html"
To:
-manifest:"C:\Microsoft Robotics Studio (June 2006)\samples\
MyTutorial1\MyTutorial1.manifest.xml"

And you should be able to launch the service from the IDE.

Problem #2
In tutorial two there is no step to add the legorcxmotor service to the manifest, so the service never gets started correctly when you run the application from the IDE.  Add the following to the MyTutorial2.manifest.xml file:

<ServiceRecordType>
 <wsap:Contract>
  http://schemas.microsoft.com/robotics/2006/06/legorcxmotor.html
 </wsap:Contract>
</ServiceRecordType>

Overall I found the two tutorials informative, and I at least got my feet wet with the framework.  I might try a few more tutorials before I attempt to write a driver for the BX24.

About the author

Mike Linnen

Software Engineer specializing in Microsoft Technologies
