SandCastle – Is It Blowing Away?

I’m really, really disappointed with Microsoft about SandCastle.  For those of you not in the know, SandCastle (SC) is the documentation generator from Microsoft.  Supposedly they use it internally for generating the .NET Framework documentation, but given the tools they released publicly I find that hard to believe.  The last time SC was updated was 2010.  It’s been over a year and I still can’t generate anything near the style or accuracy of the existing documentation. 

I long for the days of NDoc where I could pass my documentation on to a GUI tool and it would spit out professional-looking documentation.  With SC it spits out beta-quality documentation with styles that hardly work, and it absolutely cannot handle anything beyond basic doc comments.  This is really sad.  It doesn’t even ship with a GUI so I could configure the various options to generate something reasonably close without having to read through help files.  Fortunately there are third-party tools available but honestly they don’t update that often either.

What makes me really mad is that the whole reason SC is supposed to be so awesome is that it is configurable to the point that we should be able to generate any style of documentation.  The reality, though, is that the existing styles are horribly broken, can’t handle any external content and don’t even match the existing framework styles.  You’d figure that MS would release the very same style and resources that are used for the framework, but I’ve yet to see any SC-generated documentation come close.  There’s always something broken, whether it is bad image references, poorly formatted examples or just plain ugly styles.

Don’t even get me started on the horrible new MS Help system that was introduced in VS2010.  Help is sliding so far backwards that I think MS needs to just start over.  The day we can’t ship a simple help file (or even a bootstrapper) is a sad day indeed.  I’d hate to be the folks at MS who have to go through all the steps needed to install/modify help files just for testing.  This is truly ridiculous and a bad, bad sign for help in general.

Therefore I throw out the challenge for MS to step up to the plate and actually provide us with an updated version of SC that can generate MSDN-style documentation out of the box without all the extra work generally involved.  Better yet, integrate this functionality into VS directly so I don’t have to use third-party tools.  Unless MS can fix SC I feel that it’ll fall by the wayside like so many other MS projects.  This is unfortunate because documentation is key to good class library design.

Windows 8 and Visual Studio 11 Look…meh

Am I the only one who isn’t all that excited about Win8?  I haven’t had time yet to play around with it (I’m waiting for VMWare to upgrade me to Workstation 8) so I might be missing something, but I’m just not seeing anything worth looking at.  Let me elaborate.  First, note that I always like to stay with the latest stuff, so Win8 and VS11 will be on my machine as soon as they are released.  I’m more interested in the general public’s interest in the products.

OS History

First there was XP (ignoring anything prior to that) which is a decade old and still popular.  It doesn’t have any of the bells and whistles of modern OSes but it is solid and heavily used.  Many companies still have no intention of upgrading from XP.  MS even extended the support cycle longer than normal to give people time.

Vista came out and lasted a year.  It was a horrible disappointment in most areas but it introduced some radical UI changes that are popular: jump lists, task dialogs, the Restart Manager, etc.  Unfortunately nobody wanted to upgrade, so applications that want to use these new features either can’t or have to write bridge code to hide the differences.  If you’ve ever written such bridge code you’ll realize that MS should have done it before release.  Even worse is that .NET (which MS said is their platform of choice going forward) doesn’t even support most of the new UI stuff.  Instead MS released a code pack that adds in the functionality that should be in the framework.  Still worse is that you can’t even use that code unless you’re running under Vista+ because otherwise it’ll blow up.  The code pack was the perfect opportunity to write the bridge code once so we didn’t have to, but alas, no. 
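
If you’ve never seen it, the bridge code in question usually amounts to an OS version check in front of every Vista+ call.  A minimal sketch of the idea (the class and property names here are my own invention):

```csharp
using System;

static class OsBridge
{
    // Vista and later report an NT platform with a major version of 6 or higher.
    public static bool SupportsVistaFeatures
    {
        get
        {
            OperatingSystem os = Environment.OSVersion;
            return os.Platform == PlatformID.Win32NT && os.Version.Major >= 6;
        }
    }
}
```

Callers then guard every jump list or task dialog call behind a check like this.  Multiply that by every new feature and every application, and you can see why this belonged in the framework.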

Hot on the heels of Vista came Win7, the current OS.  Win7 didn’t have any radical changes but it did fix the issues with Vista.  Win7 is, overall, a success and generally recommended as a must-have upgrade.  Companies are moving to it as they get new computers but there is certainly no big rush.  Win7 is the spiritual successor to XP and worth the effort.  With Win7 came VS2010 and .NET v4.  Alas, still no support for the Vista+ features.  So MS wants us writing Win7 apps but we can’t because: a) XP is still popular, b) there is no bridge code to hide the details and c) none of the new features are in .NET anyway.  So while Win7 is nice, we’re still using the XP subset.

Windows 8

Here’s where things get really frustrating.  Win8 doesn’t offer anything that anybody has been asking for as far as I can tell.  MS is pushing the Windows Phone 7 (WP7) UI with tiles and touchscreens (touch).  How many people are using touch today?  How many people are even talking about buying touch?  Nobody I know.  There simply isn’t a demand for it on the desktop yet.  Why would there be?  Unlike a phone or similar device we have a mouse that gives us everything we need.  Yes, touch may one day become commonplace, but not today.  Certainly MS should add support for touch in Windows but it shouldn’t come at the cost of deprecating everything else.  Touch works well for small lists of items but as you add more it gets harder to work with.  Have you ever tried using touch for a list of states?  It takes a while to get to the bottom.  That’s why scrollbars were invented.  I think it is a bad idea for MS to build the Win8 UI around touch and tiles.  There simply isn’t the demand or support for this technology yet.

Win8 is also introducing a brand new application type – Metro.  When I think of metro I think of the various subway systems I’ve ridden.  How awful they are.  How dirty.  Where does MS come up with these names?  But I digress – Metro is a new way of developing apps that seems to forgo existing technologies in favor of HTML5 and XAML.  Potentially a good idea, but let’s look at what is wrong with this approach. 

Firstly, only Win8 supports Metro.  Therefore if you want to build your app using Metro you have to build two: one for Win8 and one for everybody else.  Since Metro looks so vastly different I don’t know how much code sharing is possible at this point.  It’s like telling developers that they need WPF for Win7 and WinForms for everybody else.  What developer in their right mind would do that?  Developers will just stick with the universal application until everybody gets upgraded.  So I think Metro, while highly touted by MS, is going to see little commercial success until post-Win8.

Secondly, MS has been pushing SL and WPF for several years now.  Suddenly they’re saying these technologies are deprecated in favor of Metro.  Really?  Why should I learn yet another technology when it can be deprecated itself in a handful of years?  One of the things that is important for developer buy-in is longevity of the technology.  Yes, technology needs to evolve, but to effectively throw the switch on one set while pushing another just seems wrong.

Will Metro be all that MS is hyping it up to be?  Will developers jump on the bandwagon?  Will we really have to learn a whole new set of technology just to be Win8-friendly?  We’ll know as Win8 gets closer, but right now it certainly seems like a very risky bet on MS’s part.

Overall I currently see Win8 as an upgrade only if you are running XP, buy a new machine or just want to be on the leading edge.  There is nothing that has been announced that even remotely sounds interesting as a consumer.  On the developer front I’m not going to be writing Metro apps because I need to support non-Win8 as well.  Maybe when Win.Next.Next comes out and everybody is running Win8 I’ll look into it but not until then.  I think pushing Metro and touchscreen is just going to alienate more people, both developers and consumers.  WPF and related technologies aren’t going away just because MS might want them to.

Visual Studio 11

Here’s where things get more interesting.  VS11 will have a lot of cool new features.  .NET v4.5 is introducing some really nice new features for parallel development and ASP.NET.  WPF, WCF and WinForms are not really getting any major enhancements that I can see.  VS11 itself, though, will be a recommended upgrade.  Here are a few of the high points for me.

  • Project Compatibility – This is a major feature and I’m glad it was finally announced.  It will allow teams to upgrade piecemeal rather than requiring everyone to upgrade at once.  The only cost is that the solution/project must already be in VS2010 SP1.  Still it’ll be nice to have mixed teams.  I’m curious to know how it’ll impact replication of bugs and whatnot though.  Time will tell.
  • Model Binding – MVC already has this feature so it is nice to see it in ASP.NET.  Model binding will greatly reduce the likelihood of bad binding causing runtime errors while making development easier.  Intellisense enhancements are also nice.
  • Solution Navigator – I’m really not a big fan of this but a lot of people are.  The ability to browse code from Solution Explorer will certainly make some windows like ClassView obsolete.
  • Find and replace with regular expressions – It is about time.  I never used the find-and-replace regular expression search because it used a bizarre syntax.  Now that it uses the .NET implementation it’ll make finding things easier.
  • Find window – Available in VS2010 today with the appropriate extras installed, this is actually pretty handy.  It makes finding things easier.  Once it is integrated into VS11 it’ll be better.
  • C++/CLI Intellisense – C++/CLI folks have been screaming for it for years.  It’s finally here.  Hopefully C++/CLI isn’t needed too much anymore but it’s nice to have when you need it.
  • Async – For VB and C# the addition of async will mean we have to write fewer anonymous methods and callbacks to accomplish the same thing.  This alone is worth the price.
  • Standalone Intellitrace – In VS2010 (from what I can remember) you had to do a lot of work to get IT running on a machine without VS.  You might have had to even install VS, I can’t remember.  In VS11 you’ll be able to drop IT on a machine that is having problems and then view the log on your dev machine.  This will make IT far more useful.
  • LocalDB – SQL Express is dead.  Long live LocalDB.  Finally a dev version of SQL that doesn’t require all the extras of SQLX while still giving us the same features.  Unfortunately I think this may impact project compatibility with VS2010 so beware.
  • IIS Express – WebDev is dead.  Long live IISX.  Actually IISX has been out for a while and available as an option in VS2010 SP1 but it’ll now be integrated into VS11.  All the features of IIS without the overhead.
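
To put the async bullet in concrete terms, here’s a toy before/after comparison (my own example, not from the VS11 bits): the same work written with a hand-wired continuation and with the new async/await keywords.

```csharp
using System.Threading.Tasks;

static class AsyncDemo
{
    // Today: we wire up the continuation ourselves with an anonymous method.
    public static Task<int> DoubleThenAddOneOld(int value)
    {
        return Task.Run(() => value * 2)
                   .ContinueWith(t => t.Result + 1);
    }

    // With async/await: the compiler generates the continuation for us.
    public static async Task<int> DoubleThenAddOneNew(int value)
    {
        int doubled = await Task.Run(() => value * 2);
        return doubled + 1;
    }
}
```

Both methods return a Task<int> that eventually yields the same value; the difference is purely how much plumbing we have to write ourselves.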

So, in summary, I think Win8 is going to be the next Windows Millennium Edition.  It’s going to be a lackluster release from the consumer’s point of view.  At this point it doesn’t seem to have any features that anybody is going to care about, and yet a lot of emphasis is being given to Win8-specific functionality.  I don’t think there is going to be a great migration to Win8 except for new computers.  VS11, on the other hand, is going to be a recommended update, but just for its new features and .NET v4.5.  Maybe MS can get the Win7 features that are missing in .NET into v4.5 but I won’t hold my breath.  Instead we’ll be stuck with a couple of Win8-specific features that nobody will use for the foreseeable future.  As both releases get closer we’ll get a better idea of what impact they’ll have but this is my opinion for now.

Is It Time for a Unit Test Evolution?

Unit testing is important for code quality.  Most people don’t question this fact today.  There are lots of tools available to help you write unit tests.  But here’s the problem – you have to morph your design to make it testable.  I’m all for making my code better but I get annoyed when I have to modify a design just to fit the tools I’m using.  To me this is a bad sign.  Our tools are there to support our work.  If we’re modifying our code to make the tools work then we have things backwards.  When we’re designing our code we shouldn’t be focused on the tools we will use (IDEs, testing, build, etc).  That would be equivalent to designing our code with the limits of our database or communication infrastructure in mind.  While these will impact the implementation, they should not impact the design.  And yet unit testing, more often than not, requires that we design our code with testing in mind.

Mocking objects is a very common practice in unit testing.  It allows us to focus on what is specifically being tested without having to worry about setting up all the extra stuff.  There are many mocking frameworks available but the majority of them have the same limitations, just with different syntax.  Most mocking frameworks can only mock interfaces or extensible types.  Sealed and static types are out of the question.  Even more frustrating is that often the member(s) to be mocked must be public or internal and virtual (but not always).  Sealed and static types have very specific uses in design.  They identify classes that are either self-contained and/or non-extensible.  It is a design decision.  Unit testing with these types is difficult so the common approach is to either modify the design (bad) or use abstraction. 
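
A hand-rolled example makes the limitation obvious (the mailer types here are mine, purely for illustration).  An interface can be substituted trivially, which is essentially what reflection-based mocking frameworks generate at runtime; a sealed type with non-virtual members gives them nothing to override.

```csharp
public interface IMailer
{
    void Send(string to, string body);
}

// Substituting an interface is trivial; mocking frameworks
// generate a type much like this one on the fly.
public class MockMailer : IMailer
{
    public int SendCount;

    public void Send(string to, string body)
    {
        SendCount++;   // record the call instead of sending mail
    }
}

// A sealed type with non-virtual members leaves no seam for a
// reflection-based framework to hook into.
public sealed class SmtpMailer
{
    public void Send(string to, string body) { /* talks to a real server */ }
}
```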

A lot of code these days goes overboard with abstraction.  It reminds me of the early database days when DBAs wanted to normalize everything.  Abstraction is important but it should be used only when it is needed.  Unit testing is not a need.  This is just one example of modifying design to meet the needs of the tools.  As an example, take DateTime.Now.  If you need to be able to test code that uses this member then you either have to get tricky with date management or you have to abstract the current date out of your design.  Keep in mind that your production code would (probably) never need a time other than the current time, and yet you abstract it out for testing purposes.  Some folks will argue that you wouldn’t hard code such a value anyway, you’d just pass it as a parameter, but that is just moving the problem up the call hierarchy.  Somewhere along the way the time has to be specified.
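
For the record, the abstraction I’m complaining about usually looks something like this (the interface and type names are mine): a clock interface that production code never needed, introduced solely so tests can pin the current time.

```csharp
using System;

// Exists only so tests can control "now".
public interface IClock
{
    DateTime Now { get; }
}

// The only implementation production code will ever use.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class Invoice
{
    private readonly IClock m_clock;

    public Invoice(IClock clock) { m_clock = clock; }

    public bool IsOverdue(DateTime dueDate)
    {
        return m_clock.Now > dueDate;
    }
}

// The test double that justifies the whole abstraction.
public class FixedClock : IClock
{
    private readonly DateTime m_now;
    public FixedClock(DateTime now) { m_now = now; }
    public DateTime Now { get { return m_now; } }
}
```

Production code only ever wires in SystemClock, yet every consumer now takes an IClock parameter.  That is the design tax I’m objecting to.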

In state-based testing it is generally necessary to expose property getters (and even setters) internally so the test framework can access them.  This allows us to test the state of an object but not expose the members to the production code directly.  (We’re ignoring the whole discussion for and against state-based testing and domain development.)  This is a dirty hack.  We are once again modifying our design (albeit in a hidden manner) to allow for testing.  As an aside, MSTest has an interesting approach using accessors to allow access to private members without using this hack.  Unfortunately, though, it is generally broken, hard to maintain and not recommended.

What is the problem with the current set of tools?  The problem is that they almost always use reflection, as they evolved from the existing framework tools.  Reflection in itself is slow, but for testing it is acceptable.  What is a little harder, though, is working around the security model of .NET to get access to private members.  Even worse is that testing tools can enumerate objects, create types (or derive new types) and invoke members, but only if the base type is “designed” properly.  This is hardly the fault of the tools but rather a consequence of their reliance on reflection.  It is my belief that it is time for unit testing frameworks and tools to evolve into the tools we really need.

How can testing tools evolve?  If I knew the answer I would have already shared it.  We can take a look at a few tools available today to get an idea of what could be, though.  TypeMock and Moles approach mocking in an interesting way – they rewrite the code under test.  As an aside, note that I’ve never used TypeMock as it is a commercial product.  I have used Moles in limited scenarios but I intend to use it more going forward. 

Code rewriting is an old technique.  Traditionally it is slow and brings into question the validity of testing, but the benefits are tremendous.  Using either of these tools we can stub out almost any code irrespective of how it was designed.  These tools allow us to design our code the way we want to and still unit test it.  As an example, we can use DateTime.Now without the wasteful abstraction and yet we can set the time in our tests.  Need to stub out a member that isn’t virtual?  No problem, the rewriter can rewrite the method body.  This, to me, is the approach that has the best hope of evolving testing, but it is still a relatively new, and difficult, task. 

There are two big issues with current rewriting tools (for me at any rate).  The first is performance.  Rewriting takes a while.  Unit tests should run quickly.  We can’t be rewriting code for every test.  Even with processors as fast as they are this would be too slow.  It might take runtime-level rewriting to get performance where it needs to be, but once performance improves rewriting will become more feasible.

The other problem is configuration.  Today, at least for Moles, you have to be explicit about what you want stubbed.  For a handful of types this would be OK but as code gets more complex we don’t want to have huge lists to manage.  We need to have the same facilities available to us that mocking frameworks use today.  When the test starts the rewriter takes a look at what needs to be rewritten and does it on the fly.  Today rewriting happens beforehand and this is simply not flexible enough. 

The ideal test tool would allow us to test any code.  We could configure the tool to return stubbed values anywhere in the calling code, we could mock up objects to be returned from methods so we can track expectations, we could control when things should and shouldn’t be called, and we could do it all at test time rather than at compile time.  The test framework should not modify what gets called, only what happens when it gets called and what it returns.  Just like the mocking frameworks of today.

In summary, unit testing and the tools that it uses, such as mocking, are critical for properly testing your code.  But today’s tools are built on technologies that require too many sacrifices in our design.  Testing tools need to evolve to allow us to design our code the way it needs to be designed and have the tools adapt.  Code rewriting currently looks like a good way to go but it is still too early to be fully usable in reasonably sized tests.  This is a challenge to all testing tools – revolutionize the testing landscape!  Create tools that adapt to our needs rather than the other way around.  The testing tool that can do that will become the clear winner of the next generation of tools.

Comparing Characters

.NET provides great support for comparing strings.  Using StringComparer we can compare strings using the current culture settings or with case insensitivity.  This makes it easy to use strings with dictionaries or just compare them directly.  As an example we can determine if a user is a member of a particular group by using the following code.

bool IsInSpecialGroup ( string user )
{
   var groups = GetUserGroups(user);

   var comparer = StringComparer.CurrentCultureIgnoreCase;
   foreach (var group in groups)
      if (comparer.Compare(group, "group1") == 0 ||
          comparer.Compare(group, "group2") == 0 ||
          comparer.Compare(group, "group3") == 0)
         return true;

   return false;
}

Characters have most of the same problems as strings do (culture, case, etc) but .NET provides almost no support for comparing them.  If we want to compare characters without regard to case or in a culture sensitive manner we have to write the code ourselves.  Fortunately it is easy to emulate this using the existing infrastructure.  CharComparer is a parallel type to StringComparer.  It provides identical functionality except it works against characters instead of strings.  As an example the following code determines if a character is a vowel or not.

bool IsAVowel ( char ch )
{
   var vowels = new char[] { 'a', 'e', 'i', 'o', 'u' };

   var comparer = CharComparer.CurrentCultureIgnoreCase;

   foreach (var vowel in vowels)
      if (comparer.Compare(vowel, ch) == 0)
         return true;

   return false;
}

Just like StringComparer, CharComparer has static properties exposing the standard comparison types available in .NET.  Furthermore since StringComparison is commonly used in string comparisons CharComparer provides the static method GetComparer that accepts a StringComparison and returns the corresponding CharComparer instance. 

static bool ContainsACharacter ( this IEnumerable<char> source, char value, StringComparison comparison )
{
   var comparer = CharComparer.GetComparer(comparison);

   return source.Contains(value, comparer);
}

CharComparer doesn’t actually do comparisons directly.  This process can be difficult to get right so it just defers to StringComparer internally.  Naturally this means that CharComparer doesn’t actually do anything different than you would normally do nor does it perform any better.  What it does do is provide an abstraction over the actual process and simplify it down to a couple of lines of code.  If, one day, .NET exposes a better way of comparing characters then CharComparer can be updated without breaking existing code.  Even better is that your code can use CharComparer and StringComparer almost interchangeably without worrying about the details under the hood.

CharComparer implements the standard comparison interfaces: IComparer<char>, IComparer, IEqualityComparer<char> and IEqualityComparer.  The non-generic versions are privately implemented to enforce type safety.  The generic methods are abstract as is CharComparer.  Comparison is specific to the culture being used.  CharComparer defines a couple of nested, private types to implement the culture-specific details.  The nested types are responsible for providing the actual implementation of the comparison methods.  Refer to the source code for the gory details.  Note that this pattern is derived from how StringComparer works.
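
For readers who don’t want to dig through the source, the heart of the pattern can be sketched in a few lines.  This is a simplified version (only two comparison types, and none of the non-generic interfaces), not the full implementation:

```csharp
using System;
using System.Collections.Generic;

public abstract class CharComparer : IComparer<char>, IEqualityComparer<char>
{
    // Static instances parallel to the StringComparer properties.
    public static readonly CharComparer Ordinal =
        new StringBackedCharComparer(StringComparer.Ordinal);
    public static readonly CharComparer OrdinalIgnoreCase =
        new StringBackedCharComparer(StringComparer.OrdinalIgnoreCase);

    public abstract int Compare(char x, char y);
    public abstract bool Equals(char x, char y);
    public abstract int GetHashCode(char obj);

    // Nested type that simply defers every comparison to StringComparer.
    private sealed class StringBackedCharComparer : CharComparer
    {
        private readonly StringComparer m_inner;

        public StringBackedCharComparer(StringComparer inner) { m_inner = inner; }

        public override int Compare(char x, char y)
        { return m_inner.Compare(x.ToString(), y.ToString()); }

        public override bool Equals(char x, char y)
        { return m_inner.Equals(x.ToString(), y.ToString()); }

        public override int GetHashCode(char obj)
        { return m_inner.GetHashCode(obj.ToString()); }
    }
}
```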

Feel free to use this code in any of your projects and provide feedback on any issues found.  Unfortunately I’m not posting the unit tests for this class at this time.  However I’ve used this type in several projects and haven’t run into any problems with it.  But, as always, test your code before using it in production.

ServiceBase.OnStart Peculiarity

When implementing a service you really have to have a good understanding of how Windows services work.  If you do it wrong then your service won’t work properly, or worse, can cause problems in Windows.  Services must be responsive and be a good citizen when working with the Service Control Manager (SCM).  The .NET implementation hides a lot of these details but there is a hidden complexity under the hood that you must be aware of.  But first a brief review of how Windows services work.

Windows Services Internals (Brief)

All services run under the context of the SCM.  The SCM is responsible for managing the lifetime of a service.  All interaction with a service must go through the SCM.  The SCM must be thread safe since any number of processes may be interacting with a service at once.  To ensure that a single service does not cause the entire system to grind to a halt, the SCM manages each service on a separate thread.  The exact internal details are not formally documented, but we know that the SCM uses threads to work with each service. 

Each service is in one of several different states such as started, paused or stopped.  The SCM relies on the state of a service to determine what the service will and will not support.  Since state changes can take a while most states have a corresponding pending state such as start pending or stop pending.  The SCM expects a service to update its state as it runs.  For example when the SCM tells a service to start the service is expected to move to the start pending state and, eventually, the started state.  The SCM won’t wait forever for a service to respond.  If a service does not transition fast enough then the SCM considers the service hung.  To allow for longer state changes a service must periodically notify the SCM that it needs more time.

One particularly important state change is the stop request.  When Windows shuts down the SCM sends a stop request to all services.  Every service is expected to stop quickly.  The SCM gives each service a (configurable) amount of time to stop before it is forcefully terminated.  If it weren’t for this behavior a hung or errant service could cause Windows shutdown to freeze.

A Day In the Life Of a Service

A service is normally a standard Windows process and hence has a WinMain.  However a single process can host multiple services (many of the Windows services are this way) so WinMain itself is not the service entry point.  Instead a service process must register the list of supported services and their entry points with the SCM via a call to StartServiceCtrlDispatcher.  This method, which is a blocking call, hooks the process up to the SCM and doesn’t return until all listed services are stopped.  The method takes each service name and its entry point (normally called ServiceMain).  When the SCM needs to start a service it calls the entry point on a separate thread (hence each service gets its own thread in addition to the process’s main thread).  The entry point is required to call RegisterServiceCtrlHandlerEx to register a function that handles service requests (the control handler).  It also must set the service state to start pending.  Finally it should initialize the service and then exit.  The thread will go away but the service will continue to run. 

One caveat to the startup process is the fact that it must be quick.  The SCM uses an internal lock to serialize startup.  Therefore services cannot start at the same time and a long running service can stall the startup process.  For this reason the general algorithm is to set the state to start pending, spawn a worker thread to do the real work and then set the service to running.  Any other variant can slow the entire system down.

All future communication with the service will go through the control handler function.  Each time the function is called (which can be on different threads) the service will generally change state.  This will normally involve changing to the pending state, doing the necessary work and then setting the service to the new state.  Note that in all cases the SCM expects the service to respond quickly.

.NET Implementation

In .NET the ServiceBase class hides most of the state details from a developer.  To ensure that the service is a good citizen the .NET implementation hides all this behind a few virtual methods that handle start, stop, pause, etc.  All a developer need do is implement each one.  The base class handles setting the state to pending and to the final state while the virtual call is sandwiched in between.  However the developer is still responsible for requesting additional time if needed.  Even the registration process is handled by the framework.  All a developer needs to do is call ServiceBase.Run and pass in the service(s) to host.
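
In code, the hosting side really is this small (the service class here is a hypothetical stub; a real one would override the state methods):

```csharp
using System.ServiceProcess;

public class MyService : ServiceBase
{
    public MyService()
    {
        ServiceName = "MyService";   // hypothetical name
    }
}

static class Program
{
    static void Main()
    {
        // Blocks until all hosted services stop, mirroring the native
        // StartServiceCtrlDispatcher call described earlier.
        ServiceBase.Run(new ServiceBase[] { new MyService() });
    }
}
```

Note that Main only succeeds when the process is launched by the SCM; running it from a console fails, just as with the native dispatcher.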

All is wonderful and easy in .NET land – or is it?  If you read the documentation carefully you’ll see a statement that says the base implementation hides all the details of threading so you can just implement the state methods as needed, but this is not entirely true.  All the implementations except OnStart behave the same way.  When the control handler is called it sets the service to the pending state, executes the corresponding virtual method asynchronously and returns.  Hence the thread used to send the request is not the same thread that handles the request and ultimately sets the service state.  This makes sense and meets the requirements of the SCM.  More importantly it means the service can take as long as it needs to perform the request without negatively impacting the SCM.

The start request is markedly different.  When the start request is received the base class moves the service to the start pending state, executes the OnStart virtual method asynchronously and then…waits for it to complete before moving the service to the start state.  See the difference?  The start request thread won’t actually return until OnStart completes.  Why does the implementation bother to call the method asynchronously just to block waiting for it to complete?  Perhaps the goal was to make all the methods behave symmetrically in terms of thread use.  Perhaps the developers didn’t want the service to see the real SCM thread.  Nevertheless it could have used a synchronous call and behaved the same way. 

What does this mean for a service developer?  It means your OnStart method still needs to run very fast (create a thread and get out) even in the .NET implementation, even though all the other control methods can be implemented without regard for the SCM.  If OnStart takes too long then it’ll block the SCM.  More importantly, the OnStart method needs to periodically request additional time using RequestAdditionalTime to avoid the SCM thinking it is hung.
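
Putting that advice into code, here’s the shape I’d use (class and member names are mine; this is a sketch rather than production code): request extra time if initialization might be slow, hand the real work to a background thread, and return.

```csharp
using System.ServiceProcess;
using System.Threading;

public class PollingService : ServiceBase
{
    private Thread m_worker;
    private readonly ManualResetEvent m_stopRequested = new ManualResetEvent(false);

    protected override void OnStart(string[] args)
    {
        // The SCM serializes service startup, so this method must return quickly.
        // If initialization could exceed the default window, ask for more time.
        RequestAdditionalTime(10000);   // milliseconds

        m_worker = new Thread(WorkerLoop) { IsBackground = true };
        m_worker.Start();
        // Returning promptly lets the base class report the started state.
    }

    protected override void OnStop()
    {
        m_stopRequested.Set();
        m_worker.Join(5000);
    }

    private void WorkerLoop()
    {
        // Poll until a stop is requested; the real service work goes here.
        while (!m_stopRequested.WaitOne(1000))
        {
        }
    }
}
```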


When implementing a service in .NET it is still important to understand how native services and the SCM work together.  The OnStart method must be fast and well behaved to avoid causing problems with Windows.  The other control methods are less restrictive but still require awareness of how the base implementation works.  Writing a service is trivial as far as coding goes but services require a great deal of care in order to ensure they behave properly.  This doesn’t even get into the more complex issues of security, installation, event logging and error handling which are broad topics unto themselves.

Tech Ed 2011 Summary

Tech Ed was great this year.  I’ve already mentioned a few topics that were of great interest.  Here are some more topics that deserve further attention.

F#


Ever heard of functional programming or a functional language?  No?  Not surprised.  It is one of several categories that language designers use to identify languages.  Functional languages have their basis in mathematics so they look and act like math expressions (functions).  What makes functional programming so useful is that data is generally passed from one function to another.  In fact the data is generally just defined in terms of the functions applied to it.  This makes these languages great for mathematical processes.  It also solves one of the more difficult problems of multi-threaded programming – shared data.  Normally with MT programs you have to use locks to protect shared data.  With a functional language this isn’t necessary because each function feeds its results into the next like a pipeline.  Even more important is that most functional programming involves arrays/lists of items.  Functional languages can split up these values, do some processing and put it all back together without the programmer even worrying about a lock.  Needless to say functional languages are great for parallel processing.

F# happens to be a functional language.  Written by the folks at Microsoft, it runs on the .NET framework.  It has actually been out a couple of years but most people haven’t been exposed to it until now.  In fact if you have VS2010 it is already on your machine.  You can use the F# Interactive tool window to start playing around with it.  It comes with some great tutorials as well.  However functional programming is as far beyond procedural programming as procedural is beyond English writing.  Therefore it will take time to understand.  You probably will never be creating your UIs in a functional language but it is likely that one day you’ll be using a parallel library that is itself written in F#.  So now may be a good time to take a look at it.  Here’s a starter link: http://www.tryfsharp.org/Default.aspx.  If you are running VS 2010 then you already have the necessary tools.

LightSwitch


Honestly I wrote off LightSwitch (LS) as yet another tool for wannabe programmers.  After hearing more about it I realize that it may very well be the future IT programming tool.  Today most IT groups use Access or Excel for creating the simple apps they need to get their work done.  Who hasn’t had to deal with crazy UIs and hacked-together VBA?  LS will change that, albeit with a slightly higher learning curve.  LS allows non-developers to create the same types of apps (minus the reporting) but using modern tools and the entire .NET Framework.  When such an app eventually winds up in the hands of the “real” devs, we won’t want to shoot ourselves over the limitations of some scripting language.  We’ll be working in VB/C# and the framework!!

LS does have a higher entry requirement than existing tools.  It is designed for 2 or 3 tiers.  The client application is Silverlight hosted via IIS.  The back end can be SQL Server (any edition), WCF services or whatever.  The IIS requirement (I believe it ships with Cassini) is probably going to be the biggest headache, but it’ll be worth it.  Who hasn’t had someone call and complain about a bug in some Access app?  When you ask what version they’re running, they say Office 2007.  “No, what version of the app?”  “I don’t know.”  Argh!!!  Silverlight removes the deployment/update issue while still allowing the app to run just about anywhere.

LS is still in beta but a GoLive license is available so companies can begin using it today.  Here’s the MSDN for LightSwitch: http://www.microsoft.com/visualstudio/en-us/lightswitch/


Talk to any C++ developer and you’ll hear grumblings about the lack of features and support in recent versions of VS.  Well, that’s changing.  In VS2010 RTM, Intellisense was completely rewritten; it is now fast, accurate and reliable.  In SP1 the C++ team added some basic editor features that were either not fully baked in RTM or missing altogether.  vNext promises to bring C++ back up to par with the existing .NET languages.  Exactly what those updates will be, we’ll have to wait and see.

Another area where VS2010 really advanced C++ is the new C++0x standard, which should be approved this year.  C++ now has better support for parallel processing, and VS2010 already supports much of it.  Here are some of the C++ standard changes already in VS2010:

  • shared_ptr – Updates to this type (introduced in VS2008) make it easier to work with and more reliable. 
  • Concurrency runtime – Adds support for parallel processing in C++.  This is different from OpenMP, which is about multithreading an app.  ConcRT works well and is easily added to existing apps just by including ppl.h (for the most part).  It makes the process of optimizing existing code to take advantage of multiple cores much easier.
  • Anonymous types – Ever heard of the auto keyword?   Most people haven’t, but it was in the original C++ spec.  Its purpose was to allow a programmer to optimize code generation by giving the compiler some additional information about variables.  Nobody really used it, so it was deprecated.  However, in order to support newer constructs this keyword has been changed to represent something completely different – type deduction (often described as anonymous types).  The compiler figures out the underlying type of a variable based upon usage rather than the programmer specifying it.  We aren’t losing any of the strong type checking C++ is known for.  All we’re doing is telling the compiler to figure it out.  Here’s an example.

    SomeObject* someVar = new SomeObject;

    It really is redundant to specify the type twice.  The compiler knows what the type is and so do we.  Therefore we can replace the above with the following and everything will work exactly the same.

    auto someVar2 = new SomeObject;

    Now to be fair, overuse of the auto keyword can cause problems and make maintaining the code harder.  But in the example above and in one other case it makes code easier to read and understand, so limited use of this keyword is good.  What’s the other case?  Well, that would be…
  • Lambda expressions – Lambdas can be hard to explain to those who don’t work with them.  Basically, a lambda is an anonymous function that you declare and use in one specific place.  The compiler is responsible for generating the underlying boilerplate code to create the real function and hook everything up.  The main benefit of lambdas is that they let us replace one-use functions with an expression defined right where we actually use it (sort of like a nested function, but not really).  A full discussion of lambdas is beyond this post, so refer to the link above. 

    Where does auto come in?  The underlying type of a lambda expression is managed by the compiler, so if you want to store a lambda in a variable you can’t really name a type that would work.  This is the other case where auto shines: you can declare a variable with auto, assign it a lambda expression and then use the variable elsewhere without regard for the type.  The syntax for lambda expressions in C++ isn’t easy, so I’ll skip further samples; refer to the link above.

    It is important to remember that lambdas and auto are strictly compile-time features.  The compiler generates the boilerplate code you would normally write to get this to work.  At runtime C++ just calls functions, and every variable has a strong type.

Should I Upgrade Now or Wait?

A lot of folks are asking whether they should go ahead and upgrade to VS2010 or wait for vNext.  The simple answer is: upgrade now.  Microsoft generally only supports one version back for compatibility, so in order to be ready for vNext you should first get your code running under VS2010.  Furthermore, vNext does not yet have a release date; it could be 6 months or 6 years.  The standard life cycle for VS seems to be 2-3 years, so it is possible that vNext will be released in 2012-2013, but it is far too early to tell.  In the meantime VS2010 provides a lot of functionality today that is better than VS2008, and you need to be familiar with that functionality to prepare for what is coming in vNext.  So if you haven’t upgraded to VS2010 yet, do so now.

Tech Ed NA 2011 (Halfway) – Cool New Features

TechEd is half over and I’ve only been able to attend a few sessions.  Nevertheless, there’s lots of new stuff coming down the pike.  Here are my favorite things thus far.  Note that no release dates (or even guarantees) are available yet.

Juneau (aka SQL Server Data Tools)

Remember when VS2010 came out and suddenly none of your SQL projects could be loaded?  Remember running the SQL Server 2008 R2 installer to get your SQL projects back?  Remember having to keep VS2008 around until the tools were finally released?  Oh wait – you still have to keep VS2008 around for some SQL projects.  Oh well.  MS has sworn that they don’t intend to make that mistake again, and Juneau is the answer.  Juneau is a set of tools for developing databases just like you develop source code – editor, Solution Explorer support, source control, etc. – only much, much better.  Rather than working directly with the database, Juneau works with a model of the database (sound familiar?).  Juneau can track changes made to the model and generate scripts to apply those changes back to the real database (either during development or later) without you having to write any SQL scripts or wipe existing data.  And that’s only the beginning.  Juneau takes advantage of VS’s excellent editor to let you work with the database model just like you would source code.

You can learn more about Juneau and why it is going to be so important for database development here: http://msdn.microsoft.com/en-us/data/gg427686.

Task-Based Asynchronous Programming

The Task Parallel Library (TPL) is becoming the preferred way to do asynchronous development since it removes the need to worry about the existing asynchronous patterns (begin/end, event-based) and the thread pool.  It is available as of .NET 4.  You can read more about it here: http://msdn.microsoft.com/en-us/library/dd460717.aspx.  Honestly, if you are doing .NET development then you’ll need to learn about TPL.  It really is pretty straightforward to build even moderately complex pipelines of work. 

Here’s an example.  This sample code starts a task similar to code you might find that loads data from a database or WCF service.  For demo purposes the code simply sleeps, but imagine it doing real work.  Notice that calls to update the UI have to be marshalled to the UI thread.  Also notice that once the task is complete we need to do some follow-up work.

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   var task = Task.Factory.StartNew(DoSomeWork, CancellationToken.None, 
                                    TaskCreationOptions.None, TaskScheduler.Default);

   // Run the completion handler once the task finishes
   task.ContinueWith(WorkCompleted);
}

private void DoSomeWork ()
{
   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Loading data...");
   LoadData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Processing data...");
   ProcessData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Finalizing data...");
   FinalizeData();
}

private void WorkCompleted ( Task t )
{
   try
   {
      t.Wait();   // surfaces any exception raised by the task

      Status = "Done";
      CanRun = true;
   } catch (Exception e)
   {
      Status = e.Message;
   }
}

private void LoadData ()
{ Thread.Sleep(1000); }   // simulate work

private void ProcessData ( )
{ Thread.Sleep(1000); }   // simulate work

private void FinalizeData ()
{ Thread.Sleep(1000); }   // simulate work

private void UpdateStatus ( string text )
{ Status = text; }
The only real downside is that each task requires a method (or lambda) to run.  This means you’ll end up with lots of little methods that each run on some arbitrary thread and do one thing, which can make the code far harder to follow than it needs to be.  Enter Task-based Asynchronous Programming (TAP).  TAP is a couple of language extensions to C#/VB that make creating tasks simpler by letting you put all the task-related work into a single method.  Any time asynchronous work needs to be done, you use a keyword that tells the compiler the work should run asynchronously.  During compilation the compiler basically breaks your code up into tasks; you don’t have to do any of the boilerplate work yourself.  Here’s a sample of how it might look (keep in mind that TAP is a CTP and anything could change – also, I’m not currently running the CTP, so I can’t verify the syntax).

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   // With TAP the async method can simply be called; the compiler
   // handles breaking it up into tasks
   DoSomeWork();
}

private async void DoSomeWork ()
{
   try
   {
      Status = "Loading data...";
      await LoadData();

      Status = "Processing data...";
      await ProcessData();

      Status = "Finalizing data...";
      await FinalizeData();

      Status = "Done";                
   } catch (Exception e)
   {
      Status = e.Message;
   }

   CanRun = true;
}

private Task LoadData ()
{ return Task.Factory.StartNew(() => Thread.Sleep(1000)); }   // must return a Task to be awaitable

private Task ProcessData ( )
{ return Task.Factory.StartNew(() => Thread.Sleep(1000)); }

private Task FinalizeData ()
{ return Task.Factory.StartNew(() => Thread.Sleep(1000)); }

The CTP is available here: http://msdn.microsoft.com/en-us/vstudio/gg316360.  Be careful about installing this on production machines as it does modify the compilers. 

TPL Dataflow

As an extension to TAP, TPL Dataflow (still in development) uses blocks to represent units of parallel functionality.  For example, there is a block to join two collections into one, another block to transform one collection into another, etc.  These blocks use tasks internally to manage the data sent to them.  Basically, each block can be thought of as a parallel algorithm, and all that needs to be provided is the data.  Dataflow is still too early in development to be useful, but one can imagine a day when we have a workflow designer that lets us drop blocks down to manage our data without regard for threading or the parallel algorithms needed.  You can read more about TPL Dataflow here: http://msdn.microsoft.com/en-us/devlabs/gg585582.


It is Microsoft after all.  There are plenty of discussions about things coming in Visual Studio vNext (no release date yet, though).  Some are still in the early phases of work while others are almost certainly already in.  There have been no demos of vNext, so either the UI isn’t changing that much (yay!) or it is too early in the process to be showing it.  Either way, excitement for the new features is obvious among the attendees I’ve talked to – especially around parallel processing.  If you haven’t been looking into parallel processing yet, you might want to.  It is an inevitable future for developers.

These are just the topics that I had a chance to sit in on at TechEd.  Refer to the TechEd site (http://northamerica.msteched.com/default.aspx?fbid=yX5WuBwNOm8) for more information on what’s going on and all the cool things coming down the pike.

Reflector Is Dead

(The views expressed in this post are my own and are not reflections of my employer, peers or any company anywhere.  Take it as you wish.)

It was bound to happen.  Honestly, did anybody not see this coming?  Reflector is officially a dead product for the majority of .NET developers.  In other words, Red Gate (RG) is making it a commercial-only product (http://www.reflector.net/2011/04/why-we-reversed-some-of-our-reflector-decision/).  After some backlash they decided to release the old version for free, but read below as to why this isn’t quite what it seems.

First, some history.  Reflector has been around a long time – for free.  Just about everybody inside and outside of Microsoft mentions Reflector when talking about disassembling code.  A few years ago the author of Reflector sold the code to RG.  I’m sure the original thought was that Reflector would remain free and RG would make money off Pro versions.  How many times have we heard this story?  Early changes were annoying but tolerable.  We had to install the product instead of xcopying it because, you know, they can’t add licensing to an xcopy image.  We also got the annoying “please buy our Pro version” ads.  Again, annoying but tolerable.

As one could expect, RG didn’t make enough money off the Pro version to cover their costs.  They had to recoup the initial purchase price plus the cost of ongoing maintenance.  Why would somebody pay money for something that is free?  The only good answer is if the paid version had features worth paying for.  What features did RG add that were actually worth anything?  I can’t think of one.  Let’s see: they added integration with VS.  Funny, I already had that via Tools | External Tools in VS.  They added shell integration.  Again, I had that with a simple registry change.  In other words, they added absolutely nothing to an existing free tool and expected that people would want the Pro version.  They could have gotten sneaky and started removing features that were previously free, but that would have caused an uproar.

So the folks at RG have decided that they can’t sustain a free product anymore and are therefore completely eliminating the free version.  Even worse, they removed all options for actually getting the free version before (or as) they announced the change (just go read the forums).  Fortunately (maybe) they have temporarily brought back the free version, BUT you must do the following: 1) have an existing copy of v6, 2) check for updates, and 3) do so before the deadline (which I believe is August 2011).  After that you’re out of luck.  Even more sinister, they say it is a free, unsupported version, but the fine print says you actually get an activation license for 5 machines.  So what does that mean if you have to reinstall?  I have absolutely no idea, but it sounds like a limited version to me. 

Now one could argue that $35 isn’t a bad price for Reflector, and I would wholeheartedly agree IF 1) it were a new product that they had actually written, 2) it provided functionality that was not available elsewhere and 3) it hadn’t been previously available for free for years.  RG probably looked at other products (e.g. Linux) that have both free and paid versions and thought they could do the same.  It didn’t work out.  Their decision is undoubtedly a business one.  While I can understand it, I don’t have to support it.  After reflecting on Reflector, I’ve decided that I will continue to use the free version until such time as a better tool comes along or my activations run out.  Then I’ll switch over to the less useful, but still capable, ILDasm.  All RG has done is anger those who feel betrayed by the free-to-paid switch.  I doubt they’ll see any additional money.

What does the future hold for Reflector?  Unfortunately, I don’t think it is good.  RG is trying to recoup their costs and I don’t think they’re going to be able to do it.  Most devs are not going to pay for the Pro version if they have the free version (which is probably why the licensing is set up the way it is).  They might get some new customers, but I don’t know that it’ll cover the costs long term.  I expect that Reflector will effectively die from lack of money.  The only way I really see Reflector surviving is for RG to release it to open source (again) and let the community support it.  Yes, RG would lose money, but the way I see it RG needs to cut their losses and move on. 

RIP (free) Reflector.  You were a good tool.  You will be missed.


BuildVer Update

UPDATE: v2.1 resolves a few issues:

  • Character set conflicts when using non-ASCII characters
  • Updated documentation on how to integrate version.rci without having VS delete it the next time you open the resource designer
  • Fix for parameters that are built from the product version

I recently needed to update my BuildVer tool for use in newer projects.  BuildVer is an incremental versioning tool that can be used in automated builds.  It was originally written to let us use the Win32 VERSIONINFO resource but have it updated for each build.  Later, functionality was added to generate standard .NET assembly info files.  While the existing tool was sufficient, I decided it was time to update the code and, at the same time, add some new features.

The original code was written over a decade ago in C++.  It was later updated for .NET support and to use MFC types.  The new tool is written entirely in C#.  The original version could generate one or two files (a managed and an unmanaged file) plus an optional text file, and some support was added for specifying build information on the command line.  This was sufficient for the needs of the time.

v2 supports generating any number of output files with their corresponding template files.  The tool works with any text file.  Additionally the configuration file has been expanded to allow for specifying arbitrary parameter name-value sets that can be used in template files.  There are a couple of pre-defined sets for versioning, company information, etc.  Any of the parameter values can be overridden by the command line.  This allows for more flexibility in the tool while keeping the command line options concise.

The readme.txt file contains all the details.  The attached file contains the source code and a .NET 4 build of the tool along with sample templates for C# and C++.  Feel free to use the tool as you see fit.  One note, however: some of the code in the tool (specifically the command line processor) is copyright Carlson Engineering, Inc. and may not be reused without explicit permission.  You can compile and use the tool itself without permission. 

VS 2010 SP1 Favorite Changes

Honestly, SP1 doesn’t seem like a major release compared to previous SPs we’ve seen, but there are still some interesting changes.  Here are my highlights.

  • Non-browser-based help –  If there is one feature that caused an uproar during the VS2010 betas, it was the new help system.  The help team took a serious beating while trying to justify it.  For better or worse, we have to live with the results.  One of the big friction points was the reliance on the web browser and online help.  Fortunately, before the final release MS fixed things so we could have offline help, but we were still stuck with the browser.  The folks at the Helpware Group remedied this with a client UI for viewing help files.  It wasn’t the best tool, and it seemed like a new fix was available weekly, but the group did a great job and stayed vigilant. 

    Fortunately with the release of SP1 they no longer have to do this as MS has included a client UI for viewing help files.  Hopefully MS has learned a valuable lesson about developers and their help system and, in the future, will not make such sweeping changes without first getting feedback.
  • Silverlight 4 –  Sure you could download the SDK separately but now it is included. 
  • Docking window crash – How long has VS supported docking windows?  How long has multiple-monitor support been around?  Evidently nobody at MS does either, because the RTM version of VS2010 crashes if you dock/undock windows while debugging.  MS fixed this with a hotfix, and the fix is now included in SP1.
  • IIS Express – What, never heard of IIS Express?  It is basically IIS bundled by itself, usable on client machines without all the headaches and issues of full IIS.  Read about it here.  SP1 introduces support for IIS Express.  I suspect that most devs will migrate to it and that WebDevServer will quickly go away given all its existing limitations.
  • x64 IntelliTrace support –  SP1 now supports using IntelliTrace with x64 builds.  Note that this doesn’t mean you can do Edit and Continue in x64; that limitation remains, but at least x64 is starting to get some love.

UPDATE:  A couple of notes.

  1. SP1 can be installed over SP1 beta.
  2. Remember how long it took to install VS 2008 SP1?  Plan for the same hour or so.  This time it actually takes that long to install, rather than spending 20 minutes just extracting files like VS 2008 SP1 did.