Comparing Characters

.NET provides great support for comparing strings.  Using StringComparer we can compare strings using the current culture settings or with case insensitivity.  This makes it easy to use strings with dictionaries or just compare them directly.  As an example we can determine if a user is a member of a particular group by using the following code.

bool IsInSpecialGroup ( string user )
{
   var groups = GetUserGroups(user);

   var comparer = StringComparer.CurrentCultureIgnoreCase;
   foreach (var group in groups)
   {
      if (comparer.Compare(group, "group1") == 0 ||
          comparer.Compare(group, "group2") == 0 ||
          comparer.Compare(group, "group3") == 0)
         return true;
   };

   return false;
}

Characters have most of the same problems as strings do (culture, case, etc.) but .NET provides almost no support for comparing them.  If we want to compare characters without regard to case, or in a culture-sensitive manner, we have to write the code ourselves.  Fortunately it is easy to emulate this using the existing infrastructure.  CharComparer is a parallel type to StringComparer.  It provides identical functionality except it works against characters instead of strings.  As an example the following code determines whether a character is a vowel.

bool IsAVowel ( char ch )
{
   var vowels = new char[] { 'a', 'e', 'i', 'o', 'u' };

   var comparer = CharComparer.CurrentCultureIgnoreCase;

   foreach (var vowel in vowels)
   {
      if (comparer.Compare(vowel, ch) == 0)
         return true;
   };

   return false;
}

Just like StringComparer, CharComparer has static properties exposing the standard comparison types available in .NET.  Furthermore since StringComparison is commonly used in string comparisons CharComparer provides the static method GetComparer that accepts a StringComparison and returns the corresponding CharComparer instance. 

static bool ContainsACharacter ( this IEnumerable<char> source, char value, StringComparison comparison )
{
   var comparer = CharComparer.GetComparer(comparison);

   return source.Contains(value, comparer);
}

CharComparer doesn’t actually do comparisons directly.  This process can be difficult to get right so it simply defers to StringComparer internally.  Naturally this means that CharComparer doesn’t do anything different from what you would normally do, nor does it perform any better.  What it does do is provide an abstraction over the actual process and simplify it down to a couple of lines of code.  If, one day, .NET exposes a better way of comparing characters then CharComparer can be updated without breaking existing code.  Even better, your code can use CharComparer and StringComparer almost interchangeably without worrying about the details under the hood.

CharComparer implements the standard comparison interfaces: IComparer<char>, IComparer, IEqualityComparer<char> and IEqualityComparer.  The non-generic versions are privately implemented to enforce type safety.  The generic methods are abstract, as is CharComparer itself.  Comparison is specific to the culture being used so CharComparer defines a couple of nested, private types to implement the culture-specific details.  The nested types are responsible for providing the actual implementation of the comparison methods.  Refer to the source code for the gory details.  Note that this pattern is derived from how StringComparer works.
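To make the pattern concrete, here is a rough sketch (not the actual source, and simplified to just the generic interfaces) of how a nested private type can defer character comparisons to StringComparer:

```csharp
using System;
using System.Collections.Generic;

// Sketch only - the real CharComparer exposes the full set of standard
// comparers and privately implements the non-generic interfaces as well.
public abstract class CharComparer : IComparer<char>, IEqualityComparer<char>
{
   public static readonly CharComparer CurrentCultureIgnoreCase =
      new DeferringCharComparer(StringComparer.CurrentCultureIgnoreCase);

   public abstract int Compare ( char x, char y );
   public abstract bool Equals ( char x, char y );
   public abstract int GetHashCode ( char obj );

   // The nested type provides the actual implementation by deferring
   // each call to the equivalent StringComparer method
   private sealed class DeferringCharComparer : CharComparer
   {
      private readonly StringComparer m_comparer;

      public DeferringCharComparer ( StringComparer comparer )
      {  m_comparer = comparer;  }

      public override int Compare ( char x, char y )
      {  return m_comparer.Compare(x.ToString(), y.ToString());  }

      public override bool Equals ( char x, char y )
      {  return m_comparer.Equals(x.ToString(), y.ToString());  }

      public override int GetHashCode ( char obj )
      {  return m_comparer.GetHashCode(obj.ToString());  }
   }
}
```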

Feel free to use this code in any of your projects and provide feedback on any issues found.  Unfortunately I’m not posting the unit tests for this class at this time.  However I’ve used this type in several projects and haven’t run into any problems with it.  But, as always, test your code before using it in production.

ServiceBase.OnStart Peculiarity

When implementing a service you really have to have a good understanding of how Windows services work.  If you do it wrong then your service won’t work properly or, worse, can cause problems in Windows.  Services must be responsive and be good citizens when working with the Service Control Manager (SCM).  The .NET implementation hides a lot of these details but there is hidden complexity under the hood that you must be aware of.  But first a brief review of how Windows services work.

Windows Services Internals (Brief)

All services run under the context of the SCM.  The SCM is responsible for managing the lifetime of a service and all interaction with a service must go through it.  The SCM must be thread safe since any number of processes may be interacting with a service at once.  The exact internal details are not formally documented, but to ensure that a single service does not cause the entire system to grind to a halt the SCM manages each service on a separate thread. 

Each service is in one of several different states such as started, paused or stopped.  The SCM relies on the state of a service to determine what the service will and will not support.  Since state changes can take a while most states have a corresponding pending state such as start pending or stop pending.  The SCM expects a service to update its state as it runs.  For example when the SCM tells a service to start the service is expected to move to the start pending state and, eventually, the started state.  The SCM won’t wait forever for a service to respond.  If a service does not transition fast enough then the SCM considers the service hung.  To allow for longer state changes a service must periodically notify the SCM that it needs more time.

One particularly important state change is the stop request.  When Windows shuts down the SCM sends a stop request to all services.  Every service is expected to stop quickly.  The SCM gives each service a (configurable) amount of time to stop before it is forcibly terminated.  If it weren’t for this behavior a hung or errant service could cause Windows shutdown to freeze.

A Day In the Life Of a Service

A service is normally a standard Windows process and hence has a WinMain.  However a single process can host multiple services (many of the Windows services are this way) so WinMain itself is not the service entry point.  Instead a service process must register the list of supported services and their entry points with the SCM via a call to StartServiceCtrlDispatcher.  This method, which is a blocking call, hooks the process up to the SCM and doesn’t return until all listed services are stopped.  The method takes each service name and its entry point (normally called ServiceMain).  When the SCM needs to start a service it calls the entry point on a separate thread (hence each service gets its own thread in addition to the process’s main thread).  The entry point is required to call RegisterServiceCtrlHandlerEx to register a function that handles service requests (the control handler).  It also must set the service state to start pending.  Finally it should initialize the service and then exit.  The thread will go away but the service will continue to run. 

One caveat to the startup process is the fact that it must be quick.  The SCM uses an internal lock to serialize startup.  Therefore services cannot start at the same time and a long running service can stall the startup process.  For this reason the general algorithm is to set the state to start pending, spawn a worker thread to do the real work and then set the service to running.  Any other variant can slow the entire system down.

All future communication with the service will go through the control handler function.  Each time the function is called (which can be on different threads) the service will generally change state.  This will normally involve changing to the pending state, doing the necessary work and then setting the service to the new state.  Note that in all cases the SCM expects the service to respond quickly.

.NET Implementation

In .NET the ServiceBase class hides most of the state details from a developer.  To ensure that the service is a good citizen the .NET implementation hides all this behind a few virtual methods that handle start, stop, pause, etc.  All a developer need do is implement each one.  The base class handles setting the state to pending and to the final state while the virtual call is sandwiched in between.  However the developer is still responsible for requesting additional time if needed.  Even the registration process is handled by the framework.  All a developer needs to do is call ServiceBase.Run and pass in the service(s) to host.
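Putting that together, a minimal .NET service looks something like this sketch (the type name is illustrative):

```csharp
using System.ServiceProcess;

public class MyService : ServiceBase   //hypothetical service
{
   public static void Main ()
   {
      //Registers the service with the SCM; blocks until the service stops
      ServiceBase.Run(new MyService());
   }

   protected override void OnStart ( string[] args )
   {
      //The base class reports start pending before, and started after, this call
   }

   protected override void OnStop ()
   {
      //The base class likewise wraps this call in stop pending/stopped
   }
}
```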

All is wonderful and easy in .NET land, or is it?  If you read the documentation carefully you’ll see a statement that says the base implementation hides all the details of threading so you can just implement the state methods as needed, but this is not entirely true.  All the implementations except OnStart behave the same way.  When the control handler is called it sets the service to the pending state, executes the corresponding virtual method asynchronously and returns.  Hence the thread used to send the request is not the same thread that handles the request and ultimately sets the service state.  This makes sense and meets the requirements of the SCM.  More importantly it means the service can take as long as it needs to perform the request without negatively impacting the SCM.

The start request is markedly different.  When the start request is received the base class moves the service to the start pending state, executes the OnStart virtual method asynchronously and then…waits for it to complete before moving the service to the start state.  See the difference?  The start request thread won’t actually return until OnStart completes.  Why does the implementation bother to call the method asynchronously just to block waiting for it to complete?  Perhaps the goal was to make all the methods behave symmetrically in terms of thread use.  Perhaps the developers didn’t want the service to see the real SCM thread.  Nevertheless it could have used a synchronous call and behaved the same way. 

What does this mean for a service developer?  It means your OnStart method still needs to run very fast (create a thread and get out) in the .NET implementation, even though all the other control methods can be implemented without regard for the SCM.  If OnStart takes too long then it’ll block the SCM.  More importantly the OnStart method needs to periodically request additional time using RequestAdditionalTime to avoid the SCM thinking it is hung.
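In other words OnStart should follow the same pattern a native service would.  A sketch (the worker field and method are hypothetical):

```csharp
protected override void OnStart ( string[] args )
{
   //If initialization may be slow, reset the SCM's hung-service timer
   //(the argument is in milliseconds)
   RequestAdditionalTime(30000);

   //Do the bare minimum, hand the real work to another thread and get out
   m_worker = new Thread(DoWork);
   m_worker.Start();
}
```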


When implementing a service in .NET it is still important to understand how native services and the SCM work together.  The OnStart method must be fast and well behaved to avoid causing problems with Windows.  The other control methods are less restrictive but still require awareness of how the base implementation works.  Writing a service is trivial as far as coding goes but services require a great deal of care in order to ensure they behave properly.  This doesn’t even get into the more complex issues of security, installation, event logging and error handling which are broad topics unto themselves.

Tech Ed 2011 Summary

Tech Ed was great this year.  I’ve already mentioned a few topics that were of great interest.  Here are some more topics that deserve further attention.

F#

Ever heard of functional programming or a functional language?  No?  I’m not surprised.  It is one of several categories that language designers use to identify languages.  Functional languages have their basis in mathematics so they look and act like math expressions (functions).  What makes functional programming so useful is that data is generally passed from one function to another.  In fact the data is generally just defined in terms of the function that produces it.  This makes these languages great for mathematical processes.  It also solves one of the more difficult problems of multi-threaded programming – shared data.  Normally with MT programs you have to use locks to protect shared data.  With a functional language this isn’t necessary because each function’s output feeds into the next like a pipeline.  Even more important is that most functional programming involves arrays/lists of items.  Functional languages can parallelize these values, do some processing and put it all back together without the programmer ever worrying about a lock.  Needless to say functional languages are great for parallel processing.
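The same idea shows up even in C# via PLINQ: a pipeline of side-effect-free functions over a list can be parallelized without a single lock.  A small illustration:

```csharp
using System;
using System.Linq;

class FunctionalDemo
{
   static void Main ()
   {
      //Each stage is a function feeding the next; no shared mutable state
      //means the runtime is free to partition the work across cores
      var sum = Enumerable.Range(1, 1000)
                          .AsParallel()
                          .Select(x => x * x)
                          .Where(x => x % 2 == 0)
                          .Sum();

      Console.WriteLine(sum);
   }
}
```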

F# happens to be a functional language.  Written by the folks at Microsoft, it runs on the .NET framework.  It has actually been out a couple of years but most people haven’t been exposed to it until now.  In fact if you have VS2010 it is already on your machine.  You can use the F# Interactive tool window to start playing around with it.  It comes with some great tutorials as well.  However functional programming is as far beyond procedural programming as procedural is beyond English writing.  Therefore it will take time to understand.  You probably will never be creating your UIs in a functional language but it is likely that one day you’ll be using a parallel library that is itself written in F#.  So now may be a good time to take a look at it.  Here’s a starter link: http://www.tryfsharp.org/Default.aspx.  If you are running VS 2010 then you already have the necessary tools.

LightSwitch

Honestly I wrote this off as yet another tool for wannabe programmers.  After hearing more about it I realize that it may very well be the future IT programming tool.  Today most IT groups use Access or Excel for creating the simple apps they need to get their work done.  Who hasn’t had to deal with crazy UIs and hacked-together VBA?  LightSwitch (LS) will change that, albeit with a slightly higher learning curve.  LS allows non-developers to create the same types of apps (minus the reporting) but using modern tools and the entire .NET framework.  When such an app eventually winds up in the hands of the “real” devs we won’t want to shoot ourselves over the limitations of some scripting language.  We’ll be working in VB/C# and the framework!!

LS does have a higher entry requirement than existing tools.  It is designed for 2 or 3 tiers.  The client application is Silverlight hosted via IIS.  The back end can be SQL Server (any edition), WCF services or whatever.  The IIS requirement (I believe it ships with Cassini) is probably going to be the biggest headache but it’ll be worth it.  Who hasn’t had someone call and complain about a bug in some Access app?  When you ask what version they’re running they say Office 2007.  “No, what version of the app?”  “I don’t know.”  Argh!!!  Silverlight removes the deployment/update issue while still allowing the app to run just about anywhere.

LS is still in beta but a GoLive license is available so companies can begin using it today.  Here’s the MSDN for LightSwitch: http://www.microsoft.com/visualstudio/en-us/lightswitch/

C++

Talk to any C++ developer and you’ll hear grumblings about the lack of features and support in later versions of VS.  Well that’s changing.  In VS2010 RTM Intellisense was completely rewritten.  It is now fast, accurate and reliable.  In SP1 the C++ team added some basic editor features that were either not fully baked in RTM or missing altogether.  vNext promises to bring C++ back up to par with the existing .NET languages.  Exactly what those updates will be we’ll have to wait and see.

Another area where VS2010 really advanced C++ is the new C++0x standard that should be approved this year.  C++ now has better support for parallel processing and VS2010 already supports this.  Here are some of the C++ standard changes already in VS2010:

  • shared_ptr – Updates to this type (introduced in VS2008) make it easier to work with and more reliable. 
  • Concurrency runtime – Adds support for parallel processing in C++.  This is different from OpenMP, which is about multithreading an app.  ConcRT is easily added to existing apps just by including ppl.h (for the most part).  It will make the process of optimizing existing code to take advantage of multiple cores easier.
  • Anonymous types – Ever heard of the auto keyword?  Most people haven’t but it was in the original C++ spec.  Its purpose was to allow a programmer to optimize code generation by telling the compiler some additional information about variables.  Nobody really used it and it was deprecated.  However in order to support newer constructs this keyword has been changed to represent something completely different – anonymous types.  An anonymous type is really just a type where the compiler figures out the underlying type based upon usage rather than the programmer specifying it.  We aren’t losing any of the strong type checking support C++ is known for.  All we’re doing is telling the compiler to figure it out.  Here’s an example.
    SomeObject* someVar = new SomeObject;

    It really is redundant to specify the type twice.  The compiler knows what the type is and so do we.  Therefore we can replace the above with the following and everything will work exactly the same.

    auto someVar2 = new SomeObject;

    Now to be fair, overuse of the auto keyword can cause problems and make maintaining the code harder.  But in the example above, and in one other case, it makes code easier to read and understand.  So limited use of this keyword is good.  What’s the other case?  Well that would be…

  • Lambda expressions – Lambdas can be hard to explain to those that don’t work with them.  Basically a lambda is an anonymous function that you declare and use in one specific place.  The compiler is responsible for generating the underlying boilerplate code to create the real function and hook everything up.  The main benefit of lambdas is that they allow us to replace one-use functions with an expression that is defined where we actually use it (sort of like a nested function, but not really).  A full discussion of lambdas is beyond this post so refer to the link above. 

    Where do anonymous types come in?  The underlying type of a lambda expression is managed by the compiler, so if you want to store a lambda in a variable you can’t really name a type that would work.  This is the other case where anonymous types come in.  You can create a variable of an anonymous type and assign it a lambda expression.  Then you can use the variable elsewhere without regard for the type.  The syntax for lambda expressions in C++ isn’t easy so I’ll skip the samples.  Refer to the provided link above.

    It is important to remember that lambdas and anonymous types are strictly compile-time features.  The compiler generates the boilerplate code you would normally write to get this to work.  At runtime C++ just calls functions and all variables have a strong type.

Should I Upgrade Now or Wait?

There were a lot of folks asking if they should go ahead and upgrade to VS2010 or wait for vNext.  The simple answer is: upgrade now.  Microsoft generally only supports one version back on compatibility so in order to be ready for vNext you should first get your code running under VS2010.  Furthermore vNext does not yet have a release date.  It could be 6 months or 6 years.  The standard life cycle for VS seems to be 2-3 years so it is possible that vNext will be released in 2012-2013 but it is far too early to tell.  In the meantime VS2010 provides a lot of functionality today that is better than VS2008.  You need to be familiar with this functionality in order to prepare for what is coming in vNext.  So if you haven’t upgraded to VS2010 yet then do so now.

Tech Ed NA 2011 (Halfway) – Cool New Features

TechEd is half over and I’ve only been able to attend a few sessions.  Nevertheless there’s lots of new stuff coming down the pike.  Here’s my favorite things thus far. Note that no release dates (or even guarantees) are available yet.

Juneau (aka SQL Server Data Tools)

Remember when VS2010 came out and suddenly none of your SQL projects could be loaded?  Remember running the SQL Server 2008 R2 installer to get back your SQL projects?  Remember having to keep VS2008 around until the tools were finally released?  Oh wait – you still have to keep VS2008 around for some SQL projects.  Oh well.  MS has sworn that they don’t intend to make that mistake again and Juneau is the answer.  Juneau is a set of tools for developing databases just like you do source code including the editor, solution explorer support, source control, etc – only much, much better.  Rather than working directly with the database Juneau works with a model of the database (sound familiar?).  Juneau can track changes that are made to the model and generate scripts to apply those changes back to the real database (either during development or later) without having to write any SQL scripts or wipe existing data.  And that’s only the beginning.  Juneau is taking advantage of VS’s excellent editor to allow you to work with the database model just like you would source code.

You can learn more about Juneau and why it is going to be so important for database development here: http://msdn.microsoft.com/en-us/data/gg427686.

Task-Based Asynchronous Programming

The Task Parallel Library (TPL) is becoming the preferred way to do asynchronous development since it removes the need to worry about the existing asynchronous patterns (begin/end, event based) and the thread pool. It is available as of .NET v4.  You can read more about it here: http://msdn.microsoft.com/en-us/library/dd460717.aspx.  Honestly if you are doing .NET development then you’ll need to learn about TPL.  It really is pretty straightforward to build even moderately complex pipelines of work. 

Here’s an example.  This is some sample code that starts a task that is similar to any code you might find that loads data from a database or WCF service.  For demo purposes the code simply sleeps but imagine it was doing real work.  Notice that calls to update the UI have to be marshalled to the UI thread.  Also notice that once the task is complete we need to do some work.

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   var task = Task.Factory.StartNew(DoSomeWork, CancellationToken.None, 
                                    TaskCreationOptions.None, TaskScheduler.Default);

   //Do some work once the task completes
   task.ContinueWith(WorkCompleted);
}

private void DoSomeWork ()
{
   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Loading data...");
   LoadData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Processing data...");
   ProcessData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Finalizing data...");
   FinalizeData();
}

private void WorkCompleted ( Task t )
{
   try
   {
      t.Wait();

      Status = "Done";
      CanRun = true;
   } catch (Exception e)
   {
      Status = e.Message;
   };
}

//For demo purposes the work is simulated with sleeps
private void LoadData ()
{  Thread.Sleep(1000);  }

private void ProcessData ( )
{  Thread.Sleep(1000);  }

private void FinalizeData ()
{  Thread.Sleep(1000);  }

private void UpdateStatus ( string text )
{  Status = text;  }

The only real downside is that each task requires a method (or lambda) to run.  This means you’ll end up having lots of little methods that run in some arbitrary thread and do one thing.  This can make understanding the code far harder than it needs to be.  Enter Task-based Asynchronous Programming (TAP).  TAP is a couple of language extensions to C#/VB that make the process of creating tasks simpler by allowing you to put all the task-related stuff into a single method.  Any time asynchronous work needs to be done you use a keyword that tells the compiler that the work needs to be done asynchronously.  During compilation the compiler basically breaks up your code into tasks.  You don’t have to do any of the boilerplate work yourself.  Here’s a sample of how it might look (keep in mind that it is a CTP and anything could change – and also I’m not currently running the CTP to verify the syntax).

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   var task = Task.Factory.StartNew(DoSomeWork, CancellationToken.None, 
                                    TaskCreationOptions.None, TaskScheduler.Default);
}

private async void DoSomeWork ()
{
   try
   {
      Status = "Loading data...";
      await LoadData();

      Status = "Processing data...";
      await ProcessData();

      Status = "Finalizing data...";
      await FinalizeData();

      Status = "Done";                
   } catch (Exception e)
   {
      Status = e.Message;
   };

   CanRun = true;
}

//Each method returns a task that simulates the work with a sleep
private Task LoadData ()
{  return Task.Factory.StartNew(() => Thread.Sleep(1000));  }

private Task ProcessData ( )
{  return Task.Factory.StartNew(() => Thread.Sleep(1000));  }

private Task FinalizeData ()
{  return Task.Factory.StartNew(() => Thread.Sleep(1000));  }

The CTP is available here: http://msdn.microsoft.com/en-us/vstudio/gg316360.  Be careful about installing this on production machines as it does modify the compilers. 

TPL Dataflow

As an extension to TAP, TPL Dataflow (still in development) uses blocks to represent parallel tasks of functionality.  For example there is a block to join two collections into one, another block to transform one collection to another, etc.  These blocks use tasks internally to manage the data sent to them.  Basically each block can be thought of as a parallel algorithm and all that needs to be provided is the data.  Dataflow is still too early in development to be useful but one can imagine an age where we have a workflow designer that allows us to drop blocks down to manage our data without regard for threading and the parallel algorithms needed.  You can read more about TPL Dataflow here: http://msdn.microsoft.com/en-us/devlabs/gg585582.
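As a rough sketch of the idea, using the CTP’s System.Threading.Tasks.Dataflow types (which may well change before release):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

class DataflowDemo
{
   static void Main ()
   {
      //A transform block and an action block, each managing its own tasks
      var square = new TransformBlock<int, int>(x => x * x);
      var print = new ActionBlock<int>(x => Console.WriteLine(x));

      //Link the blocks into a pipeline and feed it data
      square.LinkTo(print);

      for (int i = 1; i <= 10; i++)
         square.Post(i);
   }
}
```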

Visual Studio vNext

It is Microsoft after all.  There are plenty of discussions about things coming in Visual Studio vNext (no release date yet though).  Some are still in early phases of work while others are almost certainly already in.  Still no demos of vNext so either the UI isn’t changing that much (yeah) or it is too early in the process to be showing it.  Either way excitement for the new features is certainly obvious from the attendees I’ve talked to – especially parallel processing.  If you haven’t been looking into parallel processing yet you might want to.  It is an inevitable future for developers.

These are just the topics that I had a chance to sit in on at TechEd.  Refer to the TechEd site (http://northamerica.msteched.com/default.aspx?fbid=yX5WuBwNOm8) for more information on what’s going on and all the cool things coming down the pike.

Reflector Is Dead

(The views expressed in this post are my own and are not reflections of my employer, peers or any company anywhere.  Take it as you wish.)

It was bound to happen.  Honestly did anybody not see this coming?  Reflector is officially a dead product to the majority of .NET developers.  In other words Red Gate (RG) is making it a commercial-only product (http://www.reflector.net/2011/04/why-we-reversed-some-of-our-reflector-decision/).  After some backlash they have decided to release the old version for free but read below as to why this isn’t quite as it seems.

First some history.  Reflector has been around a long time – for free.  Most everybody inside and outside of Microsoft will mention Reflector when talking about disassembling code.  A few years ago the author of Reflector sold the code to RG.  I’m sure the original thought was that Reflector would remain free and RG would be able to make money off Pro versions.  How many times have we heard this story?  Early changes were annoying but tolerable.  We had to install the product now instead of xcopy because, you know, they can’t add licensing to an xcopy image.  We also got the annoying “please buy our Pro version” ads.  Again, annoying but tolerable.

As one could expect RG didn’t make sufficient money off the Pro version to cover their costs.  They had to recoup the initial purchase price plus the cost of ongoing maintenance.  Why would somebody pay money for something that is free?  The only good answer is if the paid version had features worth paying for.  What features did RG add that were actually worth anything?  I can’t think of one.  Let’s see, they added integration with VS.  Funny, I had that by using Tools | External Tools in VS.  They added shell integration.  Again, had that with a simple registry change.  In other words they added absolutely nothing to an existing free tool and expected that people would want the Pro version.  They could have gotten sneaky and started removing features that were previously free but that would have caused an uproar.

So the folks at RG have decided that they can’t sustain a free product anymore and therefore are completely eliminating the free version.  Even worse is that they removed all options for actually getting the free version before (or as) they announced it (just go read the forums).  Fortunately (maybe) they have temporarily added back the free version BUT you must do the following: 1) have an existing copy of v6, 2) check for updates and 3) do so before the deadline (which I believe is August 2011).  After that you’re out of luck.  Even more sinister is that they say it is a free, unsupported version but the fine print says that you actually get an activation license for 5 machines.  So what does that mean if you have to reinstall?  I have absolutely no idea but it sounds like a limited version to me. 

Now one could argue that $35 isn’t a bad price for Reflector and I would wholeheartedly agree IF 1) it was a new product that they had actually written, 2) it provided functionality that was not available elsewhere and 3) it hadn’t been previously available for free for years.  RG probably looked at other products (e.g. Linux) that have both free and paid versions and thought they could do the same.  It didn’t work out.  Their decision is undoubtedly a business one.  While I can understand their decision I don’t have to support it.  After reflecting on Reflector I’ve decided that I will continue to use the free version until such time as a better tool comes along or my activations run out.  Then I’ll switch over to the less useful, but still capable, ILDasm.  All RG has done is anger those who feel betrayed by the “free-to-paid” switch.  I doubt they’ll see any additional money.

What does the future hold for Reflector?  Unfortunately I don’t think it is good.  RG is trying to recoup their costs and I don’t think they’re going to be able to do it.  Most devs are not going to pay for the Pro version if they have the free version (which is probably why the licensing is set up the way it is).  They might get some new customers but I don’t know that it’ll cover the long term.  I expect that Reflector is going to effectively die from lack of money.  The only way I really see Reflector surviving is for RG to release it to open source (again) and let the community support it themselves.  Yes RG would lose money but the way I see it RG needs to cut their losses and move on. 

RIP (free) Reflector.  You were a good tool.  You will be missed.


BuildVer Update

UPDATE: A new update (v2.1) that resolves a few issues:

  • Character set conflicts when using non-ASCII characters
  • Updated documentation on how to integrate version.rci without having VS delete it the next time you open the resource designer
  • Fix for parameters that are built from the product version

I recently had the need to update my BuildVer tool to use in newer projects.  BuildVer is an incremental versioning tool that can be used for automated builds.  It was originally written to allow us to use the Win32 VERSIONINFO resource but have it updated for each build.  Later functionality was added to support generating standard .NET assembly info files.  While the existing tool was sufficient I decided that it was time to update the code and, at the same time, add some new features.

The original code was written over a decade ago in C++.  Later it was updated for .NET support and to use MFC types.  The new tool is completely written in C#.  The original version could generate 1 or 2 files (a managed and unmanaged file) and an optional text file.  Some support was added for specifying the build information on the command line.  This was sufficient to meet the needs of the time.

v2 supports generating any number of output files with their corresponding template files.  The tool works with any text file.  Additionally the configuration file has been expanded to allow for specifying arbitrary parameter name-value sets that can be used in template files.  There are a couple of pre-defined sets for versioning, company information, etc.  Any of the parameter values can be overridden by the command line.  This allows for more flexibility in the tool while keeping the command line options concise.

The readme.txt file contains all the details.  The attached file contains the source code and a v4 version of the tool along with sample templates for C# and C++.  Feel free to use the tool as you see fit.  One note, however: some of the code in the tool (specifically the command line processor) is copyright Carlson Engineering, Inc. and may not be reused without explicit permission.  You can compile and use the tool itself without permission. 

VS 2010 SP1 Favorite Changes

Honestly SP1 doesn’t seem to be a major release compared to previous SPs we’ve seen, but there are still some interesting changes.  Here are my highlights.

  • Non-browser based help –  If there is absolutely one feature that caused an uproar during the betas of VS 2010 it was the new help system.  The help team took a serious beating while trying to justify the new system.  For better or worse we have to live with the results.  One of the big friction points was the reliance on the web browser and online help.  Before final release MS fixed things so we could have offline help, but we were stuck with the browser.  Fortunately the folks at the Helpware Group remedied this with a client UI for viewing help files.  Admittedly it wasn’t the best tool, and it seemed like a new fix was available weekly, but the group really did a great job and they stayed vigilant. 

    Fortunately with the release of SP1 they no longer have to do this as MS has included a client UI for viewing help files.  Hopefully MS has learned a valuable lesson about developers and their help system and, in the future, will not make such sweeping changes without first getting feedback.

  • Silverlight 4 –  Sure you could download the SDK separately but now it is included. 
  • Docking window crash – How long has VS supported docking windows?  How long has multiple monitor support been around?  Evidently nobody at MS does any of this because the RTM version of VS2010 crashes if you dock/undock windows while debugging.  Fortunately MS fixed this with a hotfix and now it is available in SP1.
  • IIS Express – What, never heard of IIS Express?  It is basically IIS bundled by itself and usable on client machines without all the headaches and issues of IIS.  Read about it here.  SP1 introduces support for IIS Express.  I suspect that most devs will migrate to it and that WebDevServer will quickly go away given all its existing limitations.
  • x64 IntelliTrace support – SP1 now supports using IntelliTrace in x64 builds.  Note that this doesn’t mean you can do Edit and Continue in x64.  That issue still remains but at least x64 is starting to get the love.

UPDATE:  A couple of notes.

  1. SP1 can be installed over SP1 beta.
  2. Remember how long it took to install VS 2008 SP1?  Plan for the same hour or so.  This time it just takes that long to install rather than taking 20 minutes to extract the files like it did in VS 2008.


I have to admit that ASP.NET MVC is cool.  The ability to interleave UI elements with control flow statements makes things a lot easier than the traditional databinding of ASP.NET.  The ability to pass strongly-typed objects to methods and auto-magically have pages appear with the data without ugly hacks is just awesome.  The new Razor engine in MVC 3 takes this to an entirely new level with its much simplified, less ASP-like syntax.
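As a quick illustration of that interleaving, here is roughly what a Razor view fragment looks like (the model and its properties are hypothetical examples, not from any particular project):

```cshtml
@* Hypothetical model: a collection of users, each with a Name property *@
<ul>
@foreach (var user in Model.Users)
{
    <li>@user.Name</li>
}
</ul>
```

The control flow and the markup live side by side, with no databinding ceremony and no ASP-style `<% %>` delimiters.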

But there is a problem in MVC land.  We’re building UIs so why can’t I visually see what my page is going to look like within the designer?  It brings back memories of the ASP/HTML days where you’d write your UI and then run IE to see what it looks like.  From a UI designer perspective this is insane.  One of the really big features of ASP.NET (and perhaps Visual Interdev) was that I could write my UI and then switch over and view it without leaving VS.  I could even drag and drop controls onto the form and voila I could make changes until it is just right.  No such ability in MVC.

Pundits generally justify this lack of functionality in MVC in one of the following ways:

  • It’s MVC, you don’t care about the view layout since you’ll use CSS to format it – Bull.  CSS controls basic layout, coloring, fonts, etc. but there is absolutely no way to verify how your view and CSS play together without actually running it.  One of the big “advantages” of MVC is that we are supposed to be able to pass our view on to a UI author, but how many authors do you know who could read MVC blocks and understand them?  It isn’t simply a matter of ignoring the blocks because those blocks control what gets generated when.  So a UI author has to understand MVC just to write the view.  All we’ve accomplished here is to move the complexity from our code to the UI.
  • MVC has extensible view engines so a VS designer would have to support them – So?  IIS renders the various webpages without regard for the underlying rendering engine.  VS itself is designer agnostic.  How hard is it to create a separate designer for each of the view engines that can then be invoked within VS just like in ASP.NET?  All it really takes is some prioritizing.  A UI designer seems pretty important for a UI technology.  Even more so than some of the other features that have been added to MVC.
  • Browsers render things differently so the designer might be wrong – That is true, but didn’t ASP.NET have the same issue?  Technically though, one of the big benefits of MVC is that most of the view is browser-agnostic.  The various support files are there to help ensure that it renders the same across all browsers.  What better way to test this functionality than by using VS as a testbed?
  • It would just be too difficult – I bet they said the same thing before ASP.NET came out.  I don’t believe there is any technical problem that cannot be resolved.  ASP.NET, in my opinion, is the harder technology to design for since it is so monolithic and structured so it seems that MVC should be easier to create a designer for.
  • Use ASP.NET for your UI and MVC for everything else if it is such a big deal – What?  That would be like C# not supporting integers and the justification being that you can just use Fortran for the math.  Mixing and matching equivalent technologies, while useful when necessary, should be avoided.  It complicates the project, requires more skills from the dev team and can cause interop issues. 

Here’s my solution – add full UI designer support for MVC!!!  How hard can it be to invoke the correct view engine in the VS designer?  MVC view engines are far less heavy than ASP.NET so it seems like a relatively easy task.  Since ASP.NET is already supported the infrastructure needed to host a web-based engine is there.  It just needs to be tweaked to allow MVC engines to run.  Even if we cannot guarantee that a page will look exactly the same when rendered in a browser it would at least give us a better idea of how it might look.  The ASP.NET designer supports loading the CSS correctly so an MVC engine should be able to render pretty close to what we’d expect.

To me the lack of a UI designer (IE doesn’t count) is a really big mark against using MVC in a project.  I started using MVC in a couple of projects but since most web apps are UI-heavy I had to back away.  It was simply too painful trying to build the UI in an MVC-friendly manner.  The whole concept of MVC is great.  I just can’t understand why the V part of MVC is so difficult.  After all, isn’t the entire purpose of MVC to allow us to separate M, V and C to simplify development?  It’s really ironic.  They say history repeats itself and I agree.  We started with HTML, moved on to ASP, then to ASP.NET and now we’re back to MVC.  UI-designer-wise we’ve come full circle.  Perhaps the next great framework innovation will take all the great things of MVC and combine them with a usable designer so we can actually get our work done.  Just my opinion.

String Extension Methods

I haven’t posted in a while and don’t have a lot of time today, so I’ll just throw up a copy of the string extension methods I’ve been using over the years.  Here’s a summary of the provided functions (note that not all of them have been fully tested).

  • Combine – Basically acts like String.Join but handles cases where the delimiters are already part of the string.
  • Is… – Equivalent to Char.Is… but applies to an entire string.
  • Left/Right – Gets the leftmost/rightmost N characters in a string.
  • LeftOf/RightOf – Gets the portion of a string to the left/right of a character or string.
  • Mid – Gets a portion of a string.
  • IndexOfNone/LastIndexOfNone – Finds the index of the first character NOT IN a list of tokens.
  • ReplaceAll – Replaces all occurrences of a token with another token.
  • ToCamel/ToPascal – Camel or Pascal cases a string.
  • ToMultipleWords – Pretty prints a string such as taking SomeValue and converting it to “Some Value”.
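To give a feel for the style of the library, here is a rough sketch of how Left and Right might be implemented.  This is an illustrative approximation only; the actual library code may handle edge cases differently.

```csharp
using System;

public static class StringExtensions
{
    // Returns the leftmost count characters, clamped to the string's length.
    // Illustrative sketch; null and negative counts yield an empty string.
    public static string Left(this string source, int count)
    {
        if (String.IsNullOrEmpty(source) || count <= 0)
            return "";
        return source.Substring(0, Math.Min(count, source.Length));
    }

    // Returns the rightmost count characters, clamped to the string's length.
    public static string Right(this string source, int count)
    {
        if (String.IsNullOrEmpty(source) || count <= 0)
            return "";
        return source.Substring(source.Length - Math.Min(count, source.Length));
    }
}
```

So `"Hello World".Left(5)` yields "Hello" and `"Hello World".Right(5)` yields "World", without the caller having to worry about overrunning the string.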

Download the Library Code

The Old (New) Common Controls

The common controls provided by Windows are shipped as part of the OS in the Common Controls library.  When .NET was first introduced we had the v5 version.  When XP was released a new version (v6) was introduced.  Applications that needed to run under W2K or XP had to target v5 while XP-only applications could use v6.  v6 added many new controls and enabled theming.  For example task dialogs, the replacement for message boxes, require v6.  Ironically task dialogs aren’t even part of the framework yet; you have to use an add-on.

Fast forward to today.  .NET v4 only supports XP and above.  Therefore if you’re writing a .NET application you can safely use the v6 controls.  But guess what – .NET uses v5 by default.

Enabling the v6 Controls

Enabling the v6 controls is, unfortunately, not a simple check box.  One approach is to make a specific API call (Application.EnableVisualStyles) before any user interface code.  If you’re writing a WinForms application then the default template generates the necessary code but I’ve always felt like this was a hack.  The code has to run before any code that might do anything related to user interface work otherwise the default v5 controls are loaded. 
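The generated WinForms entry point looks roughly like this (MainForm is a placeholder name for your startup form).  The ordering is the fragile part:

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Must run before any control is created; if any UI code
        // executes first, the v5 common controls get loaded instead.
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);

        Application.Run(new MainForm());  // MainForm is a placeholder
    }
}
```

Nothing enforces that ordering at compile time, which is exactly why it feels like a hack.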

The better approach is to add an application manifest that specifically identifies the v6 common controls.  This ensures that the proper version is loaded before any code is run.  Here’s how it might look:

<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls"
        version="6.0.0.0" processorArchitecture="*"
        publicKeyToken="6595b64144ccf1df" language="*" />
  </dependentAssembly>
</dependency>

Adding a manifest is not hard.  In fact if you are using UAC settings then chances are you already have one.  But honestly this shouldn’t be necessary anymore.  .NET v4 only supports XP+ so a .NET v4 application should load the v6 controls automatically.  But alas this is yet to be the default. 

Problems In Paradise

Unfortunately using a manifest to load the controls introduces some annoying problems.  The first problem is that ClickOnce won’t work anymore.  ClickOnce can’t handle dependent assemblies.  I’ve already blogged about this issue.

The second, and probably more annoying, problem is that you’ll occasionally be debugging your application and find that the manifest seems to be ignored.  This is most obvious when you try to use a task dialog in your code (say for error reporting) and you get an error trying to display the task dialog.  The error will generally say something about needing the v6 controls.  Checking the manifest will confirm that you have set the appropriate options.  You’ll then probably have to run your app and use a tool like Process Explorer to see what DLLs are actually getting loaded (since VS doesn’t seem to display unmanaged DLLs in .NET apps anymore).  You’ll find that v5 is in fact being loaded despite your manifest.  What is going on?

Back in the VS2005 days (approximately) Microsoft bragged about how much faster the debugger was.  They gained this speed boost by using the VS host process (vshost) to load programs to be debugged.  In previous versions when you started the debugger your process would start and all the pre-debug processing would occur.  When debugging was finished all the information was thrown out.  When you started debugging again the information had to be generated again.  vshost actually handles this now.  vshost is loaded when the project loads and remains running until VS shuts down.  This speeds up debugging greatly, but at a cost.  Windows cannot tell the difference between your application and vshost, so Microsoft had to write some code to cause vshost to emulate your process as closely as possible (even down to the configuration files).  But at the end of the day vshost is ultimately the process responsible for running your application. 

Back to the application manifest.  If vshost loads DLLs before it reads your manifest file (which happens at arbitrary times in my experience) then it is possible that vshost will load the wrong DLL.  In the case of the common controls vshost uses v5 by default.  As a result if you are debugging your application through vshost and your manifest requests v6 controls then it might or might not work.  I’ve had it work after 5 debug sessions and fail on the 6th.  So the takeaway is that if you want to use the v6 controls then you need to disable the vshost process (Project properties -> Debug -> Enable the Visual Studio hosting process).  Note that this only applies to Windows applications.


In summary, if you want an XP+-style user interface then you’re going to have to use an application manifest to load the correct version of the controls.  But this will prevent the use of ClickOnce publishing.  It also means that you will likely have to disable the vshost process for debugging.  I hope that Microsoft will fix this issue in a future version of VS.  I’d like to see an option for Windows projects that allows us to select the version of the controls (and perhaps other libraries) that we want to use.  We shouldn’t have to edit manifest files and hack around tools to use what should be the default settings going forward.  If you agree with me then please cast your vote at the Connect site on this issue.