P3.NET

ServiceBase.OnStart Peculiarity

When implementing a service you really have to have a good understanding of how Windows services work.  If you do it wrong then your service won't work properly or, worse, can cause problems in Windows.  Services must be responsive and be good citizens when working with the Service Control Manager (SCM).  The .NET implementation hides a lot of these details but there is hidden complexity under the hood that you must be aware of.  But first, a brief review of how Windows services work.

Windows Services Internals (Brief)

All services run under the context of the SCM.  The SCM is responsible for managing the lifetime of a service.  All interaction with a service must go through the SCM.  The SCM must be thread safe since any number of processes may be interacting with a service at once.  In order to ensure that a single service does not cause the entire system to grind to a halt the SCM manages each service on a separate thread.  The exact internal details are not formally documented but we know that the SCM uses threads to work with each service. 

Each service is in one of several different states such as started, paused or stopped.  The SCM relies on the state of a service to determine what the service will and will not support.  Since state changes can take a while most states have a corresponding pending state such as start pending or stop pending.  The SCM expects a service to update its state as it runs.  For example when the SCM tells a service to start the service is expected to move to the start pending state and, eventually, the started state.  The SCM won’t wait forever for a service to respond.  If a service does not transition fast enough then the SCM considers the service hung.  To allow for longer state changes a service must periodically notify the SCM that it needs more time.

One particularly important state change is the stop request.  When Windows shuts down the SCM sends a stop request to all services.  Every service is expected to stop quickly.  The SCM gives a (configurable) amount of time for each service to stop before it is forcefully terminated.  If it weren't for this behavior a hung or errant service could cause Windows shutdown to freeze.

A Day In the Life Of a Service

A service is normally a standard Windows process and hence has a WinMain.  However a single process can host multiple services (many of the Windows services are this way) so WinMain itself is not the service entry point.  Instead a service process must register the list of supported services and their entry points with the SCM via a call to StartServiceCtrlDispatcher.  This method, which is a blocking call, hooks the process up to the SCM and doesn't return until all listed services are stopped.  The method takes each service name and its entry point (normally called ServiceMain).  When the SCM needs to start a service it calls the entry point on a separate thread (hence each service gets its own thread in addition to the process's main thread).  The entry point is required to call RegisterServiceCtrlHandlerEx to register a function that handles service requests (the control handler).  It also must set the service state to start pending.  Finally it should initialize the service and then exit.  The thread will go away but the service will continue to run.

One caveat to the startup process is the fact that it must be quick.  The SCM uses an internal lock to serialize startup.  Therefore services cannot start at the same time and a long running service can stall the startup process.  For this reason the general algorithm is to set the state to start pending, spawn a worker thread to do the real work and then set the service to running.  Any other variant can slow the entire system down.

All future communication with the service will go through the control handler function.  Each time the function is called (which can be on different threads) the service will generally change state.  This will normally involve changing to the pending state, doing the necessary work and then setting the service to the new state.  Note that in all cases the SCM expects the service to respond quickly.

.NET Implementation

In .NET the ServiceBase class hides most of the state details from a developer.  To ensure that the service is a good citizen the .NET implementation hides all this behind a few virtual methods that handle start, stop, pause, etc.  All a developer need do is implement each one.  The base class handles setting the state to pending and to the final state while the virtual call is sandwiched in between.  However the developer is still responsible for requesting additional time if needed.  Even the registration process is handled by the framework.  All a developer needs to do is call ServiceBase.Run and pass in the service(s) to host.
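
Here's a minimal sketch of that pattern (the MyService class and its service name are hypothetical):

using System.ServiceProcess;

public class MyService : ServiceBase
{
   public MyService ()
   {
      ServiceName = "MyService";
   }

   protected override void OnStart ( string[] args )
   {
      // Kick off the real work and return quickly (see below)
   }

   protected override void OnStop ()
   {
      // Signal the worker to shut down
   }
}

public static class Program
{
   public static void Main ()
   {
      // The framework registers the service(s) with the SCM for us
      ServiceBase.Run(new ServiceBase[] { new MyService() });
   }
}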

All is wonderful and easy in .NET land – or is it?  If you read the documentation carefully you'll see a statement that says the base implementation hides all the details of threading so you can just implement the state methods as needed, but this is not entirely true.  All the implementations except OnStart behave the same way.  When the control handler is called it sets the service to the pending state, executes the corresponding virtual method asynchronously and returns.  Hence the thread used to send the request is not the same thread that handles the request and ultimately sets the service state.  This makes sense and meets the requirements of the SCM.  More importantly it means the service can take as long as it needs to perform the request without negatively impacting the SCM.

The start request is markedly different.  When the start request is received the base class moves the service to the start pending state, executes the OnStart virtual method asynchronously and then…waits for it to complete before moving the service to the start state.  See the difference?  The start request thread won’t actually return until OnStart completes.  Why does the implementation bother to call the method asynchronously just to block waiting for it to complete?  Perhaps the goal was to make all the methods behave symmetrically in terms of thread use.  Perhaps the developers didn’t want the service to see the real SCM thread.  Nevertheless it could have used a synchronous call and behaved the same way. 

What does this mean for a service developer?  It means your OnStart method still needs to run very fast (create a thread and get out), even in the .NET implementation, even though all the other control methods can be implemented without regard for the SCM.  If OnStart takes too long then it'll block the SCM.  More importantly the OnStart method needs to periodically request additional time using RequestAdditionalTime to avoid the SCM thinking it is hung.
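
To make that concrete, here's a hedged sketch of a well-behaved OnStart (DoWork is a hypothetical worker method):

protected override void OnStart ( string[] args )
{
   // Ask for more time up front if initialization can be slow (value is in milliseconds)
   RequestAdditionalTime(10000);

   // Do the real work on a separate thread so the SCM isn't blocked
   var worker = new System.Threading.Thread(DoWork) { IsBackground = true };
   worker.Start();

   // Return immediately; the base class then moves the service to the started state
}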

Summary

When implementing a service in .NET it is still important to understand how native services and the SCM work together.  The OnStart method must be fast and well behaved to avoid causing problems with Windows.  The other control methods are less restrictive but still require awareness of how the base implementation works.  Writing a service is trivial as far as coding goes but services require a great deal of care in order to ensure they behave properly.  This doesn’t even get into the more complex issues of security, installation, event logging and error handling which are broad topics unto themselves.

Tech Ed 2011 Summary

Tech Ed was great this year.  I've already mentioned a few topics that were of great interest.  Here are some more topics that deserve further attention.

F#

Ever heard of functional programming or a functional language?  No?  Not surprised.  It is one of several categories that language designers use to identify languages.  Functional languages have their basis in mathematics so they look and act like math expressions (functions).  What makes functional programming so useful is that data is generally passed from one function to another.  In fact the data is often defined purely in terms of the functions that produce it.  This makes these languages great for mathematical processes.  It also solves one of the more difficult problems of multi-threaded programming – shared data.  Normally with MT programs you have to use locks to protect shared data.  With a functional language this isn't necessary because the output of each function feeds into the next like a pipeline.  Even more important is that most functional programming involves arrays/lists of items.  Functional languages can split these lists up, process the pieces in parallel and put it all back together without the programmer even worrying about a lock.  Needless to say functional languages are great for parallel processing.
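
The same idea is visible even from C# using PLINQ: each step below is a pure function over a list, so the runtime can partition the work across cores without the programmer taking a single lock.  A minimal sketch:

using System;
using System.Linq;

class PipelineDemo
{
   static void Main ()
   {
      var values = Enumerable.Range(1, 1000000);

      // Pure functions over a list; no shared state means no locks
      var total = values.AsParallel()
                        .Select(x => (long)x * x)
                        .Sum();

      Console.WriteLine(total);
   }
}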

F# happens to be a functional language.  Written by the folks at Microsoft, it runs on the .NET framework.  It has actually been out a couple of years but most people haven’t been exposed to it until now.  In fact if you have VS2010 it is already on your machine.  You can use the F# Interactive tool window to start playing around with it.  It comes with some great tutorials as well.  However functional programming is as far beyond procedural programming as procedural is beyond English writing.  Therefore it will take time to understand.  You probably will never be creating your UIs in a functional language but it is likely that one day you’ll be using a parallel library that is itself written in F#.  So now may be a good time to take a look at it.  Here’s a starter link: http://www.tryfsharp.org/Default.aspx.  If you are running VS 2010 then you already have the necessary tools.

LightSwitch

Honestly I wrote this off as yet another tool for wannabe programmers.  After hearing more about it I realize that it may very well be the future IT programming tool.  Today most IT groups use Access or Excel for creating the simple apps they need to get their work done.  Who hasn't had to deal with crazy UIs and hacked-together VBA?  LS will change that, albeit with a slightly higher learning curve.  LS allows non-developers to create the same types of apps (minus the reporting) but using modern tools and the entire .NET framework.  When such an app eventually winds up in the hands of the “real” devs we won't want to shoot ourselves over the limitations of some scripting language.  We'll be working in VB/C# and the framework!!

LS does have a higher entry requirement than existing tools.  It is designed for 2 or 3 tiers.  The client application is Silverlight hosted via IIS.  The back end can be SQL Server (any edition), WCF services or whatever.  The IIS requirement (I believe it ships with Cassini) is probably going to be the biggest headache but it'll be worth it.  Who hasn't had someone call them and complain about a bug in some Access app?  When you ask what version they're running they say Office 2007.  “No, what version of the app?”  “I don't know.”  Argh!!!  SL removes the deployment/update issue while still allowing the app to run just about anywhere.

LS is still in beta but a GoLive license is available so companies can begin using it today.  Here’s the MSDN for LightSwitch: http://www.microsoft.com/visualstudio/en-us/lightswitch/

C++

Talk to any C++ developer and you'll hear grumblings about the lack of features and support in later versions of VS.  Well, that's changing.  In VS2010 RTM, Intellisense was completely rewritten.  It is now fast, accurate and reliable.  In SP1 the C++ team added some basic editor features that were either not fully baked in RTM or missing altogether.  vNext promises to bring C++ back up to par with the existing .NET languages.  Exactly what those updates will be we'll have to wait and see.

Another area where VS2010 really advanced C++ is the new C++0x standard that should be approved this year.  C++ now has better support for parallel processing and VS2010 already supports some of it.  Here are some of the C++ standard changes already in VS2010:

  • shared_ptr – Updates to this type (introduced in VS2008) make it easier to work with and more reliable.
  • Concurrency runtime – Adds support for parallel processing in C++.  This is different from OpenMP, which is about multithreading an app.  ConcRT works with existing apps and is easily added to them, for the most part, just by including ppl.h.  It will make the process of optimizing existing code to take advantage of multiple cores easier.
  • Anonymous types – Ever heard of the auto keyword?  Most people haven't, but it was in the original C++ spec.  Its purpose was to allow a programmer to optimize code generation by telling the compiler some additional information about variables.  Nobody really used it and it was deprecated.  However, in order to support newer constructs, this keyword has been changed to represent something completely different – anonymous types.  An anonymous type is really just a type where the compiler figures out the underlying type based upon usage rather than the programmer specifying it.  We aren't losing any of the strong type checking C++ is known for.  All we're doing is telling the compiler to figure it out.  Here's an example.
    SomeObject* someVar = new SomeObject;

    It really is redundant to specify the type twice.  The compiler knows what the type is and so do we.  Therefore we can replace the above with the following and everything will work exactly the same.

    auto someVar2 = new SomeObject;

    Now to be fair, overuse of the auto keyword can cause problems and make maintaining the code harder.  But in the example above, and in one other case, it makes code easier to read and understand.  So limited use of this keyword is good.  What's the other case?  Well that would be…

  • Lambda expressions – Lambdas can be hard to explain to those who don't work with them.  Basically though, a lambda is an anonymous function that you declare and use in one specific place.  The compiler is responsible for generating the underlying boilerplate code to create the real function and hook everything up.  The main benefit of lambdas is that they allow us to replace one-use functions with an expression that is defined where we actually use it (sort of like a nested function, but not really).  A full discussion of lambdas is beyond this post so refer to the link above.

    Where do anonymous types come in?  The underlying type of a lambda expression is managed by the compiler, so if you want to store a lambda in a variable you can't really name a type that would work.  This is the other case where anonymous types come in.  You can create a variable of an anonymous type and assign it a lambda expression.  Then you can use the variable elsewhere without regard for the type.  The syntax for lambda expressions in C++ isn't easy so I'll forgo samples.  Refer to the provided link above.

    It is important to remember that lambdas and anonymous types are strictly compile-time features.  The compiler generates the boilerplate code you would normally write to get this to work.  At runtime C++ just calls functions and all variables have a strong type.

Should I Upgrade Now or Wait?

There were a lot of folks asking if they should go ahead and upgrade to VS2010 or wait for vNext.  The simple answer is: upgrade now.  Microsoft generally only supports one version back on compatibility so in order to be ready for vNext you should first get your code running under VS2010.  Furthermore vNext does not yet have a release date.  It could be 6 months or 6 years.  The standard life cycle for VS seems to be 2-3 years so it is possible that vNext will be released in 2012-2013, but it is far too early to tell.  In the meantime VS2010 provides a lot of functionality today that is better than VS2008.  You need to be familiar with this functionality in order to prepare for what is coming in vNext.  So if you haven't upgraded to VS2010 yet then do so now.

Tech Ed NA 2011 (Halfway) – Cool New Features

TechEd is half over and I've only been able to attend a few sessions.  Nevertheless there's lots of new stuff coming down the pike.  Here are my favorite things thus far.  Note that no release dates (or even guarantees) are available yet.

Juneau (aka SQL Server Data Tools)

Remember when VS2010 came out and suddenly none of your SQL projects could be loaded?  Remember running the SQL Server 2008 R2 installer to get back your SQL projects?  Remember having to keep VS2008 around until the tools were finally released?  Oh wait – you still have to keep VS2008 around for some SQL projects.  Oh well.  MS has sworn that they don't intend to make that mistake again, and Juneau is the answer.  Juneau is a set of tools for developing databases just like you do source code, including the editor, solution explorer support, source control, etc. – only much, much better.  Rather than working directly with the database, Juneau works with a model of the database (sound familiar?).  Juneau can track changes that are made to the model and generate scripts to apply those changes back to the real database (either during development or later) without having to write any SQL scripts or wipe existing data.  And that's only the beginning.  Juneau is taking advantage of VS's excellent editor to allow you to work with the database model just like you would source code.

You can learn more about Juneau and why it is going to be so important for database development here: http://msdn.microsoft.com/en-us/data/gg427686.

Task-Based Asynchronous Programming

The Task Parallel Library (TPL) is becoming the preferred way to do asynchronous development since it removes the need to worry about the existing asynchronous patterns (begin/end, event based) and the thread pool. It is available as of .NET v4.  You can read more about it here: http://msdn.microsoft.com/en-us/library/dd460717.aspx.  Honestly if you are doing .NET development then you’ll need to learn about TPL.  It really is pretty straightforward to build even moderately complex pipelines of work. 

Here’s an example.  This is some sample code that starts a task that is similar to any code you might find that loads data from a database or WCF service.  For demo purposes the code simply sleeps but imagine it was doing real work.  Notice that calls to update the UI have to be marshalled to the UI thread.  Also notice that once the task is complete we need to do some work.

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   var task = Task.Factory.StartNew(DoSomeWork, CancellationToken.None, 
                                    TaskCreationOptions.None, TaskScheduler.Default)
                          .ContinueWith(WorkCompleted);
}

private void DoSomeWork ()
{
   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Loading data...");
   LoadData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Processing data...");
   ProcessData();

   Dispatcher.BeginInvoke((Action<string>)UpdateStatus, "Finalizing data...");
   FinalizeData();
}

private void WorkCompleted ( Task t )
{
   try
   {
      t.Wait();

      Status = "Done";
      CanRun = true;
   }
   catch (Exception e)
   {
      Status = e.Message;
   }
}

private void LoadData ()
{
   Thread.Sleep(5000);
}

private void ProcessData ( )
{
   Thread.Sleep(5000);
}

private void FinalizeData ()
{
   Thread.Sleep(300);
}

private void UpdateStatus ( string text )
{
   Status = text;
}

The only real downside is that each task requires a method (or lambda) to run.  This means you'll end up having lots of little methods that run on some arbitrary thread and do one thing.  This can make understanding the code far harder than it needs to be.  Enter Task-based Asynchronous Programming (TAP).  TAP is a couple of language extensions to C#/VB that make the process of creating tasks simpler by allowing you to put all the task-related code into a single method.  Any time asynchronous work needs to be done you use a keyword that tells the compiler that the work needs to be done asynchronously.  During compilation the compiler basically breaks up your code into tasks.  You don't have to do any of the boilerplate work yourself.  Here's a sample of how it might look (keep in mind that it is a CTP and anything could change – and also I'm not currently running the CTP to verify the syntax).

private void button1_Click ( object sender, RoutedEventArgs e )
{
   CanRun = false;

   var task = Task.Factory.StartNew(DoSomeWork, CancellationToken.None, 
                                    TaskCreationOptions.None, TaskScheduler.Default);
}

private async void DoSomeWork ()
{
   try
   {
      Status = "Loading data...";
      await LoadData();

      Status = "Processing data...";
      await ProcessData();

      Status = "Finalizing data...";
      await FinalizeData();

      Status = "Done";
   }
   catch (Exception e)
   {
      Status = e.Message;
   }

   CanRun = true;
}

private Task LoadData ()
{
   // These helpers now return a Task so they can be awaited
   return Task.Factory.StartNew(() => Thread.Sleep(5000));
}

private Task ProcessData ()
{
   return Task.Factory.StartNew(() => Thread.Sleep(5000));
}

private Task FinalizeData ()
{
   return Task.Factory.StartNew(() => Thread.Sleep(300));
}

The CTP is available here: http://msdn.microsoft.com/en-us/vstudio/gg316360.  Be careful about installing this on production machines as it does modify the compilers. 

TPL Dataflow

As an extension to TAP, TPL Dataflow (still in development) uses blocks to represent units of parallel functionality.  For example there is a block to join two collections into one, another block to transform one collection into another, etc.  These blocks use tasks internally to manage the data sent to them.  Basically each block can be thought of as a parallel algorithm; all that needs to be provided is the data.  Dataflow is still too early in development to be useful, but one can imagine a day when we have a workflow designer that allows us to drop blocks down to manage our data without regard for threading or the underlying parallel algorithms.  You can read more about TPL Dataflow here: http://msdn.microsoft.com/en-us/devlabs/gg585582.
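
As a rough sketch of the block idea, using the block types from the CTP's System.Threading.Tasks.Dataflow namespace (the API may well change before release):

using System;
using System.Threading.Tasks.Dataflow;

class DataflowDemo
{
   static void Main ()
   {
      // One block transforms each value, another consumes the results;
      // each block manages its own tasks internally
      var doubler = new TransformBlock<int, int>(n => n * 2);
      var printer = new ActionBlock<int>(n => Console.WriteLine(n));

      doubler.LinkTo(printer);

      for (int i = 0; i < 10; ++i)
         doubler.Post(i);

      Console.ReadLine();   // sketch only - keep the process alive while the pipeline drains
   }
}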

vNext

It is Microsoft after all.  There are plenty of discussions about things coming in Visual Studio vNext (no release date yet though).  Some are still in early phases of work while others are almost certainly already in.  Still no demos of vNext, so either the UI isn't changing that much (yay!) or it is too early in the process to be showing it.  Either way, excitement for the new features is certainly obvious from the attendees I've talked to – especially parallel processing.  If you haven't been looking into parallel processing yet you might want to.  It is an inevitable future for developers.

These are just the topics that I had a chance to sit in on at TechEd.  Refer to the TechEd site (http://northamerica.msteched.com/default.aspx?fbid=yX5WuBwNOm8) for more information on what’s going on and all the cool things coming down the pike.

Reflector Is Dead

(The views expressed in this post are my own and are not reflections of my employer, peers or any company anywhere.  Take it as you wish.)

It was bound to happen.  Honestly did anybody not see this coming?  Reflector is officially a dead product to the majority of .NET developers.  In other words Red Gate (RG) is making it a commercial-only product (http://www.reflector.net/2011/04/why-we-reversed-some-of-our-reflector-decision/).  After some backlash they have decided to release the old version for free but read below as to why this isn’t quite as it seems.

First some history.  Reflector has been around a long time – for free.  Most everybody inside and outside of Microsoft will mention Reflector when talking about disassembling code.  A few years ago the author of Reflector sold the code to RG.  I’m sure the original thought was that Reflector would remain free and RG would be able to make money off Pro versions.  How many times have we heard this story?  Early changes were annoying but tolerable.  We had to install the product now instead of xcopy because, you know, they can’t add licensing to an xcopy image.  We also got the annoying “please buy our Pro version” ads.  Again, annoying but tolerable.

As one could expect, RG didn't make sufficient money off the Pro version to cover their costs.  They had to recoup the initial purchase price plus the cost of ongoing maintenance.  Why would somebody pay money for something that is free?  The only good answer is if the paid version had features worth paying for.  What features did RG add that were actually worth anything?  I can't think of one.  Let's see, they added integration with VS.  Funny, I had that by using Tools -> External Tools in VS.  They added shell integration.  Again, had that with a simple registry change.  In other words they added absolutely nothing to an existing free tool and expected that people would want the Pro version.  They could have gotten sneaky and started removing features that were previously free, but that would have caused an uproar.

So the folks at RG have decided that they can’t sustain a free product anymore and therefore are completely eliminating the free version.  Even worse is that they removed all options for actually getting the free version before (or as) they announced it (just go read the forums).  Fortunately (maybe) they have temporarily added back the free version BUT you must do the following: 1) have an existing copy of v6, 2) check for updates and 3) do so before the deadline (which I believe is August 2011).  After that you’re out of luck.  Even more sinister is that they say it is a free, unsupported version but the fine print says that you actually get an activation license for 5 machines.  So what does that mean if you have to reinstall?  I have absolutely no idea but it sounds like a limited version to me. 

Now one could argue that $35 isn't a bad price for Reflector and I would be wholeheartedly in agreement IF 1) it was a new product that they had actually written, 2) it provided functionality that was not available elsewhere and 3) it hadn't been previously available for free for years.  RG probably looked at other products (e.g. Linux) that have both free and paid versions and thought they could do the same.  It didn't work out.  Their decision is undoubtedly a business one.  While I can understand their decision I don't have to support it.  After reflecting on Reflector I've decided that I will continue to use the free version until such time as a better tool comes along or my activations run out.  Then I'll switch over to the less useful, but still capable, ILDasm.  All RG has done is anger those who feel betrayed by the “free-to-paid” switch.  I doubt they'll see any additional money.

What does the future hold for Reflector?  Unfortunately I don't think it is good.  RG is trying to recoup their costs and I don't think they're going to be able to do it.  Most devs are not going to pay for the Pro version if they have the free version (which is probably why the licensing is set up the way it is).  They might get some new customers but I don't know that it'll cover the long term.  I expect that Reflector is going to effectively die from lack of money.  The only way I really see Reflector surviving is for RG to release it to open source (again) and let the community support it themselves.  Yes RG would lose money, but the way I see it RG needs to cut their losses and move on.

RIP (free) Reflector.  You were a good tool.  You will be missed.

BuildVer Update

UPDATE: A new version (v2.1) resolves a few issues:

  • Character set conflicts when using non-ASCII characters
  • Updated documentation on how to integrate version.rci without having VS delete it the next time you open the resource designer
  • Fix for parameters that are built from the product version

I recently had the need to update my BuildVer tool to use in newer projects.  BuildVer is an incremental versioning tool that can be used for automated builds.  It was originally written to allow us to use the Win32 VERSIONINFO resource but have it updated for each build.  Later functionality was added to support generating standard .NET assembly info files.  While the existing tool was sufficient I decided that it was time to update the code and, at the same time, add some new features.

The original code was written over a decade ago in C++.  Later it was updated for .NET support and to use MFC types.  The new tool is completely written in C#.  The original version could generate 1 or 2 files (a managed and unmanaged file) and an optional text file.  Some support was added for specifying the build information on the command line.  This was sufficient to meet the needs of the time.

v2 supports generating any number of output files with their corresponding template files.  The tool works with any text file.  Additionally the configuration file has been expanded to allow for specifying arbitrary parameter name-value sets that can be used in template files.  There are a couple of pre-defined sets for versioning, company information, etc.  Any of the parameter values can be overridden by the command line.  This allows for more flexibility in the tool while keeping the command line options concise.

The readme.txt file contains all the details.  The attached file contains the source code and a .NET v4 build of the tool along with sample templates for C# and C++.  Feel free to use the tool as you see fit.  One note however: some of the code in the tool (specifically the command line processor) is copyright Carlson Engineering, Inc. and may not be reused without explicit permission.  You can compile and use the tool itself without permission.

VS 2010 SP1 Favorite Changes

Honestly SP1 doesn't seem to be a major release compared to previous SPs we've seen, but there are still some interesting changes.  Here are my highlights.

  • Non-browser based help – If there is absolutely one feature that caused an uproar during the betas of VS 2010 it was the new help system.  The help team took a serious beating while trying to justify the new system.  For better or worse we have to live with the results.  One of the big friction points was the reliance on the web browser and online help.  Before final release MS fixed things so we could have offline help, but we were stuck with the browser.  Fortunately the folks at the Helpware Group remedied this with a client UI for viewing help files.  Admittedly it wasn't the best tool, and it seemed like a new fix was available weekly, but the group really did a great job and they stayed vigilant.

    With the release of SP1 they no longer have to do this, as MS has included a client UI for viewing help files.  Hopefully MS has learned a valuable lesson about developers and their help system and, in the future, will not make such sweeping changes without first getting feedback.

  • Silverlight 4 –  Sure you could download the SDK separately but now it is included. 
  • Docking window crash – How long has VS supported docking windows?  How long has multiple monitor support been around?  Evidently nobody at MS does any of this because the RTM version of VS2010 crashes if you dock/undock windows while debugging.  Fortunately MS fixed this with a hotfix and now it is available in SP1.
  • IIS Express – What, never heard of IIS Express?  It is basically IIS bundled by itself and usable on client machines without all the headaches and issues of IIS.  Read about it here.  SP1 introduces support for IIS Express.  I suspect that most devs will migrate to it and that WebDevServer will quickly go away given all its existing limitations.
  • x64 IntelliTrace support – SP1 now supports using IntelliTrace in x64 builds.  Note that this doesn't mean you can do Edit and Continue in x64.  That issue still remains, but at least x64 is starting to get the love.

UPDATE:  A couple of notes.

  1. SP1 can be installed over SP1 beta.
  2. Remember how long it took to install VS 2008 SP1?  Plan for the same hour or so.  This time it just takes that long to install, rather than taking 20 minutes to extract the files like VS 2008 did.

ASP.NET MVC Is the New ASP

I have to admit that ASP.NET MVC is cool.  The ability to interleave UI elements with control flow statements makes things a lot easier than the traditional databinding of ASP.NET.  The ability to pass strongly-typed objects to methods and auto-magically have pages appear with the data without ugly hacks is just awesome.  The new Razor engine in MVC 3 takes this to an entirely new level with its much simplified, less ASP-like syntax.
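
For example, a Razor view interleaving markup with control flow might look something like this (the Customer model is hypothetical):

@model IEnumerable<Customer>

<ul>
   @foreach (var customer in Model)
   {
      <li>@customer.Name</li>
   }
</ul>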

But there is a problem in MVC land.  We're building UIs, so why can't I visually see what my page is going to look like within the designer?  It brings back memories of the ASP/HTML days where you'd write your UI and then run IE to see what it looked like.  From a UI designer perspective this is insane.  One of the really big features of ASP.NET (and perhaps Visual InterDev) was that I could write my UI and then switch over and view it without leaving VS.  I could even drag and drop controls onto the form and, voila, make changes until it was just right.  No such ability in MVC.

Pundits generally justify this lack of functionality in MVC in one of the following ways:

  • It's MVC, you don't care about the view layout since you'll use CSS to format it – Bull.  CSS controls basic layout, coloring, fonts, etc. but there is absolutely no way to verify how your view and CSS play together without actually running it.  One of the big “advantages” of MVC is that we are supposed to be able to pass our view on to a UI author, but how many authors do you know who could read MVC blocks and understand them?  It isn't simply a matter of ignoring the blocks because those blocks control what gets generated when.  So a UI author has to understand MVC just to write the view.  All we've accomplished here is to move the complexity from our code to the UI.
  • MVC has extensible view engines so a VS designer would have to support them – So?  IIS renders the various webpages without regard for the underlying rendering engine.  VS itself is designer agnostic.  How hard is it to create a separate designer for each of the view engines that can then be invoked within VS just like in ASP.NET?  All it really takes is some prioritizing.  A UI designer seems pretty important for a UI technology – even more so than some of the other features that have been added to MVC.
  • Browsers render things differently so the designer might be wrong – That is true, but didn't ASP.NET have the same issue?  Technically one of the big benefits of MVC is that most of the view is browser-agnostic.  The various support files are there to help ensure that it renders the same across all browsers.  What better way to test this functionality than by using VS as a testbed?
  • It would just be too difficult – I bet they said the same thing before ASP.NET came out.  I don’t believe there is any technical problem that cannot be resolved.  ASP.NET, in my opinion, is the harder technology to design for since it is so monolithic and structured so it seems that MVC should be easier to create a designer for.
  • Use ASP.NET for your UI and MVC for everything else if it is such a big deal – What?  That would be like C# not supporting integers and the justification being that you can just use Fortran for the math.  Mixing and matching equivalent technologies, while useful when necessary, should be avoided.  It complicates the project, requires more skills from the dev team and can cause interop issues. 

Here’s my solution – add full UI designer support for MVC!!!  How hard can it be to invoke the correct view engine in the VS designer?  MVC view engines are far less heavy than ASP.NET so it seems like a relatively easy task.  Since ASP.NET is already supported the infrastructure needed to host a web-based engine is there.  It just needs to be tweaked to allow MVC engines to run.  Even if we cannot guarantee that a page will look exactly the same when rendered in a browser it would at least give us a better idea of how it might look.  The ASP.NET designer supports loading the CSS correctly so an MVC engine should be able to render pretty close to what we’d expect.

To me the lack of a UI designer (IE doesn't count) is a really big mark against using MVC in a project.  I started using MVC in a couple of projects but since most web apps are UI-heavy I had to back away.  It was simply too painful trying to build the UI in an MVC-friendly manner.  The whole concept of MVC is great.  I just can't understand why the V part of MVC is so difficult.  After all, isn't the entire purpose of MVC to allow us to separate M, V and C to simplify development?  It's really ironic.  They say history repeats itself and I agree.  We started with HTML, moved on to ASP, then to ASP.NET and now we're back to MVC.  UI designer wise we've come full circle.  Perhaps the next great framework innovation will take all the great things of MVC and combine them with a usable designer so we can actually get our work done.  Just my opinion.

String Extension Methods

Haven't posted in a while and don't have a lot of time today so I'll just throw up a copy of the string extension methods I've been using over the years.  Here's a summary of the provided functions (note that not all of them have been fully tested); a rough sketch of a couple of them appears after the list.

  • Combine – Basically acts like String.Join but handles cases where the delimiters are already part of the string.
  • Is… – Equivalent to Char.Is… but applies to an entire string.
  • Left/Right – Gets the leftmost/rightmost N characters in a string.
  • LeftOf/RightOf – Gets the portion of a string to the left/right of a character or string.
  • Mid – Gets a portion of a string.
  • IndexOfNone/LastIndexOfNone – Finds the index of the first character NOT IN a list of tokens.
  • ReplaceAll – Replaces all occurrences of a token with another token.
  • ToCamel/ToPascal – Camel or Pascal cases a string.
  • ToMultipleWords – Pretty prints a string such as taking SomeValue and converting it to “Some Value”.
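
To give a feel for the pattern (this is an illustrative sketch, not the library's exact code), Left and RightOf might look like this:

using System;

public static class StringExtensions
{
   // Gets the leftmost count characters, or the whole string if it is shorter
   public static string Left ( this string source, int count )
   {
      if (source == null)
         throw new ArgumentNullException("source");

      return source.Length <= count ? source : source.Substring(0, count);
   }

   // Gets the portion of the string to the right of the first occurrence of
   // the token, or an empty string if the token does not appear
   public static string RightOf ( this string source, char token )
   {
      if (source == null)
         throw new ArgumentNullException("source");

      int index = source.IndexOf(token);
      return index >= 0 ? source.Substring(index + 1) : String.Empty;
   }
}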

Download the Library Code

The Old (New) Common Controls

The common controls provided by Windows are shipped as part of the OS in the Common Controls library.  When .NET was first introduced we had the v5 version.  When XP was released a new version (v6) was introduced.  Applications that needed to run under W2K or XP had to target v5 while XP-only applications could use v6.  v6 added many new controls and enabled theming.  For example task dialogs, the replacement for message boxes, require v6.  Ironically the task dialog isn't even part of the framework yet; you have to use an add-on.

Fast forward to today.  .NET v4 only supports XP and above.  Therefore if you’re writing a .NET application you can safely use the v6 controls.  But guess what – .NET uses v5 by default.

Enabling the v6 Controls

Enabling the v6 controls is, unfortunately, not a simple check box.  One approach is to make a specific API call (Application.EnableVisualStyles) before any user interface code runs.  If you're writing a WinForms application then the default template generates the necessary code, but I've always felt like this was a hack.  The code has to run before any code that might do anything related to user interface work, otherwise the default v5 controls are loaded.
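
For reference, the template-generated code looks roughly like this (MainForm stands in for your startup form):

using System;
using System.Windows.Forms;

static class Program
{
   [STAThread]
   static void Main ()
   {
      // Must run before any control is created or the v5 controls get loaded
      Application.EnableVisualStyles();
      Application.SetCompatibleTextRenderingDefault(false);

      Application.Run(new MainForm());
   }
}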

The better approach is to add an application manifest that specifically identifies the v6 common controls.  This ensures that the proper version is loaded before any code is run.  Here's how it might look:

<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32"
                      name="Microsoft.Windows.Common-Controls"
                      version="6.0.0.0"
                      processorArchitecture="*"
                      publicKeyToken="6595b64144ccf1df"
                      language="*" />
  </dependentAssembly>
</dependency>

Adding a manifest is not hard.  In fact if you are using UAC settings then chances are you already have one.  But honestly this shouldn’t be necessary anymore.  .NET v4 only supports XP+ so a .NET v4 application should load the v6 controls automatically.  But alas this is yet to be the default. 

Problems In Paradise

Unfortunately using a manifest to load the controls introduces some annoying problems.  The first problem is that ClickOnce won’t work anymore.  ClickOnce can’t handle dependent assemblies.  I’ve already blogged about this issue.

The second, and probably more annoying, problem is that you'll occasionally be debugging your application and find that the manifest seems to be ignored.  This is most obvious when you try to use a task dialog in your code (say for error reporting) and you get an error trying to display it.  The error will generally say something about needing the v6 controls.  Checking the manifest will confirm that you have set the appropriate options.  You'll then probably have to run your app and use a tool like Process Explorer to see what DLLs are actually getting loaded (since VS doesn't seem to display unmanaged DLLs in .NET apps anymore).  You'll find that v5 is in fact being loaded despite your manifest.  What is going on?

Back in the VS2005 days (approximately) Microsoft bragged about how much faster the debugger was.  They gained this speed boost by using the VS host process (vshost) to load programs being debugged.  In previous versions, when you started the debugger your process would start and all the pre-debug processing would occur.  When debugging was finished all that information was thrown out, and when you started debugging again it had to be generated all over.  vshost handles this now.  vshost is loaded when the project loads and remains running until VS shuts down.  This speeds up debugging greatly, but at a cost.  Windows cannot tell the difference between your application and vshost, so Microsoft had to write some code to make vshost emulate your process as closely as possible (even down to the configuration files).  But at the end of the day vshost is ultimately the process that runs.

Back to the application manifest.  If vshost loads DLLs before it reads your manifest file (which happens at arbitrary times in my experience) then it is possible that vshost will load the wrong DLL.  In the case of the common controls vshost uses v5 by default.  As a result if you are debugging your application through vshost and your manifest requests v6 controls then it might or might not work.  I’ve had it work after 5 debug sessions and fail on the 6th.  So the takeaway is that if you want to use the v6 controls then you need to disable the vshost process (Project properties -> Debug -> Enable the Visual Studio hosting process).  Note that this only applies to Windows applications.

Summary

In summary, if you want an XP+ style user interface then you're going to have to use an application manifest to load the correct version of the controls.  But this will prevent the use of ClickOnce publishing.  It also means that you will likely have to disable the vshost process for debugging.  I hope that Microsoft will fix this issue in a future version of VS.  I'd like to see an option for Windows projects that allows us to select the version of the controls (and perhaps other libraries) that we want to use.  We shouldn't have to edit manifest files and hack around tools to use what should be the default settings going forward.  If you agree with me then please cast your vote at the Connect site on this issue.

ClickOnce Is a Complex XCopy

ClickOnce was designed to make deployment of applications easier.  The goal was to allow a user to start using an application without having to install it.  Additionally ClickOnce was supposed to allow developers to auto-update their applications with hardly any code.  One of the premises of ClickOnce is that you don't need to be an administrator to use a program.  Overall it met these goals.  The reality though is that ClickOnce is horribly, horribly out of date for modern times, and yet Microsoft continues to push it as “the” easiest deployment strategy.  My recent attempt at using it for a simple client application met with a resounding failure.

Here’s the list of benefits ClickOnce supposedly brings to the table.

  • Publish an application to a website or file share
  • Does not require administrative privileges to deploy or run
  • No setup program needs to be run but an application can still add itself to the Start Menu, Desktop and create an icon in Programs and Features
  • Automatically update an application with only a handful of lines of code

ClickOnce does all this but unfortunately it is simply too old to be useful in modern .NET applications.  Let me identify the areas where I had trouble when I recently tried to use ClickOnce.

Publish Our Way… Or Else

In my particular case I need to be able to control how publishing occurs based upon whether a client has a web server (unlikely) or file server available.  Ideally I would like a user to be able to enter a URL into a browser and my application automatically downloads and runs.  Think WCF service here.  Unfortunately ClickOnce doesn’t support this approach.  Even worse is that, unlike almost all of the .NET Framework, it is completely locked down so you can’t even override the implementation.  You’re stuck with the publishing options that ClickOnce exposes.  Who thought it was a good idea to completely prevent extending ClickOnce?  If there is anything we’ve learned over the years it is that developers like to extend things.  Here ClickOnce just doesn’t have the flexibility, or extensibility, needed to allow custom publishing options.

No Administrators Here – Ever

One of the important features of ClickOnce is the ability to run an application without administrative privileges.  This is important and useful for a lot of applications.  However with the advent of Vista and UAC the rules have changed a little.  It is now more common to have an administrative application run as a normal application until such time as administrative privileges are needed. 

Visual Studio allows you to add an application manifest to your application.  The manifest contains a variety of things but on the security side it allows you to specify how you want to support Vista and UAC.  On one side is the default setting where UAC kicks in when needed.  But on the other side is an option to require administrative privileges automatically without requiring the user to change shortcut settings or anything else crazy like that.  Yes there do exist applications that require administrative privileges.

Once again ClickOnce will thwart your efforts.  If you have an application manifest with UAC settings in your application then ClickOnce won’t work.  Why?  I have no idea.  It seems that the designers of ClickOnce thought that this requirement meant that an application shouldn’t be allowed to have administrative privileges, ever.  Hence if you plan to use a manifest to control UAC then you can’t use ClickOnce. 

No Setup Required – Even If It Already Exists

ClickOnce gets this one mostly right.  You don't need to install any files to get a ClickOnce application working.  But what if you need to install some additional files?  Well then you are going to have to ditch ClickOnce.  Sounds like a fair tradeoff, but here is one very big gotcha – this includes versions of system binaries that have been around as long as .NET.

Specifically I’m talking about the v6 Common Controls added in Windows XP.  To avoid breaking code Microsoft made the decision to make the v6 CCs optional.  Most applications would instead target the v5 CCs.  Sounded great because it ensured that an application written under XP would run under Windows 2000 without change.  Fast forward to today.  .NET v4 only supports XP+ so any v4 .NET application can safely use the “new” v6 CCs where all the new Windows controls reside.  But if you’re using ClickOnce then you are once again out of luck.  The issue is that in order to get the v6 CCs to load properly you have to add them as a dependency in your application manifest  (I’ll blog later about this hack).  Adding a dependency to the manifest causes the publish process to fail because the publisher assumes you are relying on a binary that isn’t available to ClickOnce (even though it will be there for all versions of Windows). 

So, in summary, if you want to use ClickOnce then you are stuck with a pre-XP user interface.  Now you might think you can work around this limitation by building your app, publishing it and then editing the post-build manifest, but guess what – it won't work.  You see, ClickOnce doesn't want you tampering with any files once published, so it checksums them.  If any file is modified then ClickOnce fails.  In order to modify a file you'd have to make the modification and then run a command line tool to regenerate the publish data.  Needless to say, switching from Visual Studio to the command line to work around a severe limitation in ClickOnce just doesn't sit well with me.

Autoupdate Your Application – Sometimes

Finally we have autoupdate.  Autoupdate sounds like a great idea – push a new version of your application to a website and users can be notified to download it.  All this wonderful technology is wrapped in a simple static class you can use from your application; some conditions may apply.  These conditions include the fact that the application must have been deployed via ClickOnce to begin with (i.e. you can't install via a CD and then autoupdate).  You also cannot be running in the debugger.  Do what???  Yes, that is right, the designers of ClickOnce figured that nobody would be debugging code.  If you call the autoupdate code within the debugger you'll get an exception.  The logic of this design eludes me.  I'm also confused as to how they could even test it, but that is a different story.  Nevertheless, in order to use autoupdate you have to first verify your app isn't running in the debugger and that it was originally deployed using ClickOnce (they at least provide a method for that check).
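
A hedged sketch of those guards and the update call, using the ApplicationDeployment class from System.Deployment.Application:

using System.Deployment.Application;
using System.Diagnostics;

public static class Updater
{
   public static void UpdateIfAble ()
   {
      // Per the caveats above, both checks are needed before touching the API
      if (Debugger.IsAttached || !ApplicationDeployment.IsNetworkDeployed)
         return;

      var deployment = ApplicationDeployment.CurrentDeployment;
      if (deployment.CheckForUpdate())
         deployment.Update();   // the new version is used on the next restart
   }
}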

I fail to see why either of these requirements is necessary for the proper implementation of updating.  No problem, says a developer, .NET is extensible so I’ll just extend the infrastructure.  Or not.  Again, the designers of ClickOnce felt autoupdate was either too tricky or too special to allow any sort of extensions.  The only extending you can do is to rewrite it yourself. 

Summary

If you felt like I’ve been harsh on ClickOnce you’d be right.  I think it is deserved. This isn’t some legacy technology that we’re talking about.  It is one of the preferred deployment approaches that .NET recommends and yet it is completely outdated and inflexible.  I believed the hype and samples that I saw on ClickOnce so much that I was going to use it in a recent project.  Within the first few hours I realized I’d have to either hack my way around ClickOnce or give up on it entirely.  So now I have to write my own publishing code and autoupdate code just because ClickOnce is outdated.

Here’s my feature request list for the ClickOnce team.

  • ClickOnce has to be extensible.  Publishing and updates are the primary areas.
  • ClickOnce must support the newer OS features like UAC and v6 Common Controls (and others).
  • ClickOnce needs to be fully integrated into Visual Studio without the need to run command line tools for something as common as changing a post-published file.
  • Autoupdating needs to work whether I’ve deployed via ClickOnce or via an alternative manner.  After all it really just needs to know where to go check for updates.
  • ClickOnce has to work within the debugger.  If I cannot debug my app in Visual Studio then the tool is useless.  I shouldn't have to code around technologies.

I’m hoping that one day ClickOnce will be brought up to snuff with the rest of the framework but for now I consider it no better than the old Visual Studio Setup Projects – good for novices but a complete waste of time for “real” applications.