P3.NET

Computer Science Education

Recently I read a blog post from Keith Ward about Computer Science education.  It struck a chord for a couple of reasons.  Firstly I’m a computer professional by day and a CS professor at night.  I teach both basic computer and programming courses at my college.  I’ve been teaching for years.  Secondly I’ve seen the arguments on both sides of the fence about the “best” way to teach students to ensure they are successful.  Thirdly I’ve interviewed hundreds of people over the years for various positions and I’ve seen the results of both good and bad educations.  Rather than posting a long comment on Keith’s blog I figured it would be better to post my opinion here since I’ve been meaning to discuss it for a while.

Computer science is hard.  When I went to college we had math professors who learned to program and then taught us.  The teaching involved the professor going over some programming concepts and throwing up sample code in class and the students going home and working on programming labs for hours.  You might take a 2 semester programming course and then move on to the other computer topics.  In the early courses it was not unheard of to have a project due every 2 weeks.  You were on your own to learn your way around the computer, the IDE and the language.  Debugging was trial and error.  If you really needed help you could talk with the professor but otherwise you were on your own.  In later courses you might have to learn an entirely new language (C or C++ at the time) on your own in order to do the assignments in class because a professor would only accept certain languages. This was above and beyond the course material.  Back then computer programming was new, exciting and in high demand.  Folks were there to get a degree in something that was seen to be rewarding.

Fast forward to today where schools offer 2 or 3 different languages.  The core programming language (often C++) might be a 2 or 3 semester course.  The number of projects has dwindled to 5 to 7.  Students are expected to get most of the information during class including how to debug, use the IDE, write code and test.  Needless to say that is a lot of material to cover in 15 weeks (a standard semester).  There is almost no time for professors to slow down to ensure the class is getting a particularly tough section.  There’s also no time to address any additional information that is outside the curriculum (IDE usage, debugging, etc).  The CS industry is becoming saturated.  CS is no longer a specialty career.  Students aren’t interested in working harder to get a degree when there are other, equally rewarding, careers available.

In order to write code you need to know several things: the language, the IDE, design methodologies and debugging.  Traditionally the language is taught first.  Consequently the IDE has to be learned by the student.  Debugging code is generally an on-the-fly skill learned by the student with assistance from the professor.  Proper design comes later after the programming courses are done.  So out of the box the student has to pick up the IDE and debugging themselves while learning a new language and, even more important, an entirely new thought process.  Statistically speaking most schools see either a low enrollment for the courses or a low retention rate for subsequent courses.  Is it any wonder?  There is so much work involved just to become a basic programmer that most students have neither the time nor the desire to continue.

What can we, the educators, do to solve this issue?  The (somewhat) popular approach is to dumb down the class; assign only a few simple projects, provide simple exams that test only rudimentary knowledge, and/or give large curves.  What does this do?  It encourages the students to continue in the program.  No matter how we dice the problem it eventually is going to boil down to the retention rate.  A department (CS or otherwise), no matter how good it is, cannot survive unless students enroll in the courses.  The harder the courses the lower the retention rate. 

I worked at a college (for one semester) that followed this philosophy wholesale.  We, as instructors, were told that if a student showed up for class then they passed.  It didn’t matter how they did on assignments (of which we should give few) or exams (which should be easy).  The important thing was that students wanted to come back the next semester.  Needless to say I didn’t follow that philosophy.  It was a rough semester for me.  I would never work there again nor did they ever call me back.  In this particular case the school put retention over the quality of the students’ education.  These students were going to pay a lot of money for a degree and then not be able to find a job.

As an interviewer (at least for entry positions) I’m interested in what concepts you can put into play.  It isn’t good enough that you can give me the definition of encapsulation if you can’t also provide me a reason why you would use it.  In my experience you will get one of three students in an interview (again, entry level): book smart, straggler and earner.  The book smart student can tell you the textbook definition of any concept you want.  The problem is they cannot identify it in the wild or know when to use it.  Even worse is any variations on a theme will cause confusion.  These students have been taught to read a book and answer review questions.  Their instructors failed to teach them the why’s and how’s of the concepts.  Such students will easily pass exams but struggle with real-world tasks.  This, in my opinion, is a failing of the educators who put too much emphasis on passing exams and too little on practical usage.

The straggler is the student who didn’t do well in most courses but managed to pass anyway.  They might or might not know the answers to even the most basic questions.  How someone such as this could get a degree is beyond me.  Early on the instructors should have properly tested the student and determined that they do not have sufficient knowledge to pass the course.  It is a fact of life that not everyone can be a CS person.  It is important for the student to identify this early on.  Yes the school will lose a student but it is better than having the student struggle 4 years and then walk away with a degree that they might not be able to use.  Where is the fault here?  In this case it can lie anywhere.  Some instructors are more concerned with passing students, either to keep the college’s retention numbers up or to raise their own approval ratings (http://www.ratemyprofessors.com/).  These instructors are only hurting their students.  I’d rather a student struggle through my class and barely pass but know the material well than pass with flying colors but know nothing.  However it might also be the student’s fault (gasp).  Some students are interested in easy grades so they’ll do whatever they can to do the bare minimum.  As an instructor I can’t force students to try and excel.  If they want to eke by then all I can do is grade fairly and warn them about the potential issues they will face down the road.

Finally the earner has done their best, graduated with a degree and knows the material.  The instructors have done their job and taught the material and the student has done their best to understand it.  This is the student who employers like because they can be molded into a productive employee.  Unfortunately in the modern era of quick degrees and lackluster colleges it is harder to find these students.  Not every student is going to understand every concept they were taught but an earner has done the work and demonstrated their desire to at least try.  Note that an earner isn’t necessarily a straight A student.  When I look at student grades (as an interviewer) I’m generally more interested in students who struggled through at least a couple of courses.  I’m especially interested in students who failed/withdrew from a course and then retook it.  That shows initiative and a desire to succeed.  Failures happen.  It is how you respond to them that tells me what type of person you are.

To wrap up I have to share my opinion of professor ratings.  There are sites available (see above) that allow students to rate professors.  Additionally most schools have mandatory student evaluations of their instructors.  For the school the evaluations determine which instructors to keep and which to let go.  For students it allows them to share their opinions.  The problem is that it is just that: an opinion.  It is a fact of life that not every student is going to like every professor and vice versa.  Unfortunately it is very easy to skew ratings/evaluations because of personality conflicts.  For this reason many instructors are impersonal to students to avoid conflicts (ironically, impacting their rating).  While I think the evaluation of instructors is important to weed out those who are not providing optimal education I think the entire picture must be taken into account.  A single set of bad reviews doesn’t necessarily mean the instructor is bad.  Finding patterns in the reviews is more likely to provide a better picture.

As an instructor I am always interested in hearing feedback about my courses.  Sometimes I get the summary information from the student evals but more often than not I never see them.  As a result I ask the students to provide me feedback as they see fit.  Dangerous?  Yes.  Helpful?  Most definitely.  The most discouraging thing is when I see a lot of bad reviews for an instructor on a site and yet that instructor is still teaching.  Are the students giving the school one set of evaluations while posting another?  I don’t know.  Perhaps the schools are not doing anything with these evaluations in which case it is a waste of everyone’s time.  More likely however the review sites are being filled with negative (or positive) comments by people who are posting their opinions regardless of the actual quality of the instructor.
As a result I’m still not convinced professor rating sites are a good thing but until student evals are publicly disclosed and acted upon by schools there seems to be little alternative for students who want the best educators.

Updating the UI On an Arbitrary Thread

(Originally published: 5 March 2007)

It is very common to have to perform work on a secondary thread.  This allows us to provide a more interactive UI to our users while still allowing us to do lengthy work.  Unfortunately this is not as easy as it first appears.  We will discuss the problem, the standard v1.x solution and the v2 solution.

Why Do We Need It

Windows has always had the cardinal rule that a UI element can only be interacted with on the thread that created it.  This is because internally Windows still uses a message queue to communicate with UI elements.  The message queue is per-thread.  It is created on the thread that created the element.  This ensures that all messages are serialized to the UI and simplifies our work. 

However it is often necessary to do work on secondary threads and still update the UI.  In order for this to work we have to actually send the request to the UI’s thread and then have it pushed onto the message queue.  There are a couple of different ways to do this.  We’ll look at the most common method in v1.x and v2.  .NET v2 also introduced the concept of synchronization contexts which can be used to execute code on specific threads.  Synchronization contexts are a lengthy topic that will not be covered in this article.

v1.x Solution – Invoke and InvokeRequired

Every UI element (a control, from here on out) has an InvokeRequired property.  This property can be called on any thread and indicates whether the calling thread is the thread that created the control.  If it is then the property is false otherwise it is true.  When working with controls in a multithreaded environment you must check this property.  If it is true then you must marshal the request to the UI thread. 

You marshal the request to the UI thread using the Invoke or BeginInvoke/EndInvoke methods.  These methods can be called on any thread.  In each case they accept a delegate, switch to the UI thread and then call the delegate.

if (myControl.InvokeRequired) 
   myControl.Invoke(…);
else 
   …

To make things easier we generally wrap the above code in a method and then create a delegate with the same signature.  This allows us to use one function for two different purposes.  The first “version” is called in all cases and determines whether an invoke is required.  The second “version” actually does the work on the UI thread.  This is a nice way to hide the details from your clients.

public delegate void UpdateProgressDelegate ( int progress );

public void UpdateProgress ( int progress )
{
   //If not on UI thread
   if (myProgress.InvokeRequired)
      myProgress.Invoke(new UpdateProgressDelegate(UpdateProgress), new object[] { progress });
   else
   {
      //Do work here – called on UI thread
      myProgress.Value = progress;
   }
}

As far as your users are concerned this is a clean interface.  They simply call UpdateProgress.  Internally if an invocation is required to get the call on the UI thread then it does so.  However it calls the same method again.  The second time it is called it will be done on the UI thread.  The biggest downside to this code is that: a) you have to write it, and b) you have to define a delegate.  Here is how we might call it.

public void Run ( )
{
   Thread t = new Thread(new ThreadStart(DoWork)); 
   t.IsBackground = true; 
   t.Start();
}

private void DoWork ( ) 
{
   for (int nIdx = 1; nIdx <= 100; ++nIdx) 
   { 
      Thread.Sleep(50); 

      //Defined earlier; marshals to the UI thread as needed
      UpdateProgress(nIdx);
   } 
}

v2 Solution – BackgroundWorker Component

Since this is such a common situation, in v2 Microsoft added the BackgroundWorker component (BWC).  This component’s sole purpose in life is to do work on a secondary thread and then update the UI.  The component requires that you identify the function to run.  It will then run the function on a thread pool thread.  Once it is complete the component will raise the RunWorkerCompleted event.

public void Run ( )
{
   BackgroundWorker bwc = new BackgroundWorker(); 
   bwc.DoWork += DoWork; 
   bwc.RunWorkerCompleted += OnCompleted; 
   bwc.RunWorkerAsync();    //Run it
}

private void OnCompleted ( object sender, RunWorkerCompletedEventArgs e ) 
{ MessageBox.Show("Done"); } 

private void DoWork ( object sender, DoWorkEventArgs e ) 
{
   for (int nIdx = 1; nIdx <= 100; ++nIdx) 
   { Thread.Sleep(50); } 
}

Notice a few things about the code.  Firstly it is smaller as we don’t have to worry about the thread management.  Secondly notice we do not have to worry about the UI thread.  All events exposed by BWC are raised on the UI thread automatically.  Note that you can drop BWC onto your form or control because it shows up in the designer.  Personally I prefer to create it on the fly.

There is a problem with the above code.  It does not actually update the progress while it is running.  In order to raise progress events we need to tell BWC we will be raising progress notifications.  We also need to hook up another function to handle the event.

public void Run ( )
{
   BackgroundWorker bwc = new BackgroundWorker(); 
   bwc.DoWork += DoWork; 
   bwc.RunWorkerCompleted += OnCompleted; 
   bwc.WorkerReportsProgress = true;
   bwc.ProgressChanged += OnProgressChanged;

   bwc.RunWorkerAsync();    //Run it
}

private void OnProgressChanged ( object sender, ProgressChangedEventArgs e ) 
{ myProgress.Value = e.ProgressPercentage; }

private void OnCompleted ( object sender, RunWorkerCompletedEventArgs e ) 
{ MessageBox.Show("Done"); } 

private void DoWork ( object sender, DoWorkEventArgs e ) 
{
   BackgroundWorker bwc = sender as BackgroundWorker; 
   for (int nIdx = 1; nIdx <= 100; ++nIdx) 
   { 
      Thread.Sleep(50); 
      bwc.ReportProgress(nIdx); 
   }
}

The changes are the WorkerReportsProgress property and the ProgressChanged handler.  Basically the UpdateProgress method from earlier becomes OnProgressChanged.  Again, it is called on the UI thread.  We have to report progress using BWC.ReportProgress.

One final feature of BWC that was not in the original code we wrote is cancellation support.  It is often nice to allow users to cancel lengthy tasks.  BWC supports this automatically.  All you have to do is tell it that you will allow cancellation and then periodically check for a cancellation request.  For algorithms that loop you generally check at the beginning of each loop.  The more frequently you check the cancellation flag (CancellationPending) the faster your user will see the process be cancelled.

public void Run ( )
{
   BackgroundWorker bwc = new BackgroundWorker(); 
   bwc.DoWork += DoWork; 
   bwc.RunWorkerCompleted += OnCompleted; 
   bwc.WorkerReportsProgress = true;
   bwc.ProgressChanged += OnProgressChanged;
   bwc.WorkerSupportsCancellation = true;
   bwc.RunWorkerAsync();    //Run it
}

//Assumes bwc has been stored where this handler can reach it
private void btnCancel_Click ( object sender, EventArgs e ) 
{ bwc.CancelAsync(); }

private void OnProgressChanged ( object sender, ProgressChangedEventArgs e ) 
{ myProgress.Value = e.ProgressPercentage; }

private void OnCompleted ( object sender, RunWorkerCompletedEventArgs e ) 
{
   if (e.Cancelled) 
      MessageBox.Show("Cancelled");
   else 
      MessageBox.Show("Done");
}

private void DoWork ( object sender, DoWorkEventArgs e ) 
{
   BackgroundWorker bwc = sender as BackgroundWorker; 
   for (int nIdx = 1; nIdx <= 100; ++nIdx) 
   { 
      if (bwc.CancellationPending)
      {
         e.Cancel = true;
         return;
      }

      Thread.Sleep(50); 
      bwc.ReportProgress(nIdx); 
   }
}

Here we’ve set the property notifying BWC that we support cancellation and we check for cancellation during each iteration of the loop.  We also modified the completion handler to detect whether the operation completed or was cancelled.  The only hard part here is actually hooking a control (such as a button) up to cancel the work.  If you use the designer to drop the BWC on your form then you can use the field that is created to access the BWC.  In the above code it was a local variable so you would have to store it somewhere the cancel button’s click handler could get to it.  I leave that for you to figure out.
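If you want a head start on that exercise, the usual shape is simply to promote the worker from a local variable to a form-level field.  This is only a sketch; the field name m_Worker is mine, not part of the code above.

```csharp
//Sketch only: store the BackgroundWorker in a field so the cancel
//button's click handler can reach it.  m_Worker is a hypothetical name.
private BackgroundWorker m_Worker;

public void Run ( )
{
   m_Worker = new BackgroundWorker();
   m_Worker.DoWork += DoWork;
   m_Worker.RunWorkerCompleted += OnCompleted;
   m_Worker.WorkerReportsProgress = true;
   m_Worker.ProgressChanged += OnProgressChanged;
   m_Worker.WorkerSupportsCancellation = true;
   m_Worker.RunWorkerAsync();
}

private void btnCancel_Click ( object sender, EventArgs e )
{
   //Guard against the worker not running at all
   if (m_Worker != null && m_Worker.IsBusy)
      m_Worker.CancelAsync();
}
```

The IsBusy check avoids an InvalidOperationException if the button is clicked before the worker has been started.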

Why Not Do It Automatically

Some people ask why, if it is so common, Windows does not do this automatically, why the framework developers didn’t do it, or why they shouldn’t do it themselves.  The answer is performance.  As common as it might be most UI interaction still occurs on the appropriate thread.  It is not really worth the performance hit to always handle the situation.  This is a common theme throughout the framework.  Another problem with threading messages is the order.  In general if you send two messages (one right after the other) then you expect the first message to be processed first and then the second message.  With asynchronous calls this might not be true.  Therefore more effort must be put into ensuring that the ordering of messages is not important. 

Most classes in the framework are not thread safe.  The reasoning is that it adds complexity and degrades performance.  A good multithreaded application does not allow just any object to be accessed across threads.  This would be a bad design.  Instead only a select few objects are shared and it is these objects that need to be thread safe, not the entire application.

Until Windows sheds its message queue behavior I’m afraid we are stuck with this limitation.  Although I believe WPF has somewhat remedied the situation…

Persisting Form Settings

(Originally published: 15 September 2007)

This article was originally written when .NET v2 was released.  Some of the information in here may be outdated with newer .NET versions but there is still some good information on persisting form settings and avoiding some pitfalls in the process.  Therefore this article is being posted unchanged from its initial form.

When working with Windows applications users expect that the application will remember its state when it is closed and restore that state when it is opened.  This allows the user to customize the layout of the application to fit their needs.  If Bill only works with a single application at a time he might like to have the application take up the whole screen so he can maximize his view.  Julie, on the other hand, runs several applications at once.  She wants to keep windows laid out in a certain order so she can efficiently copy and paste data between applications.  It doesn’t make sense to require Julie and Bill to share the same window layouts when their individual settings can be stored easily.

In the early betas of .NET v2 this functionality was available.  It seems to have been removed before the final release.  This article will discuss how to add in form state persistence to your WinForms applications.  This article will only deal with a single form and the basic form attributes but it can be extended to multiple forms with many different properties including tool windows and docking state.

Where to Store the Data

For persisting data on a per-user basis you basically have three common choices: registry, isolated storage and settings file.  The registry is commonly used for old-school programming.  The HKEY_CURRENT_USER key is designed for storing per-user settings.  It works a lot like a file system.  The registry is generally not recommended anymore.  It is designed for small pieces of data and has limited support for data formats.  It is, however, secure and requires more than a passing knowledge of Windows to use properly.  Therefore it is a good choice for settings that should generally be protected but not too big in size.  A big limitation of the registry is that it can’t be used in applications that don’t have registry access (like network applications or smart clients).
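For reference, here is a minimal sketch of the registry approach using the Microsoft.Win32 types; the subkey name is a placeholder, not something this article’s code uses.

```csharp
//Sketch only: persisting a per-user value under HKEY_CURRENT_USER.
//The subkey path "Software\MyCompany\MyApp" is a placeholder.
using Microsoft.Win32;

using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyCompany\MyApp"))
{
   //Write a value
   key.SetValue("WindowWidth", 800, RegistryValueKind.DWord);

   //Read it back, falling back to a default if it was never written
   int width = (int)key.GetValue("WindowWidth", 640);
}
```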

Isolated storage is a step up the file system hierarchy.  It looks like a file system (and actually resides in the system somewhere) but its actual location is hidden from applications.  Isolated storage allows any application to store data per-user.  The downside to isolated storage is that it can be a little confusing to use.  Additionally, since the actual path and file information is hidden, it can be hard to clean up corrupt data if something were to go wrong. 
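To show what “a little confusing” looks like in practice, here is a hedged sketch of reading and writing a per-user file in isolated storage; the file name and its contents are placeholders.

```csharp
//Sketch only: writing and reading a small per-user file in isolated storage.
//The actual on-disk location is managed by the runtime, not the application.
using System.IO;
using System.IO.IsolatedStorage;

using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
{
   //Write the settings file
   using (var stream = new IsolatedStorageFileStream("settings.txt", FileMode.Create, store))
   using (var writer = new StreamWriter(stream))
      writer.WriteLine("WindowWidth=800");

   //Read it back
   using (var stream = new IsolatedStorageFileStream("settings.txt", FileMode.Open, store))
   using (var reader = new StreamReader(stream))
   {
      string line = reader.ReadLine();   //e.g. "WindowWidth=800"
   }
}
```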

Finally there is the settings file.  We are talking about the user settings file here, not the application settings file.  Each user can have their own settings file.  This file works similarly to the application property settings file that you can use for application-wide settings.  The difference is that each user has their own copy and it is stored in the user’s profile directory.

Before moving on it is important to consider versioning of the settings.  If you want a user running v1 of your application to be able to upgrade to v2 and not lose any of their settings then you must be sure to choose a persistence location that is independent of the version of the application.  The registry is a good choice here as isolated storage and user settings are generally done by application version.  Still it doesn’t make sense in all cases to be backward compatible with the settings file.  You will have to decide on a case by case basis.
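If you do stick with the user settings file, ApplicationSettingsBase exposes an Upgrade method that copies values forward from the previous version’s file.  A common sketch of this pattern follows; UpgradeRequired is a hypothetical user-scoped bool setting (defaulting to true) that I am assuming, not something defined in this article.

```csharp
//Sketch only: carrying user-scoped settings across a version bump.
//"UpgradeRequired" is a hypothetical user-scoped bool setting that
//defaults to true in each new version.
if (Properties.Settings.Default.UpgradeRequired)
{
   //Copy values from the previous version's settings file
   Properties.Settings.Default.Upgrade();

   Properties.Settings.Default.UpgradeRequired = false;
   Properties.Settings.Default.Save();
}
```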

FormSettings

Let’s start with the basic class we’ll use.  Ultimately, since we might have quite a few forms to persist we want to create a base class (FormSettings) that will take care of the details.  We can derive from this class for custom form settings as needed.  In this article we will use the user’s setting file so we derive from ApplicationSettingsBase.  If you want to use a different storage mechanism then you’ll need to make the appropriate changes. 

Since we want our class to work with multiple forms we need to make each form unique.  We will use the SettingsKey to make each form unique.  Each form must specify the key it will use.  Here is the start of our class.

public class FormSettings : ApplicationSettingsBase 
{
   public FormSettings ( string prefix ) 
   { 
      SettingsKey = prefix; 
   } 

   public void Load ( Form target )
   {
      //Load
   }

   public void Save ( Form target )
   {
      //Save
   }
}

When the form is loaded it will call Load to load its settings.  When the form is closed it will call Save.  Here is sample code for our form.

protected override void OnLoad ( EventArgs e ) 
{
    base.OnLoad(e); 

    m_Settings.Load(this); 
}

protected override void OnFormClosing ( FormClosingEventArgs e ) 
{
    base.OnFormClosing(e); 

    if (!e.Cancel) 
        m_Settings.Save(this);
}

private FormSettings m_Settings = new FormSettings("MainForm");


Persisting Properties

At a minimum a user would expect to be able to move the window around and resize it to fit their needs.  Therefore we need to load and save the following properties: DesktopLocation, Size and WindowState.  DesktopLocation specifies the position, relative to the top-left corner of the desktop, of the top-left corner of the form.  The Size property indicates the width and height of the form.  Finally the WindowState is used to track when a window is minimized or maximized.  We will discuss this property shortly.

To load and save properties using the settings file we need to define a property for each item we want to save.  We need to mark the property as user-scoped and we need to get the value from and set the value to the settings file.  This is pretty straightforward so we will not dwell on the details.  Here is the code for getting and setting the property values.  One point of interest is that we use special values when we can not find the property values.  This will come back later when we talk about loading the settings.

public class FormSettings : ApplicationSettingsBase  
{
   … 

   [UserScopedSetting] 
   public Point Position 
   { 
      get 
      { 
         object value = this["Position"]; 
         return (value != null) ? (Point)value : Point.Empty; 
      } 
      set { this["Position"] = value; } 
   } 

   [UserScopedSetting] 
   public Size Size 
   { 
      get 
      { 
         object value = this["Size"]; 
         return (value != null) ? (Size)value : Size.Empty; 
      } 
      set { this["Size"] = value; } 
   } 

   [UserScopedSetting] 
   public FormWindowState State 
   { 
      get 
      { 
         object value = this["State"]; 
         return (value != null) ? (FormWindowState)value : FormWindowState.Normal; 
      } 
      set { this["State"] = value; } 
   } 
}


Saving the Settings

Saving the settings is really easy.  All we have to do is set each of the user-scoped property values and then call Save on the base settings class.  This will flush the property values to the user’s settings file.

public void Save ( Form target ) 
{
   //Save the values 
   Position = target.DesktopLocation; 
   Size = target.Size; 
   State = target.WindowState; 

   //Save the settings
   Save(); 
}


Loading the Settings

Loading the settings requires a little more work.  On the surface it is similar to the save process: get each property value and assign it to the form.  The only issue that comes up is what to do when no settings have been persisted (or they are corrupt).  In this case I believe the best option is to not modify the form’s properties at all and, therefore, let it use whatever settings were defined in the designer.  Let’s do that now and see what happens.

public void Load ( Form target ) 
{
   //If the saved position isn't empty we will use it 
   if (!Position.IsEmpty) 
      target.DesktopLocation = Position; 

   //If the saved size isn't empty we will use it 
   if (!Size.IsEmpty) 
      target.Size = Size; 

   target.WindowState = State; 
}

It seems to work.  This is too easy, right?  Now try minimizing the form and then closing it.  Open the form again.  Can’t restore it, can you? 

Minimizing and Maximizing Forms

The problem is that for a minimized/maximized form the DesktopLocation and Size properties are not reliable.  Instead we need to use the RestoreBounds property which tracks the position and size of the form in its normal state.  If we persist this property when saving then when we load we can restore the normal position and size and then set the state to cause the form to minimize or maximize properly.  But there is another problem.  RestoreBounds isn’t valid unless the form is minimized or maximized.  Therefore our save code has to look at the state of the form and use DesktopLocation when normal and RestoreBounds when minimized/maximized.  Note that RestoreBounds.Size is valid in both cases although whether this is by design or not is unknown.  The load code remains unchanged as we will set the values based upon the form’s normal state and then tell the form to minimize or maximize.  Here is the updated code.

public void Save ( Form target ) 
{
   //Save the values 
   if (target.WindowState == FormWindowState.Normal) 
      Position = target.DesktopLocation; 
   else 
      Position = target.RestoreBounds.Location; 

   Size = target.RestoreBounds.Size; 
   State = target.WindowState; 

   //Save the settings 
   Save(); 
}


Disappearing Windows

The final problem with our code is the problem of disappearing windows.  We’ve all seen it happen.  You start an application and the window shows up in the Task Bar but not on the screen.  Windows doesn’t realize that the application’s window is off the screen.  This often occurs when using multiple monitors and we switch the monitors around or when changing the screen resolution.  Fortunately we can work around it.

During loading we need to verify that the window is going to be in the workspace of the screen (which includes all monitors).  If it isn’t then we need to adjust the window to appear (at least slightly) on the screen.  We can do this by doing a quick check to make sure the restore position is valid and update it if not. 

What makes this quite a bit harder is the fact that screen coordinates are based off the primary monitor.  Therefore it is possible to have negative screen coordinates.  We also don’t want the Task Bar or other system windows to overlap so we will use the working area of the screen which is possibly smaller than the screen size itself.  .NET has a method to get the working area given a control or point but it returns the closest match.  In this case we don’t want a match.  .NET also has SystemInformation.VirtualScreen which gives us the upper and lower bounds of the entire screen but it doesn’t take the working area into account.

For this article we’ll take the approach of calculating the working area manually by enumerating the monitors on the system and finding the smallest and largest working areas.  Once we have the work area we need to determine if the caption of the form fits inside this area.  The caption height is fixed by Windows but the width will match whatever size the form is.  We do a little math and voilà.  If the caption is visible then we will set the position otherwise, in this case, we simply let the form reset to its initial position.  Here is the load code.

public void Load ( Form target ) 
{
   //If the saved position isn't empty we will use it 
   if (!Position.IsEmpty) 
   { 
      //Verify the position is visible (at least partially) 
      Rectangle rcArea = GetWorkingArea(); 

      //We want to confirm that any portion of the caption is visible 
      //The caption is the same width as the window but the height is fixed 
      //from the top-left of the window 
      Size sz = (Size.IsEmpty) ? target.Size : this.Size; 
      Rectangle rcForm = new Rectangle(Position, new Size(sz.Width, SystemInformation.CaptionHeight)); 
      if (rcArea.IntersectsWith(rcForm)) 
         target.DesktopLocation = Position; 
   } 

   //If the saved size isn't empty we will use it 
   if (!Size.IsEmpty) 
      target.Size = Size; 

   target.WindowState = State; 
}
private Rectangle GetWorkingArea ()
{
   int minX, maxX, minY, maxY;
   minX = minY = Int32.MaxValue;
   maxX = maxY = Int32.MinValue;

   foreach (Screen scr in Screen.AllScreens)
   {
      Rectangle area = scr.WorkingArea;

      if (area.Bottom < minY) minY = area.Bottom;
      if (area.Bottom > maxY) maxY = area.Bottom;

      if (area.Top < minY) minY = area.Top;
      if (area.Top > maxY) maxY = area.Top;

      if (area.Left < minX) minX = area.Left;
      if (area.Left > maxX) maxX = area.Left;

      if (area.Right < minX) minX = area.Right;
      if (area.Right > maxX) maxX = area.Right;
   }

   return new Rectangle(minX, minY, (maxX - minX), (maxY - minY));
}
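For completeness, the save side is just the reverse: capture the placement before the form closes, typically from a FormClosing handler.  Here is a minimal sketch assuming the same Position, Size and State members that Load uses (the member names are illustrative).  Note that when the form is minimized or maximized we read Form.RestoreBounds so the normal-state rectangle is persisted rather than the minimized or maximized one.

public void Save ( Form target )
{
   State = target.WindowState;

   if (target.WindowState == FormWindowState.Normal)
   {
      Position = target.DesktopLocation;
      Size = target.Size;
   } else
   {
      //Persist the normal bounds, not the minimized/maximized rectangle
      Position = target.RestoreBounds.Location;
      Size = target.RestoreBounds.Size;
   }
}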

UpdateUserType (Visual Studio Addin)

Here is another addin I wrote for VS2008 and then updated for VS2010.  In C++ you can define your own custom keywords and VS will color them differently if you tell it to (Tools -> Options -> Fonts and Colors).  There are a few problems with the VS solution.

  1. You have to edit the UserType.dat file.
  2. You have to restart VS.
  3. You have only one set of keywords for all of C++.

The second and third problems are the killers here.  As you are developing code you will often find a set of keywords you’ll want to add.  To do so you’ll have to open the .dat file, add the keywords and then restart VS.  Even worse is that if you are working on an MFC project then you’ll likely want some MFC keywords but if you switch to Win32 then you want a different set of keywords.  To resolve this problem you’ll have to keep multiple .dat files that you can swap in and out.  Enter UpdateUserType.

UpdateUserType does a few things. Firstly it allows you to separate your keywords into different files (i.e. C++, MFC, Win32).  Secondly it can merge all these files into a single .dat file.  Thirdly the addin detects when the .dat file changes and requests that VS refresh its keyword list.  

Using the addin UI you can add a list of .txt files to be monitored.  Whenever one of these files is modified the addin merges them all into a new .dat file.  This allows you to separate keywords by area and enable only the keywords appropriate for your current project.  My personal technique is to keep one or more of them open in a separate editor so that I can add new keywords as I go through code.  Saving the files in the editor causes the .dat file to be regenerated.
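The merge step itself is simple.  Here is a hypothetical sketch of the idea (the method name and structure are illustrative, not the addin’s actual code; it needs the System.Collections.Generic and System.IO namespaces): read each monitored .txt file, drop blank lines and duplicates, and write the combined list out as the .dat file.

public static void MergeKeywordFiles ( string[] sourceFiles, string targetFile )
{
   List<string> keywords = new List<string>();

   foreach (string file in sourceFiles)
   {
      foreach (string line in File.ReadAllLines(file))
      {
         //Skip blank lines and keywords we have already seen
         string keyword = line.Trim();
         if (keyword.Length > 0 && !keywords.Contains(keyword))
            keywords.Add(keyword);
      }
   }

   File.WriteAllLines(targetFile, keywords.ToArray());
}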

When the .dat file changes the addin requests that VS refresh the keyword list.  When this happens VS re-reads the .dat file and begins coloring the keywords again.  This feature allows you to add keywords on the fly and have VS begin coloring them almost instantly. 

Attached is the source and binaries for the VS2010 version of this addin.  Additionally I’ve included some starter .txt files that you can use.  A caveat is in order.  The .XML file where the addin stores the list of files to be merged supports relative or absolute paths.  However the UI and the code only support absolute paths.  Therefore if you use the UI to change the file list then the paths are converted to absolute paths.

To install the addin copy the contents of the Bin directory to your %My Documents%\Visual Studio 2010\Addins directory.  You might need to enable the addin via Tools -> Add-in Manager.  If the addin does not show up (it’ll appear under the Tools menu) then you might need to force a re-registration.

Code

CodeHTMLer (Visual Studio Addin)

CodeHTMLer is a utility written by Weshaggard and available here.  The utility allows you to convert code to HTML for insertion into a website or blog.  It happens to be the tool I use for code on this site.  I know, I know.  There are many tools already available for this but I prefer CodeHTMLer for several reasons.

  1. It supports multiple languages.
  2. The HTML is partially configurable via the XML file it loads.
  3. Some code elements can be ignored rather than having to generate style information for everything.
  4. It can use either existing CSS styles or inline styles.
  5. It is fast.

The only issue I had with it was that I wanted to use it inside Visual Studio.  Being a developer I did the logical thing and wrote an addin for it.  The addin was originally written for VS2008 and then updated for VS2010.  The attached code runs under VS2010.  The addin wraps CodeHTMLer and adds the following features.

  1. Copy as HTML is exposed as an option in the editor.
  2. A UI is added to allow you to change the options in the XML file to control how HTML is generated.
  3. Uses the VS code model to determine which language definition to use for formatting so you can format VB one way and C# another, for example.

I only ran into a couple of problems with CodeHTMLer, mainly revolving around the manner in which it expected to be used.  I modified (and annotated) the areas I changed.  Furthermore I needed a few additional options for controlling formatting, so I extended the language definition code to add a few extra attributes along with the serialization logic.  So the version included here is a modified version of the one available on CodePlex.  As a final note, the attached language definition file is modified to match how VS names languages; the original version did not use the same name for C# as VS does.

IMPORTANT: I don’t take credit for CodeHTMLer or any of its source.  I only take credit for the addin code.  Please support the original author for CodeHTMLer.

The current project is for VS2010 but you can recompile the code if you want a VS2008 version.  To use this code extract the binaries to your addin directory (%My Documents%\Visual Studio 2010\Addins).  In VS you probably need to enable loading of the addin via the Tools -> Add-in Manager command.  If it does not show up then you might need to modify the addin file to force a re-registration.  The addin shows up as an item in the Tools menu and as a context menu option when you right-click in the code area.

FAQ: Rectangular selection in Visual Studio

Q: I need to delete some code from several lines and the code is well formatted.  Can I use some sort of regular expression to replace the code?

A: You could but that would be overkill.  VS has supported rectangular selection for a while.  With rectangular selection you can select the same column range across several consecutive lines and then delete the selected text in one step. 

Do the following to use rectangular selection.

  1. Place the caret on the column where you want to begin selection.
  2. Hold down the ALT key.
  3. Drag the mouse to select the region you want to select.  If you cross lines then the same columns are selected on all the lines.
  4. Release the ALT key and then press DEL to delete the columns from each line.

Starting with VS2010 some new capabilities were added.  They are discussed here.  My personal favorite is the ability to replace the same block of text on all selected lines (such as renaming private to public). 

To do that just select the region that you want to replace and then begin typing the replacement text.

FAQ: Temporary Projects

Q: I create a lot of test projects and I am getting tired of cleaning up the temporary projects directory and clearing out the recent projects menu.  Is there some way to get VS to do this for me?

A: Yes there is.  Under Tools -> Options -> Projects and Solutions there is an option called Save new projects when created.  Unchecking this option allows you to create (most) new projects without saving them.  The project still ends up in your temporary projects folder but it is automatically deleted when you close the project unless you choose to save it.  Furthermore it is not added to the MRU project list.

There are some restrictions to this option.  Firstly not all project types support it: C++ does not, while C# and VB do.  Secondly you cannot create multiple projects in a single solution without saving the solution first.  You also cannot make some project changes without first saving the project.  Avoid using the Save All button as this will prompt you to save the project.  However you can add files, compile and debug without saving.

FAQ: Toolbox is slow to populate

Q: Whenever I accidentally mouse over the Toolbox it locks up VS for a while.  What is going on?

A: This should only happen the first time you mouse over the Toolbox after starting VS.  VS is populating the window with the controls defined within your solution in addition to the pre-defined controls.  This can be a slow process.  To speed it up go to Tools -> Options -> Windows Forms Designer -> General and set AutoToolboxPopulate to false.  This tells VS not to scan your solution looking for controls to add to the Toolbox.

 

FAQ: Adding a lot of files to a solution

Q: I need to add a lot of existing files and folders to a project but using Add Existing Item is slow.  Is there a faster way?

A: When you select a project in Solution Explorer you might have noticed the toolbar at the top of the window changing.  One of the options (in most projects) is Show All Files.  This option shows all files and folders under the project folder whether they are in the project or not.  Files and folders not in the project are shown grayed out.  You can select the files/folders (multiple selection is supported), right-click and choose Include in Project to add them to the project.  Including a folder automatically includes any children in the folder.  This is a great way to add a lot of items when they already reside in the project directory.

If the files do not already reside in the directory then you can drag them from Windows Explorer onto the project.  Unfortunately this does not work in all projects.  Also be careful about what actually happens when you drop the file.  For projects like C#, which assume all files reside under the project directory, a copy is performed.  However some project types may instead create a link to the original file location.
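Under the hood both approaches simply end up as item entries in the project file.  For an old-style C# project the entries look something like the following (the paths here are just examples); a file that lives outside the project directory and is added as a link gets a Link child element, while a copied file is a plain Compile item:

<ItemGroup>
  <Compile Include="Helpers\StringUtil.cs" />
  <Compile Include="..\Shared\Logger.cs">
    <Link>Shared\Logger.cs</Link>
  </Compile>
</ItemGroup>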

FAQ: Access Denied when signing an assembly

Q: When I try to sign an assembly I get an access denied error.  What is going on?

A: By default you cannot sign an assembly unless you have administrative privileges.  This is due to the security set on the key container used for signing.  To enable your account to sign assemblies you will need to modify the NTFS security on the folder %ALLUSERS%\Application Data\Microsoft\Crypto\RSA\MachineKeys to give your user account (or better yet, a group your account is a member of) the necessary rights: read & execute, list folder contents and modify.  You can then sign assemblies without administrative privileges.