P3.NET

Distinguishing .NET Versions

.NET versions can be confusing when you are trying to determine what version of the framework a program uses.  Gone are the days of all binaries carrying the same version number.  This article will attempt to explain the current (as of v4) versioning of .NET assemblies.

Versioning Made Confusing

First it is necessary to clarify that there are actually two different version numbers in .NET: framework and runtime.  The runtime version determines what version of the CLR a program uses.  The framework version determines what version of the framework is being used.  In this article framework versions will be preceded by a v (as in v1.0 or v2.0) whereas the runtime version will be listed as simply a number.

The following table identifies the different runtime and framework versions.

Framework Version   Assembly Version   Runtime Version
1.0                 1.0.0.0            1.0
1.1                 1.1.0.0            (?)
2.0                 2.0.0.0            2.0
3.0                 3.0.0.0            2.0
3.5                 3.5.0.0            2.0
4.0                 4.0.?.?            4.0

As can be seen from the table there are currently three versions of the runtime while there are quite a few versions of the framework.  The table ignores updates to these versions due to service packs.  Implicit in the table is the fact that installing any version of the framework from v2.0 through v3.5 gives you the v2 runtime.  Therefore it is not necessary to install v2.0, then v3.0 and then v3.5; just install v3.5 if you want v2.0 support.  v4 is a new runtime, so if an application needs v2 support then an earlier framework must be installed as well.

As evident from the table the current runtime version is 2 and it has been in use for several framework versions.  You can confirm this by looking at some of the core system assemblies such as Mscorlib, System or System.Windows.Forms.  Each of these assemblies shipped with .NET originally or was added in v2.0 and therefore carries an assembly version of 2.0.0.0 in Visual Studio.

When v3.0 came out it made a few changes to the core assemblies (a service pack, if you will) and added a few new assemblies (WPF and WCF).  Since the runtime did not change the runtime version remains 2.  However since the new WPF and WCF assemblies were added in v3.0 they received an assembly version of 3.0.0.0.

With the release of v3.5 some new assemblies were added.  Again, since the runtime did not change (still 2) the core assemblies remain 2.0.0.0 even though they were updated (a service pack that was also released for v2.0).  The WPF and WCF assemblies from v3.0 were also updated but remain 3.0.0.0.  The new assemblies added for v3.5 (LINQ, for example) get an assembly version of 3.5.0.0.

(NOTE: v4 will be released in 2010 so it will become the new standard going forward.  Visual Studio 2010 will ship with only the v4 framework.  VS 2010 will support targeting previous versions but you must install them manually.  VS 2010 does support loading projects from previous versions.  If you load a legacy project and the appropriate framework is not installed then you will get a dialog prompting you to download the missing version, ignore the project or retarget to the newer version.  Unfortunately retargeting is the only real option.  VS will not recognize any framework that was not available when it loaded.  Therefore if you try to load a legacy solution you will get this dialog for each project unless the legacy framework is installed.  Be sure to install v3.5 before trying to convert a VS 2008 project under VS 2010.)

Determining the Version

Confused yet?  A general guideline you can use is to look at the version of the assembly.  It is an indication of either which framework the assembly was introduced in or the runtime version that it was built for.  Programmatically you can use Environment.Version to get the runtime version of an application.  For a specific assembly you can use Assembly.ImageRuntimeVersion to get the runtime version the assembly was built for.  In most cases it will be the same version as the application being run but, due to versioning policies, the assembly’s runtime version might be lower than the application’s runtime version.  It can never be higher.
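
A minimal sketch of both calls mentioned above (mscorlib is used purely as an example assembly):

public void PrintRuntimeVersions () 
{
   //CLR version of the running application 
   Console.WriteLine("Application runtime: {0}", Environment.Version); 

   //CLR version a specific assembly was built against 
   Assembly asm = typeof(string).Assembly;   //mscorlib 
   Console.WriteLine("{0} runtime: {1}", asm.GetName().Name, asm.ImageRuntimeVersion); 
}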

At this time there is no real way to determine the framework version an application was built against.  The framework version predominantly determines which assemblies are referenced and what features are enabled, so it does not really have any runtime significance.  If you truly must know then you can use heuristics such as finding the highest assembly version among the referenced system assemblies.  Once you have that you'll have the (minimal) framework version.  As an aside, note that the runtime enforces only the runtime version number of an assembly.  If you try to load a v2 application on a machine without v2 installed you'll get an error saying the runtime version is invalid.  However if you try to load a v3.5 application on a machine with only v3.0 it might or might not work depending upon whether you actually use any v3.5 features and reference any v3.5-only assemblies. 
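
A rough sketch of that heuristic follows.  It only looks at the referenced system assemblies and gives an approximation, not an exact answer:

public Version GuessFrameworkVersion ( Assembly assembly ) 
{
   Version highest = new Version(1, 0); 

   foreach (AssemblyName name in assembly.GetReferencedAssemblies()) 
   { 
      //Only consider the framework's own assemblies (mscorlib, System, System.Core, etc.) 
      if (name.Name == "mscorlib" || name.Name.StartsWith("System")) 
      { 
         if (name.Version > highest) 
            highest = name.Version; 
      } 
   } 

   return highest; 
}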

To determine the runtime (CLR) version use Environment.Version.

Side-by-side Versioning

Prior to v4 the application determined the CLR version to use.  If you built a v3.5 app then you used the v2 runtime.  This can cause all sorts of problems – take two examples.  In the first example an assembly written for v1.1 is loaded by a v2 application.  The assembly was built and tested against v1.1 and might not be compatible with the newer v2 framework.  In the second example a v1.1 application attempts to load a v2 assembly.  In this case the runtime might not have all the features the assembly expects and an error will be generated.  Neither of these is a good scenario.

Starting with v4 a single application can actually be running multiple versions of the framework at the same time.  The goal was to allow each assembly to run against the version it was built against.  Can this cause issues?  Most definitely but time will tell if it was worth the effort involved.  A full discussion of side-by-side runtime versioning is beyond the scope of this article.  Refer to the following link for more information: http://msdn.microsoft.com/en-us/magazine/ee819091.aspx
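
As an aside, an application can declare which runtimes it is willing to run on through the supportedRuntime element in its configuration file.  The snippet below is only a sketch of that idea (prefer v4, fall back to v2), not a full treatment of the activation rules:

<configuration>
  <startup>
    <!-- Listed in order of preference -->
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>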

Benefits of Virtual Machines

Virtual machines have been around for a long time but they really have not become commonplace in many development shops.  That is unfortunate since virtual machines provide so many benefits to developers and testers alike.  This article will discuss some of the benefits of virtual machines and review two of the most popular virtual machine software packages available for Windows.

A Virtual Machine?

A virtual machine, simply put, is a virtual computer running on a physical computer.  The virtual machine emulates a physical machine in software.  This includes not only the processor but the instruction set, the memory bus, any BIOS commands and critical machine hardware such as the system clock and DMA hardware.  Depending upon the machine, peripheral devices are generally virtualized as well, including storage devices like floppy drives, hard drives and CD drives.  Video, keyboard and mouse support are also common.  A virtual machine must look and act just like the real thing so standard software, like operating systems and applications, can run without modification. 

Emulation software (the term we will use for applications that create and run virtual machines) generally defines a basic machine to emulate rather than supporting a wide variety of devices.  This reduces the amount of work that must be done and keeps things simple.  For Windows-based emulation you can expect a Pentium 4+ processor with basic SCSI and/or IDE drive support, floppy disk and basic CD/DVD reading along with all the required hardware.  This is enough to run most applications.  So even if you are running a multiprocessor, non-Intel machine the virtual machines will still see a Pentium 4.  The emulation software is responsible for mapping the virtual devices back to the real devices, when appropriate.  For example writes to the virtual hard drive must be written to the backing file for the drive.

Emulation software generally allows for some manipulation of the virtual devices.  At a minimum this would generally include how much memory to make accessible to the virtual machine, how many (and how large) the hard drives are, whether sound cards or ports are available, etc.  These virtual machine settings are generally stored in a custom file by the emulation software.  Additionally the virtual hard drives are also generally stored as files.  These files can get really large since they are emulating a real computer. 

In emulation software the machine running the virtual machines (in our case Windows) is known as the host.  The virtual machine itself is known as the guest.

Why Does It Matter To Me?

So what does this mean to developers and testers?  Let's look at a few scenarios that developers and testers find themselves in.  For testers it is important to test software against the various operating systems that the application supports.  The traditional approach is to run multiple physical machines, each with a different operating system.  This is bad for several reasons: space, maintenance, power and feasibility come to mind.  Deployment of the software to these various machines can also be an issue.  Instead a tester can run multiple virtual machines on one physical machine.  Each virtual machine could have a different operating system.  The application can be deployed to the virtual machines and tested.

Another advantage of virtual machines is reproducibility.  Build and test environments generally need to be well controlled.  It would be undue work to have to wipe out a machine and rebuild it after each build or test run.  A virtual machine allows the environment to be set up once.  The environment is then captured.  Any changes made after the capture can then be thrown away after the build or test run.  Most emulation software packages offer this in some form or another.

Another scenario: your application is currently released as version 1.  Because of how the application is written you can only run a single version of your application on a machine.  When you start development on version 2 you have to remove version 1.  Part way through development an issue is found in the version 1 software that you need to replicate and fix.  You could uninstall version 2, install version 1, find and fix the issue and then revert back, but that is a lot of work.  A nicer approach is to have a virtual machine with version 1 installed.  When you need to go back to version 1 you just start up the virtual machine.  Even better is that you can easily compare the behavior of the two versions side by side rather than having to switch between two computers.

IT departments have already found the benefits of running virtual servers over having multiple physical servers.  Development and testing share many of the same benefits.  Virtualization has become a buzzword in the industry.  Windows is becoming more virtualized so even if you aren’t using virtual machines today you may be in the future.

Which Emulation Software Is Best?

You have decided to try virtual machines out.  Now which software do you use?  Fortunately, or perhaps not, there are not too many options available.  Each has its strengths and weaknesses.  First we'll give a brief overview of each and then we'll compare them by looking at features important to good emulation software.

Microsoft Virtual PC

Version used: Virtual PC 2007
Website: http://www.microsoft.com/windows/products/winfamily/virtualpc/default.mspx

Microsoft purchased Connectix many years back for their virtual machine software.  They rebranded it Microsoft Virtual PC (VPC).  There have only been two versions: 2004 and 2007.  It comes in either PC or Server edition but we will only talk about PC.

VPC is the primary mechanism by which Microsoft deploys demo and beta products to customers.  They generate VPC images that can be downloaded and run.  If you do a lot of beta testing for Microsoft then VPC will be a requirement.

Windows Virtual PC

Version used: Windows Virtual PC
Website: http://www.microsoft.com/windows/virtual-pc/

This is an updated version of Virtual PC.  The reason it is listed separately is that it only supports Windows 7 and later operating systems.  WVPC is basically VPC with some new enhancements.  The changes are significant enough that if you are running Windows 7 and want to use VPC then you should be using WVPC instead.

WVPC supports loading of existing VPC images so you can easily upgrade from VPC.  Once you upgrade though you won’t be able to go back.

One very interesting feature of WVPC (which no other application has) is XP mode.  WVPC ships with (or at least you can download) a free XP image for use in WVPC.  When running this image you can seamlessly integrate any installed application into Windows 7.  What this means is that you can install an application under WVPC and run it directly from Win7.  When you click on the generated shortcut it will start WVPC and the XP image in the background and provide a seamless desktop to run it under.  This mode was developed almost entirely to allow you to run XP applications that normally would not run under Win7.  Primarily this is designed for corporate environments but it works well enough to be of general use. 

VMWare Workstation

Version used: VMWare Workstation 7
Website: http://www.vmware.com

VMWare has been around for years (since at least the 90s). As with VPC there are either workstation or server editions but we will restrict ourselves to the workstation edition.

A nice feature of VMWare is that it can run, albeit with a reconfiguration, VPC images as well.  Once you load an image into VMWare, though, you will no longer be able to use it in VPC.

Qemu

Version used: Qemu v0.9.1
Website: http://fabrice.bellard.free.fr/qemu/

I have little experience with Qemu so I will not review it here.  Its biggest strengths are that it is open source, free and can emulate non-Intel processors.  Its weaknesses are that it is not as stable or easy to use as the other products and, in my experience, it does not perform as well.  It is command-line driven although there are some addins that give it a user interface.  It is definitely something to watch for down the road.

Feature Comparisons 

A caveat is in order before we discuss the features.  I used VMWare for many years in the 90s.  I loved all the features it had.  When I switched jobs my new employer would not justify the cost of virtual machines.  At the same time I received a complimentary copy of VPC.  I used VPC for several years since I did not own a copy of VMWare anymore.  Beyond 64-bit support I could not justify the cost of VMWare.  Recently VMWare was nice enough to give me a complimentary copy of VMWare and I now run both versions, at least for now.

Cost

VPC: Free
VMWare Workstation : $199
WVPC: Free (requires Win7)

For some shops cost does not matter but for many it does.  $200 is not much money for software but for single developers, like myself, it can be hard to justify, so free wins in most situations.  VPC wins here.  However it is important to note that VMWare has a program called VMWare Player that allows you to run virtual machines.  It is free but does not allow for configuration changes.

Performance

Running a virtualized machine on top of a regular machine is going to be slower in most cases.  Therefore performance of both the guest and host are important.  Virtual machines make heavy use of the processor, memory and hard drive space.  A host system must have a really strong processor (preferably multiprocessor), a lot of memory and a lot of fast hard drive space to get good performance out of a guest.

VPC and VMWare both run about the same in my experience.  VMWare has a slight performance advantage when running guests when certain device options are set (discussed later) but otherwise they both run really well.  VMWare also seems to shut down guests faster than VPC.  However VPC handles undo faster.

WVPC has similar performance to VPC.  WVPC generally prefers to hibernate VMs rather than shutting them down.  This results in faster start ups at the cost of more hard drive space.

Device Support

Common: Pentium 4, SCSI and IDE drives, CD/DVD drive, network cards, SoundBlaster card, SVGA
VPC: —
VMWare: USB devices, multiple processors, 64-bit processors, 3D graphics
WVPC: USB devices, multiple processors

VMWare has superior device support to VPC.  Beyond support for USB devices attached to the host machine VMWare also supports emulating a 64-bit processor.  This is a fundamental feature that may sway many people to VMWare.  64-bit processors have been around a while.  Many people are running 64-bit versions of Windows as a host.  It is therefore logical that people will want to run a 64-bit guest machine.  Only VMWare can do that at this point. 

VMWare also supports 3D graphics with hardware acceleration.  My experience at this point though is that it is sufficient to run basic apps but not sufficient to run 3D games and the like.  It simply is too slow.

WVPC is catching up to VMWare in terms of hardware (especially with XP mode).  It does support multiple processors, but only in a limited fashion.  One area to be concerned about though is the host CPU.  The host CPU must support hardware virtualization or WVPC will not run.  All modern CPUs support this feature but some older processors do not.  Confirm virtualization support before deciding on WVPC.

(UPDATE: 3/28/2010 – Effective immediately WVPC no longer requires a host CPU with virtualization support.  An update to WVPC removes this requirement.)

Operating System Support

VPC: DOS (unsupported), All versions of 32-bit Windows except servers (Servers work but are unsupported), OS/2 Warp (certain versions), Others (unsupported)
VMWare: DOS, All versions of 32/64-bit Windows, most variants of Linux and FreeBSD
WVPC: 32-bit versions of Windows

VPC and VMWare support basically the same operating systems.  If it runs under the virtualized hardware then it will run.  Non-Windows operating systems are unsupported, but work, in VPC.  OS/2 Warp is documented as supported with VPC but I have never gotten it to work properly.  To be fair I have never gotten it to work under VMWare or Qemu either.

WVPC is really designed to get legacy XP machines running under Win7.  It does technically support any 32-bit Windows version but no other operating system is formally supported.  This is actually a step back from VPC's support.

The big point here is that, since VMWare emulates a 64-bit processor, you can run 64-bit operating systems.  Therefore VMWare takes the win here just for that.  Running a 64-bit guest under a 32-bit host can solve many issues when trying to migrate from 32 to 64 bits.  I run this way quite a bit and it works well.

Cloning Support

Both VMWare and VPC support cloning of existing virtual machines but in different ways.  Cloning is important when you want to set up multiple guests with the same, or similar, environment.  Installing an operating system is time consuming.  With cloning you can set up the environment once and then reuse the environment in different guests.  To understand the differences between VPC and VMWare’s approaches you must understand how cloning works.

The simplest approach to cloning is simply copying the virtual machine directory and doing a little renaming.  This works in both VPC and VMWare.  VMWare actually detects this and offers to make a few adjustments.  While doable this is not the best approach because it wastes a lot of space, especially if the cloned guest does not change much.  We will not refer to this approach anymore.

The better approach is to difference a base image.  With differencing only the changes made to a guest are stored, so a clone takes far less space.  However the clone needs the base image to run properly.  Even more important though is that the base image cannot change without invalidating the clone. 

VPC supports cloning but it is not directly obvious how to do it.  Basically you set up a new virtual machine with a regular virtual drive and then create a second virtual drive.  Within the wizard is an option to create a differencing disk (off an existing virtual drive).  Replacing the virtual machine's original drive with the new differencing drive results in a clone.  While not obvious, it works.

VMWare does basically the same thing but it exposes these options easily within the main UI.  The clone option is exposed for each virtual machine.  When you select this option you can choose to do a differencing clone or a full copy.  As with VPC this creates a brand new virtual machine.

WVPC follows in VPC's footsteps for cloning support.  For WVPC, where you want XP mode support, the best solution is to copy the base XP image and then use it as the hard drive for the new virtual machine.  This is the closest you'll get to cloning.

For ease of use VMWare wins here but otherwise the products are identical.

Undo Support

Along the same lines as cloning is undo support.  After you’ve made changes to a guest it is sometimes important to undo those changes.  Without direct undo support you would have to resort to cloning.  Both VPC and VMWare support undo. 

VPC/WVPC exposes this as an option in the virtual machine settings.  By default it is disabled so all changes made to the guest are permanent.  When undo support is enabled VPC tracks the changes made for the current session in a separate file (a difference disk again).  When the session finally ends VPC asks if you want to save the changes permanently, save them for later or throw them away.  If you decide to save the changes permanently then VPC will merge the differences in.  This can take up to 10 minutes.  If you opt to save the changes until later then the differencing disk is retained.  When the session starts again you will be able to resume from where you left off.  Finally if you decide to throw away the changes then the differencing disk is deleted. 

VMWare does things differently.  VMWare uses snapshots instead.  With snapshots VMWare basically uses differencing disks for each snapshot beyond the first one.  Unlike VPC it does not prompt after you close a session.  Instead you snapshot the guest when you want and it tracks changes from there.  Another advantage of snapshots is that you can have multiple versions.  With VPC you have the base image with or without changes.  With VMWare you have the base image along with any number of snapshots containing changes.  A big, big negative for VMWare is the number of files needed to support snapshots and all the space they take up.  There really is no good way to clean this mess up even when you don’t want snapshots.

For simple undo support VPC/WVPC wins here but if you want multiple variants then VMWare is the winner.  Personally I prefer the simplistic on/off support of VPC.  Normally I set up a VM with a bunch of changes.  If they stick then I want it permanent.  If they don’t then I want to undo it.  I don’t see the big benefit in multiple undo levels especially when cloning is involved.

Host-Guest Communication

Another critical area for emulation software is how well it integrates the guest and the host.  At a minimum the guest must be able to transfer files to and from the host.  VPC and VMWare both support this but in different ways. 

VPC allows you to identify one or more folders on the host machine to be exposed to the guest.  Each folder is exposed as a separate drive to the guest.  Whether the folder is exposed each time the virtual machine restarts or not is optional.  The guest can have read-only access, if desired.

VMWare also allows you to identify one or more folders on the host machine to be exposed to the guest.  However VMWare exposes all the folders as subfolders of a single network folder.  As with VPC the folders may or may not be writable and can be persisted or not.

WVPC has an even nicer integration than VPC.  It allows you to identify the local drives that you want to directly integrate with in the VM.  This eliminates the need to set up network shares or map drives and is really nice.

I personally prefer WVPC's approach of using truly integrated drives.  A close second is VPC's mapped drives.  While you are limited to 26 shared folders, they all look like drives.  With VMWare's network folder approach the drives are accessed using UNC paths.  For some applications, like .NET, this introduces problems because the security of an application run from a network path is different from that of a local application.

Another host-guest communication feature is copying and pasting.  It is very common to want to copy something (other than a file) from the guest to the host or vice versa.  Both VPC and VMWare support copying and pasting to and from the guest using the clipboard.  This is optional in both products but should generally be left enabled.   

Finally there is actually getting to and from the guest.  VPC creates a separate window for each guest.  To give control to the guest you need only move the mouse within the window and click (by default).    To return control to the host a keystroke is used.  If a certain keystroke is pressed while inside the guest the Ctrl+Alt+Del key sequence is generated.  If you accidentally press Ctrl+Alt+Del while the guest is active then VPC will intercept it and let you know, although the host will still receive it as well.

VMWare follows a similar approach in that moving the mouse within the window and clicking will give the guest control.  It also uses special keystrokes to send the special Windows keystrokes and detects if you press them while inside the guest.  What is different is that VMWare uses a single, tabbed window to house the virtual machines.  Each guest gets its own tab.  Within the tab you can see either the guest window or the configuration window. 

WVPC in normal mode works just like VPC.  In XP mode though you run apps as though they are running natively on the host machine when in fact they are running in VMs.  This is really cool, when you need that feature.

Additions

Each emulation package has tools that it can install into the guest.  These additions generally include optimized video and network drivers and support for the host-guest communication features.  Neither product has an advantage here.

VMWare supports an interesting feature that VPC/WVPC lacks.  VMWare can install a Visual Studio addin.  This addin allows you to debug code on the guest directly rather than relying on remote debugging.  For developers this is an excellent feature. 

TFS 2010 In Offline Mode

The initial release of TFS did not support having offline clients – meaning it did not support clients that could not connect to the server.  This introduced quite a few problems for folks who might work from home or on the train.  With VS 2008 SP1 (I believe) offline mode was partially supported but it required that you use the TFS Power Tools.  With TFS 2010 it is getting a little easier.  Recently I had the need to work offline and I wanted to share the process with others who might be going down this same road because it is not quite obvious.

The Problem

First we need to set the stage for where problems can occur.  If you start up VS and attempt to load a project contained in TFS and TFS cannot be found then VS will prompt you to work in offline mode.  VS only does this when loading a solution. 

So far so good.  If TFS goes offline while a solution is already open though, VS will not switch to offline mode.  Instead it'll keep trying to connect to TFS and then time out.  This is really inconvenient.  It is made worse if you have modified files that are not yet checked out and the appropriate check-out options set, because you won't be able to save the files at all.  Really annoying. 

Before continuing let's be clear about when VS will attempt to check out files.  The options reside in Tools -> Options -> Source Control -> Environment.  The Checked-in Items – Saving option determines what happens when you attempt to save a file whereas the Editing option determines what happens when you try to edit a read-only file.  The default in both cases is to automatically check the file out.  If TFS is offline then these operations will time out and VS will not allow you to continue.

Making TFS Offline

To get VS to realize that TFS is offline and to force VS into offline mode you have to use the TFS Power Tools that ship external to TFS.  The Power Tools ship with a command line utility called ‘tfpt’.  If you run this command with the ‘tweakUI’ option then you will get a nice little UI for working with TFS.   

(NOTE: The version that ships with TFS 2010 RC does not appear to have the UI available.  I use the version from TFS 2008 instead.)

Selecting the appropriate server and clicking Edit will take you to the server properties dialog.  Within this dialog is an option to force the server into offline mode.  Note that this only impacts VS and not TFS.

Now VS is in offline mode.  This takes effect immediately (even if VS is already open).  Once you set the server to offline mode you still have an extra step to take.  Each file that you want to edit will need to be modified to not be read-only.  You should remove the read-only flag only on those files that you modify.  Doing this will allow you to edit and save files while offline.
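
For example, from a command prompt you could clear the flag with attrib (the file names here are purely illustrative):

rem Clear the read-only flag on just the files you intend to edit 
attrib -r Program.cs 
attrib -r Utilities\Logger.cs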

Note that TFS will remain offline until you re-run tfpt and uncheck the offline option.

Synchronizing Back to TFS

When you finally connect back to TFS it is important that you synchronize TFS with your changes.  To do this do the following:

  1. Open a command prompt.
  2. Change to the directory where your workspace is mapped.
  3. Run the tweakui command again and uncheck the offline option for the server.
  4. Run the following command: tfpt online

Tfpt will examine the entire directory structure looking for any files that are not read-only.  For each file it will check the file out of TFS if it is not already checked out.  It will also do adds and removes.  You should confirm all the pending changes before checking the files into TFS.  Once you’ve completed these steps you are working with TFS normally again.

Access Control in .NET

(Originally published: 30 Mar 2008)

As of v2 of .NET you can get and set the security of securable objects such as files, folders and registry keys.  Other object types can be added as needed.  While all the access control objects derive from the same base classes they share only a few common elements.  Therefore working with the security of a file requires different classes than working with registry keys.  This is an unfortunate limitation of the existing implementation since a generic security handler cannot be created with the existing objects.  This article will discuss working with the access control classes in .NET.  While the article will use folders as the target object the concepts (but not the classes) apply to any securable object.  To provide a graphical representation for this discussion open the properties for a folder in Windows Explorer and go to the Security tab.  Click the Advanced button to gain access to the more advanced features we will discuss.

All the core access control types are contained in the System.Security.AccessControl namespace.  The types used to get user/group information are available in System.Security.Principal.

Security Basics

Before we get into the .NET implementation it is important to clarify a few security-related concepts.

Object Rights

Each securable object has a list of operations that it supports.  This list of operations is known as security rights (or just rights).  Rights are generally represented internally as a set of bit flags.  Rights can be combined to form more complex rights.  For example Full Control is a combination of all rights.  Certain rights, including read and modify, are shared by all objects. 

Due to the variety of rights available, security rights are not exposed directly by the base security classes.  Instead each object-dependent implementation exposes its own set of rights as an enumeration.  We will discuss these rights later.

Identity References

In Windows both users and groups (known as identities) are represented as globally unique values known as SIDs (security identifiers).  SIDs are well-formed but not easily remembered.  All identities are referenced via SIDs internally.  Part of a SID includes the domain or machine that owns the identity.  All SIDs from the same machine/domain share at least a partial identifier.

In .NET the SecurityIdentifier class is used to wrap a SID.  While useful for uniquely identifying a user or group, creating an instance of this class directly is rarely done.  Instead this class is generally returned by security-related methods.

Since working with SIDs is not generally useful, other than for uniqueness, .NET provides a more user-friendly alternative called NTAccount.  This class is specifically used for users and groups.  It provides the user-friendly name for a SID.  You can easily convert between NTAccount and SecurityIdentifier using the Translate method.  In the few cases where NTAccount cannot map the SID to a friendly name it will simply use the SID.  The following example displays the name and SID of the current user.

public void DisplayUser () 
{
   WindowsIdentity id = WindowsIdentity.GetCurrent(); 

   string name = id.User.Translate(typeof(NTAccount)).Value; 
   string sid = id.User.Translate(typeof(SecurityIdentifier)).Value; 

   Console.WriteLine("Current User = {0}, [{1}]", name, sid); 
}

Both NTAccount and SecurityIdentifier derive from IdentityReference.  The base class is generally used in the various security calls to allow either class to be used.  You will almost always want to use NTAccount.

DACLs and SACLs

In Windows there are discretionary access control lists (DACLs) and system access control lists (SACLs).  DACLs specify the rights assigned to identities on an object.  Each entry in the DACL is known as an ACE (access control entry).  SACLs determine the auditing done on an object.  In most cases you will be working with DACLs. 

In .NET DACLs are known as access rules and SACLs are known as audit rules.  Each access rule contains a security right and the identity that has (or does not have) the right.  .NET uses the abstract class AccessRule to represent an access rule.  Each audit rule contains a security right and the condition under which it is logged (success or failure).  .NET uses the abstract class AuditRule to represent an audit rule.  Neither of these classes is used directly in most cases as the derived classes provide more information.

In Windows an access rule can either allow or deny a specific right to an identity.  Most rules allow the right.  In the rare case of a deny right it takes precedence over all access rules.  In fact access rules are always listed such that deny rules come first.  A deny rule always overrides an allow rule.  Therefore Windows (and your code) can stop looking for a right as soon as it sees a deny rule for the right.

Object Security

Each securable object has a set of security properties exposed through an ObjectSecurity-derived class.  The derived class exposes methods to access all the security settings of an object including the access and audit rules and the owner.  It is also through this class that we can modify the security of an object.  Unfortunately the base ObjectSecurity class does not expose any of these methods.  This makes working with securable objects in a generic manner difficult.  Instead the base class exposes properties that define the types used to represent the access and audit rules, discussed later. 

The following table defines some common securable objects and their associated object security type.

Object Type        ObjectSecurity Class       Accessor Class
Active Directory   ActiveDirectorySecurity    DirectoryEntry
Directory          DirectorySecurity          Directory, DirectoryInfo
File               FileSecurity               File, FileInfo
Registry Key       RegistrySecurity           RegistryKey

Fortunately all the built-in classes expose the same set of methods so, other than the type name, working with each type is the same.  There is another solution to this dilemma.  Most of the security classes derive from CommonObjectSecurity (which itself derives from ObjectSecurity).  This base class exposes the core methods that we will discuss later.  Therefore you can use CommonObjectSecurity in cases where you might want to work with different object types.  Remember that not all security classes derive from this base class.  ActiveDirectorySecurity is one such case.
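
As a sketch of what that buys you, a helper written against CommonObjectSecurity can list the rules of a file, directory or registry key without knowing the concrete type.  This is only an illustration of the idea:

static void PrintRules ( CommonObjectSecurity security ) 
{
   foreach (AccessRule rule in security.GetAccessRules(true, true, typeof(NTAccount))) 
   { 
      Console.WriteLine("{0} - {1} (inherited: {2})", 
            rule.IdentityReference, rule.AccessControlType, rule.IsInherited); 
   } 
}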

So how do you get the object’s security to begin with?  As with the security classes, there is no generic way to get this information.  In the above table the Accessor Class column identifies one or more classes that can be used to get access to an object’s security.  In all cases except Active Directory, the type(s) expose a GetAccessControl method.  Static classes return the object security for the parameter passed to the method.  Instance classes return the object security for the current instance.

The following example gets the security for the C:\Windows directory.

DirectorySecurity sec = Directory.GetAccessControl(@"C:\Windows");

 

Access Rules

Each access rule represents: a right, allow/deny flag and the associated identity.  For the standard objects a custom enumerated type is defined to identify the security rights available.  The type of the enumeration is available through the AccessRightType property on the object’s security class, if desired.  The following table defines the enumeration for the standard object types.  We will discuss the last two columns later.

Object Type        AccessRightType          AccessRuleType               AuditRuleType
Active Directory   ActiveDirectoryRights    ActiveDirectoryAccessRule    ActiveDirectoryAuditRule
Directory          FileSystemRights         FileSystemAccessRule         FileSystemAuditRule
File               FileSystemRights         FileSystemAccessRule         FileSystemAuditRule
Registry Key       RegistryRights           RegistryAccessRule           RegistryAuditRule

Starting to notice a pattern yet?  The access right enumerations all end in -Rights.  All the enumerations are marked as flags because several of the rights are combinations of other rights (such as full control).  To get the access rules associated with an object use the GetAccessRules method.  The following example gets the access rules for a directory.

public void PrintDirectorySecurity ( string path ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 

   … 
}

The GetAccessRules method accepts three parameters: include explicit, include inherited and type of identity.  The first parameter specifies whether rights explicitly assigned to an object are returned.  This is almost always the desired case.  The second parameter specifies whether rights inherited from the object's parent are returned.  The final parameter determines the type of identity reference to be returned.  As discussed earlier there are two standard types: NTAccount and SecurityIdentifier.  For user interfaces NTAccount is the general choice.

Once we have the access rules we can enumerate them.  The following table lists the important properties of each rule.

Property Description
AccessControlType Determines if this is an allowed or denied right
IdentityReference The user/group with the right
InheritanceFlags Controls how the right is inherited by children
IsInherited Determines if this is an explicit or inherited right
PropagationFlags Determines how the right propagates to children

Notice that the actual rights are not part of the access rule, at least not directly.  Remember that CommonObjectSecurity provides a generic implementation of the security.  However the actual rights are enumerations defined for each object type.  Since CommonObjectSecurity has no way to know what the enumerated values are it doesn’t expose them as a strongly typed property.  The AccessMask property can be used to get the underlying bitmask.  Fortunately each AccessRule-derived class exposes the rights as a strongly typed property named after the enumerated type.  This is the preferred method for getting the rights. 

The following code will list all the rights associated with an object along with some other property values.

public void PrintDirectorySecurity ( string path ) 
{
   Console.WriteLine(String.Format("{0,-30} Allowed Inherited {1,-15}", 
                            "User/Group", "Right")); 
   Console.WriteLine(new string('-', 70)); 

   DirectorySecurity sec = Directory.GetAccessControl(path); 
   foreach (FileSystemAccessRule rule in 
                sec.GetAccessRules(true, true, typeof(NTAccount))) 
   { 
      Console.WriteLine("{0,-30} {2} {3} {1,-15:g}", 
            rule.IdentityReference, 
            rule.FileSystemRights, 
            rule.AccessControlType == AccessControlType.Allow, 
            rule.IsInherited); 
   } 
}

There are a couple of important points about enumerating access rules.  Firstly the rights are not always a valid combination of flags from the enumeration.  Secondly an identity can appear more than once in the list.  This can occur for a variety of reasons, inheritance being one of the more common.  Therefore if you want to get all the rights owned by an identity you need to enumerate all rules.  Finally remember that there are both allow and deny rules.  Deny rules come before allow rules and take precedence.

The following method is a simple implementation for getting the rights of a specific identity.  It takes the identity's group memberships and any deny rights into account.

static FileSystemRights GetObjectRights (
   DirectorySecurity security,
   WindowsIdentity id ) 
{
   FileSystemRights allowedRights = 0; 
   FileSystemRights deniedRights = 0; 

   foreach (FileSystemAccessRule rule in 
              security.GetAccessRules(true, true, id.User.GetType())) 
   { 
      //If the identity associated with the rule 
      //matches the user or any of their groups 
      if (rule.IdentityReference.Equals(id.User) || 
             id.Groups.Contains(rule.IdentityReference)) 
      {
         //Filter out the extra bits and generic rights so 
         //we get a nice enumerated value 
         uint right = (uint)rule.FileSystemRights & 0x00FFFFFF;

         if (rule.AccessControlType == AccessControlType.Allow) 
            allowedRights |= (FileSystemRights)right; 
         else 
            deniedRights |= (FileSystemRights)right; 
      } 
   } 

   //Denied rights take precedence over allowed rights 
   return allowedRights & ~deniedRights; 
}

The method basically enumerates the access rules of the object and builds up the list of rights for the user (taking their group membership into account).  Denied rights are tracked separately.  After all the rights are determined the allowed rights are returned with any denied rights removed.  Notice the filtering that is going on when adding the rights to the list.  The filter removes the extra bits and generic rights that might be associated with a rule.  These are not supported by the various Rights enumerations and would cause us problems if we wanted to see the string representation.

In the Advanced Security Settings of a folder in Windows Explorer the Permission entries map to these access rules.  You will find that Explorer collapses some of the rules for convenience.  We will discuss the Apply To column later.

Inheritance and Propagation

You can determine if a rule is inherited through the IsInherited property.  Whether a rule is inherited or not is determined by the InheritanceFlags and PropagationFlags properties.  The inheritance flags determine who inherits the rule: child containers (folders), objects (files), both or neither.  The propagation flags determine whether the object (for which you are adding the rule) gets the rule as well.  The following table defines the various combinations of flags and how they map to folder security in Explorer.

PropagationFlags   InheritanceFlags                   Description (Explorer)
None               ContainerInherit                   This folder and subfolders
None               ObjectInherit                      This folder and files
None               ContainerInherit | ObjectInherit   This folder, subfolders and files
InheritOnly        ContainerInherit                   Subfolders only
InheritOnly        ObjectInherit                      Files only
InheritOnly        ContainerInherit | ObjectInherit   Subfolders and files only

The table leaves out the propagation flag NoPropagateInherit.  This odd flag can be combined with any of the other entries in the table.  When applied it identifies the rule as applying only to the objects and containers within the target object.  In Explorer this maps to the checkbox below the permission entries (when editing) that says Apply these permissions to objects and/or containers within this container only.
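
As an illustration, here is how a rule that Explorer would display as This folder, subfolders and files might be constructed.  The identity and right are placeholders only:

FileSystemAccessRule rule = new FileSystemAccessRule(
      new NTAccount(@"BUILTIN\Users"),          //any valid identity will do 
      FileSystemRights.Modify, 
      InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit, 
      PropagationFlags.None, 
      AccessControlType.Allow);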

Modifying Access Rules

Modifying access rules can be easy or hard depending on the situation.  To give an identity a new right you create a new access rule and then add it to the object security instance using the AddAccessRule method.  The following example gives the specified user the delete right to a directory.

public void GiveUserDeleteAccess ( string path, WindowsIdentity user ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 

   FileSystemAccessRule rule = new FileSystemAccessRule(user.User, 
                FileSystemRights.Delete, AccessControlType.Allow); 

   sec.AddAccessRule(rule); 

   Directory.SetAccessControl(path, sec); 
}

Notice the call to SetAccessControl.  The object security instance you obtain from GetAccessControl is a snapshot of the current state of the object.  Changes you make to it do not actually get applied until you call SetAccessControl.  As a result it is when you call SetAccessControl that you are most likely to get an exception such as for unauthorized access or for a missing object.  Whenever you make any changes to an object’s security you must remember to call this method to persist the changes.  In general you will make all the necessary changes before attempting to persist them.

Removing rights is even easier.  When removing rights you have a little more flexibility.  You can remove a specific identity/right rule, all rights for a particular identity or all rights for all users.  To remove a specific identity/right use the RemoveAccessRule method.  The following example removes the right we added earlier.

public void TakeUserDeleteAccess ( string path, WindowsIdentity user ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 

   FileSystemAccessRule rule = new FileSystemAccessRule(
                user.User, FileSystemRights.Delete, 
                AccessControlType.Allow); 

   sec.RemoveAccessRule(rule); 

   Directory.SetAccessControl(path, sec); 
}

To remove all the rights a user has to an object use PurgeAccessRules instead.  It simply requires the identity reference.  You could also use the ModifyAccessRule method if you like.  The various methods all basically do the same thing.  In fact methods implemented in derived classes generally call into the base class methods.  Prefer the versions defined in derived classes but do not worry about going out of your way to call them if the base classes are sufficient.
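
A minimal sketch of PurgeAccessRules, following the same pattern as the earlier examples:

public void RemoveAllUserAccess ( string path, WindowsIdentity user ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 

   sec.PurgeAccessRules(user.User); 

   Directory.SetAccessControl(path, sec); 
}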

Modifying Inherited Rules  

You cannot modify an inherited rule directly.  The rule is defined and owned by a parent object.  To modify such a rule you first need to break the inheritance.  When breaking inheritance you have the option of either copying the existing inherited rules or removing them altogether.  In Explorer when you attempt to edit an inherited rule you will notice that it does not allow you to edit the inherited rights.  You can, of course, add new rights. 

Unchecking the Include inheritable permissions from this object’s parent box will display a dialog prompting you to copy or remove the inherited rules.  In .NET you use the SetAccessRuleProtection method on the security object.  The following code demonstrates this method.

DirectorySecurity sec = Directory.GetAccessControl(path); 

//Protect the object and copy the existing inherited rules 
sec.SetAccessRuleProtection(true, true); 

//or, protect the object and remove the inherited rules 
sec.SetAccessRuleProtection(true, false);

After calling SetAccessRuleProtection you can then modify all the access rules on the object.

Audit rules  

Audit rules work similarly to access rules.  In fact the only real difference is that the member names contain -Audit rather than -Access.  Therefore if you want to add a new audit rule use AddAuditRule.  To get the audit rules use GetAuditRules, etc. 

Audit rules have the same properties as well except in lieu of AccessControlType (to define allow and deny rules) they have the AuditFlags property.  Auditing can occur when the operation succeeds, fails or both.  The property is a bitwise flag combination of AuditFlags.  The following table defines the flags.

AuditFlags Description
None No auditing
Success Audit on successful attempts
Failure Audit on failed attempts
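
A sketch of adding an audit rule follows.  Note that reading and writing audit (SACL) information requires additional privileges, so assume this can fail even where the earlier access rule examples succeed:

public void AuditFailedDeletes ( string path, WindowsIdentity user ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path, AccessControlSections.Audit); 

   FileSystemAuditRule rule = new FileSystemAuditRule(
                user.User, FileSystemRights.Delete, AuditFlags.Failure); 

   sec.AddAuditRule(rule); 

   Directory.SetAccessControl(path, sec); 
}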

Ownership

Each securable object is owned by an identity.  Owning an object gives an identity the special privilege of changing the permissions regardless of the access rules.  This prevents an owner from being locked out of their own object.

To get the owner of an object use the GetOwner method.  The following example gets the owner of the folder.

public IdentityReference GetDirectoryOwner ( string path ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 
   return sec.GetOwner(typeof(NTAccount)); 
}

Setting the owner is just as easy using the SetOwner method.  To take ownership of an object requires the right permissions so this method can fail.  The following code makes the Administrators group the owner of the folder.

public void SetDirectoryOwner ( 
        string path, 
        IdentityReference newOwner ) 
{
   DirectorySecurity sec = Directory.GetAccessControl(path); 

   sec.SetOwner(newOwner); 

   //Persist the change 
   Directory.SetAccessControl(path, sec); 
}

 

Summary

There are quite a few features of the .NET access control that we did not cover.  However this article should hopefully get you started in the right direction.

Parallel Test Execution in Visual Studio 2010

Visual Studio 2010 is adding the ability to run tests in parallel.  If you have lots of tests to run and a multi-processor machine then this is a great feature.  It is discussed here.  There have been several posts on the forums about this feature not working but it actually does.  Here are the formal requirements for parallel test execution (you can read the gory details in the link).

  1. Must be running on a multi-processor machine.
  2. Must be running unit tests.  No other test category is supported.
  3. Tests must be thread-safe.  Most unit tests are but tests that use shared resources like a database or the file system may have issues.
  4. Data collection is not allowed. 
  5. Tests must be run locally only.
  6. You must enable the option.  There is no user interface for setting it.

Here is a sample test file that I used to test parallel execution.  It consists of 4 test cases where each test case sleeps for 5 seconds.  When run sequentially these tests should take approximately 20 seconds, but when run in parallel (assuming 2 processors) they should only take 10. 

[TestMethod] 
public void TestMethod1 () 
{
   Thread.Sleep(5000); 
}

[TestMethod] 
public void TestMethod2 () 
{
   Thread.Sleep(5000); 
}

[TestMethod] 
public void TestMethod3 () 
{
   Thread.Sleep(5000); 
}

[TestMethod] 
public void TestMethod4 () 
{
   Thread.Sleep(5000); 
}

Here are steps you need to take to get parallel tests to run.  Note that since tests must be run locally you will be updating the local test settings.

  1. Open your test project and double click on the local test settings file (Local.testsettings).
  2. In the Test Settings dialog go to the Data and Diagnostics section and ensure that all the options are unchecked.  This will disable the data collection.

  3. Close the Test Settings dialog.  Now you need to enable parallel test execution.  There is no UI for this so do the following.
    1. Right-click the test settings file in Solution Explorer, select Open With and then XML Editor to open the file in the XML editor.
    2. Go to the Execution element and add the parallel attribute (a sketch of the result appears after this list).  You can set it to a specific number of processors or to 0 to allow the tests to run on all processors.

    3. Save and close the file.
  4. Close the solution and reopen it.  The parallel settings are only read when the project loads, at least for Beta 2 and the RC.
  5. Open the Test Results window and group by Result so you can see the parallel execution.
  6. Start debugging (F5 or via the menu).
  7. Open the Test Results window again and you should see the tests running parallel.
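
For reference, the edited element in the settings file looks roughly like this.  Treat the attribute name as my recollection from the Beta 2/RC builds rather than gospel:

<Execution parallelTestCount="0">
  <!-- existing child elements unchanged -->
</Execution>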

A couple of caveats about parallel tests.  Firstly you must close and reopen the solution (or at least the project) in order for changes in the parallel settings to take effect.  Secondly the settings file is rewritten whenever you make changes in the Test Settings dialog.  So if you make changes to the test settings through the UI you will need to modify the settings file manually to get the parallel settings back again.

Parallel test execution is a really neat idea for running unit tests quickly.  Hopefully Microsoft will move the option into the UI so we do not have to go through the manual editing and reload process each time.  Whether that happens before the final release of VS 2010 or not is anyone's guess.

 

Diskeeper Undelete to the Rescue

For those of you who are not aware, Diskeeper Corporation (http://www.diskeeper.com), the creators of the Diskeeper defragmentation tool that I've recommended in the past, has a relatively new tool out called Undelete.  I received a copy a while back, provided my feedback to Diskeeper and then moved on.  I personally do not use undelete tools.  My philosophy is "if I delete an important file then maybe I'll pay attention better next time".  Needless to say I have deleted more than one important document in the past and cursed myself for doing it.

Fast forward to recent days while I was at work.  A coworker was cleaning up their hard drive and inadvertently deleted entire directories of important files, most of which were not backed up.  Ignoring the whole "should have had a backup plan" discussion, they were in trouble.  A quick search of the Inet revealed a couple of potential undelete tools.  I downloaded one in trial mode and tried it.  Sure enough the files were still there but, since it was a trial, we'd have to buy the product to recover the files.

Then I remembered Diskeeper sending me a copy of their new Undelete tool.  I did a quick check and sure enough I had a license for the program and, even better, it comes with Emergency Undelete which allows you to recover files even if Undelete was not already installed.  This is exactly what I needed.  I burned a copy of Emergency Undelete to CD (see below as to why) and ran it on my coworker's computer.  Sure enough it, really quickly, found the files that were deleted.  Even better is that it was able to restore almost all of them.  We restored all the files to a USB drive and I left my coworker to figure out which files they actually needed.  I went back to my desk: happy that we were able to recover most of the files, and impressed with the speed and ease at which we could do it.  It saved my coworker days of work in trying to recover the data by hand.

Without a doubt Emergency Undelete is something I'm keeping around on CD for emergency purposes.  I'm still not comfortable running Undelete-like tools but Emergency Undelete is definitely handy to have.  If nothing else it makes me look like a magician to folks who just lost some critical files.  If you do not have an emergency undelete program then you should get one.  Regular backups are great but they require time and effort to restore.  I, for one, will be recommending Undelete from Diskeeper because I can say first hand that it works as advertised.

Caveat: When a file is deleted it is generally just removed from the file system's directory table.  The actual data is generally still there.  Undelete programs work by scanning the drive and finding these files.  However the operating system treats newly deleted files just like free space, so the more data that gets written to a drive the more likely it is that your deleted file will be overwritten.  When you discover that you accidentally deleted a file it is critical that you stop doing anything that might write something to the drive.  This includes running an Inet browser, shutting down Windows or even closing programs.  Any of these could save files and overwrite your data.  Go to another computer and do the following.
  1. Get an undelete program like Emergency Undelete.
  2. Most good programs are going to allow you to copy the necessary files to removable media like a USB or CD.  Put the undelete program on the media.  A CD is best because you can store the program there and use it whenever you need it.  It saves time and eliminates the need for a secondary computer.
  3. Put the CD (or USB) containing the undelete program into the target computer and run the undelete program.
  4. If all goes well you should see the file(s) you want to recover.  Select the file(s) you want to recover.  If you are unsure then it might be best to recover all the files and then selectively merge in the files you actually need.
  5. Now you need to restore them but you cannot restore them to the target machine.  Again any writes might overwrite the very data you are trying to recover.  Restore the files to removable media or a secondary hard drive.  USB works great here.
  6. Once you have recovered all the files you might need you can begin placing them back onto the target machine.  Once you start this step you can assume that any files that were not recovered will be gone.