Condusiv has recently released the next version of Diskeeper 12. I’ve been running it for a while now and I’m still satisfied with the way it optimizes my drives without eating up resources. Traditionally I wipe my machine on a yearly basis because of all the extra stuff that gets installed and the slowdown of the hard drives, but since I’ve been using Diskeeper I’m averaging closer to 18 months. The hard drive just isn’t running slow.
Now that VS 2012 RC is available I think it is time to reevaluate VS 2012 and see where it stands.
UPDATE: Great news!!! Microsoft has seen the error of their ways. They have formally announced that they will in fact release VS2012 Express for desktop apps (instead of just Metro): http://blogs.msdn.com/b/visualstudio/archive/2012/06/08/visual-studio-express-2012-for-windows-desktop.aspx. This is a big win for students and hobbyists.
Well, VS2012 is finally in RC mode. I have to say the UI is looking better but please, for the love of all that use VS every day, GET RID OF THE STUPID ALL CAPS MENUS!!!!! Who in the world is designing this stuff in Redmond? It’s like they never took UI Design 101. Everybody who has used a computer for more than 5 minutes knows where the menus are, so why do they have to be highlighted? It could just be me, but it seems like the management at MS are making more and more missteps with this release as we get closer. Now on to the really, really disastrous news for VS2012…
After another week of working with it I’ve come up with a few new pieces of information.
Now that the beta is out I’ve been working with it a week and I can provide some feedback on the things I do and don’t like. To organize things a little better I’ll identify some of the new features that were added and give my opinion about the usefulness. All this is strictly my opinion so take from it what you would like.
There are two types of developers in the world – those who use CodeRush and those who use ReSharper. I happen to be in the CodeRush (CR) group for various reasons. One of the benefits I really like about CR is its flexibility and the ability to easily define my own custom templates. A template in CR is like a smart code snippet. When you type a certain key combination in the right (configurable) context, what you type is replaced by something else. In this post I’m going to discuss a simple context provider that can be used in CR templates.
Streams in .NET are prevalent. Most everything that requires input or output accepts a stream. The issue with streams is that they are too generic. They only support reading and writing bytes (or byte arrays). Since streams can be read-only or write-only this makes some sense. The reality, though, is that most of the time you know whether a stream is readable or writable. If worst comes to worst you can query the stream for read and write access. But you’re still stuck with reading and writing bytes. To make working with streams easier Microsoft introduced the BinaryReader/BinaryWriter types. These stream readers/writers allow you to read and write CLR types to an underlying stream. The theory is that the code is more readable if you explicitly create a reader or writer. Here is an example of writing some data to a stream.
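A minimal sketch of that kind of code (the values written are purely illustrative):

```csharp
using System;
using System.IO;

class Example
{
    static void Main()
    {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            // BinaryWriter exposes typed Write overloads, so we aren't
            // stuck converting everything to byte arrays ourselves.
            writer.Write(42);          // Int32
            writer.Write(3.14);        // Double
            writer.Write(true);        // Boolean
            writer.Write("some text"); // length-prefixed string
        }
    }
}
```

Reading the data back with a BinaryReader works the same way in reverse, as long as you read the values in the same order and with the same types they were written.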
I’m really, really disappointed with Microsoft about SandCastle. For those of you not in the know, SandCastle (SC) is the documentation generator from Microsoft. Supposedly they use it internally for generating the .NET Framework documentation, but given the tools they released publicly I find that hard to believe. The last time SC was updated was 2010. It’s been over a year and I still can’t generate anything near the style or accuracy of the existing documentation.
I long for the days of NDoc, where I could pass my documentation to a GUI tool and it would spit out professional-looking documentation. SC spits out beta-quality documentation with styles that hardly work, and it absolutely cannot handle anything beyond basic doc comments. This is really sad. It doesn’t even ship with a GUI so I can configure the various options to generate something reasonably close without having to read through help files. Fortunately there are third-party tools available, but honestly they don’t update that often either.
What makes me really mad is that the whole reason SC is supposed to be so awesome is that it is configurable to the point that we should be able to generate any style of documentation. The reality, though, is that the existing styles are horribly broken, can’t handle any external stuff and don’t even match the existing framework styles. You’d figure that MS would release the very same style and resources that are used for the framework, but I’ve yet to see any SC-generated documentation come close. There’s always something broken, whether it is bad image references, poorly formatted examples or just plain ugly styles.
Don’t even get me started on the horrible new MS Help system that was introduced in VS2010. Help is sliding so far backwards that I think MS needs to just start over. The day we can’t ship a simple help file (or even a bootstrapper) is a sad day indeed. I’d hate to be the folks at MS who have to go through all the steps needed to install/modify help files just for testing. This is truly ridiculous and a bad, bad sign for help in general.
Therefore I throw out the challenge for MS to step up to the plate and actually provide us an updated version of SC that can generate MSDN-style documentation out of the box without all the extra work generally involved. Better yet, integrate this functionality into VS directly so I don’t have to use third-party tools. Unless MS can fix SC I feel that it’ll fall by the wayside like so many other MS projects. This is unfortunate because documentation is key to good class library design.
Am I the only one who is not that excited about Win8? I haven’t had time yet to play around with it (waiting for VMWare to upgrade me to Workstation 8) so I might be missing something, but I’m just not seeing anything worth looking at. Let me elaborate. Firstly note that I always like to stay with the latest stuff, so Win8 and VS11 will be on my machine as soon as they are released. I’m more interested in the general public’s interest in the products.
First there was XP (ignoring anything prior to that) which is a decade old and still popular. It doesn’t have any of the bells and whistles of modern OSes but it is solid and heavily used. Many companies still have no intention of upgrading from XP. MS even extended the support cycle longer than normal to give people time.
Vista came out and lasted a year. It was a horrible disappointment in most areas, but it introduced some radical UI changes that are popular: jump lists, task dialogs, the restart manager, etc. Unfortunately nobody wanted to upgrade, so applications that want to use these new features either can’t or have to write bridge code to hide the differences. If you’ve ever written such bridge code you’ll realize that MS should have done it before release. Even worse is that .NET (which MS said is their platform of choice going forward) doesn’t even support most of the new UI stuff. Instead MS released a code pack that adds in the functionality that should be in the framework. Still worse is that you can’t even use that code unless you’re running under Vista+ because it’ll blow up. The code pack was the perfect opportunity to write the bridge code once so we didn’t have to, but alas no.
Quickly on the heels of Vista is Win7, the current OS. Win7 didn’t have any radical changes but it did fix the issues with Vista. Win7 is, overall, a success and generally recommended as a required upgrade. Companies are moving to it as they get new computers but there is certainly no big rush. Win7 is the spiritual successor to XP and worth the effort. With Win7 came VS2010 and .NET v4. Alas still no support for the Vista+ features. So MS wants us writing Win7 apps but we can’t because: a) XP is still popular, b) there is no bridge code to hide the details and c) none of the new features are in .NET anyway. So while Win7 is nice, we’re still using the XP subset.
Here’s where things get really frustrating. Win8 doesn’t offer anything that anybody has been asking for as far as I can tell. MS is pushing the Windows Phone 7 (WP7) UI with tiles and touchscreens (touch). How many people are using touch today? How many people are even talking about buying touch? I know no one. There simply isn’t a demand for them on the desktop yet. Why would there be? Unlike a phone or similar device we have a mouse that gives us everything we need. Yes touch may one day become a commonplace item but not today. Certainly MS should add support for touch in Windows but it shouldn’t be at the cost of deprecating everything else. Touch works well for small lists of items but as you add more it gets harder to work with. Have you ever tried using touch for a list of states? It takes a while to get to the bottom. That’s why scrollbars were invented. I think it is a bad idea for MS to build the Win8 UI around touch and tiles. There simply isn’t a demand or support for this technology yet.
Win8 is also introducing a brand new application type – Metro. When I think of metro I think of the various subway systems I’ve ridden. How awful they are. How dirty. Where does MS come up with these names? But I digress – Metro is a new way of developing apps that seemingly forgoes existing technologies in favor of HTML5 and XAML. Potentially a good idea, but let’s look at what is wrong with this approach.
Firstly, only Win8 supports Metro. Therefore if you want to build your app using Metro you have to build two – one for Win8 and one for everybody else. Since Metro looks so vastly different I don’t know how much code sharing is possible at this point. It’s like telling developers that they need WPF for Win7 and WinForms for everybody else. What developer in their right mind would do that? Developers will just stick with the universal application until everybody gets upgraded. So I think Metro, while highly touted by MS, is going to see little commercial success until post-Win8.
Secondly, MS has been pushing SL and WPF for several years now. Suddenly they’re saying these technologies are deprecated in favor of Metro. Really? Why should I learn yet another technology when it can be deprecated itself in a handful of years? One of the things that is important for developer buy-in is longevity of the technology. Yes, technology needs to evolve, but to effectively pull the plug on one set while pushing another just seems wrong.
Will Metro be all that MS is hyping it up to be? Will developers jump on the bandwagon? Will we really have to learn a whole new set of technology just to be Win8-friendly? We’ll know as Win8 gets closer, but right now it certainly seems like a very risky bet on MS’s part.
Overall I currently see Win8 as an upgrade only if you are running XP, buy a new machine or just want to be on the leading edge. There is nothing that has been announced that even remotely sounds interesting as a consumer. On the developer front I’m not going to be writing Metro apps because I need to support non-Win8 as well. Maybe when Win.Next.Next comes out and everybody is running Win8 I’ll look into it but not until then. I think pushing Metro and touchscreen is just going to alienate more people, both developers and consumers. WPF and related technologies aren’t going away just because MS might want them to.
Visual Studio 11
Here’s where things get more interesting. VS11 will have a lot of cool new features. .NET v4.5 is introducing some really nice new features for parallel development and ASP.NET. WPF, WCF and WinForms are not really getting any major enhancements that I see. VS11 itself, though, will be a recommended upgrade. Here are a few of the high points for me.
- Project Compatibility – This is a major feature and I’m glad it was finally announced. It will allow teams to upgrade piecemeal rather than requiring everyone to upgrade at once. The only cost is that the solution/project must already be in VS2010 SP1. Still it’ll be nice to have mixed teams. I’m curious to know how it’ll impact replication of bugs and whatnot though. Time will tell.
- Model Binding – MVC already has this feature so it is nice to see it in ASP.NET. Model binding will greatly reduce the likelihood of bad binding causing runtime errors while making development easier. Intellisense enhancements are also nice.
- Solution Navigator – I’m really not a big fan of this but a lot of people are. The ability to browse code from Solution Explorer will certainly make some windows like ClassView obsolete.
- Find and replace with regular expressions – It is about time. I never used the Find and Replace regular-expression search because it used a bizarre syntax. Now that it uses the .NET implementation it’ll make finding things easier.
- Find window – Available in VS2010 today with the appropriate extras installed, this is actually pretty handy. It makes finding things easier. Once it is integrated into VS11 it’ll be better.
- C++/CLI Intellisense – C++/CLI folks have been screaming for it for years. It’s finally here. Hopefully C++/CLI isn’t needed too much anymore, but it’s nice to have when you need it.
- Async – For VB and C# the addition of async will mean we have to write fewer anonymous methods and/or functions to accomplish the same thing. This alone is worth the price.
- Standalone Intellitrace – In VS2010 (from what I can remember) you had to do a lot of work to get IT running on a machine without VS. You might have had to even install VS, I can’t remember. In VS11 you’ll be able to drop IT on a machine that is having problems and then view the log on your dev machine. This will make IT far more useful.
- LocalDB – SQL Express is dead. Long live LocalDB. Finally a dev version of SQL that doesn’t require all the extras of SQLX while still giving us the same features. Unfortunately I think this may impact project compatibility with VS2010 so beware.
- IIS Express – WebDev is dead. Long live IISX. Actually IISX has been out for a while and available as an option in VS2010 SP1 but it’ll now be integrated into VS11. All the features of IIS without the overhead.
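To illustrate the async point above, here is a minimal sketch (the class, method name and URL are mine, purely for illustration) of the callback-free style that await enables in .NET v4.5:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Pinger
{
    // Before async/await this would be a chain of ContinueWith calls
    // or anonymous-method callbacks; await flattens it into one method.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // Execution resumes here when the download completes,
            // without blocking the calling thread in the meantime.
            string body = await client.GetStringAsync(url);
            return body.Length;
        }
    }
}
```

The compiler generates the state machine and continuations for you, which is exactly why we get to write fewer anonymous methods.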
So, in summary, I think Win8 is going to be the next Windows Millennium Edition. It’s going to be a lackluster release from the consumer’s point of view. At this point it doesn’t seem to have any features that anybody is going to care about, and yet a lot of emphasis is being given to Win8-specific functionality. I don’t think there is going to be a great migration to Win8 except on new computers. VS11, on the other hand, is going to be a recommended update, but just for its new features and .NET v4.5. Maybe MS can get the Win7 features that are missing in .NET into v4.5, but I won’t hold my breath. Instead we’ll be stuck with a couple of Win8-specific features that nobody will use for the foreseeable future. As both releases get closer we’ll get a better idea of what impact they’ll have, but this is my opinion for now.
Unit testing is important for code quality. Most people don’t question this fact today. There are lots of tools available to help you write unit tests. But here’s the problem – you have to morph your design to make it testable. I’m all for making my code better but I get annoyed when I have to modify a design just to fit the tools I’m using. To me this is a bad sign. Our tools are there to support our work. If we’re modifying our code to make the tools work then we have things backwards. When we’re designing our code we shouldn’t be focused on the tools we will use (IDEs, testing, build, etc). That would be equivalent to designing our code with the limits of our database or communication infrastructure in mind. While these will impact the implementation, they should not impact the design. And yet unit testing, more often than not, requires that we design our code with testing in mind.
Mocking objects is a very common practice in unit testing. It allows us to focus on what is specifically being tested without having to worry about setting up all the extra stuff. There are many mocking frameworks available but the majority of them have the same limitations, just with different syntax. Most mocking frameworks can only mock interfaces or extensible types. Sealed and static types are out of the question. Even more frustrating is that often the member(s) to be mocked must be public or internal and virtual (but not always). Sealed and static types have very specific uses in design. They identify classes that are either self-contained and/or non-extensible. It is a design decision. Unit testing with these types is difficult so the common approach is to either modify the design (bad) or use abstraction.
A lot of code these days goes overboard with abstraction. It reminds me of the early database days when DBAs wanted to normalize everything. Abstraction is important, but it should be used only when it is needed. Unit testing is not a need. This is just one example of modifying design to meet the needs of the tools. As an example take DateTime.Now. If you need to be able to test code that uses this member then you either have to get tricky with date management or you have to abstract the current date out of your design. Keep in mind that your production code would (probably) never need a time other than the current time, and yet you abstract it out for testing purposes. Some folks will argue that you wouldn’t hard code such a value anyway, you’d just pass it as a parameter, but that is just moving the problem up the call hierarchy. Somewhere along the way the time has to be specified.
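To make that concrete, here is a sketch (all type and member names are hypothetical) of the kind of abstraction that testability typically forces on us:

```csharp
using System;

// A clock interface whose only production implementation
// does nothing but return DateTime.Now.
public interface IClock
{
    DateTime Now { get; }
}

public sealed class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class InvoiceChecker
{
    private readonly IClock _clock;

    public InvoiceChecker(IClock clock) { _clock = clock; }

    public bool IsOverdue(DateTime dueDate)
    {
        // Production only ever needs the current time; the indirection
        // exists purely so a test can substitute a fake clock.
        return _clock.Now > dueDate;
    }
}
```

The production code pays for an interface, an extra class and constructor plumbing, all so a test can pin the time to a known value.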
In state-based testing it is generally necessary to expose property getters (and even setters) internally so the framework can access them. This allows us to test the state of an object but not expose the members to the production code directly. (We’re ignoring the whole discussion for and against state-based testing and domain development.) This is a dirty hack. We are once again modifying our design (albeit in a hidden manner) to allow for testing. As an aside, MSTest has an interesting approach using accessors to allow access to private members without using this hack. Unfortunately, though, it is generally broken, hard to maintain and not recommended.
What is the problem with the current set of testing tools? The problem is that they almost always use reflection, as they evolved from the existing framework tools. Reflection in and of itself is slow, but for testing it is acceptable. What is a little harder, though, is working around the security model of .NET to get access to private members. Even worse is that testing tools can enumerate objects, create types (or derive new types) and invoke members, but only if the base type is “designed” properly. This is hardly the fault of the tools but rather a consequence of their reliance on reflection. It is my belief that it is time for unit testing frameworks and tools to evolve into the tools we really need.
How can testing tools evolve? If I knew the answer I would have already shared it. We can take a look at a few tools available today to get an idea of what could be though. TypeMock and Moles approach mocking in an interesting way – they rewrite the code under test. As an aside note that I’ve never used TypeMock as it is commercially available. I have used Moles in limited scenarios but I intend to use it more going forward.
Code rewriting is an old technique. Traditionally it is slow and brings into question the validity of testing, but the benefits are tremendous. Using either of these tools we can stub out almost any code regardless of how it was designed. These tools allow us to design our code the way we want to and still unit test it. As an example, we can use DateTime.Now without the wasteful abstraction and yet we can set the time in our tests. Need to stub out a member that isn’t virtual? No problem – the rewriter can rewrite the method body. This, to me, is the approach that has the best hope of evolving testing, but it is still a relatively new, and difficult, task.
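For comparison, here is roughly what detouring DateTime.Now looks like with Moles. I’m writing this from memory of my limited use of the tool, so treat the mole type name, the attribute and the required setup as approximate rather than gospel:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClockTests
{
    [TestMethod]
    [HostType("Moles")] // run under the Moles host so calls can be rewritten
    public void Now_CanBeDetoured()
    {
        // Detour the static DateTime.Now getter - no IClock-style
        // abstraction in the production code is required.
        System.Moles.MDateTime.NowGet = () => new DateTime(2012, 1, 1);

        Assert.AreEqual(new DateTime(2012, 1, 1), DateTime.Now);
    }
}
```

The point isn’t the exact syntax; it’s that the static, non-virtual member gets stubbed without the production design changing at all.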
There are two big issues with current rewriting tools (for me at any rate). The first is performance. Rewriting takes a while. Unit tests should run quickly. We can’t be rewriting code for every test. Even with processors as fast as they are this would be too slow. It might take runtime-level rewriting to get performance where it needs to be, but once performance gets better then rewriting will become more feasible.
The other problem is configuration. Today, at least for Moles, you have to be explicit about what you want stubbed. For a handful of types this would be OK but as code gets more complex we don’t want to have huge lists to manage. We need to have the same facilities available to us that mocking frameworks use today. When the test starts the rewriter takes a look at what needs to be rewritten and does it on the fly. Today rewriting happens beforehand and this is simply not flexible enough.
The ideal test tool would allow us to test any code. We can configure the tool to return stubbed values anywhere in the calling code, we can mock up objects to be returned from methods so we can track expectations, we can control when things should and shouldn’t be called and we can do it all at test time rather than at compile time. The test framework cannot modify what gets called, only what happens when it gets called and what it returns. Just like the mocking frameworks of today.
In summary, unit testing and the tools that it uses, such as mocking, are critical for properly testing your code. But today’s tools are using technologies that require too many sacrifices in our designs. Testing tools need to evolve to allow us to design our code the way it needs to be designed and have the tools just adapt. Code rewriting currently looks like a good way to go, but it is still too early to be fully usable in reasonably sized test suites. This is a challenge to all testing tools – revolutionize the testing landscape!!! Create tools that adapt to our needs rather than the other way around. The testing tool that can do that will become the clear winner of the next generation of tools.