In the previous article we expanded the template to dynamically set the type and namespace names based upon the project the template is used in. Now we are going to turn our focus to generating properties for each of the settings in the configuration file. For this we'll add more code to the class block to open and read the settings from the project's configuration file (which we'll have to write code to retrieve). We'll also have to decide how to convert string values to their typed equivalents.
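One plausible way to handle the conversion, sketched here under my own assumptions (the helper name `InferType` and the set of supported types are illustrative, not the template's actual code), is to probe the default value with `TryParse` from most to least specific:

```csharp
using System;
using System.Diagnostics;

// Hypothetical helper: guess a setting's CLR type from its default value.
// Probing order matters: "42" should come back as int, not double.
public static class SettingTypeInference
{
    public static Type InferType(string value)
    {
        if (int.TryParse(value, out _)) return typeof(int);
        if (double.TryParse(value, out _)) return typeof(double);
        if (bool.TryParse(value, out _)) return typeof(bool);
        return typeof(string);   // fall back to the raw string
    }
}
```

The T4 template could then emit a property whose type matches whatever this returns for the setting's value in the config file.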
In the first article in this series we created a basic, static T4 template. The template allowed us to replace the standard boilerplate code for reading an app setting
string setting = ConfigurationManager.AppSettings["someSetting"];
with strongly typed property references like this
var myIntSetting = AppSettings.Default.IntValue;
var myDoubleSetting = AppSettings.Default.DoubleValue;
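For context, the generated type behind those calls might look something like this. This is a hand-written sketch of the general shape, not the template's actual output, and the setting names `IntValue` and `DoubleValue` are illustrative:

```csharp
using System.Configuration;
using System.Diagnostics;

// Sketch of what the template might generate for two appSettings
// named "IntValue" and "DoubleValue".
public partial class AppSettings
{
    private static readonly AppSettings _default = new AppSettings();

    // Single shared instance, mirroring the Settings designer's Default pattern.
    public static AppSettings Default { get { return _default; } }

    public int IntValue
    {
        get { return int.Parse(ConfigurationManager.AppSettings["IntValue"]); }
    }

    public double DoubleValue
    {
        get { return double.Parse(ConfigurationManager.AppSettings["DoubleValue"]); }
    }
}
```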
Here’s a summary of the requirements from the first article (slightly reordered).
- All settings defined in the project’s config file should be exposed, by default, as a public property that can be read. I’m not interested in writing to them.
- Based upon the default value (in the config file) each setting should be strongly typed (int, double, bool, string).
- The configuration file cannot be cluttered with setting-generation stuff. This was the whole issue with the Settings designer in .NET.
- Sometimes the project containing the config file is different than the project where the settings are needed (ex. WCF service hosts) so it should be possible to reference a config file in another project.
- Some settings are used by the infrastructure (such as ASP.NET) so they should be excluded.
- Some settings may need to be of a specific type that would be difficult to specify in the value (ex. a long instead of an int).
In this article we're going to convert the static template to a dynamic one and begin working on requirements 1-3. From last time, here are the areas of the template that need to be made dynamic.
- The namespace of the type
- The name of the type, including pretty formatting
- Each public property name and type
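To sketch the direction for the first two items (this is illustrative, not the finished template; the `NamespaceHint` context value is a technique commonly used with the Visual Studio T4 host and should be verified on your setup):

```
<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Runtime.Remoting.Messaging" #>
<#@ output extension=".generated.cs" #>
<#
    // Visual Studio supplies the default namespace via "NamespaceHint"
    // when it transforms the template; fall back to a fixed name otherwise.
    var ns = CallContext.LogicalGetData("NamespaceHint") as string ?? "MyProject";

    // Derive the type name from the template file name, e.g. AppSettings.tt -> AppSettings.
    var typeName = Path.GetFileNameWithoutExtension(Host.TemplateFile);
#>
namespace <#= ns #>
{
    public partial class <#= typeName #>
    {
    }
}
```

Note that `Host.TemplateFile` is only available because the template is marked `hostspecific="true"`.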
AppSettings are settings stored in your configuration file under the <appSettings> element. Almost every application has them. Each setting consists of a name and a value. To access such a setting in code you need only do this.
string setting = ConfigurationManager.AppSettings["someSetting"];
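For reference, the corresponding entry in the config file looks like this (the setting name and value are illustrative):

```xml
<configuration>
  <appSettings>
    <add key="someSetting" value="42" />
  </appSettings>
</configuration>
```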
There are a couple of problems with this approach.
- Quite a bit of boilerplate code to access a setting given what it is actually doing
- The setting name is hard coded and must match the config file
- The returned value is a string so if you need a different type then you’ll need to convert it
In .NET v2.0 Microsoft added the Settings class to work around these issues. It allows you to create a setting with a type and a value, and the designer will generate a backing type where each property matches a setting. This seems great but it never really took off; not even Microsoft uses it in its own framework. Part of the problem is that the config entries it generates are overly complex, storing type information, default values and other metadata. Needless to say, appSettings remain popular anyway. Fortunately we can get the simplicity of appSettings with the power of the newer Settings class, all via T4.
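To illustrate the complexity, the Settings designer emits config entries roughly along these lines (written from memory for illustration; the exact attributes and section names in your project may differ):

```xml
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings"
                  type="System.Configuration.ApplicationSettingsGroup, System">
      <section name="MyApp.Properties.Settings"
               type="System.Configuration.ClientSettingsSection, System" />
    </sectionGroup>
  </configSections>
  <applicationSettings>
    <MyApp.Properties.Settings>
      <setting name="SomeSetting" serializeAs="String">
        <value>42</value>
      </setting>
    </MyApp.Properties.Settings>
  </applicationSettings>
</configuration>
```

Compare that to a one-line `<add key="SomeSetting" value="42" />` under `<appSettings>`.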
In this series of posts I’m going to walk through the process of generating such a template including the ability to add some more advanced functionality. A full discussion of T4 is beyond the scope of a blog so refer to the following links for more information.
I recently found out that I was named one of the MVPs of the Year for our group. Wow, what an honor! Given the caliber of MVPs one cannot help but be humbled by this. Unfortunately other obligations prevent me from attending the ceremony to receive the award. I found out later that one of the reasons was that I was supposedly the top bug reporter on VS 2012 among US dev MVPs. I reported 27 issues and 13 were actually resolved. This got me thinking about Connect and how Microsoft has historically used it.
Historically, when an issue was reported someone at Microsoft would try to replicate it. If they could, they would escalate it to the team and you'd receive some feedback. At that point the issue would either be fixed or, more likely, closed without reason. More recently Microsoft has started to close items with short descriptions like ‘by design’ or ‘won’t fix’. The one that drives me mad though is ‘we have to evaluate the priority of each item reported against our schedule and this issue is not sufficiently important. We will review it in the future’. Closed. The problem is that I’m not convinced they ever do “review it in the future”. Even worse, once an item is closed you cannot do anything with it anymore.
If, as happened recently to me, the folks at MS failed to grasp the issue you were reporting and closed the item, there is no way to tell them they messed up. Recently I reported an issue to Microsoft about the behavior of inline tasks in MSBuild (https://connect.microsoft.com/VisualStudio/feedback/details/768289/msbuild-path-used-for-inline-task-reference-is-not-honored). The issue is that an inline task can reference an assembly using a path. At compile time this is fine, but at runtime the assembly path is not honored, so unless you copy the assembly to the same location as the task it will fail to find the assembly. Now I understand the reasoning behind why it would fail. I also know that I’m not the only one who has seen this issue; it has been reported on multiple blogs over the years.
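To make the report concrete, an inline task that references an assembly by path looks roughly like this (the paths, task name and body are illustrative, not the code from the actual Connect item):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="MyInlineTask" TaskFactory="CodeTaskFactory"
             AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <Task>
      <!-- This path is honored when the inline task is compiled but,
           per the bug, not when it runs: unless the assembly also sits
           next to the generated task assembly, the load fails. -->
      <Reference Include="C:\libs\SomeDependency.dll" />
      <Code Type="Fragment" Language="cs">
        Log.LogMessage("inline task body that uses SomeDependency");
      </Code>
    </Task>
  </UsingTask>
</Project>
```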
Somehow the folks looking at the issue got caught up in what the sample code was trying to do rather than what the actual bug was and reported that I should be using some built-in task instead. How does that in any way relate to my problem? I don’t know, but the item was closed anyway. Without starting a new item I cannot revive this closed item. Sure, I left a follow-up comment about it but the item is still closed, the bug still exists and I doubt it will ever get resolved. And I’d like to think that as a contributor to the community my items get at least a little more attention than the general user’s, but it wouldn’t appear so in Connect. If MS really wants us to report bugs and Connect is the tool to do so then the entire system needs to evolve into a more full-featured issue tracking system where we can alert MS to issues that may have been closed incorrectly. Even more important, issues that “may be resolved in a future release” shouldn’t be closed but deferred so we know that at least they are under consideration. Right now Closed means it’s fixed, it’s by design, it cannot be reproduced or it ain’t going to be fixed.
Historically MS has taken some flak from the community about the uselessness of Connect. With VS2012 they seem to have upped their game and started taking feedback more seriously, but there is still much work to be done. There has to be more insight into where an item is in the process, policies for getting items reevaluated, and a better system for identifying items that are closed or deferred. Perhaps the User Voice site will take over bugs as well; right now it is more for suggestions. Time will tell. Having 50% of my reported items resolved indicates that Connect is starting to work, at least for me, but it has to work for everybody or else it isn’t going to be used.
Our industry is plagued by large egos. I try to keep mine in check (except around a few people who I know will take me for who I am, not what I’ve done). Where I work we have a motto “If you’re truly good you don’t have to say anything”. What that means is that bragging about being an MVP, writing a book, publishing a popular framework or whatever else gets you nowhere. If you’re truly good your works will speak for themselves. As such I will quietly place my plaque next to my MVP awards and move on. But recently one of our team members won both the Chili Cookoff contest and the Employee of the Year award in the span of two weeks. They proudly carried their awards the next few days to all their meetings. Maybe, just maybe, I’ll carry mine to a couple of meetings. If you’re truly good you don’t have to say anything but awards don’t talk do they :}
Visual Studio extensions are really popular these days, and not surprisingly Visual Studio has a very active extension gallery. Unfortunately the gallery is public, so if you develop extensions for your personal use, or for your company, you cannot use it. Fortunately VS supports the concept of a private gallery. A private gallery is nothing more than an extension gallery that is not accessible to the world. It is discussed on MSDN, so refer there for more information about the whats and whys. Unfortunately the documentation is a little vague and setting one up is not trivial. I’m going to discuss the steps needed to get a private gallery up and running.
These days most developers use NuGet to reference third-party dependencies like Entity Framework, Enterprise Library and Ninject. A full discussion of NuGet is beyond the scope of this post; there are lots of articles on what it is, how to use it and how to package and publish to it, so I’ll skip all of that. But what about internal dependencies like shared libraries used by company applications? It turns out you can host your own private NuGet repository as well. Visual Studio is already set up to support this; the hard part is getting the repository set up. At my company we’ve been running a local NuGet repository for a year and it works great, for the most part. We have a company policy that all dependencies must be stored in NuGet. This pretty much eliminates versioning issues, simplifies sharing code across projects and keeps binaries out of source control. In this post I’m going to go over the steps to set up a private NuGet repository. Others have posted on this topic and there is some documentation available, but unfortunately it is out of date.
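Once the repository exists, pointing Visual Studio at it is the easy part. An entry along these lines in NuGet.config registers the feed (the feed name and URL are placeholders for your own server):

```xml
<configuration>
  <packageSources>
    <!-- Illustrative name and URL; substitute your internal feed. -->
    <add key="CompanyFeed" value="http://nuget.mycompany.local/nuget" />
  </packageSources>
</configuration>
```

With this in place the feed shows up as a package source in Visual Studio's NuGet dialog alongside the public gallery.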
Recently I was working on an MVC 4 application and I ran into a couple of issues with how property validation works with the default binder. I thought it was useful enough to share.
I’m finishing up my company’s second rewrite of an ASP.NET application in MVC and Razor. One of the things we learned along the way is that controllers and views are no more testable than web forms and code-behind files were in ASP.NET. So we have been using the orchestrator pattern for newer code. I’d like to share our thought process on this approach.
UPDATE: My post triggered an e-mail exchange with Rowan Miller from the Entity Framework group. He wanted to explain and clarify some of my comments around EF. I’ve added them in the appropriate sections below. Thanks for the clarifications Rowan!
I understand the need for making some breaking changes to the framework as we transition to newer versions but one of the big selling features of .NET has been writing apps without having to worry too much about having the exact same version on end machines. Microsoft seems to be moving away from this goal with newer releases. .NET 4.5, as a framework, has a lot of nice, new features. But as the next version of the framework I think it fails on many levels. I’m fearful that this is becoming the new trend at Microsoft given the recent releases of .NET and other libraries.
I’ve recently been working on a project where I used Entity Framework for the data access. I’ve run into some challenges that don’t seem to be discussed much in the articles on how to properly use an ORM, and I’m curious whether anyone else is having them. While I’m using EF as my base, I’m pretty sure these challenges exist for other ORMs as well, including NHibernate, so I don’t believe they are caused by a specific ORM implementation but rather arise from using ORMs in general.