In the first article in this series we reviewed the different ways to customize a build with MSBuild. In this article we will look into the first approach, project files.
In this series of articles I will discuss the various approaches to customizing builds in .NET. There are several approaches, each with its own advantages and disadvantages. We will discuss each in turn.
Here’s a non-technical post of something humorous that happened to me today.
Several years ago I published a series of articles on how to use T4 to generate code. As part of that series I showed how you can, at build time, get Visual Studio to run environmental transforms on any project. To get this to work we relied on a custom .targets file being installed. The .targets file was installed as part of a Visual Studio extension that also installed the T4 templates. As newer versions of Visual Studio have been released, the extension has been updated through Visual Studio 2017.
This process still works, but as builds move to cloud build servers, or to machines without Visual Studio installed, you can no longer rely on extensions being available. This article will discuss moving the transform process to a NuGet package that can be used in any build system. This is consistent with how popular packages now inject .targets files into the build process, and it reduces the dependencies needed to build a solution.
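As a rough sketch of the mechanism (the package name here is hypothetical): NuGet automatically imports a .targets file placed in a package's build/ folder when the file name matches the package ID, which is how a package can hook its own targets into any consuming project's build.

```xml
<!-- build/MyCompany.ConfigTransforms.targets (hypothetical package name).
     NuGet wires this file into the consuming project automatically because
     it sits in the package's build/ folder and matches the package ID. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="RunConfigTransforms" BeforeTargets="BeforeBuild">
    <!-- The real transform logic would go here; this is a placeholder. -->
    <Message Text="Running environmental transforms..." Importance="high" />
  </Target>
</Project>
```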
It happens all the time. A developer needs some functionality in their application, so they start by writing an interface and then an implementation to go with it. Then something doesn't compile, the code doesn't work right, or they cannot figure out how to get their design to work. Next comes the inevitable question: "How can I fix my interface?" The answer is: fix your interface by throwing it away. To be fair, it isn't generally the developer's fault they went this way. We are trained from an early (developer) age to write code like this. We hear about it in school. We go to our first job and find interfaces being used everywhere. The problem is that it is the wrong approach. In this article I'm going to discuss why we (generally) have it backwards and how you can start doing things "the right way".
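As a minimal sketch of the idea (the types here are hypothetical, not from any real project): start with the concrete class, and extract an interface only when a second implementation or a test seam actually demands one, so its shape is driven by real usage rather than guesswork.

```csharp
public class Order
{
    public int Id { get; set; }
}

// Step 1: write the concrete class first -- no interface yet.
public class OrderRepository
{
    public Order GetOrder(int id) => new Order { Id = id };
}

// Step 2: only when a second implementation (or a test double)
// actually appears do we extract an interface, shaped by how the
// concrete class is really being used.
public interface IOrderRepository
{
    Order GetOrder(int id);
}
```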
If you aren’t aware, Kraken is my personal class library that I’ve been using since the early days of .NET. It has evolved over the years to support new features as .NET and my needs have changed. The entire source code is available on GitHub and I have recently been pushing builds to NuGet. Prior to that I ran my own private NuGet server where I hosted the packages and used them in my projects. Now that .NET Standard 2.0 is out, and my company is looking at moving to .NET Core, it is time for me to upgrade Kraken to .NET Standard. I’m going to document that process here because I think my upgrade path will be very similar for others.
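For context, the end state of such an upgrade is typically an SDK-style project file targeting .NET Standard; a minimal sketch (the exact properties will vary per project):

```xml
<!-- A hypothetical SDK-style project file after the upgrade:
     the library now targets .NET Standard 2.0 so it can be
     consumed by both .NET Framework and .NET Core apps. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```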
It seems so simple. Someone posts an example of how to do something using a cool new feature in C#, you try to use their code, and it fails to compile. Or you see something documented as working, you try it, and it fails to compile. Why is that? The reason is actually quite simple, but probably not at all obvious to the average developer.
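One common culprit is the compiler's language version: the project may be pinned to an older C# version than the sample assumes. A sketch of opting into a newer language version in an SDK-style project:

```xml
<PropertyGroup>
  <!-- Ask the compiler for a specific C# language version,
       or "latest" for the newest one the installed compiler supports. -->
  <LangVersion>latest</LangVersion>
</PropertyGroup>
```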
If you are not already aware, Visual Studio 2017 shipped with support for a newer project format (informally known as the SDK project format). The newer format came about for a variety of reasons, including:
- The traditional project format is very verbose.
- The traditional project format is hard to read and edit.
- The traditional project format requires the file to be modified for every change made to the project.
- The .NET Core project.json format did not easily map to the traditional project format.
While the focus of the new format has been on supporting .NET Core applications, it can be used with many other project types as well, with restrictions discussed later. In this article I will discuss how to migrate a traditional project file to the newer SDK format. There are numerous other blogs on this same topic if you want different viewpoints.
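To give a feel for the points above, here is a sketch of a complete SDK-style project file; a traditional .csproj with the same content typically runs to dozens of lines because every source file and default property must be listed explicitly:

```xml
<!-- A minimal SDK-style project file. Source files under the project
     directory are included by default, so no per-file <Compile> items
     are needed, and only non-default properties appear. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
  </PropertyGroup>
</Project>
```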
PowerShell has become the de facto scripting language for Windows platforms. It brings the power of .NET to the command line. One of the benefits of PowerShell is creating scripts or functions that can be shared with others. This allows teams to simplify repetitive tasks by putting them into scripts and sharing them with each other. The downside is that PowerShell, by default, makes it hard to run scripts (for security reasons). Additionally, each user has to copy the scripts somewhere PowerShell will find them. In this article I’ll talk about an easier way to deploy scripts so that they can be run just like the built-in commands. It requires a little more effort up front, but once you have the pattern down it is really easy.
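As a rough sketch of the pattern (the module and function names here are hypothetical): packaging functions as a script module under a path listed in `$env:PSModulePath` lets PowerShell discover and auto-load them like built-in commands.

```powershell
# MyTools.psm1 -- placed in a folder named MyTools under a directory
# listed in $env:PSModulePath (e.g. Documents\WindowsPowerShell\Modules\MyTools\).
# PowerShell then finds Get-Greeting without any manual dot-sourcing.
function Get-Greeting {
    param([string]$Name = 'World')
    "Hello, $Name"
}

# Only export the functions intended for public use.
Export-ModuleMember -Function Get-Greeting
```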
If you’ve never used a C# iterator before, it may seem like magic. But under the hood it isn’t doing anything you can’t already do yourself. It is a time saver, and as such there are quick actions and analyzers that recommend you use them. If you’re like me, you use them without even thinking, but it is still important to remember that they are generated code. It really shouldn’t be necessary to understand the underlying implementation of how they work, but sometimes you have to. Here’s an interesting oddity that I saw recently with iterators.
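For readers who haven't seen one, here is a minimal sketch of an iterator and the "generated code" point above: the compiler rewrites the method into a hidden state-machine class, so the method body does not run until the sequence is actually enumerated.

```csharp
using System;
using System.Collections.Generic;

static class IteratorDemo
{
    // The compiler rewrites this method into a hidden state machine;
    // none of the body executes when the method is called.
    static IEnumerable<int> Numbers()
    {
        Console.WriteLine("Iterator body starts"); // deferred until enumeration
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        var seq = Numbers();        // nothing printed yet -- deferred execution
        foreach (var n in seq)      // the body runs lazily, one step per item
            Console.WriteLine(n);
    }
}
```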