Migrating from TFS to VSTS, Part 1

My company recently migrated our on-premises TFS 2017 server to VSTS. There were a variety of reasons for doing so, including faster updates, less maintenance for us, and the chance to clean up a system that had been upgraded for years and had accumulated a lot of baggage. This series of posts discusses the approach we took, the issues we ran into and (most importantly) provides code to help anyone else who has to go through the same process.

Our Approach

The general rule of thumb we followed was to migrate as much as we could for historical purposes. We wanted our new system to be as close to the original as possible. Since we were planning on doing a lot of testing and would need to clean the system frequently, it had to be automated as much as possible.

The TFS REST APIs seemed like the best approach for automation, so that is what we used. However, there are issues with the APIs.

  • Some functionality is not yet covered by the APIs.
  • Some of the documentation is just blatantly wrong.
  • The client APIs have bugs.

Since we wanted to automate as much as possible, we used the REST APIs where possible and deferred to other tools where the functionality was missing or buggy. Some things probably could have been migrated automatically, but either the APIs weren’t sufficiently documented or it would simply have taken too much time, so we left them as manual processes.

Overall our migration process ran smoothly. For code we migrated over 90 projects with at least 10,000 files. We migrated over 2500 work items (with full history), 90 packages, 30 builds and numerous queries. The entire process took about 5 hours.

The original plan was to use PowerShell so we could write less code. Unfortunately, as we began using the APIs and setting up the infrastructure for logging and tracking, we realized we’d need a lot of custom code. While it can be done in PowerShell, we decided to switch to a C# console application that we could more easily control.


Before going any further, it is important to talk about the licensing of the code. The code is provided as is; no warranty is given for its reliability. It worked for us, but you should test it thoroughly before using it yourself.

The code is being freely shared by the Federation of State Medical Boards to anyone interested in using it. Feel free to download and modify the code for your own purposes (and you will need to). All we ask is that you leave the copyright notice in the files and do not take credit for the code that you did not write.

Using TfsMigrate

TfsMigrate is the name of the console application that runs the migration. It is really just a host process that calls out to “processors” to do the actual work. TfsMigrate is responsible for the following.

  • Command line parsing
  • Error handling
  • Logging configuration
  • Providing core services

To use TfsMigrate you need the following.

  • An account on the source TFS server and a personal access token (see below). The account needs access to everything that will be migrated.
  • An account on the target VSTS server and a personal access token. The account needs to have Project Collection Service Account privileges at least temporarily.
  • Space to store temporary files. In general you’ll need enough space to store a copy of your largest code base and one set of your packages (if any).
  • NuGet command line. This is only necessary if migrating NuGet packages.
  • Git command line. This is only necessary if migrating source code.
  • A package feed on the target VSTS server. The feed needs to be a package source accessible from the NuGet command line. This is only necessary if migrating NuGet packages.

The target VSTS server project should be empty when running the final migration. This includes cleaning out work items, queries, all source code, packages, etc. During testing this is not necessary as most processors support overwriting existing items.

TfsMigrate accepts the following arguments.

  • -logFile logFilePath. Optional. The file to write all the output to (in addition to the screen). Default = TfsMigrate.log.
  • -processor processorName. Required. The name of the processor to run.
  • -settings settingsFilePath. Optional. The name of the settings file. Default = settings.json.
  • -verbose. Optional. If specified then verbose logging is turned on.

A processor is responsible for migrating one piece of the system: there are different processors for work items, code, queries and so on. Each processor runs independently of the others. To run the migration you must specify a processor using the -processor argument. Processors should be run in a specific order (see below), but because each processor is separate you can run portions of the migration to verify behavior and correct any issues.

The settings that control how and what get migrated are stored in a JSON file. The default is settings.json stored in the same directory as the executable but you can specify a different path. In general processors only work against a single TFS project. If you need to migrate multiple projects then you will need to use different settings files.

One of the running themes you will see in the code is a change in architecture. Initially we were only interested in getting TFS migrated. As work on the migration code continued we changed the architecture to meet the needs of the processor being written. In some cases we went back and retrofitted changes to previous processors, but not all changes made it through. Remember, this code was designed for a one-time migration, so take it as such.

Scripting the Calls

For the full migration we need to call TfsMigrate multiple times. Additionally there are some manual steps that need to occur either ahead of time or afterwards. The runall.ps1 script wraps all of this up and is what will be run to do the full migration. Since the migration can easily take hours, the script pauses for confirmation at each step. This file will need to be modified to fit your migration. Here’s a sample of the starter code.

# Minimal Confirm helper shown for completeness; the full script defines its own version
function Confirm($message) {
    $answer = Read-Host "$message? (y/n)"
    return $answer -eq 'y'
}

$postMigrateSteps = @()

cd $migratePath

# 1
if (-Not (Confirm('New project created'))) { exit }
$postMigrateSteps += "Add customized image"

# 2
if (-Not (Confirm('Process template customized'))) { exit }

# 3 - Code
if (Confirm('Migrate source code')) {
    .\tfsmigrate.exe -processor VersionControl -logFile "$logPath\VersionControl.log" -verbose
    $postMigrateSteps += "Add root files"
    $postMigrateSteps += "Lock source code"
}
The confirm functions pause the migration to give you time to confirm a manual process has occurred before the migration continues. Each processor is wrapped in a similar confirmation to allow you to re-run migrations as needed. For processors that have post-migration steps the script keeps a list of messages to display. Once the migration is finished the list of messages is shown to remind you of what steps you need to complete.

The above starter code confirms that the new project has been created in VSTS. Post-migration it displays a message reminding you to change the project image. Next it confirms you have customized your process template; any template customizations must occur before work items are migrated. Then it migrates the source code. Once the code is migrated it records reminders to add the root files and lock the old code. Additional steps for migrating the remainder of the information are in the version in the code archive.

In each case TfsMigrate is called with a new processor and a separate log file. This causes each processor to write to a different log in case something goes wrong. Verbose logging is turned on to help narrow down issues. For your migration you can call a processor multiple times as needed.

Configuring for Your Needs

The code and settings are very much specific to the needs we had. But most of the processors were written to be a little flexible. The processors are controlled using a settings file. This file determines what gets migrated and how. You can adjust this file to fit your needs. Remember that the migration tool is designed to run multiple times so you can have multiple settings files if you’re migrating different sets of projects. In our case we were moving from 1 team project to another so we really didn’t need multi-project support but nothing prevents you from using different settings files to migrate multiple projects.

Settings are case insensitive. The settings parser does a mapping from the named setting to the corresponding property, if any.
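
As a sketch of that mapping behavior, here is a small Python illustration (the real parser is C# code in TfsMigrate; the function name and shapes here are assumptions):

```python
import json

def bind_settings(raw: dict, known_names: list) -> dict:
    """Map JSON keys to known setting names, ignoring case.
    Unrecognized keys are dropped, mimicking 'maps to the
    corresponding property, if any'."""
    lookup = {name.lower(): name for name in known_names}
    bound = {}
    for key, value in raw.items():
        name = lookup.get(key.lower())
        if name is not None:
            bound[name] = value
    return bound

raw = json.loads('{"overwrite": true, "targetagentqueue": "Hosted VS2017", "unknown": 1}')
settings = bind_settings(raw, ["Overwrite", "TargetAgentQueue", "ExcludeDefinitions"])
```

Here the lowercase keys in the file still bind to the properly cased setting names, and the unknown key is silently ignored.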

Global Settings

The global settings are used to store settings needed by all the processors. Some of the processors do not use the global settings because they were written before the settings were added.

Global settings are stored under the Global option.

  • Debug. Equivalent to the verbose option on the command line.
  • OutputPath. The path to store temporary files needed by some processors.
  • SourceCollectionUrl. The URL to the source TFS collection.
  • SourceProject. The name of the TFS project to migrate. Different projects require different settings files.
  • SourceUser. The name of the user in TFS.
  • SourceAccessToken. The personal access token for the source user.
  • TargetCollectionUrl. The URL to the VSTS target account.
  • TargetProject. The target project.
  • TargetAccessToken. The personal access token for the user who will do the migration.
"Global": {
    "Debug": true,

    "OutputPath": "C:\\Temp",

    "SourceCollectionUrl": "https://mytfs:8080/DefaultCollection",
    "SourceUser": "user name of someone with permissions to TFS",
    "SourceAccessToken": "personal access token of user in TFS",
    "SourceProject": "SourceProject",

    "TargetCollectionUrl": "https://account.visualstudio.com",
    "TargetAccessToken": "personal access token of user in VSTS",
    "TargetProject": "TargetProject"
}

Build Definitions

The BuildManagement processor is responsible for migrating build definitions. VSTS no longer supports XAML builds, so they are not migrated. No attempt is made to fix up the source paths; they will likely be wrong and will need to be fixed manually.

Task groups in builds are not supported by the REST API at this time. In a build definition a task group is identified by its unique GUID. The GUID is generated when the group is created and will therefore differ between servers. To migrate task groups, do the following.

  1. Take note of the group ID on the source server. It is available in the URL.
  2. Export the group using the UI.
  3. Import the group to the VSTS server.
  4. Take note of the group ID on the VSTS server.
  5. Add an entry to the TaskGroups array in the settings to map from the source to the target group.

When the processor runs, if it detects a task group then it will map the source ID to the target ID contained in the settings file.
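
That remapping step can be sketched as follows (Python used for illustration; the real processor is C#, and the build definition shape shown here is an assumption based on the build REST schema):

```python
# Illustrative sketch: rewrite task group IDs in a build definition using
# the TaskGroups mapping from the settings file. The "build"/"task"/"id"
# structure is an assumption, not the tool's actual code.
task_group_map = {
    "10434ce0-8d5e-4447-97ec-906cebf605ca": "28703595-4c96-4c0e-abd0-8216cd2aa528",
}

def remap_task_groups(definition: dict) -> dict:
    for step in definition.get("build", []):
        task = step.get("task", {})
        # A task group step references the group by its generated GUID
        new_id = task_group_map.get(task.get("id"))
        if new_id is not None:
            task["id"] = new_id
    return definition

definition = {"build": [{"task": {"id": "10434ce0-8d5e-4447-97ec-906cebf605ca"}}]}
remap_task_groups(definition)
```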

Build definition settings are stored under BuildManagement.

  • Overwrite. True to overwrite any existing definition or false to skip the definition.
  • TargetAgentQueue. The agent queue to have the definition use.
  • ExcludeDefinitions. An array of build definitions that should not be migrated.
  • TaskGroups. An array of task groups to migrate. Each entry has the following settings.
    • sourceGroupId. The GUID of the task group in TFS.
    • targetGroupId. The GUID of the task group in VSTS. The group must be migrated manually before the build definitions.
"BuildManagement": {
    "CopyTemplates": true,
    "Overwrite": true,
    "TargetAgentQueue": "Hosted VS2017",
    "ExcludeDefinitions": [],
    "TaskGroups": [
        {
            "sourceGroupId": "10434ce0-8d5e-4447-97ec-906cebf605ca",
            "targetGroupId": "28703595-4c96-4c0e-abd0-8216cd2aa528"
        }
    ]
}

NuGet Packages

The PackageManagement processor handles migrating packages hosted in TFS to VSTS. Only NuGet packages are supported.

The REST API starts to break down when it comes to packaging. The first issue is that high-level package management is one set of APIs while the individual packaging systems (for example NuGet) are separate. Second, the published TFS clients do not currently expose a client for these APIs. Therefore the migration tool uses a combination of a custom HTTP client and nuget.exe to download packages. Using NuGet greatly simplifies the code but requires more configuration (and external dependencies); nevertheless, it makes working with the packages much easier.
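
As a sketch of what that looks like, the functions below construct nuget.exe command lines like the ones the processor might issue (Python used for illustration; the exact flags and feed URL shape are assumptions, not the tool's actual code):

```python
# Illustrative only: build nuget.exe command lines for moving a package
# from the source feed to the target package source. The v3 feed URL
# format and the choice of flags are assumptions.
def download_command(nuget, package, version, source_url, out_dir):
    return [nuget, "install", package, "-Version", version,
            "-Source", source_url,
            "-OutputDirectory", out_dir, "-NoCache"]

def push_command(nuget, package_file, target_source):
    # target_source is the configured NuGet package source name
    return [nuget, "push", package_file, "-Source", target_source]

cmd = download_command("C:\\NuGet\\Nuget.exe", "My.Package", "1.2.3",
                       "https://mytfs:8080/DefaultCollection/_packaging/SourceProject/nuget/v3/index.json",
                       "C:\\Temp\\packages")
```

Each list could then be handed to a process launcher, which is essentially what using the NuGet command line buys you over hand-rolled HTTP calls.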

Package settings are stored under PackageManagement. Pay careful attention to the URLs. Packages are accessed through HTTP normally so they use a different URL than the standard TFS/VSTS URLs when accessing the feed.

  • NuGetCommandLine. The full path to nuget.exe.
  • SourceUrl. The URL to TFS. Same as the global setting.
  • SourceFeed. The name of the package feed in TFS.
  • TargetUrl. The URL to the target feed. Unlike the normal target URL, this URL has feeds in the name.
  • TargetFeed. The name of the package feed in VSTS.
  • TargetPackageSource. The name of the package source that the local NuGet command uses to access the target feed.
  • includeDelistedVersions. If set to true then all versions of a package are migrated. If false only versions that have not been delisted are migrated.
  • latestVersionOnly. If set to true then only the latest version of a package is migrated. If false then all versions of a package are migrated.
  • excludePackages. An array of packages to ignore. This is useful for skipping packages no longer needed.
"PackageManagement": {
    "NuGetCommandLine": "C:\\NuGet\\Nuget.exe",

    "SourceUrl": "https://mytfs:8080/DefaultCollection",
    "SourceFeed": "SourceProject",

    "TargetUrl": "https://account.feeds.visualstudio.com",
    "TargetFeed": "TargetFeed",
    "TargetPackageSource": "LocalNuGetSourceName",

    "includeDelistedVersions": false,
    "latestVersionOnly": false,

    "excludePackages": []
}

Work Item Queries

The QueryManagement processor manages how work item queries are migrated. Only queries under Shared Queries are supported; users will have to migrate their personal queries manually.

Queries are not validated when they are migrated. In some cases a bad query (such as one with a bad area path) will still migrate but will not run. In other cases (such as a query referencing a field that does not exist) the query will fail to migrate. Ensure all custom fields are migrated before migrating queries.

Query settings are stored under QueryManagement.

  • Overwrite. True to overwrite the query if it already exists.
  • ExcludeQueries. An array of queries to ignore. Shared queries are migrated unless they are listed here.
"QueryManagement": {
    "Overwrite": true,
    "ExcludeQueries": [
        "Old Query"
    ]
}

Source Code

The VersionControl processor is responsible for migrating code from TFVC to Git. Only TFVC projects are supported as the source and only Git is supported as the destination. This lines up with the common scenario of on-premise TFS servers using TFVC while Git is recommended going forward.

A project, for purposes of migration, is a folder structure in TFVC under which the code (normally the Visual Studio solution) and, optionally, branches reside. The general guideline for TFVC has been to use a branching structure: the baseline branch holds the master copy of the code, and active development occurs in a development branch. When a product is released, a copy of the baseline branch is made into some sort of release branch. In our case the release branches are versioned.

As an example a typical project will look like this.

  • MyProject
    • Dev (branched from Baseline)
    • Baseline (branch)
    • Releases
      • v1.0 (branched from Baseline)
      • v1.1 (branched from Baseline)

If your code uses a different branching scheme then you will need to modify the code accordingly. Not all projects necessarily use branching so this is configured in the settings file per project.

When a project is migrated a tip migration is done, as recommended by Microsoft. This means only the most recent version of the code is migrated; any code history remains in the original system. This reduces the amount of data being copied and also eliminates information that likely isn’t useful anymore. If you really need all the history then consider keeping a backup of the original TFS system. One challenge with this approach is that you likely want to keep both the last released version of your code and any changes that are currently being made. So the migration tool uses a modified version of the tip migration. For each project TfsMigrate does the following.

  • Create a Git repo in VSTS for the project
  • If the project has branches
    • Find the baseline branch – this will either be the latest release branch or the baseline branch if no releases exist yet
    • Download the baseline branch
    • Clean up the folder (see below)
    • Commit the repository as the baseline version
    • If there was a release branch
      • Create a release branch from the baseline in the repo using the version of the release
      • Switch back to master
      • Check out the master branch
  • Download the latest version of the code – the development branch if the project supports branches
  • Clean up the folder
  • Copy the template files
  • Set up the metadata file (if any)
  • Commit the changes as the latest version
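
The branch handling above can be sketched as a Git command sequence. The function below (Python for illustration; the real processor shells out to git.exe in C#, and the exact arguments are assumptions) builds the commands for a project with branches:

```python
# Illustrative sketch of the git calls for one project. Comments mark
# where the processor downloads and cleans code between commands.
def migration_commands(repo_url, release_version=None):
    cmds = [["git", "init"],
            ["git", "remote", "add", "origin", repo_url]]
    # ...download the baseline branch and clean the folder here...
    cmds += [["git", "add", "-A"],
             ["git", "commit", "-m", "Baseline version"]]
    if release_version:
        # Record the baseline as a release branch, then return to master
        cmds += [["git", "branch", f"release/{release_version}"],
                 ["git", "checkout", "master"]]
    # ...download the latest code, clean, copy templates, update metadata...
    cmds += [["git", "add", "-A"],
             ["git", "commit", "-m", "Latest version"],
             ["git", "push", "-u", "origin", "--all"]]
    return cmds

cmds = migration_commands("https://account.visualstudio.com/_git/my-project", "1.1")
```

Each list could be run with a process launcher; the point is that the release branch is created from the baseline commit before the latest code is committed on master.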

For a project with branches there will be a baseline commit and a latest commit. For a project with no branches (or a project with branches that is new) there will only be the latest commit. The cleanup process removes unneeded files and folders from the structure. This is configurable but includes the following by default.

  • .tfignore
  • Packages\*
  • .gitignore
  • .gitattributes

Notice that some Git files are included. The tool removes those because of the template folder. Most teams will have a standard set of Git files. The template folder is where any files that should be associated with each repo should be stored. For our migration we reset all the Git files to a standard set of files. All the files in the folder are copied when the latest commit occurs. Additionally a metadata file (default is readme.md) is included in the template folder. The metadata file is updated to include some migration information. Refer to the sample file for an example. It supports basic text substitution and is designed to be a breadcrumb back to the original version.
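
The metadata-file breadcrumb can be sketched as simple text substitution (the token names below are assumptions; see the sample file in the code archive for the real ones):

```python
# Illustrative sketch of updating the metadata file (default readme.md)
# with migration breadcrumb information via token substitution.
def render_metadata(template: str, values: dict) -> str:
    text = template
    for token, value in values.items():
        text = text.replace("{" + token + "}", value)
    return text

readme = render_metadata(
    "Migrated from {SourcePath} on {MigrationDate}.",
    {"SourcePath": "$/MyTfs/MyProjectFolder", "MigrationDate": "2018-01-15"})
```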

Similar to packages, the REST API is a little lacking when it comes to Git support. Everything can be done using the client API except the most important thing: change detection. Git works based upon file changes. When committing the latest version of each project it would be nice if only the files changed from the baseline version were committed. This is not directly supported in the API. You could certainly write your own logic to detect added, deleted and modified files, but the Git command line already does that. Therefore the processor uses the Git command line to set up the repositories and commit changes. This adds some complexity to the system but ensures that the repository history mimics (as closely as possible) the actual differences.

Note: When setting up a new team project with Git, VSTS requires that you have at least one repository. This dummy repository can be anything you want and should be removed after the migration. During the migration existing repositories are deleted as needed; the deletion will fail if it is the last repository in the project.

Source code settings are stored under VersionControl.

  • GitMasterBranch. The name of the “master” branch in Git to use.
  • GitReleaseBranch. The name pattern to use for the “release” branch in Git.
  • GitCommandLine. The path and name of the Git command line executable.
  • BaselineBranch. For projects that support branches, the name of the baseline branch.
  • DevelopmentBranch. For projects that support branches, the name of the development branch.
  • ReleaseBranch. For projects that support branches, the name of the subfolder containing the release branches.
  • TemplatePath. The path and name where the template files are stored.
  • MetadataFile. The name of the metadata file to update in the repository with the breadcrumb data.
  • CleanFolders. An array containing the folders to remove before committing changes.
  • CleanFiles. An array containing the files to remove before committing changes. Supports wildcards.
  • CleanAfterCommit. If true then the local repository that was created is cleaned up so it does not use up space on the drive.
  • Projects. An array of projects to migrate. Each entry has the following settings.
    • sourcePath. The full TFS path to the source folder (for example, $/MyTfs/MyProjectFolder).
    • destinationPath. The name of the Git repository to create.
    • destinationProject. The name of the project in VSTS in which to create the repository. This is one of the few places where multiple projects are supported.
    • hasBranches. If set to true then the project is assumed to use branches. Set to false if it does not.
"VersionControl": {
    "GitMasterBranch": "master",

    "GitReleaseBranch": "release/{major}.{minor}",
    "GitCommandLine": "C:\\Program Files\\Git\\bin\\git.exe",

    "BaselineBranch": "Trunk",
    "DevelopmentBranch": "Dev",
    "ReleaseBranch": "Releases",

    "TemplatePath": "template",
    "MetadataFile": "readme.md",

    "CleanFolders": [],

    "CleanFiles": [],

    "CleanAfterCommit": true,

    "Projects": [
        {
            "sourcePath": "$/MyTfs/MyProjectFolder",
            "hasBranches": true,

            "destinationProject": "TargetProject",
            "destinationPath": "my-project"
        }
    ]
}

Work Items

The WorkItemTracking processor handles migration of work items, areas and iterations. It is surprisingly complex because of what it needs to handle. The processor starts by migrating areas and iterations defined in the settings file. Then it migrates the work items.

Work items are really nothing more than key-value pairs known as fields. To properly migrate work items both the areas and iterations need to be set correctly. Additionally any custom fields added to work items need to be added to the process template. VSTS will allow the migration of invalid work items in some cases but it does not provide any easy way to detect invalid items making it harder to fix bad data.

Migrating a work item involves looking at each field in the item and determining whether to migrate the value or not. Some fields (for example Title and Assigned To) need to be migrated. Other fields (for example AreaId and Revision) do not. Extensions may add their own fields as can custom process templates. To simplify this process the processor can be configured to either include all fields (the default) or no fields. In either case fields need to be adjusted, excluded or included. This is where field handlers come in. Each field can be assigned zero or more field handlers. Field handlers allow the field value to be modified before it is saved to the new work item. The following field handlers are supported.

  • AreaFieldHandler – A custom handler for migrating areas (see below).
  • IgnoreFieldHandler – The field is ignored.
  • IterationFieldHandler – A custom handler for migrating iterations (see below).
  • RenameFieldHandler – A handler for renaming a field.
  • UserFieldHandler – A custom handler for migrating user fields (see below).
  • ValueFieldHandler – A handler that allows a dynamic LINQ expression to convert the value.
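
To make the handler mechanism concrete, here is a small Python sketch of the pipeline (the real handlers are C# classes configured from the settings file; the interfaces here are assumptions):

```python
# Illustrative sketch: each field runs through zero or more handlers that
# can drop, rename, or transform its value before it is saved.
def ignore_handler(name, value):
    return None  # drop the field entirely

def rename_handler(new_name):
    def handler(name, value):
        return (new_name, value)
    return handler

def value_handler(fn):
    def handler(name, value):
        return (name, fn(value))
    return handler

def apply_handlers(fields, handlers):
    result = {}
    for name, value in fields.items():
        entry = (name, value)
        for handler in handlers.get(name, []):
            entry = handler(*entry)
            if entry is None:
                break  # an ignore handler stops the chain
        if entry is not None:
            result[entry[0]] = entry[1]
    return result

migrated = apply_handlers(
    {"System.AreaId": 5, "System.Id": 1234, "Custom.Flag": True},
    {"System.AreaId": [ignore_handler],
     "System.Id": [rename_handler("AgileProcessName.MyCompany_LegacyId")],
     "Custom.Flag": [value_handler(lambda v: not v)]})
```

The Custom.Flag entry mirrors the boolean inversion described for the ValueFieldHandler below, and the System.Id entry mirrors the legacy ID rename used in the sample settings.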

For areas and iterations the path must be valid, otherwise the migration will fail. In the REST API an area and an iteration are known as classification nodes. Before migrating work items the processor migrates the classification nodes defined in the settings file. This should migrate the core nodes that work items need. As part of the migration a node may be renamed. The processor tracks which nodes were migrated and their new names. The AreaFieldHandler and IterationFieldHandler use these lists to map the nodes for fields marked with the appropriate handler. If a node has not been migrated yet, either explicitly or because of another work item, then it is migrated on demand so the work item will not fail.

The IgnoreFieldHandler is used to ignore fields that should not be migrated. This generally includes extension fields and those fields that are maintained by VSTS. If this handler is applied then none of the other handlers will run for that field.

The RenameFieldHandler is used to rename a field. This is really for custom fields defined in a process. VSTS has strict rules on how fields are named that are more stringent than TFS. So renaming a field is sometimes necessary.

The UserFieldHandler is used for fields that store identity information. VSTS will allow a work item to be migrated with an invalid identity, but when the item is displayed the identity appears as an error. Migrating from TFS to VSTS therefore requires identity mapping. The processor is responsible for building the identity map, although ideally this would have been handled by a separate processor had it been known earlier. The identity mapping is stored in the settings file because trying to automate the mapping was more trouble than it was worth. The handler tries to figure out the identity in the field and map it to the identity specified in the settings file. If successful the new identity is used; otherwise the old identity is left in place. It is recommended that all active users be added to the identity mapping in the settings file.
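
A minimal sketch of that lookup, using the users mapping format from the settings file (the parsing heuristics here are assumptions; TFS identity fields vary in format):

```python
# Illustrative sketch of UserFieldHandler's identity mapping. Keys use
# the settings-file form domain\user; the map values use Name <email>.
identity_map = {"myad\\myuser": "User Name <user@example.com>"}

def map_identity(field_value: str) -> str:
    # Try the raw value, and also any bracketed account name such as
    # "Display Name <DOMAIN\\account>" (a common TFS identity format).
    candidates = [field_value]
    if "<" in field_value and field_value.endswith(">"):
        candidates.append(field_value[field_value.index("<") + 1:-1])
    for candidate in candidates:
        mapped = identity_map.get(candidate.lower())
        if mapped:
            return mapped
    return field_value  # unmapped: leave the old identity in place
```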

The ValueFieldHandler is another handler designed for custom fields. In our case we needed to invert a boolean value. This handler allows you to specify an expression against the field’s value. The handler will then execute the expression to get the final value.

One of the trickier aspects of work items is the history. Whenever a work item is changed the History field is updated. Provided this field is not ignored, the history will be migrated. The problem, however, is that only the latest value of each field is migrated. For active work items the history of why things changed is important, so the processor attempts to rebuild the entire history of the work item by looking at the revisions. Each time a work item changes a new revision is created, and using the API you can query for the revisions, which include what fields changed. To migrate a work item the processor starts at revision 1 and rebuilds the history in VSTS revision by revision. Each time, the modified fields are sent through the field handlers so the correct values are stored. When the migration is complete the work item history is (mostly) restored. As part of the process the dates of the changes are also restored so the historical aspect is retained as well.
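
The revision replay can be sketched like this (Python for illustration; the real processor reads revisions from the work item REST API, so the dict shapes here are assumptions):

```python
# Illustrative sketch: rebuild a work item's history by replaying its
# revisions in order, running changed fields through the handlers and
# preserving the original change dates.
def replay_revisions(revisions, apply_handlers):
    state, history = {}, []
    for rev in sorted(revisions, key=lambda r: r["rev"]):
        changed = apply_handlers(rev["fields"])
        state.update(changed)
        history.append({"rev": rev["rev"],
                        "date": rev["date"],        # keep original timeline
                        "fields": dict(state)})     # snapshot after this rev
    return history

revisions = [
    {"rev": 2, "date": "2017-03-01", "fields": {"System.State": "Active"}},
    {"rev": 1, "date": "2017-01-15", "fields": {"System.Title": "Bug", "System.State": "New"}},
]
history = replay_revisions(revisions, lambda fields: fields)
```

In the real tool each snapshot would become an update against the new VSTS work item rather than a dict.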

Note: The processor will use the identity of the user who made the original change. However when looking at the history a note is made indicating that the change was made by the user running TfsMigrate on behalf of the actual user. It is recommended that you run the migration using a migration account so it is clear the changes were made by the migration tool.

The last step in migrating a work item involves adding a comment and tag to indicate that the work item has been migrated. This can be useful later to get back to the original work item if needed. As part of the sample migration it is assumed that the VSTS process has a custom field to store the original work item ID. Work item IDs are not changeable and so this information is lost. Having a custom field to store the legacy ID is useful for getting back to the original item.

The migration is not without limitations to be aware of. First, attachments and embedded images are not supported. In the case of attachments we didn’t feel they needed to be migrated. For embedded images the problem is harder. In order to create an embedded image you have to jump through hoops as discussed here: you have to attach the image, add the link and then remove the attachment. Since the processor rebuilds the history as is, the process would fail if the attachment or embedded image was added as part of the work item creation. This currently isn’t supported in the REST API.

The other limitation involves the work item ID. Since the work item ID will not be the same, any links that mention it will be broken. There is no easy solution to this problem short of parsing the text fields of the work item. But it is important to keep links in place, so the processor attempts to restore links to work items. For purposes of migration the processor recreates parent, child and related work item links. In each case the processor adds a link to the migrated item using the new ID (which it tracks). Depending upon the settings file, none or only some of the links will be restored based upon the state of the work item being migrated. For example it makes sense to migrate the children of an active story, but probably not the children of a closed epic.

Issues arise when a linked item hasn’t been migrated yet. But before getting there, let’s talk about how the processor determines which work items to migrate. Since different migrations might want different sets of items, the processor runs the query (or queries) defined in the settings file. Each query is run and any returned items are added to the list of work items to migrate. The processor then migrates them one by one, tracking both the old and new IDs of the items that have been migrated. If it runs across an item that has already been migrated then it skips it.

VSTS uses bidirectional links so every child link has a corresponding parent link on the other side. Related links are on both sides. While migrating a work item the processor, using the settings configuration, will either link the work item to the already migrated parent/child/related item or add the linked item to the list of items to migrate. Later, when the linked item is migrated, the link will be restored because of the bidirectional linking. So the processor can, rather easily, restore linked work items as well.
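
Put together, the query seeding, ID tracking and link-driven queueing can be sketched as a small worklist algorithm (data shapes are assumptions; the real processor works against the REST API):

```python
# Illustrative sketch of the migration loop: queries seed the queue,
# linked items are queued as they are discovered, and an old->new ID map
# prevents migrating the same item twice.
def migrate_work_items(seed_ids, get_links, create_in_vsts):
    id_map, queue = {}, list(seed_ids)
    while queue:
        old_id = queue.pop(0)
        if old_id in id_map:
            continue  # already migrated; skip
        id_map[old_id] = create_in_vsts(old_id)
        for linked_id in get_links(old_id):
            if linked_id not in id_map:
                queue.append(linked_id)
    return id_map

links = {1: [2], 2: [1, 3], 3: []}
new_ids = iter(range(100, 200))
id_map = migrate_work_items([1], lambda i: links[i], lambda i: next(new_ids))
```

Because VSTS links are bidirectional, linking each newly migrated item back to its already-migrated neighbors is enough to restore the whole graph.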

Work item settings are stored under WorkItemTracking.

  • Areas. An array of areas to migrate. Each entry has the following settings.
    • SourcePath. The area path in TFS.
    • DestinationPath. The optional area path in VSTS if different from SourcePath.
  • Iterations. An array of iterations to migrate. Each entry has the following settings.
    • SourcePath. The iteration path in TFS.
    • DestinationPath. The optional iteration path in VSTS if different from SourcePath.
  • users. An array of user identity mappings. Each entry has the following settings.
    • source. The identity in TFS in the form domain\user.
    • target. The identity in VSTS in the form User Name <email>.
  • includeChildLinksOnClosed. If set to true then child links on closed work items are migrated.
  • includeParentLinksOnClosed. If set to true then parent links on closed work items are migrated.
  • includeRelatedLinksOnClosed. If set to true then related links on closed work items are migrated.
  • includeAllFields. True to include all the fields or false to only include the fields defined later.
  • migrationTag. If set to a non-empty value then the tag is added to any migrated work item.
  • fields. An array of fields to migrate or customize. Each entry has the following settings.
    • name. The formal reference name of the field in TFS (for example System.Title).
    • targetName. If specified, the formal reference name of the field in VSTS. Note that in VSTS field names include the name of the process template.
    • ignore. If specified and set to true then the field is ignored.
    • isUser. If specified and true then the field goes through identity mapping.
    • expression. If specified then the expression that is run to calculate the field value. Use the identifier value to indicate the field value.
    • handler. If specified then the full type name of the custom handler to use.
  • queries. An array of queries to run to get the initial list of work items to migrate. Each entry has the following settings.
    • name. The name of the shared query to run.
"WorkItemTracking": {
    "Areas": [
        {
            "SourcePath": "MyTfs/MyProject"
            //"DestinationPath": "MyTfs/NewProject"
        }
    ],

    "Iterations": [
        { "SourcePath": "MyTfs/Backlog" }
    ],

    "users": [
        {
            "source": "myad\\myuser",
            "target": "User Name <email>"
        }
    ],

    "includeAllFields": true,

    "fields": [
        {
            "name": "System.AreaPath",
            "handler": "TfsMigrate.Processors.WorkItemTracking.FieldHandlers.AreaFieldHandler,TfsMigrate.Processors.WorkItemTracking"
        },
        {
            "name": "System.AreaId",
            "ignore": true
        },
        {
            "name": "System.ResolvedBy",
            "isUser": true
        },
        // Map the original ID to the legacy ID field
        {
            "name": "System.Id",
            "targetName": "AgileProcessName.MyCompany_LegacyId"
        }
    ],

    "includeRelatedLinksOnClosed": false,
    "includeChildLinksOnClosed": false,
    "includeParentLinksOnClosed": false,

    "migrationTag": "tfs-migration",

    "queries": [
        { "name": "Shared Queries/_Migration/To Be Migrated" }
    ]
}


This completes a description of the TfsMigrate tool and how the migration works. The full code is available on GitHub. The next article will discuss how the backend code works in case anyone needs to update it. Again, feel free to download and modify the code to help with your TFS migration. It is freely available for anyone to use.

Download the code on GitHub.


    1. Yes, we did consider the official migration service. There were several problems with it that caused us to remove it from consideration.

      1) It was still in beta at the time we started the migration.
      2) Release and package management wasn’t fully supported.
      3) The import process will continue to use the hosted XML process instead of the inheritance process. While Microsoft has said they will eventually support somehow switching from hosted to inherited there is no ETA. Using hosted XML, while supported, is not going to take advantage of the new process improvement features without manually updating the template which is one of the core reasons we wanted to switch.
      4) We were merging multiple team projects into one. This isn’t supported by the service. Instead it creates separate processes and projects for each one.
      5) When we ran the import tool against our process it reported a large number of “warnings” that were related to system fields that had changed case between versions of TFS. This has historically been one of the issues we’ve had upgrading TFS versions and unfortunately isn’t fixable because they are system fields.
      6) While not an issue for us, the import tool doesn’t support over 300 projects.
      7) The import tool requires additional Azure resources during the migration such as a Storage container. This would involve our infrastructure team which introduces complications.