Marc Lognoul's IT Infrastructure Blog

Cloudy with a Chance of On-Prem

SharePoint: Content Migration Tools Comparison from an Architecture Perspective



The SharePoint MVP Benoit Jester recently posted an excellent French-language presentation comparing SharePoint content migration tools.

This gave me an opportunity to discuss them from an architecture perspective. In large or complex infrastructures and organizations, the architecture of the migration solution can be nearly as important as the tools’ functionality or results. As you may have guessed, this post will therefore not cover those last two criteria.

In a nutshell, two architectures compete:

  • Fat Client Applications (Metalogix, ShareGate…)
  • Server Infrastructures (mostly DocAve)

Fat Client Applications (Metalogix, ShareGate…)


As depicted in the schema below, the fat-client architecture places the migration logic entirely in the client application, which interacts with SharePoint through the standard remote APIs, such as web services, much like Office applications, SharePoint Designer or the OneDrive for Business client do. Optionally, a server-based extension, also in the shape of a web service, can be installed on the SharePoint servers, increasing functionality and, ultimately, migration fidelity.
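To make this concrete, here is a minimal, purely illustrative Python sketch of how such a tool addresses SharePoint through the same remote endpoints a browser would hit. The site URL and list title are made-up placeholders, and `/_api` is the REST interface of more recent SharePoint versions (older farms expose SOAP web services under `/_vti_bin` instead):

```python
# Illustrative only: a fat-client tool reads source content through the same
# remote endpoints a browser would use. The site URL and list title below are
# invented placeholders; "/_api" is the REST interface of recent SharePoint
# versions (older farms expose SOAP web services under /_vti_bin instead).
from urllib.parse import quote


def list_items_endpoint(site_url: str, list_title: str) -> str:
    """Build the REST URL a client-side tool would GET to enumerate list items."""
    return (f"{site_url.rstrip('/')}"
            f"/_api/web/lists/getbytitle('{quote(list_title)}')/items")


url = list_items_endpoint("https://intranet.example.com/sites/hr", "Shared Documents")
print(url)
# → https://intranet.example.com/sites/hr/_api/web/lists/getbytitle('Shared%20Documents')/items
```

Whether such a call succeeds depends entirely on what the web application allows, which is exactly why the authentication and remote-API caveats listed under the cons below matter so much for this architecture.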


Two possible evolutions:

  • Hosting the application closer to the data center by deploying it on a “migration server”
  • Accessing the source SharePoint databases directly to increase speed and, to some extent, control (Metalogix)

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The client computer the application is installed on
  • The network connection between the client and the source and destination SharePoint infrastructures

The Pros

  • Simplicity and ease of deployment: it takes just a few seconds to install, and migration activity can start immediately
  • Insensitive to server infrastructure topology changes: as long as your browser can talk to the SharePoint server(s), the application can as well
  • Entirely honors the SharePoint security model: the authorization level of the migrating user maps exactly to the authorizations defined in SharePoint. Consequently, it leverages the auditing and tracing capabilities as well
  • Ease of delegation: following from the previous point, delegating migration is as easy as granting permissions in SharePoint itself, which prevents large-scale accidents due to farm-account-level permissions
  • No deep infrastructure change required: with the exception of the optional server-side extension (actually quite light), there is no need to deploy heavy server-side components or configuration. This allows “tactical” migrations to take place very easily and over a very short time frame.
  • Obvious network flows: in case a perimeter firewall must be configured, the flows are extremely simple and limited. They are actually identical to those used between a browser and the SharePoint infrastructure and are therefore most likely already in place.
  • Capability to limit the impact of the migration on SharePoint infrastructures, using, for example, native SharePoint functionalities such as Resource Management
  • The technical follow-up of the migration is fairly easy because most (if not all) technical migration steps are logged on the client side, directly accessible, with a satisfying level of detail
  • Finally, this solution, even combined with other tools, remains standalone and flexible

The Cons

  • Permits rogue deployments of migration applications: there is no way to centrally control the installation and use of fat-client solutions since, to SharePoint, they look like any other client application. Obviously, you still require administrator privileges on the client to install them
  • Sensitive to the authentication configured on the SharePoint web application: because the tool consumes SharePoint identically to browsers, it must also comply with the authentication settings applied at the web application level. If you stay within well-known configurations, there won’t be any problem; if you run an advanced configuration, be prepared for surprises, even showstoppers
  • Heavily depends on the SharePoint remote APIs: if they are disabled for security or functional reasons, the tool will simply be unable to work (except, of course, for reading databases directly)
  • The tool’s configuration (and state information) resides on the client. Therefore, you may want to back it up in order to protect the logs, the configuration and other elements such as migration analytics, mapping files and so on
  • The impact on the client computer’s performance can be huge: depending on the amount of data, the number of items and the complexity of the migration, the client computer’s performance can be severely impacted. Moreover, the time necessary to complete a migration job is difficult to estimate, so the PC might be stuck migrating for hours on large jobs, making other applications unusable due to congestion. To work around this, you might want to dedicate clients or even servers to the task: keep in mind this increases hardware and license costs (RDP, Citrix or the like). Not to mention that the network connection (bandwidth, stability and low latency) between the client and the SharePoint servers is a crucial factor
  • Installing the server-side extension is usually a must in order to benefit from the full range of functionalities. This will allow near full-fidelity migrations
  • In the event of a long-running migration project, you may have to update the migration tool more than once. Therefore, having an up-to-date inventory of the installed base will be useful

Note: As stated earlier in this post, Metalogix comes with the ability to read directly from the SharePoint databases, bypassing the SharePoint API and authentication and, therefore, the SharePoint security model. Make sure you read the Microsoft KB article “Support for changes to the databases that are used by Office server products and by Windows SharePoint Services” before proceeding.

Server Infrastructures (mostly DocAve)


This architecture type relies on two key components:

  • The manager: the web-based administration and usage interface and the component responsible for coordinating actions with agents
  • The agents: installed on one or multiple SharePoint servers as well as on the server hosting the manager, their purpose is to execute the migration instructions received from the manager through the control flows. The source agent then communicates with the destination agents to migrate the content. The active element of the agent, implemented as a Windows service, requires high privileges at the Windows, SQL and SharePoint levels, close to those granted to SharePoint’s farm account.
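As a conceptual sketch (this is not DocAve code, just an illustration of the control-flow vs. data-flow split described above), the manager only coordinates jobs while the content itself travels directly between agents:

```python
# Conceptual sketch only: the Manager drives agents over the control flow,
# while content moves agent-to-agent over the data flow and never transits
# through the manager. Names and structures are invented for illustration;
# they do not reflect DocAve's actual implementation.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.store: dict[str, bytes] = {}  # item id -> content

    def send_content(self, destination: "Agent", item_id: str) -> None:
        # Data flow: direct agent-to-agent transfer
        destination.store[item_id] = self.store[item_id]


class Manager:
    def run_job(self, source: Agent, destination: Agent, item_ids: list[str]) -> None:
        # Control flow: the manager only instructs; it never holds the content
        for item_id in item_ids:
            source.send_content(destination, item_id)


source = Agent("source-farm")
destination = Agent("destination-farm")
source.store = {"doc1": b"contract.docx", "doc2": b"report.xlsx"}
Manager().run_job(source, destination, ["doc1", "doc2"])
print(sorted(destination.store))  # → ['doc1', 'doc2']
```

This split explains the O365 scenario later in this post: since content only ever flows between agents, any hop that lacks an agent has to be bridged by placing one somewhere it can run.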


Since the DocAve solution is entirely built on those components, for migration as well as for other purposes, activating each additional feature (or module) is just a matter of configuring the appropriate licenses using license files provided by the vendor.

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The network connection between the source and destination SharePoint infrastructures

The Pros

  • Very limited client prerequisites: once the server side is in place, access to the web interface just requires a recent browser
  • Entirely insensitive to the SharePoint web application authentication configuration and to remote API accessibility
  • No heavy resource usage on the client: migration tasks are jobs run by the agents located on the SharePoint servers, therefore leaving the client free of any migration-related load
  • Potentially the fastest migration solution: assuming the network connection between the source and the destination is appropriately sized and the source and destination SharePoint farms can cope with the extra load
  • Part of an end-to-end SharePoint management solution exposed through a streamlined, single-seat administration and usage interface

The Cons

  • Heavy server-side deployment: requires software installation, service accounts, security modifications and potentially additional hardware and firewall configuration. It might require a dedicated server for hosting the manager (see later in this post)
  • Completely bypasses the SharePoint security model: it makes use of a Windows service running under a highly privileged user
  • Higher risk of human mistakes with dramatic consequences: once a user is granted access to the migration module, he/she can perform any action on any scope regardless of the permissions he/she holds at the SharePoint level, since everything happens under the context of a technical account. It is not unusual to see incidents requiring massive content restoration after accidental use
  • Almost no traceability of user actions: once again, because the privileged account is doing the job, traceability stops at the job creation step. When performing multiple migration projects at the same time on the same DocAve platform, volume-based license usage can also be difficult, if not impossible, to measure
  • Partially black-box operations once a job is started: usually, you’ll have to wait for the job to complete to get the details of what worked and what did not, and therefore to measure the actual progress
  • No server protection: if the SharePoint servers are already facing a certain load due to normal usage conditions, running the migration agent might saturate them without any native option to prevent it. Therefore, it is always wise to run the agent on the least used server of the farm
  • Advanced troubleshooting can be tedious because logs must be collected from three different locations: the DocAve log from the migration job, the log from the agent and, finally, the SharePoint ULS logs. All three are usually necessary to get the complete picture of a migration batch’s behavior
  • Software updates on the migration solution: during the life cycle of the migration infrastructure, it is likely you will have to deploy numerous updates. Those are often specific to the customer and non-cumulative. It is therefore the customer’s responsibility to make sure the servers and agents are always aligned and up-to-date. Consequently, this also applies to servers added to the farm over time

Special Case: DocAve vs. O365 vs. Windows Server 2003

Let’s take the following scenario: a SharePoint 2007 infrastructure running on Windows Server 2003 to be migrated to Office 365.

Considering that data always travels from agent to agent, in this case there is no destination agent on the Internet. However, the migration solution must cope with O365 standards (such as authentication using WS-Federation…) and, unfortunately, the necessary software pieces to do so are not available on Windows Server 2003 (.NET Framework 4.5, WIF…).

The workaround consists in hosting the DocAve Manager and an agent on a dedicated server running at least Windows Server 2008, while another agent runs on the farm. DocAve will automagically establish the flows between agents and to the Internet as shown in the schema below.


Credits to AvePoint for the technical guidance and to @j3rrr3 for the set-up.

Useful Links

Author: Marc L

Relentless cloud professional. Restless rider. Happy husband. Proud father. Opinions are my own.
