Marc Lognoul's IT Infrastructure Blog

Cloudy with a Chance of On-Prem



SharePoint: Content Migration Tools Comparison from an Architecture Perspective

Introduction

The SharePoint MVP Benoit Jester recently posted an excellent French-language presentation comparing SharePoint content migration tools.


This gave me an opportunity to discuss them from an architecture perspective. In large or complex infrastructures/organizations, the architecture of the migration solution can be nearly as important as the tools’ functionality or results. You got it: this post will therefore not cover those last two criteria.

In a nutshell, two architectures compete:

  • Fat Client Applications (Metalogix, ShareGate…)
  • Server Infrastructures (mostly DocAve)

Fat Client Applications (Metalogix, ShareGate…)

Overview

As depicted in the diagram below, the fat client architecture implies that the migration logic resides entirely in the client application, which interacts with SharePoint through the standard remote APIs such as web services, just like Office applications, SharePoint Designer, or the OneDrive for Business client. Optionally, a server-based extension can be installed on the SharePoint servers, also in the form of a web service, increasing functionality and, ultimately, migration fidelity.

[Figure: fat client migration architecture]

Two possible variations:

  • Hosting the application closer to the data center by deploying it on a “migration server”
  • Accessing the source SharePoint databases directly to increase speed and, to a certain extent, control (Metalogix)

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The client computer the application is installed on
  • The network connection between the client and the source and destination SharePoint infrastructures

The Pros

  • Simplicity and ease of deployment: it takes just a few seconds to install, then migration can start immediately
  • Insensitive to server infrastructure topology changes: as long as your browser can talk to the SharePoint server(s), the application can as well
  • Entirely honors the SharePoint security model: the authorization level of the migration user maps exactly to the authorization defined in SharePoint. Consequently, it leverages the auditing and tracing capabilities as well
  • Ease of delegation: deriving from the previous point, delegating migration is as easy as granting permissions in SharePoint itself, preventing large-scale accidents due to farm-account-level permissions
  • No deep infrastructure change required: with the exception of the optional server-side extension (pretty lightweight actually), there is no need to deploy heavy server-side components or configuration. This allows “tactical” migrations to take place very easily and over a very short time frame
  • Obvious network flows: in case a perimeter firewall must be configured, the flows are extremely simple and reduced. They are actually identical to the ones used between a browser and the SharePoint infrastructure, and are therefore most likely already in place
  • Capability to limit the impact of the migration on the SharePoint infrastructure, using, for example, native SharePoint functionality such as Request Management
  • Technical follow-up of the migration is fairly easy because most (if not all) migration steps are logged on the client side, directly accessible, with a satisfying level of detail
  • Finally, this solution, even combined with other tools, remains standalone and flexible

The Cons

  • Permits rogue deployments of migration applications: there is no way to centrally control the installation and use of fat-client-based solutions since, to SharePoint, they look like any other client application. Administrator privileges on the client are still required to install them, though
  • Sensitive to the authentication configured on the SharePoint web application: because the tool consumes SharePoint exactly like a browser does, it must also comply with the authentication settings applied at the web application level. If you stay within well-known configurations, there won’t be any problem; if you run an advanced configuration (pre-authentication with RSA keys, client certificates, certain cookie types…), be prepared for surprises, even showstoppers
  • Heavily depends on the SharePoint remote APIs: if they are disabled for security or functional reasons, the tool is simply unable to work (with the exception of reading databases directly, of course)
  • The tool’s configuration (and state information) resides on the client. You may therefore want to back it up in order to protect the logs, the configuration, and other elements such as migration analytics, mapping files, and so on
  • The impact on the client computer’s performance can be huge: depending on the amount of data, the number of items, and the complexity of the migration, the client computer can be severely slowed down. Moreover, the time necessary to complete a migration job is difficult to estimate, so the PC might be stuck migrating for hours on large jobs, making other applications unusable due to congestion. To work around this, you might want to dedicate clients or even servers to the task; keep in mind this increases hardware and license costs (RDP, Citrix, or similar). Not to mention that the network connection (bandwidth, stability, and low latency) between the client and the SharePoint servers is a crucial factor
  • Installing the server-side extension is usually a must in order to benefit from the full range of functionality. This allows near full-fidelity migrations
  • In the event of a long-running migration project, you may have to update the migration tool more than once. Having an up-to-date inventory of the installed base will therefore be useful

Note: As stated earlier in this post, Metalogix comes with the ability to read directly from the SharePoint databases, bypassing the SharePoint API and authentication and, therefore, the SharePoint security model. Make sure you read the Microsoft KB article “Support for changes to the databases that are used by Office server products and by Windows SharePoint Services” before proceeding.

The Server Infrastructures (DocAve mainly)

Overview

This architecture type relies on 2 key components:

  • The manager: the web-based administration and usage interface, and the component responsible for coordinating actions with the agents
  • The agents: installed on one or multiple SharePoint servers as well as on the server hosting the manager, their purpose is to execute the migration instructions received from the manager through the control flows. The source agent then communicates with the destination agents to migrate the content. The active element of the agent, implemented in the form of a Windows service, requires high privileges at the Windows, SQL, and SharePoint levels, close to the ones granted to SharePoint’s farm account.

[Figure: server infrastructure migration architecture]

Since the whole DocAve solution is built on these components, for migration as well as for other purposes, activating each additional feature (or module) is just a matter of configuring the appropriate licenses using license files provided by the vendor.

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The network connection between the client and the source and destination SharePoint infrastructures

The Pros

  • Very limited client prerequisites: once the server side is in place, accessing the web interface just requires a recent browser
  • Entirely insensitive to the SharePoint web application authentication configuration and to remote API accessibility
  • No heavy resource usage on the client: migration tasks are jobs run by the agents located on the SharePoint servers, leaving the client free of any migration-related load
  • Potentially the fastest migration solution, assuming the network connection between the source and the destination is appropriately sized and the source and destination SharePoint farms can cope with the extra load
  • Part of an end-to-end SharePoint management solution exposed through a single, streamlined, single-seat administration and usage interface

The Cons

  • Heavy server-side deployment: requires software installation, a service account, security modifications, and potentially additional hardware and firewall configuration. It might also require a dedicated server to host the manager (see later in this post)
  • Completely bypasses the SharePoint security model by making use of a Windows service running under a highly privileged user. A migration-specific permission level can be configured, but it does not cover 100% of the cases
  • Higher risk of human mistakes with dramatic consequences: once a user is granted access to the migration module, he/she can perform any action on any scope, regardless of the permissions he/she holds at the SharePoint level, since everything happens under the context of a technical account. It is not unusual to see incidents requiring massive content restoration after accidental use
  • Almost no traceability of user actions: again, because the privileged account does the job, traceability stops at the job creation step. When running multiple migration projects at the same time on the same DocAve platform, volume-based license usage can also be difficult, if not impossible, to measure
  • Partially black-box operations once a job is started: you usually have to wait for the job to complete to get the details of what worked and what did not, and therefore to measure the actual progress
  • No server protection: if the SharePoint servers are already facing a certain load under normal usage conditions, running the migration agent might saturate them, with no native option to prevent it. It is therefore always wise to run the agent on the least used server of the farm
  • Advanced troubleshooting can be tedious because logs must be collected from three different locations: the DocAve log from the migration job, the log from the agent, and finally the SharePoint ULS logs. All three are usually necessary to get the complete picture of a migration batch’s behavior
  • Software updates to the migration solution: during the life cycle of the migration infrastructure, you will likely have to deploy numerous updates. These are often customer-specific and non-cumulative, so it is the customer’s responsibility to make sure the servers and agents are always aligned and up to date. This also applies to servers added to the farm over time

Special Case: DocAve vs. O365 vs. Windows Server 2003

Let’s take the following scenario: a SharePoint 2007 infrastructure running on Windows Server 2003 to be migrated to Office 365.

Considering that data always travels from agent to agent, in this case there is no destination agent on the Internet side. The migration solution must nevertheless cope with O365 standards (such as authentication using WS-Federation) and, unfortunately, the software pieces necessary to do so are not available for Windows Server 2003 (.NET Framework 4.5, WIF…).

The workaround consists of hosting the DocAve Manager and an agent on a dedicated server running at least Windows Server 2008, while another agent runs on the farm. DocAve will automagically establish the flows between the agents and to the Internet, as shown in the diagram below.

[Figure: server infrastructure migration architecture with O365 relay server]

Credits to AvePoint for the technical guidance and to @j3rrr3 for the set-up.

Useful Links





SharePoint 2010: Impressive 3-Part Post over Migrating from SPS2003 to SP2010 by MCS MEA

SharePoint 2010

The Microsoft MEA HQ near-shoring team has recently published an impressive 3-part post on migrating from SharePoint Portal Server 2003 to SharePoint Server 2010, enjoy:

Happy migration!

Marc



SharePoint: STSADM –O migrateuser to PowerShell

In the early days of SharePoint, changing user accounts to reflect user changes in Active Directory after a domain migration, merge, or split was a nightmare.
With WSS2/SPS2003, and assuming you had compiled the excellent SPSUTIL tool, the situation was much better though not perfect (Anthony, the other Windows Director, suffered a lot from this in a previous job; who did not?).

In the WSS3/MOSS era, STSADM comes with the built-in operation “-o migrateuser” to do the job. So why bother turning this command into PowerShell? Simply because it greatly eases automation: you can write custom scripts that scan AD, parse XML or CSV, and finally update your SharePoint content databases accordingly. The code is incredibly simple, just like the command:

First, load the assemblies as usual:
[Void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[Void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")

Then get a farm object:
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local

And finally call the MigrateUserAccount method:
$spFarm.MigrateUserAccount("OLDDOMAIN\OLDUSERNAME", "NEWDOMAIN\NEWUSERNAME", $False)

The first parameter is the original domain and user name,
the second parameter is the new domain and user name,
and the third is a Boolean indicating whether the SID history of the new user should be used.
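Putting it together, here is a minimal sketch of the CSV-driven automation suggested above — the CSV path and column layout are assumptions for illustration, not part of the original post:

```powershell
# Load the SharePoint assembly and get the local farm, as above
[Void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$spFarm = [Microsoft.SharePoint.Administration.SPFarm]::Local

# Assumed CSV layout (no header row): OLDDOMAIN\olduser,NEWDOMAIN\newuser
Import-Csv -Path "C:\Temp\usermap.csv" -Header "OldLogin","NewLogin" | ForEach-Object {
    Write-Host "Migrating $($_.OldLogin) -> $($_.NewLogin)"
    # Third parameter: whether the SID history of the new user should be used
    $spFarm.MigrateUserAccount($_.OldLogin, $_.NewLogin, $False)
}
```

This must run on a farm server under an account with the appropriate farm-level permissions; it cannot be tested outside a SharePoint environment.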

Keep in mind that while this code will modify the “Account” attribute of a user throughout the farm, it will not change the “Name” attribute. If you wish to change the name and you’re lucky enough to operate MOSS, you can rely on the profile import process (see my previous post for some PowerShell automation). If, on the other hand, you run WSS3, you’ll have to write extra code that iterates over all your sites and modifies the contents of the list named “User Information List” in each of them. The PowerShell to update the list would look like:
$spUser = $spWeb.SiteUsers["NEWDOMAIN\NEWUSERNAME"]
$spUser.Name = "NEW USER DISPLAY NAME"
$spUser.Update()
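For completeness, a sketch of the WSS3 iteration described above — the web application URL, account, and display name are placeholders, an illustration of the approach rather than the exact code:

```powershell
[Void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

# Placeholder web application URL
$webAppUrl = New-Object System.Uri("http://intranet")
$spWebApp = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup($webAppUrl)

foreach ($spSite in $spWebApp.Sites) {
    # The "User Information List" is scoped per site collection,
    # so updating the user on the root web is enough
    $spWeb = $spSite.RootWeb
    $spUser = $spWeb.SiteUsers | Where-Object { $_.LoginName -eq "NEWDOMAIN\NEWUSERNAME" }
    if ($spUser) {
        $spUser.Name = "NEW USER DISPLAY NAME"
        $spUser.Update()
    }
    $spSite.Dispose()  # always dispose SPSite objects to release memory
}
```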

A good movie match for this post could be Face/Off directed by John Woo, the "unofficial inventor" of slow motion.

[Image: Face/Off poster]

And cut!


Active Directory: ADMT Episode 2 – The Attack of the Clone Principals

The galactic story is going backwards, one prequel at a time…
There are some AD migration scenarios (I’d rather call them “consolidations”) where, for various reasons, the user accounts already exist in the destination domain. This is typically the case when an organization operates a “mail forest” dedicated to Exchange and another forest users log on to, and then decides to merge them, preferably keeping the one used for mail because it contains rich address book information, but also because Exchange is difficult to move from one domain to another. The “mail forest” only contains disabled user accounts linked to the user forest.

ADMT does not provide a way to “reconcile” accounts and populate the SID history of the existing users in the destination domain with the original SIDs from the source domain. Fortunately, the force is with you if you use a scriptable COM object from the Windows 2000 Resource Kit: DSUtils.ClonePrincipal. This component allows you to script, using WSH or PowerShell, the reconciliation of both users in the destination domain, consequently preserving permissions set with the SIDs of the source domain.

In order to use DSUtils.ClonePrincipal, the same requirements apply as for ADMT with SID history: trusts, forest functional level, registry modification (NT4)… Here is a short example of how to use it, where “anakin” is the name of the user in the source domain and “darthvador” is the matching user in the destination domain. Note that the actual domain names are specified on the line starting with “objClonePr.Connect” and must not be supplied on the line starting with “objClonePr.AddSidHistory”, which contains only the SAM account names of the users.

Set objClonePr = CreateObject("DSUtils.ClonePrincipal")

objClonePr.Connect "SourceDomainPDCDNSName", "SourceDomainDNSName", "DestinationDomainPDCDNSName", "DestinationDomainDNSName"

objClonePr.AddSidHistory "anakin", "darthvador"

Note: This also works for groups and computers (do not forget the $ at the end of the computer name in this case). Of course, it will not migrate the group membership, only the SID.

Now, practically, what would a typical end-to-end domain migration/consolidation look like with the following constraints:

– Users exist in source and destination domain and must be reconciled

– Computers exist in source domain only and must be migrated

– Groups exist in source domain only and must be migrated

– Profiles and permissions must be translated on all systems

You would have to plan the following steps:

1. Prepare a CSV text file containing the user-mapping between the source and the destination domain. It would look like:

lightside\anakin,darkside\darthvador

lightside\palpatine,darkside\darthsidious

2. Using ADMT, migrate the groups preserving SID History

3. Migrate/reconcile users using the script snippet above and the file made at step 1

4. Manually or using a script (I’ll provide an example in a later post), reconcile the group membership

5. Using ADMT, migrate the computers and perform security and profile translation. ATTENTION: you will need to run ADMT multiple times in order to get the permissions and profiles correctly translated:

Run #1: just to change the domain affiliation of the computer. The migrated computer will reboot

Run #2: choose Security Translation and leave all options as default. The migrated computer will not reboot

Run #3: choose Security Translation BUT specify a SID-mapping file, which is actually the file made at step 1

6. Optionally migrate logon scripts, GPO’s… as necessary
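Steps 1 and 3 can be combined into a small PowerShell sketch that drives DSUtils.ClonePrincipal from the mapping file — server names, domain names, and the file path are placeholders, and the two-argument AddSidHistory call simply mirrors the VBScript example above:

```powershell
# ClonePrincipal COM object from the Windows 2000 Resource Kit
$clonePr = New-Object -ComObject "DSUtils.ClonePrincipal"

# Connect once: source PDC, source domain, destination PDC, destination domain
$clonePr.Connect("pdc.lightside.local", "lightside.local",
                 "pdc.darkside.local", "darkside.local")

# Mapping file from step 1, one "source,destination" pair per line
Get-Content "C:\Temp\usermap.csv" | ForEach-Object {
    $src, $dst = $_.Split(",")
    # AddSidHistory expects SAM account names only, without the domain part
    $clonePr.AddSidHistory($src.Split("\")[1], $dst.Split("\")[1])
}
```

Like the VBScript version, this only works on a machine where the Reskit COM object is registered and the SID-history prerequisites are met.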

In a later post, I will publish and comment scripts to help getting the whole job done.

More info: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/deploy/dgbf_upg_ojiy.mspx?mfr=true

And Cut!


Leave a comment

Active Directory: ADMT Episode 3.1 – The Revenge of the SID’s

[Image: Star Wars Episode III poster]

Finally the new version of ADMT is out, together with the following tools and document, and downloadable from MS.com:

Sadly, it is not supported to deploy this version of ADMT on read-only DCs, on Core installations of Windows Server 2008, or on any version prior to Windows Server 2008. Apparently NT4 is no longer supported as a source domain, and the migration agent is no longer supported on NT4 computers…

Fortunately, it now uses SQL 2005 as backend (remote or local) and comes with extended command-line capabilities for setup and post-setup configuration.

It’s therefore time to revise my backup/restore process a bit compared to my original post from April 2008 (http://www.marc-antho-etc.net/blog/post/2008/04/BackupRestore-of-the-ADMT-Database.aspx).

Thanks to SQL2005 you can now backup to a remote folder, example:

"%PROGRAMFILES%\Microsoft SQL Server\90\Tools\Binn\osql.exe" -E -S %computername%\MS_ADMT -Q "BACKUP DATABASE ADMT TO DISK = '\\SERVER\SHARE\admt.bak'"

Note that the path contains 90 instead of 80 since the SQL version is incremented.

Now for something else, just a little bit tricky. It is possible to use ADMT against a remote SQL server. But the problem is: you have to create the database first and then install and configure ADMT, while, to get your hands on the tool that creates the database, you have to install ADMT first… Catch my drift?

Here is how to do it seamlessly:

  1. Log on to your DC, preferably with a domain admin or a user who is an administrator on both the DC and the SQL Server
  2. Start the ADMT setup
  3. On the Welcome screen, click Next
  4. On the EULA, select “I agree” then click Next
  5. On the Customer Experience Improvement Program screen, choose whatever you want then click Next
  6. On the Database Selection screen, select “Use an existing SQL Server” and enter the name of the remote SQL Server, then click Next
  7. The next screen will show an error message complaining that the setup could not connect to the remote SQL Server or find the ADMT database. Keep the wizard open and do not click Finish
  8. Open a command line and navigate to %WINDIR%\ADMT\AdmtDb
  9. Execute the command “admtdb create /s:MYSQLSERVER\MYINSTANCE” (the instance is optional and depends on your SQL configuration). The command should return “The ADMT database was created successfully”. If not, check permissions, connectivity, name resolution, etc.
  10. Return to the wizard and click Back
  11. On the Database Selection screen, select “Use an existing SQL Server” and enter the name of the remote SQL Server as you previously stated on the command line, then click Next
  12. The next screen should now show a success message; click Finish

Now let’s say you changed your mind and wish to use the local SQL Express instance that ADMT installed during setup (yes, it actually installed one instance and then disabled it). Here is how to do it:

  1. Open a command line and navigate to %windir%\ADMT\AdmtDb
  2. Execute the command “sc config MSSQL$MS_ADMT start= auto”; this sets the SQL instance’s startup mode to “Automatic”
  3. Execute the command “sc start MSSQL$MS_ADMT”. It should return text containing “STATE : 4 RUNNING”. This starts the SQL instance
  4. Execute the command “admtdb create /s:%computername%\MS_ADMT” to create the ADMT database locally. It should return “The ADMT database was created successfully”
  5. Change to the path %windir%\ADMT
  6. Execute the command “admt config setdatabase /s:%computername%\MS_ADMT” to configure ADMT to use the local database. Note: the Microsoft document on ADMT v3.1 contains typos regarding the parameters of this command
  7. Start the ADMT console to check that everything is OK

And cut!