Marc Lognoul's IT Infrastructure Blog

Cloudy with a Chance of On-Prem



Office 365: MS Directory Synchronization Tool Comparison

Introduction

Over time, the number of free tools provided by Microsoft for synchronizing (and sometimes syncing back) on-premises AD and Azure AD has grown to three (not to mention the Azure Active Directory Connector for FIM 2010 R2):

  • Directory Sync aka DirSync
  • Azure AD Sync aka AADSync
  • Azure AD Connect aka AADConnect

While the first is apparently headed for retirement and the other two are ultimately expected to merge, it is still valuable to have a good idea of their capabilities and constraints before making the right choice for each implementation.

I will publish the upgrade paths from DirSync to AADConnect later.

Tool Comparison

The table below is an attempt to compare them as comprehensively as possible. Please note that 95% of the credit for this comparison table goes to French Directory Services MVP Maxime Rastello. Here is his original French article: DirSync vs Azure AD Sync vs Azure AD Connect : lequel choisir ?

Note: I will try to keep this table as up to date as possible at the following location: Office 365: MS Directory Synchronization Tool Comparison.

| Capability | Directory Sync (DirSync) | Azure AD Sync (AADSync) | Azure AD Connect (AADConnect) |
| --- | --- | --- | --- |
| General | | | |
| Latest version (download) | 1.0.7020.0000 (07/31/2014) | 1.0.0494.0501 (05/02/2015) | 1.0.628.2, Public Preview 2 (03/20/2015) |
| Version history | TechNet Wiki article | MSDN article | Not currently officially available |
| Multi-domain sync | Yes | Yes | Yes |
| Multi-forest sync | No | Yes | Yes |
| Filtering by OU | Yes | Yes | Yes |
| Filtering by attributes | Yes | Yes | Yes |
| Customizable attribute set | Yes, but not supported | Yes | Yes |
| Customizable sync rules | Yes | Yes | Yes |
| Sync on-premises to cloud | | | |
| Users | Yes | Yes | Yes |
| Contacts | Yes | Yes | Yes |
| Security groups | Yes | Yes | Yes |
| Distribution groups | Yes | Yes | Yes |
| Passwords | Yes | Yes | Yes |
| Extended attributes | No | No | Yes (requires Azure AD Premium) |
| Devices | No | No | Yes (requires Azure AD Premium) |
| Sync cloud to on-premises | | | |
| Users | No | No | Yes (requires Azure AD Premium) |
| Contacts | No | No | Future release |
| Security groups | No | No | Future release |
| Distribution groups | No | No | Future release |
| Password (write-back) | No | Yes (requires Azure AD Premium) | Yes (requires Azure AD Premium) |
| Office 365 groups | No | No | Yes (requires Azure AD Premium) |
| Devices | No | No | Yes (requires Azure AD Premium) |
| Interoperability | | | |
| Office 365 UPN selection | Yes, but not supported | Yes | Yes |
| Hybrid Exchange migration support | Yes, but single-forest only | Yes, but single-forest only | Yes |
| 3rd-party LDAP server support | No | No | Future release |
| Assistance with AD FS set-up | No | No | Yes |
| Manageability | | | |
| PowerShell cmdlets | Yes | Yes | Yes |
| Staging mode | No | No | Yes |
| Requirements | | | |
| Hosting server operating system | Windows Server 2008 64-bit SP1 or later; Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 | Windows Server 2008 64-bit SP1 or later; Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 | Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 |
| Hosting server .Net Framework | v3.5 SP1 and v4.5.1 | v4.5.1 | v4.5.1 |
| Hosting server domain membership | Member server or domain controller (same forest) | Workgroup, member server, or domain controller | Member server or domain controller (same forest) |
| AD functional level | Windows Server 2003 or higher | Windows Server 2003 or higher | Windows Server 2003 or higher |
| Domain controller operating system | Windows Server 2003 SP1; Windows Server 2008 64-bit SP1 or later; Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 | Windows Server 2003 SP1; Windows Server 2008 64-bit SP1 or later; Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 | Windows Server 2008 R2 SP1 or later; Windows Server 2012; Windows Server 2012 R2 (note: the SSO with AD FS option requires Windows Server 2012 or higher) |
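
Before planning an upgrade, it can be handy to check which tool, and which build, is already installed on a server. A minimal PowerShell sketch, assuming the products register under the standard uninstall registry keys with display names matching the patterns below:

# List the installed directory synchronization tool(s) and their version
# (the display-name patterns are assumptions; adjust them if your build differs)
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" |
    Where-Object { $_.DisplayName -match "Directory Sync|Azure AD Sync|Azure AD Connect" } |
    Select-Object DisplayName, DisplayVersion

Compare the DisplayVersion value against the latest builds listed in the table above.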

Additional Information




SharePoint: Content Migration Tools Comparison from an Architecture Perspective

Introduction

SharePoint MVP Benoit Jester recently posted an excellent presentation (in French) comparing SharePoint content migration tools.


This gave me an opportunity to discuss them from an architecture perspective. In large or complex infrastructures/organizations, the architecture of the migration solution can be nearly as important as the tools' functionality or results. You got it: in this post I will therefore not cover those last two criteria.

In a nutshell, two architectures compete:

  • Fat Client Applications (Metalogix, ShareGate…)
  • Server Infrastructures (mostly DocAve)

Fat Client Applications (Metalogix, ShareGate…)

Overview

As depicted in the schema below, the fat-client architecture places the migration logic entirely in the client application, which interacts with SharePoint through the standard remote APIs (web services), much like Office applications, SharePoint Designer, or the OneDrive for Business client. Optionally, a server-based extension, also in the form of a web service, can be installed on the SharePoint servers to extend functionality and, ultimately, migration fidelity.

[Schema: fat-client migration architecture]
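
To illustrate how thin this remote channel is, here is a minimal PowerShell sketch consuming the same Lists.asmx web service a fat-client tool would use (the site URL is a placeholder; default credentials are assumed):

# Connect to the SharePoint Lists web service exactly like a remote client would
$site  = "http://sharepoint.contoso.local/sites/source"
$lists = New-WebServiceProxy -Uri "$site/_vti_bin/Lists.asmx" -UseDefaultCredential
# Enumerate the lists a migration tool would discover through the same API
$lists.GetListCollection().List | Select-Object Title, ItemCount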

Two possible variations:

  • Hosting the application closer to the data center by deploying it on a “migration server”
  • Accessing the source SharePoint databases directly to increase speed and, in a certain way, control (Metalogix)

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The client computer the application is installed on
  • The network connection between the client and the source and destination SharePoint infrastructures

The Pros

  • Simplicity and ease of deployment: it takes just a few seconds to install, and migration activity can start immediately
  • Insensitive to server infrastructure topology changes: as long as your browser can talk to the SharePoint server(s), the application can as well
  • Entirely honors the SharePoint security model: the authorization level of the migrating user maps exactly to the authorizations defined in SharePoint. Consequently, it also leverages SharePoint's auditing and tracing capabilities
  • Ease of delegation: following from the previous point, delegating migration is as easy as granting permissions in SharePoint itself, which prevents large-scale accidents caused by farm-account-level permissions
  • No deep infrastructure change required: with the exception of the optional server-side extension (actually pretty light), there is no need to deploy heavy server-side components or configuration. This allows "tactical" migrations to take place very easily and over a very short time frame
  • Obvious network flows: if a perimeter firewall must be configured, the flows are extremely simple and few. They are actually identical to those used between a browser and the SharePoint infrastructure and are therefore most likely already in place
  • Capability to limit the impact of the migration on the SharePoint infrastructure, using, for example, native SharePoint functionality such as Request Management (see the sketch after this list)
  • The technical follow-up of a migration is fairly easy, because most (if not all) of the technical migration steps are logged on the client side, directly accessible, with a satisfying level of detail
  • Finally, this solution, even combined with other tools, remains standalone and flexible
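
As referenced in the throttling bullet above, here is a minimal sketch (on-premises SharePoint 2010/2013 cmdlets; the web application URL is a placeholder) to inspect the native HTTP request throttling settings that can shield a farm during migrations:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Inspect the web application's native HTTP request throttling settings
$wa = Get-SPWebApplication "http://sharepoint.contoso.local"
$wa.HttpThrottleSettings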

The Cons

  • Permits rogue deployments of migration applications: there is no way to centrally control the installation and use of fat-client solutions since, to SharePoint, they look like any other client application. Obviously, administrator privileges on the client are still required to install them
  • Sensitive to the authentication configured on the SharePoint web application: because the tool consumes SharePoint exactly like a browser does, it must also comply with the authentication settings applied at the web application level. If you stay within well-known configurations there won't be any problem; if you run an advanced configuration, be prepared for surprises, even showstoppers
  • Heavy dependency on the SharePoint remote APIs: if they are disabled for security or functional reasons, the tool is simply unable to work (except, of course, when reading databases directly)
  • The tool's configuration (and state information) resides on the client. You may therefore want to back it up in order to protect the logs, the configuration, and other elements such as migration analytics, mapping files, and so on
  • The impact on the client computer's performance can be huge: depending on the amount of data, the number of items, and the complexity of the migration, the client computer's performance can be severely degraded. Moreover, the time needed to complete a migration job is difficult to estimate, so the PC might be stuck migrating for hours on large jobs, making other applications unusable due to congestion. To work around this, you might want to dedicate clients or even servers to the task; keep in mind that this increases hardware and license costs (RDP, Citrix, or so). Not to mention that the network connection (bandwidth, stability, and low latency) between the client and the SharePoint servers is a crucial factor
  • Installing the server-side extension is usually a must in order to benefit from the full range of functionality. It is what allows near full-fidelity migrations
  • In the event of a long-running migration project, you may have to update the migration tool more than once. An up-to-date inventory of the installed base will therefore be useful

Note: As stated earlier in this post, Metalogix can read directly from the SharePoint databases, bypassing the SharePoint API, authentication, and therefore the SharePoint security model. Make sure you read this MS KB article before proceeding: Support for changes to the databases that are used by Office server products and by Windows SharePoint Services

The Server Infrastructures (mainly DocAve)

Overview

This architecture type relies on two key components:

  • The manager: the web-based administration and usage interface, and the component responsible for coordinating actions with the agents
  • The agents: installed on one or more SharePoint servers as well as on the server hosting the manager, they execute the migration instructions received from the manager through the control flows. The source agent then communicates with the destination agents to migrate the content. The active element of the agent, implemented as a Windows service, requires high privileges at the Windows, SQL, and SharePoint levels, close to those granted to SharePoint's farm account.

[Schema: server-infrastructure migration architecture]

Since the entire DocAve solution is built on these components, for migration as well as for other purposes, activating each additional feature (or module) is just a matter of configuring the appropriate licenses using license files provided by the vendor.

The main factors limiting performance are:

  • The source and destination SharePoint infrastructures
  • The network connection between the source and destination SharePoint infrastructures

The Pros

  • Very limited client prerequisites: once the server side is in place, accessing the web interface just requires a recent browser
  • Entirely insensitive to the SharePoint web application's authentication configuration and to remote API accessibility
  • No heavy resource usage on the client: migration tasks are jobs run by the agents located on the SharePoint servers, leaving the client free of any migration-related load
  • Potentially the fastest migration solution: assuming the network connection between the source and the destination is appropriately sized and the source and destination SharePoint farms can cope with the extra load
  • Part of an end-to-end SharePoint management solution exposed through a single, streamlined, single-seat administration and usage interface

The Cons

  • Heavy server-side deployment: requires software installation, a service account, security modifications, and potentially additional hardware and firewall configuration. It might require a dedicated server for hosting the manager (see later in this post)
  • Completely bypasses the SharePoint security model, by making use of a Windows service running under a highly privileged user
  • Higher risk of human mistakes with dramatic consequences: once a user is granted access to the migration module, he/she can perform any action on any scope, regardless of the permissions he/she holds at the SharePoint level, since everything happens under the context of a technical account. It is not unusual to see incidents requiring massive content restoration after accidental use
  • Almost no traceability of user actions: once again, because the privileged account does the job, traceability stops at the job creation step. When running multiple migration projects at the same time on the same DocAve platform, volume-based license usage can also be difficult, if not impossible, to measure
  • Partially black-box operation once a job is started: usually, you'll have to wait for the job to complete to get the details of what worked and what did not, and therefore to measure actual progress
  • No server protection: if the SharePoint servers already face a significant load under normal usage conditions, running the migration agent might saturate them, without any native option to prevent it. It is therefore always wise to run the agent on the least-used server of the farm
  • Advanced troubleshooting can be tedious, because logs must be collected from three different locations: the DocAve log from the migration job, the log from the agent, and finally the SharePoint ULS logs. All three are usually necessary to get the complete picture of a migration batch's behavior
  • Software updates to the migration solution: during the life cycle of the migration infrastructure, you will likely have to deploy numerous updates. These are often customer-specific and non-cumulative, so it is the customer's responsibility to make sure the servers and agents are always aligned and up to date. This also applies to servers added to the farm over time

Special Case: DocAve vs. O365 vs. Windows Server 2003

Let's take the following scenario: a SharePoint 2007 infrastructure running on Windows Server 2003 that must be migrated to Office 365.

Considering that data always travels from agent to agent, there is, in this case, no destination agent on the Internet side. The migration solution must therefore cope with O365 standards (such as authentication using WS-Federation), and unfortunately the software pieces required to do so are not available on Windows Server 2003 (.Net Framework 4.5, WIF…).

The workaround consists of hosting the DocAve manager and an agent on a dedicated server running at least Windows Server 2008, while another agent runs on the farm. DocAve will automagically establish the flows between the agents and to the Internet, as shown in the schema below.

[Schema: server-infrastructure migration to O365 through a relay server]

Credits to AvePoint for the technical guidance and to @j3rrr3 for the set-up.

Useful Links



SharePoint: An Architectural Comparison of Content Migration Solutions

Introduction

The recent publication of B. Jester's excellent presentation below

gives me the opportunity to comment on content migration solutions from an architectural point of view (components, flows…), since architecture can prove to be a decisive factor when choosing a migration tool in complex environments. As you will have understood, this comparison will therefore cover neither the functional nor the financial aspects.

Two architecture types compete:

  • Fat clients (ShareGate or Metalogix)
  • Server infrastructures (DocAve)

Fat Clients (ShareGate or Metalogix)

Overview

As the schema below shows, this type of architecture is particularly simple. It is built around a fat client that holds all the migration logic and interacts with the SharePoint remote APIs (web services, etc.). To SharePoint (or O365), these solutions behave exactly like any other client application (browser, Office suite, SharePoint Designer, OneDrive for Business client…). Note: the optional "server" extensions offered by these solutions also take the form of a web service.

[Schema: fat-client migration architecture]

Two possible variations:

  • Hosting the application on a "migration" server
  • Direct read access to the source databases, thereby bypassing the SharePoint security model

The main factors limiting performance:

  • The source and destination SharePoint server infrastructures (or O365)
  • The client computer
  • The network link between the client and the source and destination SharePoint infrastructures

The Pros

  • Simplicity and ease of deployment: installation takes a few seconds, after which migration can start immediately
  • Insensitive to infrastructure changes: whatever the topology changes to the SharePoint infrastructures, as long as access to the service is maintained, there is no impact on the migration application itself
  • Ease of delegation: delegating the migration function, even in a granular way, is particularly simple, while relying entirely on the SharePoint security model (see below)
  • Fully honors the SharePoint security model: the tool's boundaries derive directly from the authorization level of the user running it, which can prevent many mishaps. Access is also traced on the SharePoint side (audits, traceability, etc.)
  • No infrastructure change required (except for the "server" extensions, which remain optional). This can also ease the involvement of external providers, for example for a "tactical" migration
  • Easy identification and follow-up of network flows: where the infrastructures are secured with perimeter firewalls, these are most certainly already suited to the migration tools, since the flows are identical to those required to use SharePoint
  • Ability to limit the impact of migration activities on the SharePoint service through native functionality such as Request Management
  • Technical follow-up of migrations relies on client-side logs, which makes them easy to access and read
  • This solution, even combined with other products from the same vendor, remains isolated, which prevents potential side effects when patches are deployed, etc.

The Cons

  • Allows uncontrolled deployments: since these solutions cannot be locked down centrally (in SharePoint or in a server-side tool), any user with administrator privileges on his workstation can install and use the fat client without any restriction other than the permissions he holds on the SharePoint sites he can access
  • Sensitive to the authentication configured on the SharePoint web applications: although compatible with the range of standard authentication methods, these solutions will run into problems with custom authentication or pre-authentication (RSA keys, client certificates, certain types of cookies…). This is inherent to their fat-client architecture addressing the SharePoint remote APIs
  • Dependency on the SharePoint remote APIs: if they are disabled (notably for security reasons), the application will be unable to communicate with SharePoint
  • The tool's configuration (including migration scenarios) resides on the client computer. For complex configurations that must be repeated regularly, it can be worthwhile to back it up in order to cope with the loss of the computer
  • The impact on the client computer's system resources, and the monopolization of that computer, can be significant, even blocking for other applications. The obvious workaround is to install the application on a so-called "migration" server; in that case, it must be sized according to the number of concurrent users and the weight of the migration, and it may incur extra license costs (RDP, Citrix, or other). As a consequence, it is difficult to predict the exact duration of a migration job. The network factor (type of connection, bandwidth, latency) is particularly decisive
  • It may be necessary to install the extensions on the SharePoint servers in order to take advantage of all the functionality offered
  • During long migration projects requiring product support intervention, it will not be unusual to have to deploy patches several times. It is therefore recommended to keep an inventory of the seats where the migration application is installed, in order to ease updates

Note: Metalogix also offers direct read access to the databases. This noticeably increases read performance and frees the tool from the SharePoint authentication constraints and from the remote APIs, but it bypasses SharePoint security. As a reminder, see the MS KB article: Support for changes to the databases that are used by Office server products and by Windows SharePoint Services

The Server Infrastructures (DocAve)

Overview

This type of solution relies on two key components:

  • The manager: exposes the product's web-based administration and usage interface
  • The agents: installed on one or more SharePoint servers and on the server hosting the manager, they execute the requests entered in the manager and received through the control flows. The agents then communicate with each other to transfer the data. The Windows service constituting the active element of the agent requires high privileges at the Windows, SharePoint, and SQL levels (close to those required by the farm account).

[Schema: server-infrastructure migration architecture]

The DocAve solution as a whole is built on these components, for migration as well as for other uses; each module is activated by installing the appropriate licenses.

The main factors limiting performance:

  • The source and destination SharePoint server infrastructures (or O365)
  • The network link between the source and destination SharePoint infrastructures

The Pros

  • Reduced client prerequisites: once the server side is in place, the solution will in most cases require no intervention on the client computer
  • Insensitive to specific SharePoint authentication configurations: it will work in every scenario, even the most exotic ones, because it does not depend on that component
  • No resource usage on the client computers while migration jobs run: you can launch migration jobs and wait for them to finish while continuing to use desktop applications without degradation or interruption
  • Potentially the best-performing solution: if the network link between the SharePoint infrastructure and the destination is fast, lightly loaded, and subject to little latency, the result will exceed what the fat-client approach offers
  • Can be part of a complete, end-to-end SharePoint management solution through a single, homogeneous interface and a single instance

The Cons

  • Fairly heavy server-side deployment (applications, technical account, security modifications, etc.). It may also require a dedicated server for the manager role (see the point of attention later in this article)
  • Bypasses the SharePoint security model: the tool entirely circumvents the SharePoint security model through a technical account with particularly high privileges. Note: a specific authorization for the migration module can be set up, but it does not cover 100% of scenarios
  • Risk of human error with heavy consequences (see the previous and next points): since the authorization level is not granular, it is possible to migrate the "wrong" content to the "wrong" destination. It is imperative to protect yourself with suitable restore techniques (notably from the same vendor)
  • Very little traceability of actions: although each migration job is tied to the user who initiated it, most of the underlying technical actions are then executed through the technical account, which prevents any traceability of them. Note: the lack of traceability also applies to measuring the migration volume consumed against the license
  • "Semi black box" operation: once a migration job is launched, you have to wait for it to finish to really measure its result
  • No mechanism to limit the impact on server resources: if the SharePoint servers are close to saturation, running migration jobs could potentially finish them off
  • Technical follow-up of migrations can prove complicated because the information is scattered, with no consolidation: migration reports in the manager, log files on the agents (hence on the SharePoint servers), and the SharePoint logs
  • Complexity of patch deployment: the patches delivered by support are often customer-specific and non-cumulative. It is therefore up to the customer to consolidate them across deployments (e.g., when adding a server, and consequently an agent, to the farm)

Point of Attention: DocAve vs. O365 vs. Windows Server 2003

Imagine the following scenario: a SharePoint 2007 infrastructure on Windows Server 2003 that must be migrated to Office 365.

As a reminder, the flows run from agent to agent. In this case, however, there is no DocAve agent on Office 365, so there will be agent-to-Internet communication. That communication must comply with the Office 365 standards (authentication through WS-Federation…), and unfortunately the required Windows components are not available for Windows Server 2003 (.Net Framework 4.5, WIF…).

You will therefore have to set up a relay server (Windows Server 2008 at a minimum). This server, although hosting no SharePoint component, will run a DocAve agent connected to the same manager as the agents of the farm whose content must be migrated. The schema below reflects the architecture to put in place.

[Schema: server-infrastructure migration to O365 through a relay server]

Thanks to AvePoint for the technical guidance and to @J3rrr3 for the set-up.

Useful Links



SharePoint: Downloading SPC2014 Content using Synology Download Station


Introduction

Like many of you, I was rushing to the Channel 9 web site to download the SPC2014 videos. Every year, scripting gurus elaborate incredible pieces of script to automate content download, sometimes in a very reliable way (throttled downloads, file integrity verification, progress measurement…).

However, Synology owners can rely on the power of DSM's Download Station to effortlessly download all videos and slides in the most reliable way.

Using Download Station

  1. First, visit Channel 9's SPC2014 page to get the RSS feed of the content: https://channel9.msdn.com/Series/TechDays-2014
  2. In the upper right, copy the URL of the RSS feed for the content format you're interested in. Note: obviously, you can take note of multiple feeds (video or audio only, for example)
  3. Then log on to your Syno's DSM and start Download Station
  4. Click the RSS Feed view in the left menu, then click the plus sign to add a feed. Add the RSS feed URL(s) you previously noted and choose whether to download all content automatically or select from the list afterwards. Click OK. If you select automatic download, it will start right away.
  5. Depending on your bandwidth conditions, manage concurrent downloads, pause, resume, or stop as needed

Note: This download method works with any Channel 9 content as long as you put your hands on the appropriate RSS feed, and it might also work with other sites… So why bother coding when you have the supreme NAS experience at hand 🙂
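
That said, if you prefer scripting after all, the same feed can be consumed with a few lines of PowerShell. A minimal sketch, assuming the feed URL copied in step 2 and a local target folder (both placeholders):

# Download every enclosure referenced by the RSS feed via BITS
Import-Module BitsTransfer
$feedUrl = "https://channel9.msdn.com/Series/TechDays-2014/RSS/mp4"   # from step 2
$target  = "D:\SPC2014"
$feed = [xml](New-Object System.Net.WebClient).DownloadString($feedUrl)
ForEach ($item in $feed.rss.channel.item) {
    $url = $item.enclosure.url
    Start-BitsTransfer -Source $url -Destination (Join-Path $target (Split-Path $url -Leaf))
}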

RSS Feed URLs

More Information

PowerShell: Testing if the Logged On User is Really Admin


Since the introduction of User Account Control (UAC) with Windows Vista/Server 2008, scripters have had to deal with detecting whether the user executing commands or scripts is effectively granted the necessary privileges, i.e. is running with elevated privileges.

While you can find plenty of snippets and functions on the Internet to achieve this goal, here is why I use the one below:

  • It is compatible with all (decently recent) Windows versions
  • It works with all languages since no names are used
  • It is fairly fast: the speed directly depends on the user’s token size

Function IsCurrentUserElevated()
{
    [bool]$IsElevated = $False
    If ([System.Environment]::OSVersion.Version.Major -lt 6)
    {
        # Pre-Vista (no UAC): membership in Administrators (S-1-5-32-544) is enough
        $IsElevated = [bool]((whoami /groups /SID) -match "S-1-5-32-544")
    }
    Else
    {
        # Vista and later: the token must hold the Administrators SID and run at
        # the High mandatory integrity level (S-1-16-12288), i.e. be elevated
        $IsElevated = [bool]((whoami /groups) -match "S-1-5-32-544") -and [bool]((whoami /groups) -match "S-1-16-12288")
    }
    Return $IsElevated
}
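
A typical use at the top of a script, for example:

# Abort early when the shell is not elevated
If (-not (IsCurrentUserElevated)) {
    Write-Warning "This script requires an elevated PowerShell session."
    Return
}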

Note:  If someone has a native PowerShell replacement for fetching a user token please let me know ;).

Additional information:



SharePoint: PAL from the Field

Introduction

Performance Analysis of Logs (PAL) is a free tool designed to analyze Windows Perfmon-based logs against predefined thresholds. The thresholds are defined in configuration files, usually mapped to an MS technology (.Net, IIS…) or product (SQL Server, SharePoint…). It produces reports in HTML or XML format; the former also includes eye-candy charts.

In a nutshell, PAL almost completely removes the hassle of reading and interpreting performance logs.

However, making sense of PAL reports in real life may also require time for experimenting and, unfortunately, very little guidance can be found on the web. I therefore wanted to close the gap a little.

This post assumes you are minimally familiar with PAL. If that is not the case, many other blogs detail the installation and usage basics, and the CodePlex project also includes a useful introduction:

What to Expect from PAL

PAL is the perfect tool for investigating mostly infrastructure-related performance problems impacting Microsoft products and technologies.

It helps translate Perfmon logs into human-readable reports, with the added value of charts, recommended thresholds, and generic guidance. A report is roughly made of two sections: chronologically ordered alerts, and statistical figures enhanced with their matching charts.

In my opinion, PAL is not designed to help you with trending or long-term capacity planning; for that purpose, products such as SCOM should be preferred. Likewise, PAL should not be used as a performance monitoring tool. Finally, PAL will not help you drill down into code and will not cover end-to-end performance monitoring or troubleshooting; for that purpose, a real APM or tracing tool should be preferred.

Prerequisites

Make sure your performance counters are healthy; I can't remember the number of times I had to fix broken counters before anything else could take place.
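
When counters are broken, the standard first aid is to rebuild the counter configuration from the system backup store, from an elevated prompt:

# Rebuilds the performance counter settings from the system backup store
lodctr /r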

Practice a little with Perfmon captures and PAL in a test environment. It seems obvious, but many organizations I worked for went straight into their production environment with a full counter set and a high capture frequency, and this during abnormally long periods. This leads to wasted time generating reports and a lot of frustration and confusion, since the reports contain too much information to actually be helpful.

Decide whether you will generate PAL reports on a computer dedicated to this purpose or whether you prefer to do it on the monitored server during off-peak hours. Keep in mind that while capturing counters has very little to no effect on performance, performing a PAL analysis is extremely CPU- and disk-I/O-intensive.

Although PAL does it for you, make sure you understand what each counter really means, and what it means in your own environment. Avg. Disk Queue Length/Current Disk Queue Length is a good example of a misleading, often misinterpreted counter.

Correctly identify your environment: which processes are running (at least the ones that matter), which physical/logical disks exist and what their purpose is, what the memory sizing is (physical and virtual), and of course the CPU characteristics.

In Perfmon/Perflogs, preferably identify processes by their PID instead of their instance ID. This is particularly useful with SharePoint and IIS, where you can have multiple IIS worker processes (W3WP.exe) running, even in the most basic implementations.

While some SharePoint counters directly refer to SharePoint applications, others won't. It is therefore always useful to have scripts at hand to do the job for you.

On Windows Server 2003/IIS 6, using a command prompt:

cd %windir%\System32
cscript.exe iisapp.vbs

From Windows Server 2008/IIS 7, using a command prompt:

cd %windir%\System32\inetsrv
appcmd list wp

Using PowerShell:

gwmi win32_process -Filter "name='w3wp.exe'" | Select-Object ProcessId, CommandLine

Be watchful with process IDs: they may change during the capture, since when a process crashes, a new one with its own ID is usually started. The same happens to a worker process when it recycles.

Also take time to benchmark PAL:

  • Estimate the storage used by captures
  • Estimate the time PAL takes to produce reports
  • Estimate the storage used by PAL reports

While a 2-hour capture using the default SharePoint 2010 counter set will generate 30 to 50 MB of BLG file and take about 10 minutes to process, larger environments quickly push those numbers up.

Some counters (like those related to processes and SharePoint's publishing cache) can inflate the size of the reports and the time needed to generate them, because they are multiplied by the number of running processes or existing site collections.

And finally, download and install PAL on the computer you selected for this purpose. Remember, PAL is only used to generate reports, not to capture data or read reports, so there is no strict requirement to install it on every SharePoint server.

Planning Performance Captures

To ease your life, generate the Perfmon configuration files directly from PAL: start PAL, go to the Threshold File tab, select the threshold file corresponding to the workload, and finally click the Export to Perfmon Template File button.

Select the format according to the operating system version the captures will be taken from. The LOGMAN format is the best choice if your goal is superior automation of the capture process.
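
A rough sketch of that automation, assuming the counter list was exported from PAL to a text file (names and paths below are placeholders):

# Create a counter collection sampling every 15 seconds into a binary (BLG)
# log, from the PAL-exported counter file, then start the capture
logman create counter SP2010_PAL -cf .\SP2010_Counters.txt -si 00:00:15 -f bin -o D:\PerfLogs\SP2010_PAL
logman start SP2010_PAL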

Carefully plan the capture period. The warm-up of an ASP.Net/SharePoint application usually generates a lot of noise that is not really relevant to your performance troubleshooting, so it is preferable to start capturing when the application is already in cruise mode, unless of course the performance problem occurs at compilation time. The same applies to crawl performance troubleshooting: preferably start capturing when the crawl has effectively started, not when it is starting.

Keep the sampling interval between 5 and 15 seconds. Less than 5 does not help, because it tends to make things look worse than they actually are (very short CPU peaks, intensive disk I/O…), while more than 15 may make the capture inaccurate because of missing numbers. In most cases, 15 seconds will do fine.

Keep the format binary (BLG): although not human-readable, it is far more compact and directly usable by PAL. Note: some tools can convert Perfmon logs whenever needed; I will discuss that at a later time.

Finally, if you run a multi-server farm (remote SQL, for example), decide whether you prefer to put the captures from the various servers into a single log file or to use separate logs. Remember that in most cases the footprint of Perfmon is negligible. If you choose per-server captures, make sure you are sufficiently in control to run them simultaneously.

Happy performance troubleshooting!

Marc


Active Directory: Schema Versions and How to Retrieve Them


Hello,

Now that Windows Server 2012 RTM is publicly available, you might be busy upgrading your forest (or at least planning to do so). I did just that in my lab environments and wanted, at the same time, to revisit the AD schema's possible version numbers and the ways to retrieve them.

You will find all details in the article I just posted: Active Directory Schema Versions.
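
As a quick teaser, here is a minimal way to read the schema version with PowerShell (plain LDAP, no module required):

# Read the objectVersion attribute of the Schema naming context;
# 56, for example, corresponds to Windows Server 2012
$rootDse = [ADSI]"LDAP://RootDSE"
$schema  = [ADSI]"LDAP://$($rootDse.schemaNamingContext)"
$schema.objectVersion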

Marc


SharePoint 2010: "The local farm is not accessible. Cmdlets with feature dependency are not registered" Revisited

While this error has been around for a while, I recently discovered a new possible cause. This is an opportunity to pack this post with all the causes identified so far (so a feeling of déjà vu may be experienced by the reader).

Incorrect Windows PowerShell Version

Description

If you recently upgraded to PowerShell V3.0 as part of the Windows Management Framework 3.0, it's likely you will see the error message below when starting the SharePoint 2010 Management Shell.

microsoft sharepoint is not supported with version 4.0.30319 of the microsoft .net runtime
the local farm is not accessible cmdlets with feature dependency are not registered

Cause

PowerShell V3.0 uses the .Net Framework 4.0. This combination prevents SharePoint's Management Shell from working.

Solution

Locate the SharePoint 2010 Management Shell shortcut in the Windows Start Menu, then edit it.

In the Target field, add the -Version parameter with the value 2, as shown below:

C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe -Version 2 -NoExit " & 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG\POWERSHELL\Registration\sharepoint.ps1'"

This instructs PowerShell to behave like version 2.0 instead of 3.0.

Additional Information

To get the effective version of the PowerShell host running, simply use the $Host object:
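
# Effective version of the current PowerShell host;
# with the -Version 2 switch applied, Major should report 2
$Host.Version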

The logged on user is not granted SharePoint_Shell_Access

Cause

Unless you are granted high privileges on the SQL Server instance hosting your SharePoint databases (such as the SYSADMIN role), using the SharePoint 2010 Management Shell requires the logged-on user to be granted SharePoint_Shell_Access on the configuration database.

Solution

Use the Add-SPShellAdmin cmdlet to grant the user the necessary role.
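
For example (the account name is a placeholder):

# Grants the SharePoint_Shell_Access role on the farm configuration database
Add-SPShellAdmin -UserName "CONTOSO\spdev"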

Additional Information’s

  • To retrieve the list of users granted SharePoint_Shell_Access, use the Get-SPShellAdmin cmdlet
  • To remove a user from the SharePoint_Shell_Access role, use Remove-SPShellAdmin

The logged-on user is not an administrator of the SharePoint server, or the server has UAC enabled

Cause

Using the SharePoint 2010 Management Shell requires the logged-on user to be an effective administrator of the SharePoint server where it runs.

There are therefore two possible causes:

  • The user is not a member of the local Administrators group at all
  • User Account Control is on and the logged-on user did not choose to start the SharePoint 2010 Management Shell as administrator

Solution

Always start the SharePoint 2010 Management Shell with a domain user logged on as administrator, and choose the "Run as Administrator" option when right-clicking the shortcut.

To make your life simpler, you can also edit the SharePoint 2010 Management Shell shortcut, click the Advanced button, and select the check box for the "Run as administrator" option. This will not prevent the UAC prompt from popping up, but at least the shell will always start as admin.

Season’s greetings!

Marc


Unable to Attach the Process when Debugging Server Applications in Visual Studio

Recently I was debugging a custom SharePoint-based application and timer jobs. Although my account was a member of the local Administrators group on my development server, Visual Studio refused to attach to the process I wanted to debug, throwing the error "Unable to Attach the Process. Visual Studio has insufficient privileges to debug the process. To debug this process, Visual Studio must run as an administrator" in my face.

The root cause is fairly simple to find: in environments where Windows (Server) security is a concern, the Debug Programs privilege (SeDebugPrivilege), granted by default to the built-in Administrators group, is removed in order to prevent admins from getting their hands on the passwords of other users or service accounts (as well as other data stored in memory) using tools similar to LSADUMP, to name just one.

Unfortunately, the consequence is that an administrator cannot attach to a program in order to debug it live. The problem is not specific to VS, but affects any debugger attaching to a live process.

Three possible solutions/workarounds:

  • Have your security administrator restore that privilege. Since this can be done on a per-computer basis, it should not be too harmful on a dev computer
  • Run your server application with your own user account (the one you log on with). This is obviously against best practices and not always possible in highly secure environments, since other privileges might not be granted (log on as a service, log on as a batch job…)
  • Try running Visual Studio through the PsExec tool with the parameters -s, -i, and -a. This is really tricky, has some limitations and, like the workaround above, is not always possible in highly secured environments…

To list the privileges your account has been granted, just use the command whoami /priv.
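
A quick way to check the specific privilege at stake here:

# Prints a line only if the current token holds SeDebugPrivilege
whoami /priv | findstr /i SeDebugPrivilege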

Additional Information

Happy debugging!

Marc