Marc Lognoul's IT Infrastructure Blog

Cloudy with a Chance of On-Prem

Windows: The Confusion over DisableLoopBackCheck, DisableStrictNameChecking and Kerberos

Windows Logo


I’ve been answering technical questions in forums and at customers for a while now, and in recent years many of them related to issues involving the DisableLoopBackCheck and DisableStrictNameChecking security features of Windows. I also regularly noticed a lot of confusion and misinformation about these. This post is a modest attempt to explain them more in-depth.


LoopBackCheck

LoopBackCheck is, as its name says, a security feature applicable to connections established to the loopback address (127.0.0.1). It applies to NTLM authentication only. It protects a Windows computer against threats exploiting connections to the loopback adapter specifically (typically NTLM reflection attacks). This extra protection level applies to all incoming connections and protocols.

What is often misleading with LoopBackCheck is the error message, which makes you believe invalid credentials have been provided while this is actually not the case. Example with IIS: HTTP 401.1 – Unauthorized: Logon Failed.

Why isn’t it applicable to Kerberos authentication? Simply because Kerberos authentication is so strongly tied to names (host and service names) that it does not actually need this extra security feature.

When will you experience this issue? Usually in a test/dev environment, when you redirect services such as IIS web sites or SharePoint to the loopback address. It might also affect production SharePoint when the crawler process is configured to crawl from the locally-deployed WFE role thanks to a modification of the HOSTS file. You may also experience this problem when the servers are part of an NLB cluster, or when an IIS-based application accesses a SQL Server instance located on the same server using the loopback address together with Windows authentication.

This feature was originally implemented with Windows Server 2003 Service Pack 1 and is therefore present in all recent versions of Windows.


Note: the MS KB article states you have to disable Strict Name Checking as well; this is not true, or at least not if you don’t plan to use file & print sharing over the loopback adapter (see below)

Note 2: My field experience tells me not to use the loopback adapter anymore for the SharePoint crawler, because it may also generate other issues related to security software (anti-virus, local firewall…) adding their load of security checks to the communication channel to the loopback adapter.
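If you do need to relax LoopBackCheck on a dev/test machine, the two documented registry options are either declaring the alternate host names (the safer choice) or disabling the check entirely. A sketch using reg.exe; the host names are placeholders, and a reboot (or at least an IISReset) is typically needed afterwards:

```
:: Preferred: declare the alternate names instead of disabling the check
:: (host names below are placeholders, use your own)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "intranet.contoso.com\0portal.contoso.com"

:: Heavy-handed alternative: disable the check completely (test/dev only)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1
```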


Strict Name Checking

Strict Name Checking is a security feature specific to the Windows implementation of the SMB protocol (File & Print Sharing). It prevents connections from being established to a network share or to a shared printer if the host name used is not the server’s real one. The error message might also be considered misleading: System error 52 has occurred. A duplicate name exists on the network.

The feature was originally introduced with Windows 2000 and is implemented in all subsequent versions of Windows.
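For reference, relaxing it is a single registry value on the file-server side; as with LoopBackCheck, treat this as a sketch to be evaluated against your security policy:

```
:: Allow SMB connections using alternate names (restart the Server service afterwards)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1
```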


Kerberos and how it is related to Names

As I stated earlier in this post, the Kerberos authentication protocol integrates name checking in its foundation, since the secrets exchanged between the client and the service are partially based on the name the service is accessed by. If the name of the service is missing or incorrectly configured in the Kerberos database (Active Directory in the Windows world), the authentication will fail with the internal error KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN and is likely to fall back to NTLM authentication, which will ultimately lead to a successful authentication with a less secure protocol.

Therefore, if one or multiple alternate names are used to access a service, the appropriate configuration must be associated with the user account (or computer account) running the service, using, for example, SETSPN or NETDOM. Note: while some software offers the possibility to auto-magically register Service Principal Names in AD, very few actually do.
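As an illustration, registering and then listing an additional SPN could look like the commands below (account and host names are placeholders; -S, which checks for duplicates, only exists on recent versions, so use -A on Server 2003):

```
setspn -S HTTP/intranet.contoso.com CONTOSO\svc-webapp
setspn -L CONTOSO\svc-webapp
```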


Leave a comment

SharePoint: Removing HTTP Headers for Security Reasons

SharePoint 2007 SharePoint 2010 SharePoint 2013


Virtually any decent web security guide will recommend obfuscating HTTP headers revealing technical information about the technologies used to host and operate an internet-facing web site or application. In the case of SharePoint, each HTTP response is likely to include at least the following headers:

  • Server: Microsoft-IIS/7.5: Generated by IIS, it indicates the version of IIS SharePoint runs on
  • X-AspNet-Version: 2.0.50727: Generated by ASP.Net, it reports the ASP.Net version in use on the IIS site
  • X-Powered-By: ASP.NET: The text says it all 🙂
  • MicrosoftSharePointTeamServices: . Supposedly reports the SharePoint build number. Refer to my article Retrieving the Current Build Version Number and to the one from Wictor Wilén: SharePoint Mythbusting: The response header contains the current SharePoint version
  • SPRequestGuid: 11ca6867-9228-455b-a8d9-ddd3e367de86: Implemented from SharePoint 2010 for diagnostics purposes, to identify requests in order to map them to entries in the ULS logs
  • X-SharePointHealthScore: 0: Indicates to the client the health status of the server on a scale from 0 to 10. The lower the better

SharePoint actually comes with many more headers; they are documented on MSDN: [MS-WSSHP]: HTTP Windows SharePoint Services Headers Protocol Specification.
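To make the cleanup target concrete, here is a minimal Python sketch (illustrative only; in practice the stripping happens on the reverse proxy, not in application code) that removes the headers listed above from a response-header dictionary:

```python
# Header names taken from the list above; comparison is case-insensitive
# because HTTP header names are case-insensitive.
SENSITIVE_HEADERS = {
    "server",
    "x-aspnet-version",
    "x-powered-by",
    "microsoftsharepointteamservices",
    "sprequestguid",
    "x-sharepointhealthscore",
}

def scrub_headers(headers):
    """Return a copy of a response-header dict without the revealing headers."""
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}

response = {
    "Content-Type": "text/html",
    "Server": "Microsoft-IIS/7.5",
    "X-Powered-By": "ASP.NET",
    "SPRequestGuid": "11ca6867-9228-455b-a8d9-ddd3e367de86",
}
print(scrub_headers(response))  # only Content-Type survives
```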

Removing ASP.Net and MVC Headers

This article from 4 Guys from Rolla covers the topics comprehensively.

Removing SharePoint Specific Headers

SharePoint does not come with a configurable way to remove its own custom headers.

My Recommendation

While it’s always fun to play around with a custom .Net HTTP module or even ISAPI filters, I would definitely not recommend this solution to remove HTTP headers, since it may break SharePoint functionality and place a certain load on the SharePoint server(s). Not to mention the additional hassle in case you have to troubleshoot something in the IIS/SharePoint pipeline. I also attempt, as much as possible, to reduce customization at SharePoint level to improve manageability and lower service costs.

The same logic applies to ASP.Net or MVC configuration changes but with a lower importance.

Instead, if you’re minimally serious about SharePoint infrastructure security on the Internet, you will not expose it directly but place a security gateway or reverse proxy in front. Those security devices or servers usually come with the capability to alter HTTP headers easily and with a near-zero footprint.

This way, you keep your SharePoint fully functional while another device does the actual security job.

Removing Headers using F5 BigIP

With BigIP, it’s a no-brainer with the HTTP::header command in an iRule, example:

   when HTTP_RESPONSE {
      # loop through and remove all instances of the unwanted
      # headers from the server response
      # (Server, X-Powered-By in this example)
      foreach header {Server X-Powered-By} {
         log local0. "Removing $header: [HTTP::header value $header]"
         HTTP::header remove $header
      }
   }

Removing Headers using Forefront UAG 2010

Using AppWrap, it’s a little more complex, but in a nutshell a rule similar to the one hereunder can be used:

<APP_WRAP ver="3.0" id="WhlFiltAppWrap_HTTPS.xml">

  • Do not mess around with custom HTTP modules, ISAPI or other configuration changes unless you have no other way.
  • As best practices usually recommend, place your internet-facing SharePoint behind a security device such as ForeFront UAG and use that security device to do the response obfuscation work
  • Note 1: Removing headers is only one step in the direction of securing Internet-facing SharePoint
  • Note 2: While it gives a feeling of security, it will only prevent robots, probes or script kiddies from identifying the underlying technologies. Real hackers use superior methods insensitive to this countermeasure
  • Note 3: But at least, the security audit will not blame you for exposing those headers on the Internet 🙂

More Information


SharePoint: A Bunch of issues with Windows Vista and SharePoint Explorer View

It’s critical to keep systems up-to-date with patches; the 4 issues described hereunder prove it again 😉 They affect Windows Vista only, not Windows 7 or XP, certainly because the WebClient went through a serious revamp with Vista, with Windows 7 drawing benefit from product maturation.


Explorer View does not work when connection goes through a forward proxy asking for authentication

When browsing a SharePoint site through a forward proxy (or simply web proxy) server requiring authentication, everything works fine, but when switching to the Explorer View or simply trying to open an MS Office document, either you directly get an “Access Denied” message or you get prompted multiple times for authentication (pop-up windows).

This problem occurs because early Vista implementations of the WebDAV redirector (aka WebClient) used by the Explorer View did not correctly handle the HTTP response 407 (Proxy Authentication Required).

Solution: install this hot fix ( or install Vista Service Pack 2. Note: the hot fix requires SP1 to be present.


Explorer View does not automatically forward credentials if the site does not belong to Local Intranet zone

A tricky one: let’s say that a user is browsing a SharePoint site that belongs to the Trusted Sites security zone of Internet Explorer while the browser is configured to automatically forward credentials for that zone (a non-standard config). Although it will work fine with the browser, it will miserably fail with the Explorer View, because on Vista, WebClient does not rely on the Internet Explorer security zone configuration and therefore does not automatically forward credentials under some circumstances.

Solution: Install this hot fix ( or install Vista Service Pack 2, and configure the registry as described in the MS KB article related to the hot fix (this step is mandatory).


Explorer View does not automatically forward credentials if IE’s proxy setting check box “Automatically detect settings” is cleared

Pretty much derived from the previous issue, this one will behave identically.

Solution: install this hot fix ( or install Vista Service Pack 2


Explorer View might merge cookies leading to authentication issues (or other issues as well)

Cookies are often used to maintain state and sometimes to enable some kind of authentication mechanism, like forms-based authentication.

In the case of authentication, products such as ISA/IAG/TMG may use so-called “persistent” cookies to allow applications to share authentication. This is typical when you want to seamlessly switch from a browser to an MS Office application when working with SharePoint.

Apparently, Vista’s implementation of the WebClient may accidentally merge cookies when passing them to the web server or gateway, making them unusable.

Solution: install this Internet Explorer Cumulative Security Update (, which also includes functional updates solving that problem…

Credits go to Pascal B and Nicolas S both MSFT for this one. Thanks guys!



IIS: URL Rewriter 2.0 is finally out

The best (according to me of course ;)) and free URL rewriting engine for IIS is now available in version 2.0, including outbound rewriting capabilities.

Get it here:

Cool tutorial:

Configuration Reference:



Leave a comment

IIS: On IIS6 a process serving application pool ‘MyAppPool’ terminated unexpectedly. The process id was '1234'. The process exit code was '0xffffffff'

Suddenly, some apparently well-performing IIS servers recently started reporting this error regularly. Some of them were also running SharePoint or OWA; all of them were configured to use Integrated Windows Authentication (IWA) as the authentication mechanism.

The problem with IIS worker process crashes is that they can have so many explanations, depending on the code executed, that you can easily waste a week until you find a reasonable one. In this case, all servers were affected, regardless of the applications they run. So my first idea was “they might be under attack”. But that was not the case: performance counters related to the worker process did not give any sign of that, and this was confirmed by the IIS logs. Next “usual suspect”: a patch recently installed. Bingo, that was it. Here are the details:

  • The Security Update implementing “Extended Protection” for authentication in IIS (KB973917) was just deployed on all servers
  • All impacted servers are running Windows Server 2003 Service Pack 2
  • One or multiple application served by that application pool/worker process have IWA enabled
  • After intensive file version analysis, it appeared that numerous IIS-related files (EXEs, DLLs…) had a version prior to SP2

Due to the inconsistency of IIS files in combination with that extra hot fix, the worker process keeps crashing –> root cause found!

Now how to fix it:

  1. Perform an inventory of currently installed post-SP2 fixes. I personally do it in a very straightforward way using psinfo, but I am sure you’ll find plenty of methods to do it the way you like
  2. Reinstall the Service Pack 2
  3. Redeploy post-SP2 hot fixes, see step 1
  4. Check installed IIS File versions
  5. If file versions are OK, Install KB973917

Additional information:

Note: Make sure you pay attention to the process exit code, which is always 0xffffffff in this scenario. If you see another code, it might of course have another cause.


Leave a comment

Windows: Disabling PAC Validation II – We Won't Get Fooled Again

I did not expect to receive so much feedback by mail regarding this (not so fascinating) topic, not to mention referring sites and so on… This brought the motivation to loop the loop by testing Windows Server 2008 (SP2) as well as 2008 R2 in-depth in order to cover the whole stuff.

So in summary, when will PAC signature verification finally occur?

The table hereunder summarizes the possible scenarios:

Target Application or Service | Server 2003 pre-SP2 | Server 2003 SP2 and above with extra registry configuration | Server 2008, Server 2008 R2
--- | --- | --- | ---
File & Print Sharing | NO Validation | NO Validation | NO Validation
Exchange Server | Validation | NO Validation | NO Validation
SQL Server | Validation | NO Validation | NO Validation
IIS with application pool identity set to Local System or Network Service | Validation | NO Validation | NO Validation
IIS with application pool identity set to a domain account | Validation | Validation | Validation

So in short, the only difference between Server 2003 and 2008/2008 R2 is that from 2008 on, you do not need to modify the registry anymore, since the default value is inverted.

Once again, the important point here is: if you configure Kerberos on an IIS farm (SharePoint or “simple” ASP.Net), PAC validation will ALWAYS occur, regardless of what you do to prevent it, UNLESS the application is granted and makes use of the privilege “Act as part of the operating system”.

If the target application is granted seTcb and makes use of it:

Granting the seTcb privilege alone is not sufficient, because it is disabled by default until the application effectively requests it. But why would it need it? This privilege might be needed by the server application for various reasons; 2 common usages are described in the sections below.

Protocol transition

Protocol transition is the ability for a server application to delegate user credentials to a back-end service using Kerberos, while they were not initially provided in that form by the client.

In clear terms, this means that a user may be authenticated by a service using non-Kerberos protocols such as Basic, NTLM or Digest, and this service, making use of that feature, will transform the credentials in order to propagate them to another server. Example: a user authenticates against SharePoint using NTLM and wants to use Reporting Services running on a 2nd server; the SharePoint server will perform the necessary transition to push (aka “delegate”) the user’s credentials to the SRS server using Kerberos.

IIS MVP Ken Schaefer gives an excellent overview on his blog: IIS and Kerberos Part 5 – Protocol Transition, Constrained Delegation, S4U2S and S4U2P.

Services For User

S4U extensions are tightly linked to protocol transition. In short, they allow, under certain conditions, an application to perform a logon on behalf of a user without knowing his/her password.

This feature is, for example, used in IAM/SSO products such as IBM TAM/WebSEAL or CA SiteMinder.

For both technologies, since the user does not initially authenticate using Kerberos, there is no PAC to validate.

OK but finally, why is disabling PAC validation so important?

Well, I won’t say it is “so important”. It might help improve performance under some circumstances.

Since, in short, the PAC is verified by the server application when the client first presents its service ticket (TGS), the verification does not occur at every request as long as that ticket remains valid (note: there are some exceptions to this rule). BUT in some cases, this initial verification can take some time, because 1) the user’s domain controller is far (in terms of network hops, latency, bandwidth…) from the server’s domain controller or 2) the user’s domain controller is too busy. This could therefore give the wrong impression that client-to-server authentication is slow, while you expected a big boost by switching to Kerberos.

Additional Resources


Leave a comment

SharePoint: Ways to Lock-down SharePoint Designer Server-Side

Beyond the discussion over the pros and cons of SharePoint Designer, you may sometimes have to prevent its usage in order to comply with the policies or governance in place in your organization. Although you can block it from the client side (using software restriction policies or Office ADM files), the most efficient measures (talking about enforcement) remain on the server side.

Disable “Remote Interfaces” at Web Application Level

From the SharePoint Central Administration Web Site, you can modify the “Permissions for Web Application” and uncheck “Remote Interfaces”.

  • Pro’s: Simple, no code, config only
  • Con’s: will not only block SharePoint Designer but ALL “rich” protocols: WebDAV, SOAP, FPRPC… No more Explorer View, RSS feeds, working from Office or “Edit in Office” menu options…

Lock Site Definition in ONET.XML

Modifying the configuration of each site definition in ONET.XML (adding the attribute DisableWebDesignFeatures=”wdfopensite” to each Project tag) will prevent customization using SP Designer.

  • Pro’s: Offers granularity per site definition
  • Con’s: Manual editing of ONET.XML by default… Up to you to package it properly otherwise.

Manage Permission at Site-Level

Add and Customize Pages: removing this permission will prevent editing of pages that are not stored as list items (default.aspx and so on)

Browse Directories: disabling this option will simply prevent SPD from opening a site… But it will also break other WebDAV clients!

Manage Lists: disabling this permission will prevent some kinds of modification from SPD but will not prevent its global usage. Moreover, it will also break simple browser-based advanced list editing…

  • Pro’s: Offers granularity per site.
  • Con’s: does not scale for large farms with lots of sites. Might break other functionalities or might not totally lock down SPD usage… Better be prepared with a bullet-proof functional test matrix 😉

Using ISAPI Filter and User Agent String: UrlScan3.1

Since version 3.0, the UrlScan tool comes with rich pattern-based request-blocking options. To reject requests originating from SharePoint Designer (which identifies itself as FrontPage), add the following elements to the configuration file (UrlScan.ini):


; UrlScan 3.x rule syntax; the rule name "BlockSPD" is arbitrary
[Options]
RuleList=BlockSPD

[BlockSPD]
ScanHeaders=User-Agent
DenyDataSection=Agent Strings

[Agent Strings]
MS FrontPage 12.0

  • Pro’s: Easy to deploy and to configure. Excellent performances. Participates to the overall effort to secure the platform. Includes logging (own and IIS W3C files). Relatively granular deployment possible (per web application)
  • Con’s: Remains at ISAPI level. No rich authorization options (based on user identity, membership…). Not 100% reliable: techies can find easy ways to tweak their User-Agent string

Using ISAPI Filter and User Agent String: Make your own

“Simplistic” example below, ready to be compiled. Do not forget to choose the appropriate platform (32 or 64-bit) depending on your destination server.

#include <windows.h>
#include <httpfilt.h>
#include <string.h>

BOOL WINAPI GetFilterVersion(HTTP_FILTER_VERSION *pVer)
{
    pVer->dwFilterVersion = HTTP_FILTER_REVISION;
    pVer->dwFlags = SF_NOTIFY_ORDER_DEFAULT | SF_NOTIFY_PREPROC_HEADERS;
    strcpy(pVer->lpszFilterDesc, "SharePoint Designer Blocking ISAPI, Version 1.0");
    return TRUE;
}

DWORD WINAPI HttpFilterProc(HTTP_FILTER_CONTEXT *pfc, DWORD NotificationType, VOID *pvData)
{
    char buffer[256];
    DWORD buffSize = sizeof(buffer);

    switch (NotificationType) {
    case SF_NOTIFY_PREPROC_HEADERS: {
        HTTP_FILTER_PREPROC_HEADERS *p = (HTTP_FILTER_PREPROC_HEADERS *)pvData;
        // Block the request when the client identifies itself as SharePoint Designer
        if (p->GetHeader(pfc, "User-Agent:", buffer, &buffSize)
            && strstr(buffer, "MS FrontPage 12.0") != NULL) {
            pfc->ServerSupportFunction(pfc, SF_REQ_SEND_RESPONSE_HEADER,
                (PVOID)"403 Forbidden",
                (ULONG_PTR)"Content-Length: 0\r\nContent-Type: text/html\r\n\r\n", 0);
            return SF_STATUS_REQ_FINISHED;
        }
        break;
    }
    }
    return SF_STATUS_REQ_NEXT_NOTIFICATION;
}

  • Pro’s: Easy to deploy. Excellent performances
  • Con’s: Requires code and appropriate compilation. Limited functionalities; harder to code compared to HTTP modules/handlers. No logging, totally blind operation. Not 100% reliable: techies can find easy ways to tweak their User-Agent string

Using HTTP Module and User Agent String: Make your own too

HTTP modules are roughly the .Net translation of ISAPI, with tons of extra capabilities. “Simple” example below.

using System;
using System.Web;

namespace BlockSPD
{
    public class BlockSPDHttpModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(BeginRequest);
        }

        private void BeginRequest(object sender, EventArgs e)
        {
            HttpContext currentContext = HttpContext.Current;

            // Block the request when the client identifies itself as SharePoint Designer
            if (currentContext.Request.UserAgent != null &&
                currentContext.Request.UserAgent.Contains("MS FrontPage 12.0"))
            {
                currentContext.Response.StatusCode = 403;
                currentContext.Response.StatusDescription = "Forbidden";
                currentContext.Response.End();
            }
        }

        public void Dispose()
        {
        }
    }
}

  • Pro’s: Easy to develop and relatively easy to deploy. High flexibility for extra conditions: user’s identity, group membership, URL accessed… with minimal effort compared to ISAPI
  • Con’s: Requires code. Implies more processing overhead than ISAPI. No logging, totally blind operation. Not 100% reliable: techies can find easy ways to tweak their User-Agent string

Using HTTP Module and User Agent String: Cool CodePlex projects

Don’t want to waste your time playing around with your own modules/handlers? Take a look at these projects from CodePlex:

More options using ISAPI, Modules, handlers or URLScan…

The technologies described above can also be used to reject requests that can be considered “typical” for SPD:

– Rejecting requests whose URL ends with “/_vti_bin/_vti_aut/author.dll”

– Rejecting verbs or requests that are typical of the FrontPage RPC protocol

In both cases, you might also block valid requests coming from Office applications such as Word or Excel, though…
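For the author.dll case, UrlScan can do the rejection with a single configuration section; a sketch below (keeping in mind the warning above about breaking legitimate Office clients):

```
[DenyUrlSequences]
/_vti_bin/_vti_aut/author.dll
```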

Additional Resources


Windows: Disabling PAC Validation Brings More than meets the eyes

A lot of bloggers have published Kerberos authentication “how-to’s” on the Internet, many of them focusing on IIS, ASP.Net and subsequently SharePoint. Some report a performance boost over NTLM; others describe how to disable “PAC validation” to increase that performance gain even more.

Recently, I was asked to evaluate that configuration, ensure it works as expected and measure the effective gain from an end-user perspective. This post is a quick summary of the findings not subject to NDA.

What is a PAC and what is PAC Validation (actually PAC Signature Verification)?

PAC stands for “Privilege Attribute Certificate”. It is a data structure contained in some Kerberos requests. It actually contains authorization-oriented information such as the group membership of an authenticated user, his/her SID (and SID history), as well as other information such as user flags or the time when a forced logoff should potentially happen (most are things you usually configure on the “Account” and “Member Of” tabs of the AD Users and Computers console).

Since the original Kerberos protocol did not cover the “authorization” part, while Windows had to maintain compatibility when switching from NTLM to Kerberos, MS had no other choice than implementing a proprietary extension named the PAC. Since the PAC travels over the network when users authenticate to servers (and services actually), it may be altered by malicious users able to decrypt and modify it on the fly. The purpose of PAC signature verification is to make sure the PAC is unaltered when it is used by the server/service for granting access.

The verification/validation process takes place between a server and a domain controller of the domain the server belongs to. The server sends, over RPC (therefore NOT using the standard Kerberos protocol), a request to a domain controller of its own domain, specifying that it wants to verify the PAC signature and, of course, including that signature (not the PAC itself). If the PAC whose signature is to be validated does not belong to the server’s domain, the domain controller contacted for that purpose will follow the trust relationships in order to finally locate a domain controller for the user’s domain and send the verification request to it. PAC verification is therefore a security feature implemented to prevent malicious alteration of users’ privileges.

Some people indicate that the verification process brings such an overhead to authentication that Kerberos finally performs worse, making end-users perceive poor performance, and therefore behaves like NTLM or Basic (in terms of performance, not security of course).

How to disable it (when you can)

There are 2 main scenarios in which validation will not occur, as long as specific conditions are met:

  • The server must run at least Windows Server 2003 Service Pack 2, the registry must include the setting “ValidateKdcPacSignature” set to “1” (the default on Server 2008), and the server application must be a Windows system service (its primary security token actually includes the “SERVICE” SID)
  • The server application’s identity is granted the seTcb privilege (aka Act as part of the operating system) and makes effective use of it.
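For the first scenario, the registry side could be sketched as follows on Server 2003 SP2 (value name and path as per the text above; on Server 2008 this is already the default):

```
:: Skip PAC signature verification for services (Server 2003 SP2 and later)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v ValidateKdcPacSignature /t REG_DWORD /d 1
```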

Witnessing the PAC Signature Verification Process

From a technical (read “measurable”) perspective, there are multiple ways to witness the PAC verification process:

1. Use a network traffic analysis tool such as MS Netmon or Wireshark and look for RPC communications between the server and the DC of its own domain, right after the user attempted to authenticate against that server. Captured using Netmon 3.3, it would look like this:

From the server to the DC: NetLogonr:NetrLogonSamLogonEx Request, NLRNetrLogonSamLogonEx, Encrypted Data

Reply from the DC: NetLogonr:NetrLogonSamLogonEx Response

2. Enable Netlogon logging and look for entries such as hereunder:

07/31 15:56:23 [LOGON] SamLogon: Generic logon of MYDOMAIN.LOCAL(null) from (null) Package:Kerberos Returns 0x0

3. Enable Kerberos tracing/LSASS logging and look for entries such as:

396.500> Kerb-Warn: KerbVerifyPacSignature contacting domain MYDOMAIN.LOCAL for user MYUSER

Some noticeable behaviors…

IIS application pools do not run as a “service”; they usually log on as “BATCH”, BUT their process token will include the “SERVICE” SID if the identity used to run them is “LocalSystem” or “NetworkService”. Since you cannot use these identities when multiple IIS servers are used in a cluster or a “farm” (Kerberos then requires a domain account), there is very little chance (read: no chance) for the PAC validation to be disabled.

Incidentally, this gave me the opportunity to witness that when you schedule a batch job under the context of the “NetworkService” account, it will actually get the “SERVICE” SID too!

Although online documentation reports that verification does not occur if the server’s process token holds the seTcb right, this did not seem to be sufficient: the application must explicitly make use of it. Simply granting the right will not prevent PAC validation. This makes sense if you look at the reasons why seTcb is often granted: it allows making use of the S4U Kerberos extensions which enable, for example, the ability to perform a user logon without having to provide any password (!). These extensions are often used in single sign-on solutions or packages and allow easy transitioning of identities. Since, in that case, a new token is “regenerated”, there is no use in verifying a PAC signature.

Generally speaking, granting the seTcb right is something to be cautiously evaluated before implementation, because it allows its holder to perform OS-level operations such as, for example, logging on as a given domain user without knowing his/her password.

(Challengeable) Conclusions

  • Effectively disabling PAC signature validation is a no-brainer when the server application is running as a Windows system service (MS Exchange Store, File & Print Sharing, MS SQL Server…). IIS application pools are NOT Windows system services; they require special care and some scenarios will not allow preventing PAC signature verification
  • Granting the seTcb privilege (aka Act as part of the operating system) is not sufficient per se to prevent PAC validation. The server application must effectively make use of the privilege by, for example, using MS proprietary Kerberos extensions like Services for User (S4U)
  • Disabling PAC validation on a SharePoint farm (IIS application pool running with a domain identity instead of Network Service) in order to prevent round trips to domain controllers when a user authenticates against a WFE will not work, unlike what is stated by many bloggers!
  • Finally, unless the authentication rate is extremely high and/or the server must authenticate users from a “remote” domain over high-latency links, disabling PAC verification does not significantly improve performance.

More Information (to be decoded and tested properly!)

I would like to thank the Windows security specialist Emmanuel Dreux ( for his help and input while deciphering Windows’ internal security behavior, as well as my colleague Olivier B for his nearly encyclopedic knowledge of Kerberos on Windows.

I am aware my conclusions may seem to conflict with 1) some official MS documentation and 2) the technical recommendations of highly influential bloggers. Test it carefully and feel free to report your findings here!


Windows: An existing connection was forcibly closed by the remote host

Two different projects complaining about the same issue: a nice troubleshooting challenge! One is SharePoint-based and the second is a, let’s say, “entertaining” .Net-based application. Both make use of SQL Server as the back-end data store and both complain about “existing connection forcibly closed” reported in their stack traces when they attempt to connect to SQL.

This happens when a client application is trying to re-use an existing TCP connection to a remote host while the remote host has closed it, making connection reuse impossible. There are actually multiple possible root causes, which do not seem to be mutually exclusive:

Limit set on the number of connections allowed by SQL Server on a given instance

For a given SQL instance, you can set the maximum number of connections that can be used by applications. Depending on the way your application is written, multiple connections might be used for a single transaction… Raise the limit or set it to unlimited as necessary.
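The limit lives in the “user connections” server option; a T-SQL sketch below (0 means unlimited, which is also the default):

```
-- 'user connections' is an advanced option, hence the first sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 = unlimited concurrent connections
EXEC sp_configure 'user connections', 0;
RECONFIGURE;
```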

The (infamous) Scalable Networking Pack

The Scalable Networking Pack is a set of improvements brought to the Windows networking stack. It is available as an add-on for Windows Server 2003 but is included from Service Pack 2 onwards, as well as in Windows Vista/2008.

This update greatly modifies the way Windows handles network connectivity at TCP level and might therefore provoke the error. In short, the following settings should be modified on the SQL Server (or on any machine acting as the server side of the connection):

In the registry, under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters:

  • EnableTCPChimney (REG_DWORD) set to 0 (disabled)
  • EnableRSS (REG_DWORD) set to 0 (disabled)
  • EnableTCPA (REG_DWORD) set to 0 (disabled)

Applying the change requires a reboot. [UPDATE] Some MS sources report that a reboot is not necessary for some of these settings, so I am softening my statement: applying the change *might* require a reboot.
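On Windows Server 2003, the three values can be set from a command prompt (a sketch; the value names are the ones from the list above):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v EnableTCPChimney /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v EnableRSS /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v EnableTCPA /t REG_DWORD /d 0 /f
```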

You’ll find a lot of trustworthy online resources recommending to disable the SNP…

On the other hand, recent NIC drivers may allow your system to work properly with these options enabled… Look at this page to get a list of SNP “partners”:

Faulty NIC, NIC driver or driver settings 

Some NICs include a TCP Offload Engine (TOE). When incorrectly configured, or when running an outdated driver, they will generate errors at TCP level.

In some cases, the TOE simply does not work, so you might also want to test with this function completely disabled. When editing your driver’s parameters, look for “Large Send Offload”, “Checksum Offload”…

Important to note: you might also want to check the link speed and duplex settings at the NIC level AND at the switch port level; a mismatch there might also cause the problem. Remember: they must be identical on BOTH sides.

Applying the change *might* require a reboot.
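On Windows Vista/Server 2008 and later, the offload features can be toggled with netsh instead of registry edits (a sketch; “chimney” corresponds to the TOE and “netdma” to EnableTCPA):

```
netsh int tcp show global
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global netdma=disabled
```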

Windows TCP/IP Stack Custom Configuration or Hardening

There are plenty of resources describing how to “harden” the Windows TCP/IP stack. Unfortunately, most of them simply show the “how to”, not its consequences, one of them being the performance decrease implied by hardening. You’ll also find those parameters under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

In our case, the parameter SynAttackProtect set to 1 instead of 0 (disabled) forces Windows to be more restrictive regarding incoming TCP connection requests as well as more aggressive with the (re)use of existing ones. If the parameter is enabled, the following additional parameters are also taken into account:

  • TcpMaxPortsExhausted: the number of connection requests the system must refuse before SYN attack protection starts
  • TcpMaxHalfOpen: the maximum number of connections allowed to sit in the half-open (SYN received, handshake not completed) state before protection starts
  • TcpMaxHalfOpenRetried: same as above BUT applicable to half-open connections for which at least one SYN/ACK retransmission has been sent

The parameters above are thresholds used by Windows to determine whether a TCP (SYN) attack is in progress. They should only be used if the server is put in a high-risk situation (DMZ or internet-facing) while no other security device (firewall…) is in place.

Note that before Windows Server 2003 SP2, SynAttackProtect defaults to 0; with SP2 it defaults to 1; then, with the latest versions of Windows, it returns to 0…
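To explicitly disable it on a server that is already protected by a firewall, set the value back to 0 and verify it (a sketch; applying it might require a reboot):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SynAttackProtect /t REG_DWORD /d 0 /f
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SynAttackProtect
```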

Automatic adjustment of the TCP window size (Vista/2008 and later only)

On the client side, Windows, starting from Vista, comes with a feature that dynamically adjusts the TCP window size depending upon the network (remote host) conditions. But I frankly doubt it can be the root cause; I just documented it for completeness.
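If you want to rule it out anyway, auto-tuning can be inspected and disabled with netsh (remember to set it back to normal once the test is done):

```
rem Show the current receive-window auto-tuning level
netsh interface tcp show global
rem Disable it for the test…
netsh interface tcp set global autotuninglevel=disabled
rem …and restore the default afterwards
netsh interface tcp set global autotuninglevel=normal
```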

If your application is affected by this problem, I hope you’ll find the culprit amongst one of those.

Any network device intercepting the traffic at TCP level

If there is any firewall in place, look at its logs; they might reveal that some connections are refused when the client attempts to re-use them.

More Information

Thanks to Tim B (MSFT) and Pascal B (MSFT) for the hints and guidance.

And cut!


SharePoint: SharePoint Designer, IIS Compression and Eugene Tooms

You all (should) know that since 1st April, SharePoint Designer is free. Since then, there has also been a lot of buzz around it, as well as interesting blog posts like the one from Mark Rackley, SharePoint Designer – A Definite Maybe, echoed by SharePoint mogul Joel Oleson. While both were very interesting reads, one statement from Mark caught my attention: using SharePoint Designer would break IIS compression. I don’t totally agree with this statement, but the truth is (not only out there but) more complex than it seems.

Before diving directly into the bits and bytes, some background information on how HTTP compression works.

Effective HTTP compression by a web server is the combination of two things: a client that sends a request specifying that it supports compression, and a server receiving that request which is configured to support compression and therefore returns a compressed response. That compressed response is conditioned by the type of compression supported by both the client and the server (there are currently two: Deflate or GZip) and by the server’s configuration: compress all file extensions, or only some of them. There are also, on IIS, two kinds of file extensions: static (simple files placed on the file system, like a CSS file, an image, an HTML page) and dynamic (dynamically generated content like ASP, ASP.Net pages, ASMX…). Additionally, IIS also includes more granular settings, like the minimum size for a file to be compressed (there is no gain in compressing very small files).

The important point here is: if the client does not explicitly say it supports compression, the server will never generate a compressed response, even if it is configured to do so!
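The negotiation can be sketched in a few lines of Python (a simplified model, not IIS code; the hypothetical respond() function stands in for the server):

```python
import gzip

def respond(body, request_headers):
    """Return (payload, response_headers); compress only when the
    client advertised gzip support via the Accept-Encoding header."""
    accept = request_headers.get("Accept-Encoding", "")
    if "gzip" in accept.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    # No Accept-Encoding header: the body goes out uncompressed,
    # even if the server is configured to compress.
    return body, {}

page = b"<html>" + b"lorem ipsum " * 200 + b"</html>"
compressed, h1 = respond(page, {"Accept-Encoding": "gzip, deflate"})  # browser
plain, h2 = respond(page, {})  # WebDAV-style client: no header, no compression
```

The browser-style request comes back gzip-encoded and much smaller; the header-less request gets the full, uncompressed body.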

While all modern browsers do include the magic header informing the server they support compression, other client types (I mean other usual SharePoint client types) do not.

As a comparison, look at a request from a browser versus one from the WebDAV Mini-Redirector (aka Explorer view in SharePoint, aka Web Folders, aka Web Client).

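Representative examples (reconstructed for illustration; exact headers vary with client versions). The browser request carries the Accept-Encoding header, while the WebDAV client’s request does not:

```
GET /sites/team/default.aspx HTTP/1.1
Host: sharepoint.contoso.com
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

PROPFIND /sites/team HTTP/1.1
Host: sharepoint.contoso.com
Depth: 1
User-Agent: Microsoft-WebDAV-MiniRedir/5.1.2600
```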
The conclusion is: most of the time, only standard browsers support compression. Clients like the Windows WebDAV client, Office applications (using FPRPC, an MS proprietary extension of WebDAV), Colligo Reader and even the SharePoint indexing process (aka the crawler) never include the necessary header in their requests.

The extra complexity brought by SharePoint: is the resource on the file system or in the content database?

With SharePoint, resources requested by clients may be physically stored in two different locations: the file system (usually under the 12 Hive) and the content database(s). When they reside in the content databases, SharePoint uses its internal API to fetch them and present them to the client. In this case, the questions are: 1) are those files compressible or not, and 2) how are they considered: as static or dynamic files (even if their extension is that of a static file, like html for example)?

Subsequently, a very good remark from MVP Eric Shupps on Mark Rackley’s blog was: when you customize a page using SharePoint Designer, it is no longer stored in the file system but in the CDB instead, and this makes it impossible to compress.

I am a bit surprised by this statement because, for me, compression is an IIS function, not a SharePoint function. Therefore, as long as IIS receives something to compress, it does not care and does the job. The truth (which is still out there) is different: none of us was right; the reality is more complex than that!

Getting the resources stored in content database compressed

For extensions considered “dynamic”, it is pretty simple: configure them as usual in IIS and it will work.

For extensions considered “static”, there is more fun. Let’s take the example of a PDF file stored in a CDB.

Out of the box, it will not get compressed. If you configure it as a “static” extension, it won’t compress either.

How to get it compressed then? First, you need to configure it as a “dynamic” extension (yeah, sounds strange, I know), in both compression providers (Deflate and GZip).
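On IIS6, this can be scripted against the metabase with adsutil.vbs (a sketch; the extension lists shown are illustrative, and beware that the “set” verb replaces the whole list, so include the dynamic extensions already configured):

```
cd /d %SystemDrive%\Inetpub\AdminScripts
cscript adsutil.vbs set W3SVC/Filters/Compression/gzip/HcScriptFileExtensions "asp" "aspx" "asmx" "pdf"
cscript adsutil.vbs set W3SVC/Filters/Compression/deflate/HcScriptFileExtensions "asp" "aspx" "asmx" "pdf"
```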

Then, you need to enable BLOB caching for the web application where the content is stored. To do this, edit the matching Web.config.
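The BlobCache element is usually already present in the web application’s Web.config; it must be switched on and its path pattern must match the extension. The location and extension list below are illustrative values:

```xml
<!-- enabled flipped to "true"; pdf added to the path pattern -->
<BlobCache location="C:\BlobCache" path="\.(gif|jpg|jpeg|png|css|js|pdf)$"
           maxSize="10" enabled="true" />
```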

Finally, perform an IISRESET and attempt to download the PDF file. The PDF will get BLOB-cached, then compressed.

Configuration Summary Table for IIS Compression to Function

Content Type                  | Content Location | Compress as Static Ext. | Compress as Dynamic Ext. | Configure ASP.Net Blob Cache
Static (html, css, doc, pdf…) | File system      | Yes                     | No                       | No
Dynamic (aspx…)               | File system      | No                      | Yes                      | No
Static (html, css, doc, pdf…) | Content database | No                      | Yes                      | Yes
Dynamic (aspx…)               | Content database | No                      | Yes                      | No

Note: there is no use in activating compression for the native MS Office 2007 formats (docx, xlsx, pptx etc.) since they are already natively compressed. See Introducing the Office (2007) Open XML File Formats.


Content edited with SharePoint Designer can be compressed as long as the appropriate configuration is in place, regardless of its location.

Note: All the tests above were made on a standard WSS3 running on IIS6, it may behave differently on IIS7.

But who (the F or H, depending on your level of politeness) is Eugene Tooms then?

Eugene Victor Tooms is a fictional character appearing in two episodes of the TV series “The X-Files”. Tooms is able to squeeze and elongate his body in ways that are impossible for a normal human. This makes him similar to IIS in some ways 🙂

And Cut!