Marc Lognoul's IT Infrastructure Blog

Cloudy with a Chance of On-Prem


Windows Server: Memory Pressure on Windows Server 2008 R2 File Server due to System Cache

Recent questions in the TechNet forums reminded me of an issue I faced when building large file servers running Windows Server 2008 R2. By large I mean serving a lot of files, from thousands to millions or more.

To improve performance, Windows Server makes intensive use of the file system cache. With files located on an NTFS-formatted partition, this also means caching the additional information associated with each file or folder, such as attributes, permissions and so on. Since with NTFS everything is a file, this information is stored in the form of metafiles, which are hidden from the user. For each file/folder, the matching metafile entry can have a memory footprint of at least 1 KB. Multiplied by the number of files cached, this adds up quickly on larger file servers. Thanks to Sysinternals' RAMMap utility, you can witness this behavior by looking at the Metafile row on the Use Counts tab:

[Screenshot: RAMMap, Use Counts tab, Metafile row]

There is very little you can do to work around this issue except adding more RAM to the server. Since the amount of memory used depends on the size of the files served and on their number (metadata), the amount of RAM needed can be estimated fairly easily, if only roughly.
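
As a back-of-the-envelope illustration, here is a small Python sketch using the ~1 KB-per-entry figure mentioned above (the real footprint varies with attributes, ACL size and name length, so treat the output as a rough lower bound):

    # estimate_metafile_cache.py -- rough NTFS metafile cache estimate
    def metafile_cache_bytes(file_count, bytes_per_entry=1024):
        """Approximate the metafile cache footprint for a given number of files."""
        return file_count * bytes_per_entry

    for count in (100_000, 1_000_000, 10_000_000):
        gib = metafile_cache_bytes(count) / 2**30
        print(f"{count:>12,} files -> ~{gib:.1f} GiB of metafile cache")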

While you can cap the amount of memory used by the file system cache (for instance with the SetSystemFileCacheSize Win32 API, or with Microsoft's Dynamic Cache Service, which wraps it), you can't prevent the metafiles from being cached.
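
For illustration, a minimal sketch calling that API from Python via ctypes; the 1 GiB/4 GiB limits are arbitrary examples, and the call only succeeds from an elevated process holding the SeIncreaseQuotaPrivilege (enabling that privilege is not shown here):

    # cap_system_cache.py -- cap the system file cache (sketch, example limits)
    import ctypes
    from ctypes import wintypes

    FILE_CACHE_MAX_HARD_ENABLE = 0x1  # enforce a hard maximum on the cache working set

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.SetSystemFileCacheSize.argtypes = (ctypes.c_size_t, ctypes.c_size_t, wintypes.DWORD)
    kernel32.SetSystemFileCacheSize.restype = wintypes.BOOL

    ok = kernel32.SetSystemFileCacheSize(
        1 * 2**30,  # minimum cache working set: 1 GiB (example value)
        4 * 2**30,  # maximum cache working set: 4 GiB (example value)
        FILE_CACHE_MAX_HARD_ENABLE,
    )
    if not ok:
        # typically fails when not elevated or SeIncreaseQuotaPrivilege is missing
        raise ctypes.WinError(ctypes.get_last_error())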

Finally, the safest way not to get caught by surprise by this behavior once your file server is running in production is to benchmark it beforehand using the File Server Capacity Tool (FSCT).

[UPDATE] While file servers are the most likely to be affected by this issue, web servers serving a large number of files or workstations used for large development projects might be affected too…

More Information


Windows Server: Comprehensive Guidance for Implementing an End-User Data Centralization Solution with Windows Server 2008 R2

Microsoft finally released a truly great document in late October: Implementing an End-User Data Centralization Solution. This document covers, in a comprehensive yet practical manner, everything related to storing, protecting and making accessible user data using Windows file sharing.

Unlike many white papers out there, this one can be directly used for planning, building, implementing and operating with no room for improvisation. It comes with real-life metrics, group policy settings as well as scripts and tools.

It is also worth mentioning that the document is Windows Server 2008 R2/Windows 7-ready and covers the use of FSCT.

Download

Marc


Windows Server: Tools for Windows Server 2008 R2 You can’t afford to miss: FSCT (Overview)

Since I do not like one-liner posts, I recently started writing an (ambitious?) series of posts covering enterprise file services on Windows Server 2008 R2, based on a presentation I gave to some customers of mine. The first “shot” is dedicated to a new Swiss-army-knife-like tool from Microsoft: FSCT, which stands for File Server Capacity Tool.

Until now, validating a Windows file server setup has always been a difficult task, since very few tools were available on the market to adequately simulate a realistic user load. In the past, tools like NetBench were considered references. Nowadays, if you're lucky, you can rely on your own scripting toolkit; if you're not, you may have to use Intel's NAS Performance Toolkit, which is not bad but far from being “enterprise-ready”. I've even seen some people trying to benchmark file services using SQLIO…

Architecturally speaking, FSCT is similar to other load-testing tools you would use for, say, a web application: it is made of a controller, a server (the one to be benchmarked) and one or multiple clients. Optionally, you can also include an AD domain controller in the picture in order to simulate AD-based authentication. FSCT is also compatible with workgroup environments, but in a degraded manner.

Note: Combining roles is a possibility, but as you would expect, it may negatively affect the tests. So if you're short on machines, combine wisely and keep the other roles off the “server” role.

On the other hand, you can conduct test campaigns from more than one client simultaneously; that is where the architectural choice pays off: to my knowledge, no other tool can do that today.

Plan and deploy your test environment carefully…

  • First, take time to read the whole paper included in the package carefully. Everything you need to know about the tool is in there
  • Practice before conducting the “real” tests: since the tool is command-line based and distributed among several systems, you may not get the result you expect on your first try (it is not point-and-click)
  • Make sure all components involved are healthy: server and clients (network configuration, drivers…), but also network components (switches and, if applicable, routers or access points…). A single improperly working component may severely affect the result of the tests (argh, hard-coded duplexing/link speed)
  • Copy FSCT to all systems involved (unless the server is not Windows-based) and build your own batch files to speed up the configuration, the execution and finally the cleanup (a minimal wrapper sketch follows this list)
  • Unless you want to reach the limits of an HP ProLiant DL58x, do not bump the client and user counts to the maximum; plan them realistically. And in any case, it is not advisable to conduct your tests in production, for the sake of the network in general as well as for the sanity of the AD you would populate users in…
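
By way of illustration, here is a minimal Python wrapper around fsct.exe. The prepare/run/cleanup phases exist in the tool, but the exact subcommands and switches below are placeholders, so check the guide shipped with the package for the real syntax:

    # run_fsct.py -- thin wrapper around fsct.exe (subcommands are placeholders)
    import subprocess

    FSCT = r"C:\FSCT\fsct.exe"  # hypothetical install location, adjust to yours

    def fsct(*args):
        """Run one fsct.exe command and fail loudly on a non-zero exit code."""
        print(">", FSCT, " ".join(args))
        subprocess.run([FSCT, *args], check=True)

    # Illustrative sequence only; real switches are in the FSCT documentation.
    fsct("prepare", "client")  # stage this client for the test
    fsct("run", "client")      # take part in the load phase
    fsct("cleanup", "client")  # tear everything down afterwards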

Don’t be afraid of command-line based execution

Okay, there is no UI, but who cares? Me? Yes, I've made my own little WinForms app to save me time (I'll post it in the coming weeks), but frankly, once your config files and batches are ready (it takes 2 hours max), the command line rules supreme over the mouse ;)

And before you ask: no, there is no PowerShell support. It sounds a little old-fashioned, I admit.

Plan multiple test scenarios, keeping in mind important factors such as:

  • CIFS/SMB version: depending on the client and server OS versions and configuration, the use of SMB2 will greatly improve performance under virtually any circumstances. If you plan to use pre-Vista client OSes, or have a mix of them, take this into account in your scenarios
  • SMB-related security settings, like signing and so on, also affect performance (see the registry sketch after this list)
  • Other security configuration, like TCP/IP stack hardening or IPsec
  • The presence of a file-based antivirus: it is wise to test with and without. You might be surprised by the performance loss an AV implies, particularly on heavily used servers of course. By the way, since most (if not all) AV products are implemented as file system filter drivers, do not simply disable them during the tests; uninstall them, for certainty
  • Take into account the other side activities, particularly server-side ones: backups (using shadow copies or third-party solutions), monitoring or other background processing tasks that may affect the tests (reporting and so on…)
  • So-called “performance-boost” tweaks like cache manager or NTFS tweaks, disk alignment, cluster sizes… All in all, they may greatly affect the results. By the way, I will dedicate another post to those tweaks and debunk some myths at the same time as well
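
Because those settings are easy to overlook, it pays to record them alongside each run. Here is a small Python sketch reading the SMB signing values from the registry (the key paths and value names are the documented LanmanServer/LanmanWorkstation ones):

    # check_smb_signing.py -- record SMB signing settings before a test run
    import winreg

    PATHS = {
        "server": r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
        "client": r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters",
    }

    for side, path in PATHS.items():
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            for value in ("EnableSecuritySignature", "RequireSecuritySignature"):
                try:
                    data, _ = winreg.QueryValueEx(key, value)
                except FileNotFoundError:
                    data = "<not set>"
                print(f"{side}: {value} = {data}")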

What do you get from the tests?

Besides generating the load itself, FSCT, assuming you're working in a standard setup, will provide detailed test results containing the following useful information retrieved from the server and client(s):

Data collected from the following performance counters:

  • \Processor(_Total)\% Processor Time
  • \PhysicalDisk(_Total)\Disk Write Bytes/sec
  • \PhysicalDisk(_Total)\Disk Read Bytes/sec
  • \Memory\Available MBytes
  • \Processor(_Total)\% Privileged Time
  • \Processor(_Total)\% User Time
  • \System\Context Switches/sec
  • \System\System Calls/sec
  • \PhysicalDisk(_Total)\Avg. Disk Queue Length
  • \TCPv4\Segments Retransmitted/sec
  • \PhysicalDisk(_Total)\Avg. Disk Bytes/Read
  • \PhysicalDisk(_Total)\Avg. Disk Bytes/Write
  • \PhysicalDisk(_Total)\Disk Reads/sec
  • \PhysicalDisk(_Total)\Disk Writes/sec
  • \PhysicalDisk(_Total)\Avg. Disk sec/Read
  • \PhysicalDisk(_Total)\Avg. Disk sec/Write

As well as the following metrics, correlated with the number of users simulated:

  • % Overload
  • Throughput
  • # Errors
  • % Errors
  • Duration in ms

The point at which % Overload rises above 0% indicates the threshold beyond which your file server infrastructure no longer scales for the given number of users. For example, if overload stays at 0% up to 1,500 simulated users and turns positive at 1,600, the setup tops out at roughly 1,500 users.

Special Cases

Using FSCT against DFS-N

Can FSCT work against DFS-N? Yes, it does. But it will not allow you to stress-test the DFS part of your design, since it has no knowledge of it and embeds no mechanism to capture DFS behavior during the load test. It may also require configuring the “server” part as if it were a non-Microsoft file server (see below for details). Finally, capturing performance counters on the server using FSCT itself might be an issue; the workaround is the good old “manual” capture using perfmon or logman.
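
For the manual capture, here is a sketch driving logman from Python on the server; the counter list is trimmed to a few entries from the list above, and the 5-second interval and output path are arbitrary choices (run it from an elevated prompt):

    # capture_counters.py -- manual counter capture via logman (run on the server)
    import subprocess

    counters = [
        r"\Processor(_Total)\% Processor Time",
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Read",
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Write",
        r"\Memory\Available MBytes",
    ]

    # create a counter collection sampling every 5 seconds into C:\PerfLogs\fsct
    subprocess.run(["logman", "create", "counter", "FSCT",
                    "-c", *counters, "-si", "5", "-o", r"C:\PerfLogs\fsct"],
                   check=True)
    subprocess.run(["logman", "start", "FSCT"], check=True)  # before the FSCT run
    input("FSCT run in progress -- press Enter when it is finished...")
    subprocess.run(["logman", "stop", "FSCT"], check=True)   # after the run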

Using FSCT against a failover cluster

Using FSCT against a failover cluster works perfectly, but with one limitation identical to the above: the tool will not be able to collect performance counters directly. Instead, you will have to plan for manual capture on the node designated as owner of the file share resource, or on both nodes if you wish to perform failovers during the tests.

Using FSCT against non-Microsoft File Server or NAS

Assuming you can live with the same limitations as stated above, FSCT will work like a charm against non-Microsoft file servers, including SOHO devices. Depending on the server or device's capabilities, you might be able to collect a reduced set of performance indicators, using SNMP polling for example. Of course, FSCT itself does not include any SNMP support, but there are plenty of tools available; during the tests I led, Cacti was very helpful.
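
If you want something lighter than a full Cacti setup, here is a minimal one-shot polling sketch using the pysnmp library; the OID is hrProcessorLoad from the standard HOST-RESOURCES-MIB, while the host name and community string are placeholders:

    # poll_nas_cpu.py -- one-shot SNMP poll of a NAS CPU load (pysnmp)
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2.1"  # hrProcessorLoad, processor #1

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                       # placeholder community
        UdpTransportTarget(("nas.example.com", 161)),  # placeholder host
        ContextData(),
        ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD)),
    ))

    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    for name, value in var_binds:
        print(f"{name} = {value}")  # CPU load percentage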

Ready, Go?

Well, not 100% ready yet. Since the only workload scenario available at RTM time is “Home Folders”, you might not be able to validate your setup realistically. But according to Microsoft, an SDK is on the way in order to allow the creation of custom workloads. In the meantime, you can already start playing with the tool itself and with customizing the “HomeFolder” profile configuration file, but you will not go far with that.

Additional Resources

In a coming post I will cover practical usage of FSCT.

Marc