Mostly applicable to Windows 7 and Server 2008 R2 and sometimes to earlier Windows versions…
Defragmentation vs. SSD Drive
Defragmenting an SSD is essentially useless due to the nature of the technology: there are no mechanical or moving parts and the storage geometry is totally different, so making blocks contiguous does not make much sense. On the other hand, recent SSD drives (as well as Windows) introduced support for the TRIM command, which “proactively” cleans up the clutter left by deleted files in order to reduce the effort required at actual write time. Have a look at my blog post on the subject for more details: Checklist for Getting the TRIM Command to Work with SSD drive on Windows 7.
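To verify whether TRIM is active on a given system, the `fsutil` utility can be queried from an elevated command prompt; a minimal sketch (Windows 7 syntax):

```shell
:: Query the TRIM setting: DisableDeleteNotify = 0 means TRIM commands are issued.
fsutil behavior query DisableDeleteNotify

:: If it reports 1 (TRIM disabled), it can be re-enabled with:
fsutil behavior set DisableDeleteNotify 0
```

Note that this only tells you whether Windows sends TRIM; the drive and its firmware must support it too.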
Some claim out loud that defragmenting an SSD drive is dangerous and may shorten its life: that’s not totally untrue. Depending upon the vendor and the underlying SSD technology, the life-cycle of an SSD, which is mostly determined by the number of write cycles, will vary. Since defragmentation is likely to provoke a high number of writes, it may shorten the SSD’s life. Of course, it also highly depends on the number of years you wish to use the drive: you may not always notice SSD performance degradation due to excessive writes. By the way, if Windows detects an SSD drive at set-up time, it will automatically disable scheduled defragmentation for it.
If you find the time, have a look at the WD whitepaper listed under the More Information section: it contains useful guidance on estimating SSD life expectancy.
Defragmentation vs. SAN-based Storage
There are a lot of theories and claims about whether or not to defragment SAN-based storage. In my opinion, it depends on the SAN solution actually implemented:
- Some SAN solutions simply behave like any other DAS-based storage, and will therefore benefit from defragmentation like any other disk
- Other SAN solutions include built-in storage defragmentation (as well as de-duplication) technology, so there is no need to act at the Windows level
- Finally, some SAN solutions are fragmented by design; attempting to work against that design is simply counterproductive
You get the point: the only way to know what to do is to ask your SAN vendor (and hope you get smart answers in return).
Defragmentation Efficiency vs. Impact on Usability during Defrag
Whether defragmentation is initiated through the Windows Scheduled Tasks or manually, it runs by default at low priority, thereby reducing the impact on usability. Obviously, the process then takes longer. In all cases, it will impact usability much less than, for example, a full drive scan by antivirus software.
If you wish to force defragmentation at normal (rather than the default low) priority on a Windows 7 system, use the command-line defrag tool with the /H parameter.
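For example, assuming the system volume is C:, an analysis-only run versus a forced normal-priority pass might look like this:

```shell
:: Analyze only: report the fragmentation level without defragmenting.
defrag C: /A /U /V

:: Defragment at normal priority (/H) instead of the default low priority,
:: printing progress (/U) and verbose statistics (/V).
defrag C: /H /U /V
```

Run `defrag /?` for the full list of switches available on your Windows version.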
Defragmentation vs. Windows Page File
Like any other file, and due to its usually large size, the page file may also get fragmented over time. The problem with the page file is that it is difficult to touch, since the OS requires it in order to work. There are various workarounds, but they all have pros and cons.
- If you run a Windows version prior to Windows Vista: you’re lucky, since you can still use Sysinternals’ mighty PageDefrag tool, which defragments the page files as well as other system-controlled files at boot time
- If PageDefrag is not an option and you don’t have a second volume available: reduce the size of the page file to a minimum (requires a reboot), defragment the whole system volume in order to reclaim contiguous free disk space, then set the page file back to its original size and reboot again. There is no guarantee of drastically reducing the number of fragments, since the whole file is not completely relocated
- If you have a second partition at hand: reconfigure the page file on that partition and remove the one located on the system volume (requires a reboot). Once the page file is located on the second volume, defragment the system volume to reclaim contiguous free space, then reconfigure the page file to use the system volume exclusively (requires a reboot, as you might guess…). In this case, the recreated page file can take advantage of the contiguous free space you just reclaimed
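On Windows 7, the page file relocation described above can be scripted with `wmic` from an elevated prompt. A sketch, where the D: drive and the 4096 MB fixed size are purely illustrative values:

```shell
:: Manual page file sizing requires automatic management to be turned off first.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Create a fixed-size page file on the second volume (sizes are in MB).
wmic pagefileset create name="D:\pagefile.sys"
wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096

:: Remove the page file from the system volume, then reboot for it to take effect.
wmic pagefileset where name="C:\\pagefile.sys" delete
```

The same settings can of course be changed through the System Properties GUI if you prefer.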
Note: to fight page file fragmentation, and if you have a good idea of its ideal size (from experience and measurements), you can set up a logical volume dedicated to the page file and give the file an appropriate fixed size. Of course, this will not boost page file I/Os the way placing it on a dedicated physical disk would.
Note 2: Configuring Windows to clear the page file at shutdown will not help, since this feature (security, not performance) writes pages full of zeros and does not modify the file size at all, contrary to what many still think. All you will get by enabling it is a much slower shutdown (although this has improved compared to previous Windows versions).
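This feature is driven by a registry value; to check whether it is enabled on your system (1 = enabled, 0 = disabled):

```shell
:: Query the "clear page file at shutdown" policy value.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown
```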
Defragmentation vs. Disk Reliability
Some are afraid of defragmentation because it may break their file system integrity if a power outage occurs right in the middle of the defragmentation process. This is very unlikely to happen because: 1) when defragmenting, Windows takes care of copying blocks first and updating the pointers afterwards; 2) NTFS is a transactional file system with automatic recovery capabilities, dramatically reducing the likelihood of data loss during any type of write operation.
Others fear that performing a high amount of disk I/O (writes in particular) over a relatively short period may render their disk unusable by increasing the number of bad blocks: this is only true if the disk is already unreliable or already showing signs of age. In that case, the only actions you can take are: 1) back up everything ASAP, hoping all your data is still readable 2) replace the drive 3) reinstall/restore.
Logical vs. Physical Disks
Although it reorganizes blocks, defragmentation operates only up to the boundaries of the logical volume. Take this into account if you have multiple logical disks on the same physical disk.
Disk vs. File Fragmentation
While most of us focus on disk fragmentation, there are circumstances where file fragmentation matters most. For example, you may use Outlook with OST/PST files massively on the same computer, and your PST files keep getting bigger and bigger. It’s likely they are fragmented. To save time, you can use Sysinternals’ Contig command to defragment only selected files.
The NTFS Master File Table (MFT), like any other NTFS metadata file, may get fragmented. As with single-file defragmentation, Contig is your friend for such tasks:
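Typical Contig invocations might look like the following; the PST path is purely illustrative, and defragmenting NTFS metadata such as $Mft requires a recent Contig version:

```shell
:: Analyze the fragmentation of a single file without touching it (-a).
contig -a "%USERPROFILE%\Documents\Outlook Files\archive.pst"

:: Defragment that file in place, verbosely (-v).
contig -v "%USERPROFILE%\Documents\Outlook Files\archive.pst"

:: Recent Contig versions can also defragment NTFS metadata files such as the MFT.
contig -v C:\$Mft
```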
You can also proactively increase the space reserved for the MFT, preferably at disk initialization stage: About the Master File Table zone reservation in Windows Vista and Windows Server 2008.
- “I should never care about defragmenting since Windows does it for me”: Windows can only defragment files that are not in use by the system or by applications. Therefore, if you wish to efficiently defragment files – and not solely free space – you should keep a minimal set of applications running: ideally only the console or the command prompt.
- “With a disk large enough, I will never suffer from fragmentation”: this statement is true only if you consider the fragmentation of free space. For some files and under some circumstances, fragmentation will still occur. Of course, you can decide to defragment only those files instead of the whole disk
- “My database is configured for scheduled maintenance, so there is no need to defrag at disk level”: this confuses defragmenting at the database level (data structures) with defragmenting at the disk level (file system). In both cases, take the time to analyze fragmentation and, depending on the results, defragment at one or both levels.
- “Fragmentation only occurs on Microsoft Windows, not on %REPLACE_WITH_ANY_OTHER_OS_YOU_MAY_LIKE%”: all file systems are subject to fragmentation to some degree. It depends more on the type of workload than on the technology itself. One difference worth mentioning: NTFS has included a defragmentation API for years and years, while some others have only recently adopted one…
- “To defrag (or repair) disks faster, I just unplug them and plug them in as extra disks on another system, then run the process ‘offline’”: okay, why not. The advice here is to use strictly identical Windows versions (OS, service pack and patch levels) in order to avoid adverse effects caused by unequal Windows versions.
A Quick(er) yet Not-so-Dirty Whole Data Volume Defrag Method
A straightforward way to defragment a data volume is to: 1) copy all files to another volume 2) perform a quick format of the source volume 3) copy the files back to the original volume. This often takes less time than performing an API-based defrag. Note: do not forget to include permissions and other metadata as needed as part of the copy…
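A sketch of the three steps using robocopy; the drive letters D: and E: are examples only, and /COPYALL is what preserves security descriptors and other metadata:

```shell
:: 1) Mirror the data volume to a staging location, keeping ACLs and timestamps.
robocopy D:\ E:\staging /MIR /COPYALL /DCOPY:T /R:1 /W:1

:: 2) Quick-format the source volume (this destroys its contents!).
format D: /FS:NTFS /Q

:: 3) Copy everything back; files land in freshly contiguous free space.
robocopy E:\staging D:\ /MIR /COPYALL /DCOPY:T /R:1 /W:1
```

Double-check the staging copy completed without errors before formatting anything.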
Defragmentation is actually rescheduling IO’s at a more appropriate time 🙂
More Information
- MS MSDN Blogs: Engineering Windows 7 – Disk Defragmentation – Background and Engineering the Windows 7 Improvements
- MS TechNet Magazine: Tip: Defrag from the Command-Line for More Complete Control
- Western Digital White Paper: NAND Evolution and its Effects on Solid State Drive (SSD) Useable Life
- MS TechNet Sysinternals: PageDefrag Tool
- MS TechNet Sysinternals: Contig Tool
- MS Support KB Article: About the Master File Table zone reservation in Windows Vista and Windows Server 2008
- Intel SSD FAQ