SOURCE: Bruce Boyers Marketing Services

November 11, 2008 19:27 ET

As Data Storage Increases, So Does Fragmentation

BURBANK, CA--(Marketwire - November 11, 2008) - It would only make sense: the more of something you have, the more of its inherent problems come along with it. If a farmer has worms in his corn, for example, the bigger the cornfield, the more worms he's going to have. He won't defeat the worms by planting an extra 5 acres; the pests will simply spread from the existing crop to the new one.

The same could be said of file fragmentation. Fragmentation exists, for example, on a 20 GB drive. But it isn't isolated to that drive; it's a function of the same NTFS file system that will be used to access a 100 GB drive or even a 1 TB drive. Files will be saved the same way, in a fragmented state, and fragmentation will quickly escalate the more the drive is used. Measures such as larger on-board caches will help, but they fall well short of defeating the problem -- in the end, fragmentation will slow access drastically and lead to reliability issues. And the larger the drive, the more fragmentation you will have.
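To see why, consider a rough toy model -- not NTFS itself, and not drawn from any product -- written here in Python. A volume is treated as a simple block map with a first-fit allocator; files are created and deleted at random, and each file records the extents it was given. As the volume fills and churns, the average number of fragments per file climbs, which is the escalation described above. Every number in the sketch is an assumption chosen purely for illustration.

    import random

    BLOCKS = 10_000            # assumed volume size, in blocks
    volume = [False] * BLOCKS  # False = free, True = in use
    files = {}                 # file id -> list of (start, length) extents
    next_id = 0

    def free_extents():
        """Yield (start, length) runs of free blocks."""
        start = None
        for i, used in enumerate(volume):
            if not used and start is None:
                start = i
            elif used and start is not None:
                yield start, i - start
                start = None
        if start is not None:
            yield start, BLOCKS - start

    def allocate(nblocks):
        """First-fit allocation; splits a request across free runs."""
        got, need = [], nblocks
        for start, length in list(free_extents()):
            take = min(need, length)
            for b in range(start, start + take):
                volume[b] = True
            got.append((start, take))
            need -= take
            if need == 0:
                return got
        # Not enough free space: roll back and report failure.
        for start, take in got:
            for b in range(start, start + take):
                volume[b] = False
        return None

    def delete(fid):
        """Free every block belonging to the file."""
        for start, length in files.pop(fid):
            for b in range(start, start + length):
                volume[b] = False

    random.seed(1)
    for step in range(5_000):
        if files and random.random() < 0.45:
            delete(random.choice(list(files)))       # random deletion
        else:
            extents = allocate(random.randint(1, 64))  # random new file
            if extents:
                next_id += 1
                files[next_id] = extents
        if step % 1_000 == 999:
            avg_frags = sum(len(e) for e in files.values()) / max(len(files), 1)
            print(f"step {step + 1}: volume {sum(volume) / BLOCKS:.0%} full, "
                  f"average {avg_frags:.2f} fragments per file")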

Today's mammoth-capacity disks present a further problem: defragmenting them. Going back to our 20 GB drive, the defragmenters of that era could handle it. As drives grew in size, the same defragmenter took longer and longer to get the job done, but it got there. At some point, though, defragmentation technology designed for those smaller drives is going to fail when pointed at a 1 TB drive. It really is a question of sheer volume -- a defragmentation engine built to deal with x amount of drive space and y number of files is going to stop working when those limits are far exceeded. The result is a defrag process that simply grinds on and on for days with no end in sight.
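As a purely illustrative piece of arithmetic -- the figures and growth rates below are assumptions, not measurements of any engine -- suppose an older engine's work grows roughly with the square of the number of fragmented files, while a scalable engine's work grows closer to n log n, and both are calibrated to clear an assumed 20 GB workload in ten minutes. Under those assumptions, the older engine's projected pass time balloons into days at 1 TB scale, which is exactly the "grinds on and on" behavior described above.

    import math

    BASE_FILES, BASE_MINUTES = 50_000, 10   # assumed 20 GB workload and pass time

    def quadratic(n):        # assumed growth law for the older engine
        return n * n

    def n_log_n(n):          # assumed growth law for a scalable engine
        return n * math.log2(n)

    def scaled_minutes(nfiles, growth):
        """Scale the assumed 20 GB pass time by the chosen growth law."""
        return BASE_MINUTES * growth(nfiles) / growth(BASE_FILES)

    for label, nfiles in [("20 GB", 50_000), ("100 GB", 250_000), ("1 TB", 2_500_000)]:
        old_hours = scaled_minutes(nfiles, quadratic) / 60
        new_hours = scaled_minutes(nfiles, n_log_n) / 60
        print(f"{label:>7}: older engine ~{old_hours:8.1f} h, "
              f"scalable engine ~{new_hours:5.1f} h")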

Fortunately, defragmentation technology has now been developed to address these issues. First, it uses a defragmentation engine actually adequate to the job, specifically designed to defragment large numbers of files on high-capacity disks in a timely manner. Second, it works completely automatically, in the background, using otherwise-idle resources. These large drives are commonly found in extremely busy server environments, where windows for scheduled defragmentation are few to nonexistent -- hence the only way to really tackle the problem is with defragmentation that runs consistently and requires no scheduling.
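A minimal sketch of the "otherwise-idle resources" idea -- again in Python, and not the internals of any particular product -- might look like the following. The third-party psutil library's system-wide CPU reading gates the work, and defragment_next_extent() is a hypothetical placeholder for one small unit of defragmentation work.

    import time
    import psutil

    IDLE_CPU_THRESHOLD = 20.0   # percent; assumed cutoff for "idle enough"

    def defragment_next_extent():
        """Hypothetical placeholder: move one fragmented extent, then return."""
        time.sleep(0.1)

    def background_defrag_loop():
        while True:
            # Sample system-wide CPU use over one second.
            load = psutil.cpu_percent(interval=1.0)
            if load < IDLE_CPU_THRESHOLD:
                defragment_next_extent()   # do a small slice of work
            else:
                time.sleep(5)              # back off while the server is busy

    if __name__ == "__main__":
        background_defrag_loop()

The point of the structure is that no administrator ever has to find a maintenance window: the loop simply yields whenever the server has real work to do.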

Such technology is the only way to ensure that high-capacity drives perform to their utmost and remain reliable. When deploying drives of this size, make sure the defragmentation technology behind them is up to the job.

Contact Information