Posts Tagged ‘raid 5’

RAID 1 Disadvantages

October 23rd, 2012 Comments off

I am in the process of building a new PC, and since my data is important, I am considering using RAID. I currently have an external HD which is backed up using Norton Ghost, but I would feel much more comfortable with real-time protection. I’ve read that with on-board RAID controllers the performance hit for RAID 5 is enormous, so I’m leaning towards RAID 1. I will be using WD 750 GB Black hard discs on a GIGABYTE GA-P55A-UD4P motherboard running Windows 7. Are the only disadvantages to RAID 1 the ‘loss’ of a hard disc and a slightly more complicated OS install, with the advantages being data protection and potentially slightly better read performance?


The big advantage of RAID 1 is “instant recovery” from HDD failure. That is, if a member of the array fails, the RAID system should immediately detect that situation and convert the operation to using only the remaining good drive, so that you can keep on functioning normally right away. It should also immediately send out a warning message so that you know of the problem and can plan its repair as soon as possible. The “downside” of this is that it can work so smoothly that the warning message goes unnoticed or is ignored by untrained users, and the tech guys are unaware a problem needs attention. That’s probably not your situation.

The RAID1 systems I have used have very good tools for fixing a drive failure. Basically they will pinpoint exactly which drive is faulty so you can replace it. Then they will allow you to control re-establishing the array by copying everything from the good drive to the replacement unit. There is no need to re-install an OS or restore data from a backup dataset. They can even do this while the system is in use, although my preference would be to do the re-establishment as a separate operation on a system that is NOT being used for anything at the time.

My wife runs a retail store with a POS software package on a dedicated computer. The data files for that operation are kept in one subdirectory and amount to about 60 to 70 MB of data that are updated with every sale. The files are generally ASCII character strings with some numerical data, so they compress well to .zip files. I set up the machine with a pair of drives in RAID1 as the only drive system. I installed WinZip Pro and set up a scheduled task that runs every day at 10 minutes before midnight (store is closed). It zips all the files in the specific subdirectory into a daily .zip file named with a date string and puts them in a designated subdirectory. This guards against data file corruption by providing end-of-day archived versions. Once a month (probably should be more often) I simply copy the end-of-month .zip file to a USB drive and take it home where I put it on my home computer – thus an off-site backup monthly. Then I delete all the daily .zips at the store, except for that month-end one. (So the store computer has on its RAID1 array an end-of-month .zip file for every month since its start, each containing a snapshot of all the data that changes over time.) Small important step: the POS computer normally runs 24/7, so when I do the monthly .zip file copy I also reboot the machine and watch the POST messages to be sure there are no errors in the RAID system that I have not heard about.
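The nightly job described above can be sketched in a few lines of Python instead of a zip utility’s scheduler (a rough sketch; the paths and the date-string naming are assumptions, not the store’s actual setup):

```python
import zipfile
from datetime import date
from pathlib import Path

# Hypothetical locations -- substitute the real POS data and archive folders.
DATA_DIR = Path("C:/POS/data")
ARCHIVE_DIR = Path("C:/POS/archive")

def nightly_backup(data_dir: Path = DATA_DIR, archive_dir: Path = ARCHIVE_DIR) -> Path:
    """Zip every file in data_dir into a date-stamped archive, e.g. 2012-10-23.zip."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / f"{date.today():%Y-%m-%d}.zip"
    with zipfile.ZipFile(target, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in data_dir.iterdir():
            if f.is_file():
                zf.write(f, arcname=f.name)  # ASCII data compresses well
    return target
```

Scheduled at 11:50 p.m. via Windows Task Scheduler, this produces the same kind of date-stamped end-of-day archives.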

We had a failure, but not of a hard drive. The mobo failed and had to be replaced. That can be a big problem with any RAID array based on mobo built-in “controllers” because there is no real universal RAID standard. That means often a RAID array written in one system cannot be read by another. In choosing the original mobo (by Abit) I deliberately chose one that had an nVidia chipset because their website claimed that they guarantee that ALL of their mobo chipset RAID systems use the same RAID algorithms and would continue to do so, so that any yet-to-come nVidia chipset could handle any older RAID disks made with their chips. When the mobo failed I selected a Gigabyte replacement mobo with a similar (but not identical) nVidia mobo chipset. Swapped everything, plugged it all together, and booted expecting maybe I’d have to do a Repair Install at least. It just booted and ran perfectly first time – no trouble at all! WOOHOO! I never had to reconfigure or re-install anything, other than updating the mobo device drivers from the Gigabyte CD.

So if you plan for possible changes to the RAID controller system as well as for changes to a hard drive that fails, a RAID1 system can give you some data security and continued operation through disk failure. You just have to recognize the need for real data backups and do them (AND VERIFY, as you say), probably more often than I do it.

Which RAID Mode Should You Choose?

July 10th, 2009 Comments off

1. Speed (RAID 0)

Set in high-performance mode (also called striped mode or RAID 0), the storage system gives you the power you need when you’re:

  • Designing huge graphics and need a lightning-fast Photoshop scratch space.
  • Recording large DV files while maintaining clean audio performance.
  • Editing DV or HD video and want a smooth work flow with no dropped frames.
  • Rendering complex 3D objects or special effects.
  • Performing disk-intensive database operations.
  • Driven to be the first geek on your block with a computer so fast it blows your socks off.

Why is RAID 0 so fast? It’s a bit complicated, but suffice it to say that two or more heads, or in this case, drives, are better than one. Picture multiple hoses filling a bucket at the same time or several men bailing a boat and you can understand why two drives striped are faster than one. Data is saved (striped) across both drives and accessed in parallel by all the drives, so you get higher data transfer rates on large data accesses and higher input/output rates on small data accesses.
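The round-robin idea can be sketched in a few lines of Python (a toy model for illustration; real controllers stripe at the block level in hardware):

```python
from itertools import zip_longest

CHUNK = 64 * 1024  # a common stripe (chunk) size

def stripe(data: bytes, n_drives: int, chunk: int = CHUNK) -> list[list[bytes]]:
    """Deal chunk-sized pieces of the data round-robin across the drives."""
    drives: list[list[bytes]] = [[] for _ in range(n_drives)]
    for i in range(0, len(data), chunk):
        drives[(i // chunk) % n_drives].append(data[i:i + chunk])
    return drives

def unstripe(drives: list[list[bytes]]) -> bytes:
    """Read the chunks back in round-robin order to reassemble the data."""
    pieces = []
    for row in zip_longest(*drives, fillvalue=b""):
        pieces.extend(row)
    return b"".join(pieces)
```

Because consecutive chunks land on different drives, a large read or write touches all drives at once, which is where the higher transfer and I/O rates come from.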


2. Data protection (RAID 1)

Set the system to data protection mode (also known as mirrored mode or RAID 1) and the capacity is divided in half. Half of the capacity is used to store your data and half is used for a duplicate copy.

Why do I want that kind of redundancy? It’s your data, your family pictures, your movie of baby’s first steps, your first novel. Is it important? You decide. If it is, then RAID mirroring is for you.

3. Data protection and speed (RAID 5)

In systems with three or more drives, we recommend that you set the system to RAID 5. This gives you the best of both worlds: fast performance by striping data across all drives, and data protection by dedicating the equivalent of one drive’s capacity to fault tolerance (in a four-drive system, a quarter of each drive, leaving three quarters of the system capacity available for data storage).
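The capacity arithmetic generalizes: whatever the drive count, one drive’s worth of space goes to parity. A quick sketch (assuming equal-sized drives):

```python
def raid5_usable(n_drives: int, drive_gb: float) -> float:
    """RAID 5 usable capacity: one drive's worth of space holds parity."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (n_drives - 1) * drive_gb
```

With four 750 GB drives, for example, 2,250 GB of the 3,000 GB purchased remains usable, i.e. three quarters.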

RAID 6 — Do you really like it?

June 5th, 2009 Comments off

For several years now RAID 5 has been one of the most popular RAID implementations. Just about every vendor that supplies storage to enterprise data centers offers it, and in many cases it has become, deservedly, a well-trusted tool. RAID 5 stripes parity and blocks of data across all disks in the RAID set. Even though users must devote about 20% of their disk space to the parity stripe, and even though read performance for large blocks may be somewhat diminished and writes may be slower due to the calculations associated with the parity data, few managers have questioned RAID 5’s usefulness.

There are, however, two major drawbacks associated with using RAID 5. First, while it offers good data protection because it stripes parity information across all the disks within the RAID set, it suffers from the fact that should a single disk within the RAID set fail for any reason, the entire array becomes vulnerable: lose a second disk before the first has been rebuilt and you lose all your data, irretrievably.
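Both halves of that drawback follow from how parity works: the parity block is the XOR of the data blocks, so any one missing block can be recomputed from the survivors, but a second loss leaves nothing to solve with. A minimal demonstration:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild(surviving: list[bytes]) -> bytes:
    """Recover the one missing block: XOR of all survivors (data + parity)."""
    return xor_blocks(surviving)

data = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on three drives
parity = xor_blocks(data)           # parity block on the fourth drive

# Drive 1 dies: its contents come back from the other three.
assert rebuild([data[0], data[2], parity]) == data[1]
```

Lose two blocks from the same stripe and the single XOR equation has two unknowns, which is precisely the “second disk” catastrophe described above.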

This leads directly to the second problem. Because RAID 5 offers no protection whatsoever once the first disk has died, IT managers using the technology face a classic dilemma when they lose a disk in their array. Do they take the system off-line, making the data unavailable to the processes that require it? Do they rebuild the faulty drive while the array is still online, imposing a painful performance hit on the processes that access it? Or do they take a chance, hold their breath, and leave the drive in production until things slow down during the third shift, when they can bring the system down and rebuild it without impacting too many users?

This choice, however, is not the problem, but the problem’s symptom.

The parity calculations for RAID 5 are sophisticated and time-consuming, and they must be completely redone when a disk is rebuilt. It is not the sophistication of all that math that drags out the process, though, but the fact that when the disk is rebuilt, parity calculations must be made for every block on the disk, whether or not those blocks actually contained data before the failure. In every sense, the disk is rebuilt from scratch.

An unfortunate and very dirty fact of life about RAID 5 is that if a RAID set contains, say, a billion sectors spread over the array, the demise of even a single sector means the whole array must be rebuilt. This wasn’t much of a problem when disks were a few gigabytes in size. Obviously, though, as disks get bigger, more blocks must be accounted for and more calculations will be required. Unfortunately, with present technology rebuild throughput is roughly constant irrespective of drive size, which means that rebuilds take longer as drives get larger. Already that problem is becoming acute. With half-terabyte disks increasingly common in the data center, and with terabyte-sized disks expected to be generally available this fall, the dilemma will only get worse.
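The scaling argument is simple arithmetic: if rebuild throughput stays roughly constant, rebuild time grows linearly with drive size. A quick illustration (the 50 MB/s rate is an assumed figure, not a measurement):

```python
def rebuild_hours(drive_gb: float, mb_per_s: float = 50.0) -> float:
    """Hours to reconstruct every block of a drive at a fixed rebuild rate."""
    return drive_gb * 1024 / mb_per_s / 3600

# At the same (assumed) rate, a 1 TB drive takes twice as long to
# rebuild as a 500 GB drive -- the window of vulnerability doubles.
```

The longer the rebuild runs, the longer the array sits one failure away from total loss, which is what makes the problem acute rather than merely inconvenient.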

The solution offered by most vendors is RAID 6.

The vendors would have you believe that RAID 6 is like RAID 5 on steroids: it eliminates RAID 5’s major drawback – the inability to survive a second disk failure – by providing a second parity stripe. Using steroids of course comes with its own set of problems.

RAID 6 gives us a second parity stripe. The purpose of doing all the extra math to support this dual parity is that the second parity stripe operates as a redundancy, or high-availability, calculation: even if the parity data on a failed disk is lost, the second parity stripe preserves the integrity of the RAID set. There can be no question that this works. Buyers should, however, question whether or not this added safety is worth the price.

Consider three issues. RAID 6 offers significant added protection, but let’s also understand how it does what it does, and what the consequences are. RAID 6’s parity calculations are entirely separate from the ones done for the RAID 5 stripe, and proceed simultaneously with them. The calculation does not protect the original parity stripe but rather creates a new one. It does nothing to protect against a first disk failure.

Because calculations for this RAID 6 parity stripe are more complicated than are those for RAID 5, the workload for the processor on the RAID controller is actually somewhat more than double. How much of a problem that turns out to be will depend on the site and performance demands of the application being supported. In some cases the performance hit will be something sites will live with, however grudgingly. In other cases, the tolerance for slower write operations will be a lot lower. Buyers must balance the increased protection against the penalty of decreased performance.

Issue two has to do with the nature of RAID 5 and RAID 6 failures.

The most frequent cause of a RAID 5 failure is that a second disk in the RAID set fails during reconstruction of a failed drive. Most typically this will be due to a media error, device error, or operator error during the reconstruction; should that happen, the entire reconstruction fails. With RAID 6, after the first device fails the array is running as RAID 5, deferring but not removing the problems associated with RAID 5. When it is time to do the rebuild, all the RAID 5 choices and rebuild penalties remain. While RAID 6 adds protection, it does nothing to alleviate the performance penalty imposed during those rebuilds.

Need a more concrete reason not to accept RAID 6 at face value as the panacea your vendor says it is?  Try this.

When writing a second parity stripe, we of course lose about the same amount of disk space as we did when writing the first (assuming the same number of disks in each RAID group). This means that when implementing RAID 6 on five-disk groups, we voluntarily reduce usable storage to about 60% of purchased capacity, as opposed to 80% with RAID 5. The result: in order to meet anticipated data growth, in a RAID 6 environment we must buy added hardware sooner.
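The 60%-versus-80% figures fall straight out of the overhead arithmetic for five-disk groups. A quick check:

```python
def usable_fraction(n_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity left for data after parity overhead."""
    return (n_disks - parity_disks) / n_disks

# Five-disk groups, as assumed in the text:
assert usable_fraction(5, 1) == 0.8  # RAID 5: 80% of purchased capacity
assert usable_fraction(5, 2) == 0.6  # RAID 6: 60%
```

Larger groups shrink the overhead (ten disks under RAID 6 still yield 80%), but at the cost of longer rebuilds, so the trade-off never disappears.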

This is the point at which many readers will sit back in their chairs and say to themselves, “So what? Disks are cheap!” And so they are — which naturally is one of the reasons storage administrators like them so much. But what if my reader is not a storage administrator? What if the reader is a data center manager, or an MIS director, or a CIO, or a CFO? In other words, what if my reader is as interested in operational expenditure (OPEX) as in capital expenditure (CAPEX)?

In this case, the story becomes significantly different. Nobody knows exactly what the relationship between CAPEX and OPEX is in IT, but a rule of thumb seems to be that when it comes to storage hardware the OPEX will be 4-8 times the cost of the equipment itself. As a result, everybody has an eye on the OPEX. And these days we all know that a significant part of operational expenditures derives from the line items associated with data center power and cooling.

Because of the increasing expense of electricity, many data centers are on notice that they will have to make do with their existing power budget. Want to add some new hardware? Fine, but make sure it is more efficient than whatever it replaces.

When it comes to storage, I’m quite sure that we will see a new metric take hold. In addition to existing metrics for throughput and dollars-per-gigabyte, watts-per-gigabyte is something on which buyers will place increased emphasis. That figure, and not the cost of the disk, will be a repetitive expense that managers will have to live with for the life of whatever hardware they buy.

If you’re thinking of adding RAID 6 to your data protection mix, consider the down-stream costs as well as the product costs.

Does RAID 6 cure some problems? Sure, but it also creates others, and there are alternatives worth considering. One possibility is a multilevel RAID combining RAID 1 (mirroring) and RAID 0 (striping), usually called RAID 10 or RAID 1+0. Another is the “non-traditional” RAID approach offered by vendors who build devices that protect data rather than disks. In such systems there is no need for the recalculations that RAID 5 and 6 require for the unused parts of the disk during a rebuild.

The Advantages and Disadvantages of RAID 5E

June 4th, 2009 Comments off

RAID 5E, with an E that stands for Enhanced, is a RAID 5 array with a hot spare drive that is actively used in the array operations. In a traditional RAID 5 configuration with a hot spare, the spare drive sits next to the array waiting for a drive to fail, at which point the hot spare is made available and the array rebuilds the data set with the new hardware. There are some advantages to RAID 5E’s operational method:

  • You know for a fact that the drive that would have been used as a hot spare is in working order.
  • There is an additional drive included in the array, thus further distributing the array’s I/O load. More spindles mean better performance in most cases, so RAID 5E can perform better than typical RAID 5.

There are a few disadvantages associated with RAID 5E as well:

  • There is not wide controller support for RAID 5E.
  • A hot spare drive cannot be shared between arrays.
  • Rebuilds can be slow.

The capacity of a RAID 5E array is exactly the same as the capacity of a RAID 5 array that contains a hot spare. In such a scenario, you would “lose” two disks’ worth of capacity — one disk’s worth for parity and another for the hot spare. Due to this fact, RAID 5E requires that you use a minimum of four drives, and up to eight or 16 drives can be supported in a single array, depending on the controller. The main difference between RAID 5 and RAID 5E is that the drive that would have been used as a hot spare in RAID 5 cannot be shared with another RAID 5 array; so that could affect the total amount of storage overhead if you have multiple RAID 5 arrays on your system. Figure A gives you a look at a RAID 5E array consisting of five drives. Take note that the “Empty” space in this figure is shown at the end of the array.
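In other words, the usable space is what n - 2 drives would hold. A quick sketch (drive sizes are hypothetical):

```python
def raid5e_usable(n_drives: int, drive_gb: float) -> float:
    """RAID 5E usable capacity: one drive's worth of space for parity and
    one for the distributed spare, so n - 2 drives' worth remains for data."""
    if not 4 <= n_drives <= 16:
        raise ValueError("RAID 5E typically supports 4 to 16 drives")
    return (n_drives - 2) * drive_gb

# Five drives of 750 GB each (sizes assumed for illustration):
assert raid5e_usable(5, 750) == 2250
```

The overhead matches RAID 5 plus a dedicated spare; the difference is only in where the spare capacity lives and that it cannot be shared with another array.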

A RAID 5E array with five drives


When a drive in a RAID 5E array fails, the data that was on the failed drive is rebuilt into the empty space at the end of the array, as shown in Figure B. When the failed drive is replaced, the array is once again expanded to return the array to the original state.



A RAID 5E array that has been rebuilt into the hot spare space

Categories: Storage News

RAID Recovery – Don’t Increase the Level of Difficulty

June 1st, 2009 Comments off

More and more enthusiast users are encountering destroyed RAID arrays. Generally, data recovery from such a RAID array is possible, but keep in mind that the effort increases disproportionately. First of all, the data has to be copied from the RAID drives onto a server, and the data set has to be put back together. The distribution of data into smaller blocks across two or more drives makes RAID 0 the worst possible type to recover. Increasing performance doesn’t necessarily do your data any good here! If one drive is completely defective, only small files that happened to fit entirely on one of the surviving RAID drives (despite the stripe set) can be recovered — at a 64 kB stripe size, that means files of 64 kB or smaller. RAID 5 offers parity data, which can be used for recovery as well.

RAID data configuration is almost always proprietary, since RAID manufacturers set up the internals of their arrays in different ways and do not disclose this information, so recovering from a RAID array failure requires years of experience. Where does one find the parity blocks of a RAID 5: before or after the payload? Will the arrangement of data and parity stay the same, or will it rotate? This knowledge is what you are paying for.
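Those questions can be made concrete. One common convention, the left-symmetric layout, rotates the parity block one disk to the left with every stripe; a recovery specialist has to know, or infer, which of several such conventions a given controller used. A sketch of that one layout (illustrative only, not any particular vendor’s format):

```python
def left_symmetric_layout(n_disks: int, n_stripes: int) -> list[list[str]]:
    """Return a stripe-by-stripe map: 'P' marks parity, 'D0', 'D1', ... data.
    In the left-symmetric layout, parity starts on the last disk and moves
    one disk to the left with every stripe, wrapping around."""
    layout = []
    for stripe in range(n_stripes):
        parity_disk = (n_disks - 1 - stripe) % n_disks
        row = ["" for _ in range(n_disks)]
        row[parity_disk] = "P"
        # Data blocks start just after the parity disk and wrap around.
        for offset in range(1, n_disks):
            row[(parity_disk + offset) % n_disks] = f"D{offset - 1}"
        layout.append(row)
    return layout
```

Guess the wrong rotation (or the wrong stripe size, or parity-first versus parity-last) and the reassembled image is garbage, which is why this is specialist work.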

Instead of accessing drives at the controller level, the file system level (most likely NTFS) is used, as logical drives provide the basis for working on a RAID image. This allows the recovery specialist to put the bits and bytes back together using special software. Checking known data formats is an important way to verify a complete data recovery. Take a JPEG file, for example: will you be able to recognize the picture after recovery? Or will you be able to open Word.exe, which is found on almost every office system? The selected file should be as large as possible, so that it was distributed across all drives and you can know for sure that its recovery was successful.

Two dead hard drives in a RAID 5 are more likely to be restored than two dead standalone drives, since the RAID still provides parity data.