Portable Backup Drives Certified to Hold 12TB

Highly Reliable Systems recently certified 4TB drives for use in their RAIDPac removable drives. Each RAIDPac consists of three SATA drives and a RAID controller, providing 12TB of removable storage in each bay of the RAIDFrame backup solution.

Highly Reliable Systems of Reno, NV announced that their RAIDFrame DAS (Direct Attached Storage) backup system will support Hitachi and Seagate 4TB hard disk drives once they ship (expected first quarter of 2012). In RAID 0 mode each RAIDPac holds 12TB of data, making it the world's largest removable backup cartridge. A RAIDPac consists of three drives and an integrated controller that travels with the backup cartridge. The system attaches directly to a file server or NAS with a single eSATA or USB 3.0 cable.

The company plans to release a specialized NAS with removable drive support, based on Ubuntu Linux, in the first quarter of 2012. The RAIDFrame comes in 1-bay, 2-bay, and 5-bay enclosures. The 2-bay systems (pictured) allow further redundancy by letting two RAIDPacs be mirrored in a configuration the company calls "RAID 51". Hardware automatic mirroring means that swapping in a new RAIDPac initiates another block-level copy of the primary backup. The RAIDFrame is designed for high capacity and rapid server recovery, and offers a way to supplement or replace cloud and/or tape methods.

RAIDPacs can be configured with RAID 0 for maximum speed or with RAID 5 to protect against single drive failures. Once the Seagate and Hitachi 4TB drives ship, they will increase RAIDPac capacities to 12TB/8TB (RAID 0/RAID 5). According to Tom Hoops, the company's CTO, no special drivers are needed, since each RAIDPac is seen by the host system as a single removable hard disk. Unlike tapes, a RAIDPac can be read directly in an emergency without the RAIDFrame enclosure. In addition to the standard self-centering connector designed for daily swaps in the RAIDFrame, each RAIDPac has standard SATA and USB 3.0 connectors on the back.

Redundancy is achieved using RAID 5 and/or by mirroring to another RAIDPac. By using multiple cartridges (just like tapes, with a new one each day in rotation), past full backups can be maintained over time. The entire line has been upgraded to allow faster RAID 5, faster mirroring, and software management. Backup and restore performance exceeds 400 gigabytes per hour using imaging software on modern servers.
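For context, that throughput figure implies full-backup windows on the order of many hours per cartridge. A back-of-the-envelope sketch (400 GB/hour is the figure quoted above; the capacities are the RAIDPac sizes mentioned in this article):

```python
# Rough full-backup windows implied by the quoted 400 GB/hour throughput.
throughput_gb_per_hour = 400

for capacity_tb in (6, 12):
    hours = capacity_tb * 1000 / throughput_gb_per_hour
    print(f"{capacity_tb} TB full backup: ~{hours:.0f} hours")
# 6 TB -> ~15 hours, 12 TB -> ~30 hours
```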

Highly Reliable Systems sells the product through resellers. 6TB RAIDPacs (containing three 2TB drives and a RAID controller) are $1208 MSRP.

For more information call 877-384-6838.


Data Backup Glossary (Letter R)

RAID
See redundant array of independent (or inexpensive) disks.

RAIN
See redundant array of independent nodes.

Raised floor
A type of flooring supported by a metal grid and typically used in data centers. Raised flooring can be removed in pieces so that cabling, wiring, and cooling systems can run under the floor space, which usually leaves enough room to crawl or even walk in.

Recovery
The recreation of a past operational state of an entire application or computing environment. Recovery is required after an application or computing environment has been destroyed or otherwise rendered unusable. It may include restoration of application data, if that data has been destroyed as well.

Recovery point objective
Recovery point objective (RPO) is the maximum acceptable time period prior to a failure or disaster during which changes to data may be lost as a consequence of recovery. Data changes preceding the failure or disaster by at least this time period are preserved by recovery. Zero is a valid value and is equivalent to a “zero data loss” requirement.

Recovery time objective
Recovery time objective (RTO) is the period of time after an outage in which the systems and data must be restored to the predetermined recovery point.
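A worked example ties the two objectives together. A minimal sketch, with purely illustrative dates and thresholds:

```python
# Toy RPO/RTO compliance check for a nightly-backup scenario.
# All names and numbers here are illustrative, not from any standard.
from datetime import datetime, timedelta

rpo = timedelta(hours=24)   # maximum tolerable window of lost changes
rto = timedelta(hours=4)    # maximum tolerable restore time

last_backup = datetime(2012, 1, 9, 2, 0)        # nightly backup at 02:00
failure = datetime(2012, 1, 9, 14, 0)           # disk fails at 14:00
service_restored = datetime(2012, 1, 9, 17, 0)  # systems back at 17:00

data_loss_window = failure - last_backup  # 12 hours of changes lost
downtime = service_restored - failure     # 3 hours offline

print("RPO met:", data_loss_window <= rpo)  # True: 12 h <= 24 h
print("RTO met:", downtime <= rto)          # True: 3 h <= 4 h
```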

Red Hat Global File System
Red Hat Global File System (GFS) is an open source cluster file system and volume manager that executes on Red Hat Enterprise Linux servers attached to a storage area network (SAN). It enables a cluster of Linux servers to share data in a common pool of storage to provide a consistent file system image across server nodes. Red Hat Global File System works on all major server and storage platforms supported by Red Hat.

Redundancy
The inclusion of extra components of a given type in a system (beyond those required by the system to carry out its function) for the purpose of enabling continued operation in the event of a component failure.

Redundant array of independent (or inexpensive) disks
Redundant array of independent (or inexpensive) disks (RAID) is a category of disk drives that employ two or more drives in combination for fault tolerance and performance. RAID disk drives are used frequently on servers but aren’t generally necessary for personal computers. RAID allows you to store the same data redundantly (in multiple places) in a balanced way to improve overall performance.

Redundant array of independent nodes
Redundant array of independent nodes (RAIN) is a data storage and protection system architecture. It uses an open architecture that combines standard computing and networking hardware with management software to create a system that is more distributed and scalable. RAIN is based on the idea of linking RAID nodes together into a larger storage mechanism. In a RAIN setup, there are multiple servers, each with disk drive and RAID functionality, all working together as a RAIN, or a parity or mirrored implementation. RAIN may also be called storage grid.

Remote offices/branch offices (ROBOs)
Refers to corporate offices remote from the main office, typically connected to it via a WAN or LAN. These offices often have one or more servers to provide branch users with file, print, and other services required to maintain the daily routine.

Replicate
(n.)  A copy of a collection of data.
(v.) The action of making a replicate as defined above.

Restore
To bring a desired data set back from the backup media.

Rotational latency
Also called rotational delay, the amount of time it takes for the desired sector of a disk (that is, the sector from which data is to be read or to which it is to be written) to rotate under the read-write heads of the disk drive. The average rotational latency for a disk is half the amount of time it takes for the disk to make one revolution. The term typically is applied to rotating storage devices, such as hard disk drives and floppy drives (and even older magnetic drum systems), but not to tape drives.
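Because the desired sector is, on average, half a revolution away, average rotational latency follows directly from spindle speed. A quick sketch (values computed from the definition, not vendor specifications):

```python
# Average rotational latency = half the time of one revolution.
def avg_rotational_latency_ms(rpm: int) -> float:
    ms_per_revolution = 60_000 / rpm  # 60,000 ms per minute
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms
```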

RPO
See recovery point objective.

RTO
See recovery time objective.

Run length limited
Run length limited (RLL) is an encoding scheme used to store data on newer PC hard disks. RLL produces fast data access times and increases a disk’s storage capacity over the older encoding scheme called MFM (modified frequency modulation).


Glossary of Western Digital Hard Disk Drive (Letter R)

radial path
A straight-line path from the center of a disk to the outer edge.

RAFF™
Rotary Acceleration Feed Forward. WD technology that maintains the highest possible data transfer performance in the high rotational vibration environments commonly found in servers and storage arrays.

RAID
Redundant array of independent disks. A grouping of hard drives in a single system to provide greater performance and data integrity.

RAID 0
RAID protocol in which data is striped across multiple hard drives, enabling the accelerated reading and recording of data by combining the work of two or more drives to increase performance. See also striping.

RAID 1
RAID protocol in which two copies of the data are instantaneously recorded – each on separate hard drives. RAID 1 ensures the protection of users’ data because in the event that one of the hard drives fails, the other hard drive(s) will continue to read and write data until the faulty hard drive is replaced and rebuilt to once again safely mirror the data. See also mirroring.
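The difference between the two layouts can be sketched in a few lines of code. This is a toy illustration of data placement only, not of how a controller actually addresses sectors; the chunk size and drive count are arbitrary:

```python
# RAID 0 deals chunks across drives; RAID 1 duplicates everything.
def stripe(data: bytes, drives: int, chunk: int = 4):
    """RAID 0: split data into chunks and deal them across drives."""
    layout = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        layout[(i // chunk) % drives] += data[i:i + chunk]
    return [bytes(d) for d in layout]

def mirror(data: bytes, drives: int):
    """RAID 1: every drive holds a full copy."""
    return [data for _ in range(drives)]

payload = b"ABCDEFGHIJKLMNOP"
print(stripe(payload, 2))  # [b'ABCDIJKL', b'EFGHMNOP']
print(mirror(payload, 2))  # two identical full copies
```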

RAID 5
For systems with three or more drives, RAID 5 offers fast performance by striping data across all drives, and data protection by dedicating the equivalent of one drive's capacity to distributed parity (a quarter of each drive in a four-drive array), leaving the remaining capacity available for data storage.

RAM
Random access memory. Memory that allows any storage location to be accessed randomly.

Ramp Load/Unload (LUL)
Ramp load parks the recording head off the media when the drive is idle and on spin up, maximizing available disk space and minimizing power usage, which results in lower heat and long-term drive reliability.

RE
RAID edition. A WD drive engineered to thrive in a high-intensity RAID system while still offering traditional desktop value.

read channel
The channel that performs data encoding and conversion that a drive requires to write computer generated information onto a magnetic medium and read back that information with a high degree of accuracy.

read verify
A data accuracy check in which the disk reads data back to the controller, which checks for errors but does not pass the data on to the system.

read/write head
See head.

recoverable error
A read error that can be corrected by ECC or by re-reading data.

RLL
Run length limited. An encoding scheme used during write operations to facilitate reading that data.

RoHS
Restriction of Hazardous Substances. Directive 2002/95/EC of the European Parliament, effective in the EU beginning July 1, 2006, aims to protect human health and the environment by restricting the use of certain hazardous substances such as lead, mercury, hexavalent chromium, cadmium, polybrominated biphenyl flame retardants, and polybrominated diphenyl ether flame retardants in new equipment.

ROM
Read-only memory. An integrated circuit memory chip containing programs and data that the computer or host can read but cannot modify. A computer can read instructions from ROM but cannot store data in ROM.

rotational latency
The amount of delay in obtaining information from a disk due to disk rotation. For a disk rotating at 5400 RPM, the average rotational latency is 5.5 milliseconds. See also mechanical latency.

RPM
Rotations per minute. Also known as spindle speed. Rotational speed of a medium (disk). Hard drives typically spin at a constant speed. The lower the RPM, the higher the mechanical latency. Disk RPM is a critical component of drive performance, as it directly affects rotational latency.

RPS™
Reduced power spinup. The WD-optimized spinup feature specifically designed for the external hard drive and consumer electronics markets.


Tips for Replacing a Drive from a Failed RAID

There are several items to consider when replacing a drive from a failed RAID. If you are building a new RAID, then all drives in the array should be the identical model if at all possible.

However, if you must replace a failed drive, it can sometimes be difficult to find the same model if that model is out of production. Below are some tips to follow when selecting a replacement.

Note: Keep in mind that the controller may or may not allow different models in a RAID, so check the RAID controller documentation.

1. Product life: What is the expected life of the remaining drives? If the other drives are approaching the end of their useful life, then it may be time to replace the entire RAID.

2. Capacity: The replacement drive should be the same or higher capacity than the original drive. Do not just look at the capacity on the box, since a few megabytes could make the difference between whether the drive will work or not.

You should check the number of LBAs (or sectors) on the drive. Some RAID controllers will allow you to substitute larger drives if the exact capacity is not available, while other controllers require an exact match. Check with the controller manufacturer if the documentation doesn’t make it clear. (A short sketch of this check appears after these tips.)

3. Performance: The replacement drive should match the performance of the remaining drives as closely as possible. If your failed drive was 15,000 RPM, avoid replacing it with a 10,000 RPM drive. RAID arrays depend on the timing between drives to write data. Thus, if one drive doesn’t keep up, it may cause the entire array to fail or at least experience irritating problems.

4. Interface: Make sure the replacement drive uses the same type of interface connection as the failed drive. If the failed drive used a SCSI SCA (80-Pin) interface then don’t try to replace it with a 68-pin SCSI interface. With Seagate products the last two digits of the model number indicate the interface.
For example:
LW = 68-Pin
LC = 80-Pin
The 80-pin LC drives are hot-swappable with backplane connections.

5. Cache Buffer: It is recommended that the cache buffer for each drive be the same value.  Most RAID controllers will consider drives with mismatching cache buffers to be ineligible for addition to a striped or parity array.
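As a sketch of the LBA comparison from tip 2 (the sector counts below are hypothetical examples, and whether an exact match is required depends on your controller):

```python
# Compare LBA (sector) counts rather than the capacity printed on the box.
def can_replace(original_lbas: int, candidate_lbas: int,
                exact_match_required: bool) -> bool:
    """The candidate needs at least as many sectors as the original;
    some controllers insist on an exact match."""
    if exact_match_required:
        return candidate_lbas == original_lbas
    return candidate_lbas >= original_lbas

failed_drive = 976_773_168  # e.g., a nominal "500 GB" drive
replacement = 976_771_055   # another vendor's "500 GB", slightly smaller

print(can_replace(failed_drive, replacement, exact_match_required=False))
# False: a couple of thousand missing sectors is enough to reject it
```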


RAID 6 Areca ARC-1120


Fierce competition dominates the market for professionally-equipped Serial ATA RAID controllers. Shortly after manufacturers HighPoint and Promise became the first to launch their PCI-based products on the market, more well-known names like Adaptec and LSI Logic followed suit. A year ago, RAIDCore and NetCell also debuted their products, and made a good impression from the word go.

All of these manufacturers concentrated primarily on making their products better suited to the professional market, and are focusing particularly on offering devices with the PCI-X interface. Now, Taiwanese manufacturer Areca hopes to go one better by supporting RAID 6.

RAID controllers are used most often in business settings, particularly for servers. The point of RAID is to increase the performance of the storage subsystem when using numerous hard drives simultaneously, and also to protect against data loss due to hard drive crashes. Even if regular backups are used, constant availability of storage systems is invaluable for business workflows, and this is what RAID provides.

A RAID Level 5 array is a common type, used in most normal business situations. In this arrangement, when data is written to the array, it is distributed to all drives but one. The controller generates a checksum (parity information) for the data set written, and writes the checksum to the final hard drive. This can be used to reconstruct the data if any one drive is lost. At the same time, performance is improved because data is being written to (or read from) many drives in parallel.

In RAID5, the drive chosen for the checksum changes for each data block written. Thus, it is an enhancement of RAID3, where a single dedicated drive is used for all checksums. RAID5 improves performance because in RAID3 the dedicated parity drive can create a bottleneck.
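The reconstruction property rests on simple XOR arithmetic: the parity block is the XOR of the data blocks in a stripe, so any one missing block equals the XOR of all the survivors. A minimal sketch with toy 4-byte blocks on a hypothetical four-drive stripe (real controllers work on full sectors and rotate the parity position, but the math is the same):

```python
# RAID 5-style parity: parity = d0 XOR d1 XOR d2; any lost block is
# recoverable as the XOR of the remaining blocks plus parity.
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1, d2 = b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"
parity = xor_blocks(d0, d1, d2)   # written to the fourth drive

# Drive holding d1 dies: rebuild its block from the survivors.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
print("reconstructed block:", rebuilt.hex())
```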

But there are also cases in which higher reliability is needed than can be met by RAID 5. Areca addresses this by offering the option of setting up a RAID 6 array. RAID 6 is like RAID 5 but uses two drives for parity data, which means two drives can fail without data loss. Naturally, this requires another hard drive to be put in the array. We took a close look at how well this RAID level functions, and how well it performs.


RAID 6 — Do you really like it?

For several years now RAID 5 has been one of the most popular RAID implementations. Just about every vendor that supplies storage to enterprise data centers offers it, and in many cases it has become, deservedly, a well-trusted tool. RAID 5 stripes parity and blocks of data across all disks in the RAID set. Even though users must devote about 20% of their disk space to the parity stripe, and even though read performance for large blocks may be somewhat diminished and writes may be slower due to the calculations associated with the parity data, few managers have questioned RAID 5’s usefulness.

There are however, two major drawbacks associated with using RAID 5. First, while it offers good data protection because it stripes parity information across all the discs within the RAID set, it also suffers hugely from the fact that should a single disc within the RAID set fail for any reason, the entire array becomes vulnerable — lose a second disc before the first has been repaired and you lose all your data, irretrievably.

This leads directly to the second problem. Because RAID 5 offers no protection whatsoever once the first disc has died, IT managers using that technology have faced a classic Hobson’s choice when they lose a disc in their array. The choices are these. Do they take the system off-line, making the data unavailable to the processes that require it? Do they rebuild the faulty drive while the disc is still online, imposing a painful performance hit on the processes that access it? Or, do they take a chance, hold their breath, and leave the drive in production until things slow down during the third shift when they can bring the system down and rebuild it without impacting too many users?

This choice, however, is not the problem, but the problem’s symptom.

The parity calculations for RAID 5 are quite sophisticated and time consuming, and they must be completely redone when a disk is rebuilt. But it’s not the sophistication of all that math that drags out the process, but the fact that when the disk is rebuilt, parity calculations must be made for every block on the disk, whether or not those blocks actually contained data before the problem occurred. In every sense, the disk is rebuilt from scratch.

An unfortunate and very dirty fact of life about RAID 5 is that if a RAID set contains, say, a billion sectors spread over the array, the demise of even a single sector means the whole array must be rebuilt. This wasn’t much of a problem when disks were a few gigabytes in size. Obviously though, as disks get bigger, more blocks must be accounted for and more calculations will be required. Unfortunately, with present technology the RAID recovery rate is roughly constant irrespective of drive size, which means that rebuilds take longer as drives get larger. Already that problem is becoming acute. With half-terabyte disks becoming increasingly common in the data center, and with the expected general availability of terabyte-sized disks this fall, the dilemma will only get worse.
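The scaling is easy to see with constant-rate arithmetic. A sketch, assuming a fixed 50 MB/s recovery rate (an assumed figure for illustration, not a measurement):

```python
# At a constant rebuild rate, rebuild time grows linearly with drive size.
rebuild_mb_per_s = 50  # assumed, for illustration

for size_gb in (9, 73, 500, 1000):
    hours = size_gb * 1024 / rebuild_mb_per_s / 3600
    print(f"{size_gb:>5} GB drive: ~{hours:.1f} h to rebuild")
# 9 GB -> ~0.1 h, 73 GB -> ~0.4 h, 500 GB -> ~2.8 h, 1000 GB -> ~5.7 h
```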

The solution offered by most vendors is RAID 6.

The vendors would have you believe that RAID 6 is like RAID 5 on steroids: it eliminates RAID 5’s major drawback – the inability to survive a second disk failure – by providing a second parity stripe. Using steroids of course comes with its own set of problems.

RAID 6 gives us a second parity stripe. The purpose of doing all of the extra math to support this dual parity is that the second parity stripe operates as a “redundancy” or high availability calculation, ensuring that even if the parity data on the bad disk is lost, the second parity stripe will be there to ensure the integrity of the RAID set. There can be no question that this works. Buyers should, however, question whether or not this added safety is worth the price.

Consider three issues. RAID 6 offers significant added protection, but let’s also understand how it does what it does, and what the consequences are. RAID 6’s parity calculations are entirely separate from those done for the first parity stripe, and go on simultaneously with them. This calculation does not protect the original parity stripe, but rather creates a new, independent one. It does nothing extra to protect against the first disk failure.

Because calculations for this RAID 6 parity stripe are more complicated than are those for RAID 5, the workload for the processor on the RAID controller is actually somewhat more than double. How much of a problem that turns out to be will depend on the site and performance demands of the application being supported. In some cases the performance hit will be something sites will live with, however grudgingly. In other cases, the tolerance for slower write operations will be a lot lower. Buyers must balance the increased protection against the penalty of decreased performance.

Issue two has to do with the nature of RAID 5 and RAID 6 failures.

The most frequent cause of a RAID 5 failure is that a second disk in the RAID set fails during reconstruction of a failed drive. Most typically this will be due to media error, device error, or operator error during the reconstruction; should that happen, the entire reconstruction fails. With RAID 6, after the first device fails the array is effectively running as RAID 5, deferring but not removing the problems associated with RAID 5. When it is time to do the rebuild, all the RAID 5 choices and rebuild penalties remain. While RAID 6 adds protection, it does nothing to alleviate the performance penalty imposed during those rebuilds.

Need a more concrete reason not to accept RAID 6 at face value as the panacea your vendor says it is?  Try this.

When writing a second parity stripe, we of course lose about the same amount of disk space as we did when writing the first (assuming the same number of disks are in each RAID group). This means that when implementing RAID 6, we are voluntarily reducing disk storage space to about 60% of purchased capacity (as opposed to 80% with RAID 5). The result: in order to meet anticipated data growth, in a RAID 6 environment we must always buy added hardware.
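Those percentages fall out of simple arithmetic once a group size is fixed. A sketch assuming five-drive RAID groups, which is the size that makes the 80% and 60% figures line up:

```python
# Usable fraction of purchased capacity for single- and dual-parity RAID.
def usable_fraction(drives: int, parity_drives: int) -> float:
    return (drives - parity_drives) / drives

n = 5  # assumed RAID group size
print(f"RAID 5: {usable_fraction(n, 1):.0%}")  # 80%
print(f"RAID 6: {usable_fraction(n, 2):.0%}")  # 60%
```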

This is the point at which many readers will sit back in their chairs and say to themselves, “So what? Disks are cheap!” And so they are, which naturally is one of the reasons storage administrators like them so much. But what if my reader is not a storage administrator? What if the reader is a data center manager, or an MIS director, or a CIO, or a CFO? In other words, what if my reader is as interested in operational expenditures (OPEX) as in capital expenditures (CAPEX)?

In this case, the story becomes significantly different. Nobody knows exactly what the relationship between CAPEX and OPEX is in IT, but a rule of thumb seems to be that when it comes to storage hardware the OPEX will be 4-8 times the cost of the equipment itself. As a result, everybody has an eye on the OPEX. And these days we all know that a significant part of operational expenditures derives from the line items associated with data center power and cooling.

Because of the increasing expense of electricity, data centers are on notice that they will have to make do with what they already have when it comes to power consumption. Want to add some new hardware? Fine, but make sure it is more efficient than whatever it replaces.

When it comes to storage, I’m quite sure that we will see a new metric take hold. In addition to existing metrics for throughput and dollars-per-gigabyte, watts-per-gigabyte is something on which buyers will place increased emphasis. That figure, and not the cost of the disk, will be a repetitive expense that managers will have to live with for the life of whatever hardware they buy.

If you’re thinking of adding RAID 6 to your data protection mix, consider the down-stream costs as well as the product costs.

Does RAID 6 cure some problems? Sure, but it also creates others, and there are alternatives worth considering. One possibility is a multilevel RAID combining RAID 1 (mirroring) and RAID 0 (striping), usually called either RAID 10 or RAID 1+0. Another is the “non-traditional” RAID approach offered by vendors who build devices that protect data rather than disks. In such cases, there is no need for the recalculations that RAID 5 and 6 perform for the unused parts of the disk during a rebuild.


The advantages and disadvantages of RAID 5EE

RAID 5EE is very similar to RAID 5E with one key difference — the hot spare’s capacity is integrated into the stripe set. In contrast, under RAID 5E, all of the empty space is housed at the end of the array. As a result of interleaving empty space throughout the array, RAID 5EE enjoys a faster rebuild time than is possible under RAID 5E.

RAID 5EE has all of the same pros as RAID 5E but enjoys a faster rebuild time than either RAID 5 or RAID 5E. On the cons side, RAID 5EE has the same cons as RAID 5E, with the main negative point being that not a lot of controllers support the RAID level yet. I suspect that this will change over time, though.

As is the case with RAID 5E, RAID 5EE requires a minimum of four drives and supports up to eight or 16 drives in an array, depending on the controller. Figure C shows a sample of a RAID 5EE array with the hot spare space interleaved throughout the array.

 

Figure C: A RAID 5EE array with five drives

When a drive fails, as shown in Figure D, the empty slots are filled up with data from the failed drive.

Figure D: The empty slots are filled up with data from the failed drive


The advantages and disadvantages of RAID 5E

RAID 5E, with an E that stands for Enhanced, is a RAID 5 array with a hot spare drive that is actively used in the array operations. In a traditional RAID 5 configuration with a hot spare, the hot spare drive sits next to the array waiting for a drive to fail, at which point the hot spare is made available and the array rebuilds the data set with the new hardware. Keeping the spare in active use instead, as RAID 5E does, has some advantages:

  • You know for a fact that the drive that would have been used as a hot spare is in working order.
  • There is an additional drive included in the array, thus further distributing the array’s I/O load. More spindles equals better performance in most cases. RAID 5E can perform better than typical RAID 5.

There are a few disadvantages associated with RAID 5E as well:

  • There is not wide controller support for RAID 5E.
  • A hot spare drive cannot be shared between arrays.
  • Rebuilds can be slow.

The capacity of a RAID 5E array is exactly the same as the capacity of a RAID 5 array that contains a hot spare. In such a scenario, you would “lose” two disks’ worth of capacity — one disk’s worth for parity and another for the hot spare. Due to this fact, RAID 5E requires that you use a minimum of four drives, and up to eight or 16 drives can be supported in a single array, depending on the controller. The main difference between RAID 5 and RAID 5E is that the drive that would have been used as a hot spare in RAID 5 cannot be shared with another RAID 5 array; so that could affect the total amount of storage overhead if you have multiple RAID 5 arrays on your system. Figure A gives you a look at a RAID 5E array consisting of five drives. Take note that the “Empty” space in this figure is shown at the end of the array.

Figure A: A RAID 5E array with five drives

When a drive in a RAID 5E array fails, the data that was on the failed drive is rebuilt into the empty space at the end of the array, as shown in Figure B. When the failed drive is replaced, the array is once again expanded to return the array to the original state.

 


Figure B: A RAID 5E array that has been rebuilt into the hot spare space
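The capacity rule described above (losing two disks’ worth, one to parity and one to the integrated spare space) is easy to sanity-check. A minimal sketch, assuming equal-size drives:

```python
# RAID 5E usable capacity: total minus one drive's worth for parity and
# one for the integrated hot spare space.
def raid5e_usable_tb(drives: int, drive_tb: float) -> float:
    assert drives >= 4, "RAID 5E requires at least four drives"
    return (drives - 2) * drive_tb

print(raid5e_usable_tb(5, 2.0))  # five 2 TB drives -> 6.0 TB usable
```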


Backup vs. Data Safety


Entry-level RAID controllers allow administrators to create secure multi-drive storage arrays to host a server’s operating system and vital data.

Many people don’t appreciate the value of backups and data safety until they experience what it means to lose data. Whether it is music, videos or photos at home or project files, customer data or other digital assets in the office, people don’t think of data safety – until it’s too late.

Imagine how you would feel if your vacation photos or the videos of your wedding and your daughter’s birth were destroyed. Such scenarios can lead to divorce court when your better half finds out. Or how would your boss react if his or her email and project files were lost due to a faulty hard drive? Your life might be spared, but you could still get fired.

No warranty in the world protects you from such an incident. Make no mistake about it: these things happen every day! If it’s your data, it’s your responsibility to protect it. And even if you’ve got a boss who eventually is responsible, he or she might still blame the loss of data on you. In the end, you can do no wrong by developing awareness of threats and paying attention to data safety.

Backup vs. Data Safety

At this point we have to differentiate between a backup and basic data safety. The two mean different things, and every business should rely on both regular backups and a safe data repository.

Performing a backup means copying files or complete system images from your hard drive onto another storage device, where the data is safe from hardware malfunction, viruses or accidental modification. If anything happens to your primary data, you can access the backup “snapshot” and restore whatever you need.

Any type of drive can be used for backups, but you should pay attention to the data safety offered by the solution you pick. A hard drive, for example, cannot be considered a safe medium, as it uses mechanical components that may fail. A good backup is performed frequently, is written onto alternating media that are partially stored off-site, and uses media that are widely available.
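In its simplest form, the backup “snapshot” described above is just a dated copy of your files onto a second device. A minimal sketch using only Python’s standard library; the paths are placeholders, and real backup tools also verify, rotate, and catalog their media:

```python
# Copy a source tree to a timestamped snapshot directory on another device.
import shutil
from datetime import datetime
from pathlib import Path

source = Path("/home/user/documents")    # placeholder source directory
backup_root = Path("/mnt/backup_drive")  # placeholder second device

snapshot = backup_root / datetime.now().strftime("snapshot-%Y%m%d-%H%M%S")
shutil.copytree(source, snapshot)        # snapshot dir must not yet exist
print(f"Backed up {source} to {snapshot}")
```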

When we talk about data safety, we specifically address the fact that every computer stores all of its key data on hard drives, and that every hard drive will eventually fail. The challenge is to create a storage subsystem that can tolerate hard drive failures. This is where the five RAID storage controllers reviewed here come into play.

AMCC, Areca, HighPoint, LSI Logic and Promise Technology offer PCI Express add-on cards that run up to four hard drives to create a fast and secure storage array.
