Some Considerations Before Choosing A Data Protection Solution

Many network administrators are considering an all-in-one data protection solution because it can greatly simplify the data protection process. These solutions also tend to be less expensive than buying each component separately, and they provide a single point of contact for all support needs.

In a recent survey, over half of the respondents indicated that they would prefer to rely on a single vendor for their data protection solutions whenever possible.

For the purposes of this article, an all-in-one data protection solution includes the hardware components and backup software needed to back up an organization’s critical data, maintain the backup data both on-site and at secure off-site locations, monitor the system 24x7, provide server virtualization, and restore data so the network can be up and running as soon as possible.

There are a few things to consider before making a decision about going with a particular data protection solution.

1. What software is included?

Client software loaded on each server handles the backup process, while the resident NAS software provides compression, encryption, bare-metal recovery, and continuous data protection.

2. What type of hardware is being used?

Typically, a network-attached storage (NAS) appliance with internal storage capacity is attached to the network, and each server's hard drive is mirrored to the NAS storage.

3. Is the solution secure?

Most solutions offer encryption, and encryption is a critical component of backup data protection. The solution should support AES encryption, and the backup data should be encrypted on site, while in transit to the off-site location, and at rest at the hosted site.
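As a concrete illustration, the widely used third-party Python `cryptography` package (an assumption for this sketch, not part of any particular product) provides Fernet, an authenticated AES-based scheme. A minimal sketch of encrypting a backup chunk before it leaves the site might look like this:

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate the key once and store it separately from the backups;
# losing the key makes every encrypted backup unrecoverable.
key = Fernet.generate_key()
cipher = Fernet(key)

backup_chunk = b"contents of customer-orders.db"
token = cipher.encrypt(backup_chunk)   # encrypted before leaving the site
restored = cipher.decrypt(token)       # decrypted only at restore time
assert restored == backup_chunk
```

Note that key management matters as much as the cipher: if the key travels with the backups, the encryption protects nothing.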

4. Is offsite backup included in the solution?

Most companies today need not only on-site backup but also a redundant copy of their backup data stored off-site, so the data can be recovered in the event their primary site becomes unusable. Both on-site storage and remote storage should be part of the solution.

5. Can the solution scale to data storage demands?

The data protection solution should be capable of running efficiently and scaling without a lot of specialized hardware.

6. How often is the data backed up?

The frequency of the data backup determines how much data can be lost. If the data is only backed up once a day, then up to 24 hours of data can be lost, and most businesses could not accept that kind of loss; they require a shorter recovery point. Many solutions can take backups as often as every 15 minutes. The best practice is to determine an acceptable recovery point objective and pick a solution that meets it.
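The arithmetic behind that best practice is simple: the worst case happens when a failure strikes just before the next scheduled backup. A minimal sketch (the function names here are illustrative, not from any particular product):

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """Worst case: a failure strikes just before the next backup runs,
    losing everything written since the previous one."""
    return backup_interval

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """A schedule meets the recovery point objective (RPO) when the
    worst-case loss window never exceeds it."""
    return worst_case_data_loss(backup_interval) <= rpo

# A 15-minute cycle satisfies a 1-hour RPO; a nightly backup does not.
assert meets_rpo(timedelta(minutes=15), timedelta(hours=1))
assert not meets_rpo(timedelta(hours=24), timedelta(hours=1))
```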

There are many all-in-one data protection solutions. Some companies prefer to purchase, deploy, and maintain the solution themselves; an in-house deployment also requires the labor and expertise to administer the system. Some providers bundle the data protection equipment and software with 24x7 monitoring as well as the labor to maintain the system and respond to notifications and outages. The solution should be flexible enough to scale as needs change and provide a single point of contact when service is needed.

Terry Mayfield is a Business Continuity expert with 19 years' experience in the field. Mr. Mayfield has helped his clients evaluate potential data loss threats and formulate data protection and recovery strategies. He is available by phone (205-290-8424) or email (terrym@askbts.com). Visit http://www.gosleepez.com to download a complimentary copy of Mr. Mayfield’s whitepaper, “What Every Business Must Know About Protecting And Preserving Their Critical Data.”


Top 10 worst computer viruses (Storm & Melissa)

6. Storm
Shaun Nichols: Before Conficker came around and got everyone worked into a lather, Storm was the big bad botnet on the block. First appearing in early 2007 as a fake news video on European flooding, the Storm malware menaced users for more than a year.

The huge botnet was also influential for its continued use of social engineering tactics. The malware disguised itself as everything from video files to greeting cards, and attacks were continuously refreshed to coincide with holidays and current news events.

While Storm has since been eclipsed by newer botnets, the name still brings to mind one of the most menacing attacks seen in recent years.

Iain Thomson: When extreme weather hit Europe the damage was bad enough, but the Storm code made things much worse. At a time when many were seriously concerned about the health and safety of friends and family, the last thing anyone needed was an infection.

But Storm was a classic piece of social engineering. When people are anxious they don’t always think through the consequences, be it approving torture or opening an email attachment.

This kind of social engineering is nothing new, of course, but the Storm malware did it very well indeed and proved very effective as a result.

5. Melissa
Shaun Nichols: It was a classic love story. Boy meets girl, girl dances for money, boy goes home and writes computer virus for girl, computer virus gets out of hand and causes millions of dollars in damage. It’s the Romeo and Juliet of our time.

When a New Jersey hacker wrote a small bit of code named after a stripper he met in Florida, he had no idea of the chaos that would ensue. The Melissa virus, as it came to be known, got way, way out of hand.

The virus spread like wildfire throughout the net, and its mass-mailing behavior produced a glut of email traffic that overwhelmed servers and caused enormous damage and lost work time across corporate IT systems.

The hacker himself was later caught and sentenced to a year and a half in prison. Next time he wants to impress a girl, hopefully he’ll stick to chocolates and jewelry.

Iain Thomson: Now, I’ve done some stupid things to impress girls, things that cause me to bite my fist with embarrassment nowadays and one that left me with a small amount of scar tissue, but writing a computer virus makes these pale by comparison.

The real damage of Melissa was not in the code itself, but in its spamming capabilities. The software caused a massive overload of email systems and generated enough traffic to make it highly visible. Today's malware writers have taken note of code like Melissa and now fly much further under the radar to attract less attention.


Avoiding storage system failures

There are many ways to reduce or eliminate the impact of storage system failures. You may not be able to prevent a disaster from happening, but you may be able to minimize the disruption of service to your clients.

There are many ways to add redundancy to primary storage systems. Some of the options can be quite costly, and only large business organizations can afford the investment. These options include duplicate storage systems or identical servers, known as ‘mirror sites’. Additionally, elaborate backup processes or file-system ‘snapshots’, which always provide a checkpoint to restore to, add another level of data protection.

Experience has shown there are usually multiple or rolling failures that happen when an organization has a data disaster. Therefore, to rely on just one restoration protocol is shortsighted. A successful storage organization will have multiple layers of restoration pathways.

We have heard thousands of IT horror stories of initial storage failures turning into complete data calamities; in the rush to bring a system back, some choices can permanently corrupt the data. Here are several risk mitigation policies that storage administrators can adopt to help minimize data loss when a disaster happens:

Offline storage system: Avoid forcing an array or drive back on-line. There is usually a valid reason for a controller card to disable a drive or array, and forcing the array back on-line may expose the volume to file system corruption.

Rebuilding a failed drive: When rebuilding a single failed drive, it is important to allow the controller card to finish the process. If a second drive fails or goes off-line during the rebuild, stop and get professional data recovery services involved; replacing a second failed drive mid-rebuild will change the data on the other drives.

Storage system architecture: Plan the storage system’s configuration carefully. We have seen many cases with multiple configurations used on a single storage array; for example, three RAID 5 arrays (each holding six drives) striped in a RAID 0 configuration and then spanned. Keep the storage configuration simple and document every aspect of it.
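A quick capacity check of a layout like that example: a six-drive RAID 5 array gives up one drive's worth of space to parity, and RAID 0 striping simply concatenates the arrays. A short sketch, assuming hypothetical 2 TB drives:

```python
def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 5 gives up one drive's worth of capacity to parity."""
    return (drives - 1) * drive_tb

def raid50_usable_tb(arrays: int, drives_per_array: int, drive_tb: float) -> float:
    """RAID 0 striping over the arrays concatenates their usable capacity."""
    return arrays * raid5_usable_tb(drives_per_array, drive_tb)

# Three six-drive RAID 5 arrays of hypothetical 2 TB disks:
# 3 arrays x 5 data drives x 2 TB = 30 TB usable out of 36 TB raw.
assert raid50_usable_tb(3, 6, 2.0) == 30.0
```

The more layers a layout stacks, the harder a recovery becomes, which is exactly why simple, well-documented configurations are recommended above.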

During an outage: If the problem escalates up to the OEM technical support, always ask “Is the data integrity at risk?” or, “Will this damage my data in any way?” If the technician says that there may be a risk to the data, stop and get professional data recovery services involved.


Unique data protection schemes

Storage system manufacturers are pursuing unique ways of processing large amounts of data while still being able to provide redundancy in case of disaster. Some large SAN units incorporate intricate device block-level organization, essentially creating a low-level file system from the RAID perspective. Other SAN units have an internal block-level transaction log in place so that the control processor of the SAN is tracking all of the block-level writes to the individual disks. Using this transaction log, the SAN unit can recover from unexpected power failures or shutdowns.
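The block-level transaction log works like a write-ahead log: journal the write first, then apply it, so replaying the journal after a crash reconstructs the committed state. A toy sketch of the idea (not any vendor's actual implementation):

```python
class BlockDevice:
    """Toy block store with a block-level transaction log.

    As in the SAN design described above, every block write is
    journaled before it is applied, so replaying the journal after
    an unexpected shutdown reconstructs the committed state.
    """

    def __init__(self):
        self.blocks = {}   # committed block contents (lost in a "crash")
        self.log = []      # (block_number, data) journal, in write order

    def write_block(self, block_no: int, data: bytes) -> None:
        self.log.append((block_no, data))   # 1. journal the intent
        self.blocks[block_no] = data        # 2. then apply the write

    def replay(self) -> dict:
        """Rebuild the committed state by re-applying the journal in order."""
        recovered = {}
        for block_no, data in self.log:
            recovered[block_no] = data
        return recovered

dev = BlockDevice()
dev.write_block(0, b"alpha")
dev.write_block(0, b"alpha-v2")   # overwrites are journaled in order
dev.blocks = {}                   # simulate a crash wiping committed state
assert dev.replay() == {0: b"alpha-v2"}
```

Replay order matters: because later journal entries overwrite earlier ones, the recovered state always reflects the last write that reached the log.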

Some computer scientists specializing in the storage system field are proposing adding more intelligence to the RAID array controller card so that it is ‘file system aware.’ This technology would provide more recoverability in case disaster struck, the goal being the storage array would become more self-healing.

Other ideas along these lines include a heterogeneous storage pool where multiple computers can access information without being dependent on a specific system’s file system. In organizations with multiple hardware and system platforms, a transparent file system provides access to data regardless of which system wrote it.

Other computer scientists are approaching the redundancy of the storage array quite differently. The RAID concept is in use on a vast number of systems, yet computer scientists and engineers are looking for new ways to provide better data protection in case of failure. The goals that drive this type of RAID development are data protection and redundancy without sacrificing performance.
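The core redundancy idea these designs build on is parity: in RAID 5, each stripe's parity block is the XOR of its data blocks, so any single lost block can be rebuilt from the survivors. A minimal sketch:

```python
def xor_parity(blocks: list) -> bytes:
    """RAID 5-style parity: the XOR of all data blocks in a stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving: list, parity: bytes) -> bytes:
    """XORing the surviving blocks with the parity recovers the lost one."""
    return xor_parity(surviving + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_parity(stripe)            # stored on the parity drive
lost = stripe.pop(1)                   # one drive fails
assert reconstruct(stripe, parity) == lost
```

XOR's self-inverse property is what makes this work: XORing the parity with the surviving blocks cancels them out, leaving exactly the missing block. It also shows why a second simultaneous failure is fatal to a single-parity array.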


The University of California, Berkeley report on the amount of digital information produced in 2003 is staggering. Your site or your client’s site may not hold terabytes or petabytes of information, yet during a data disaster every file is critically important.
