Test: How Secure Is Your Data?

With the increasing reliance on today’s computer systems and networks for the day-to-day running of businesses, there is an ever-present threat to business continuity. Computer systems can be affected by a variety of sources: power outages, water leaks, system failures, etc. Most companies have some sort of backup system in place (for example, a UPS for power failures) but fail to take other, hidden factors into account. It is no longer a question of if you will experience system or environment failures, but when. The 10-question quiz that follows can help you assess your company’s risk of experiencing downtime due to system or environment failures.

1. How many hours of continual data processing does your business do over a 24-hour period?
Threat: An hour of downtime costs the average company $78,000 in lost revenue.
8 hours or less (10 points)
8 to 16 hours (75 points)
16 to 24 hours (100 points)

2. How much downtime can your business afford?
Threat: Computer downtime costs U.S. businesses $4 billion a year, primarily through lost revenue.
1 week to 1 month (10 points)
2 days to 1 week (75 points)
1 day or less (100 points)

3. What is your business system or data worth?
Threat: 43% of U.S. businesses never reopen after experiencing a disaster, and 29% close within 2 years.
$10,000 or less (10 points)
$10,000 to $100,000 (75 points)
$100,000 or more (100 points)

4. How many users does your computer system support?
Threat: The manufacturing industry lost an average of $421,000 per incident of on-line computer systems downtime.
1 to 10 users (10 points)
10 to 100 users (75 points)
100 or more users (100 points)

5. How much downtime have you experienced over the last year?
Threat: The average company’s computer system was down 9 times per year for an average of 4 hours each time.
20 hours or less (10 points)
20 to 150 hours (75 points)
150 or more hours (100 points)

6. How many hours is your data center unattended?
Threat: Downtime costs the average company $330,000 per outage.
1 hour or less (10 points)
1 hour to 8 hours (75 points)
8 hours or more (100 points)

7. What percentage of your systems and environmental conditions (temperature, water, and smoke) are you monitoring with an early detection system?
Threat: Environmental incidents accounted for 10.3% of business interruptions in the past 5 years.
90% or more (10 points)
70 to 90% (75 points)
70% or less (100 points)

8. How many hours has your UPS had to back up your system this year?
Threat: Power problems accounted for 29.48% of U.S. computer outages.
3 or less hours (10 points)
3 to 8 hours (75 points)
8 or more hours (100 points)

9. If your system went down on Friday at midnight, how long would it be before you were notified?
Threat: A 1993 Gallup/GRN survey reported that Fortune 1000 companies averaged 1.6 hours of LAN downtime per week [more than two 40-hour work weeks per year].
3 or less hours (10 points)
3 to 8 hours (75 points)
8 or more hours (100 points)

10. How many people have access to your main computer room?
Threat: Human error accounted for 34.4% of business interruptions in the past 5 years.
3 or less (10 points)
3 to 10 (75 points)
10 or more (100 points)

Scoring:

Under 165: Your computer room is either very well protected, or computer room downtime will not affect your business.
165 to 799: You have trouble spots in your computer room; proactive steps taken now will help you avoid trouble in the future.
800 and over: Your computer room, and quite possibly your job, are in serious jeopardy. Look into ways of securing your computer room before disaster strikes; time is ticking.
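
For readers who want to tally the quiz programmatically, here is a minimal sketch in Python that sums the point values above and prints the matching risk band. The example answers are placeholders to replace with your own.

```python
# Tally the downtime-risk quiz: each answer is worth 10, 75, or 100 points.
def risk_band(total: int) -> str:
    if total < 165:
        return "Well protected, or downtime will not affect your business."
    if total < 800:
        return "Trouble spots exist; act now to avoid trouble later."
    return "Serious jeopardy; secure your computer room before disaster strikes."

# Hypothetical answers for the 10 questions above (replace with your scores).
answers = [10, 75, 100, 75, 10, 75, 100, 10, 75, 10]
total = sum(answers)
print(f"Score: {total} -> {risk_band(total)}")
```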

The Hidden Costs of Increasing Data Storage

Large-scale IT environments have the resources to manage all aspects of a network expansion, including the initial analysis, equipment installation and wiring, and proper access management for users. In smaller environments, the planning may not go beyond the immediate reaction to the users’ needs—that is, “we’re out of space!” While the size of the environment may determine how storage needs are addressed and managed, concerns such as proper equipment cooling, storage management software that allows for scalable growth (SRM), disaster recovery (including backup contingencies), and data recovery apply to IT environments of every size.

In one scenario, picture a small business with five desktop machines. Despite following careful data compression procedures and rigorous archiving of old files, their system is running out of space. They have a small file server sitting near the users’ desks. Can the business owner upgrade the file server with a bigger hard drive or should he add a separate rack of inexpensive drives? How much space will they need? Will a terabyte be enough? What if they need to upgrade in the future? How hard will it be? What other hidden costs are they going to run into?

In another scenario, a business that uses 30-40 desktop machines has a file server located in a separate room with adequate cooling, user access management, and a solid network infrastructure. But they too are running out of space. When they plan for an expansion, what hidden costs will they need to consider?

In addition to the equipment investment, there are many hidden costs to consider when determining storage needs and planning their subsequent management. The following are some of the hidden costs that come with storage.

Storage management software
How can you get the most out of existing storage space without letting it fill up so quickly? And how do you prevent your storage space from reaching capacity before its full life expectancy is realized? This is where storage management software, such as SRM and ILM, enters the picture. Storage Resource Management (SRM) software provides storage administrators with the tools to manage space effectively. Information Lifecycle Management (ILM) software helps manage data through its lifecycle.

While viable solutions, SRM and ILM software may not cover all the needs of a business environment. They are designed to manage files and storage effectively, with a level of automation. Beyond this is where good old-fashioned space management is required. Remember the days when space was at a premium and there were all sorts of methods for making sure inactive files were stored somewhere else—like on floppies? Remember when file compression utilities came out and we squeezed every duplicate byte out of files? Those techniques are not outdated just because the cost per MB has dropped or tools exist to help us manage data storage. Prudent storage practices never go out of style.
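
To make that idea concrete, here is a minimal sketch of old-fashioned space management: a Python script that walks a directory tree and reports the largest files that have not been touched in a while, prime candidates for archiving or compression. The root path and thresholds are assumptions to adjust for your environment.

```python
import os
import time

ROOT = "/srv/files"      # hypothetical share to audit
STALE_DAYS = 180         # "inactive" threshold
TOP_N = 20

cutoff = time.time() - STALE_DAYS * 86400
candidates = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                      # skip unreadable files
        if st.st_mtime < cutoff:          # not modified recently
            candidates.append((st.st_size, path))

# Largest stale files first.
for size, path in sorted(candidates, reverse=True)[:TOP_N]:
    print(f"{size / 2**20:10.1f} MB  {path}")
```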

Power consumption
Manufacturers are working hard to optimize the performance of their machines, yet server power consumption remains on the increase. What will be the power requirement of your company’s new storage solution? Luiz André Barroso at Google reports that if performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin.

Power consumption can be a hidden, ongoing cost that may not have been expected with the expansion of storage space. Especially when considering the fluctuating cost of energy, an unanticipated increase in power usage can be an expensive budget buster affecting the entire enterprise.
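
As a rough, hypothetical illustration of how power becomes a budget item, here is a sketch that computes the annual electricity cost of a storage array from its draw and a local utility rate. Both numbers are assumptions, not vendor figures.

```python
# Back-of-the-envelope annual power cost for a storage array.
draw_watts = 1200          # assumed continuous draw of the array
rate_per_kwh = 0.12        # assumed utility rate in $/kWh

kwh_per_year = draw_watts / 1000 * 24 * 365
annual_cost = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.2f} per year")
# ~10,512 kWh/year -> about $1,261 per year, before cooling overhead.
```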

Cooling requirements
Closely related to power consumption is the need to keep the more powerful processors found in the latest machines cool. Both the performance and the life expectancy of equipment are tied to component temperature. Ever since the Pentium II processor in 1997, proper heat dissipation using heat sinks and cooling fans has been standard for computer equipment. Today’s high-performance processors, main boards, video cards, and hard drives require reliable temperature management to work effectively and efficiently, day in and day out.

If your or your client’s storage requirements grow, proper ambient server room temperatures are going to be required. Adding such a room or creating the necessary environment may add build-out costs, not to mention increase the power consumption and energy costs mentioned earlier.

Noise
With proper heat dissipation and cooling comes noise. All those extra fans and cooling compressors can generate a noticeable number of decibels. A large-scale IT environment has the luxury of keeping its noisy machines away from users. In a smaller business or home office, however, some have found the sound levels generated by their storage equipment intolerable, or at a minimum concentration-breaking. Such noise makes the surrounding area less conducive to work and productivity, hindering employees’ ability to simply think. When increasing your data storage, make sure the resulting noise is tolerable. Be sure, too, that noise suppression efforts don’t interfere with or defeat heat dissipation and cooling solutions.

Administrative cost
The equipment investment for an expansion may be significant, but how does the increased storage affect administrative needs? Should management hire a network consultant to assess user needs, then install, set up, and test the new equipment? Or can the company’s in-house network administrator do the work? A small company runs a particular risk here: it may not be able to afford a professional assessment and installation, but with an inexpensive solution it may learn the old adage “you get what you pay for” the hard way.

A non-professional might misdiagnose storage usage needs, set up the equipment incorrectly, or buy equipment that isn’t a good fit for the environment. Such unintentional blunders are why certifications for network professionals exist. Storage management is not as simple as adding more space when needed; it is a complicated, multi-layered endeavor affecting every aspect and employee of a business.

Although using the skills of a professional greatly increases the odds of a successful storage expansion, it will raise the final cost. When considering the monetary expense, businesses must also remember to consider how much of the other ‘costs’ – overall risk, loss of data availability, system downtime if the implemented solution fails – they can afford.

Backup management
How does your business currently manage backup cycles and the corresponding storage needs? Do you store your backups on-site, or do you have a safe alternate location at which to store this precious data? Natural disasters such as fires and floods, and extreme disasters like Hurricane Katrina, are wake-up calls to many who resist the idea of offsite data storage. Offsite data storage may be as simple as storing backup tapes off site or archiving data with a data farm for a monthly space rental fee, or as complex as a mirrored site housing a direct copy of all your data (effective but costly).

Whatever backup management and storage process is used, the backups it creates should be tested; after an expansion, verify that the backup system actually covers the new storage. There is nothing worse than relying on a backup that doesn’t work, was improperly created, or doesn’t contain the vital data your business needs.
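
One way to act on that advice is to spot-check a test restore against the live data. The sketch below hashes each file in a source tree and compares it with the restored copy, reporting anything missing or different. Both paths are placeholders.

```python
import hashlib
import os

SOURCE = "/data/live"        # hypothetical live data
RESTORE = "/mnt/restore"     # hypothetical test restore of last night's backup

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for dirpath, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(dirpath, name)
        dst = os.path.join(RESTORE, os.path.relpath(src, SOURCE))
        if not os.path.exists(dst):
            print(f"MISSING from backup: {src}")
        elif sha256(src) != sha256(dst):
            print(f"MISMATCH: {src}")
```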

Database storage
The databases created by daily business activities can be staggering (as referenced in the earlier example of one large retail corporation generating a billion rows of sales data daily), and this activity can result in large amounts of stored data. One way to optimize database performance is to separate the database files and store them in three separate locations: data files in one location, transaction files or logs in a second, and backups in a completely different third. This not only makes data processing more efficient but prevents an “all the eggs in one basket” scenario, which is beneficial when experiencing a process disruption such as an equipment failure.
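
The “three separate locations” rule only helps if the locations really are separate spindles or arrays. Here is a hedged sketch that uses the device IDs reported by the operating system (POSIX st_dev semantics) to warn when the data, log, and backup paths actually live on the same device. The mount points are placeholders.

```python
import os

paths = {
    "data":   "/mnt/db-data",     # hypothetical mount for data files
    "logs":   "/mnt/db-logs",     # hypothetical mount for transaction logs
    "backup": "/mnt/db-backup",   # hypothetical mount for backups
}

# st_dev identifies the device a path resides on.
devices = {name: os.stat(p).st_dev for name, p in paths.items()}
if len(set(devices.values())) < len(devices):
    print("WARNING: some locations share a device:", devices)
else:
    print("OK: data, logs, and backups are on separate devices.")
```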

Undertaking this type of database optimization involves the aforementioned planning and equipment costs. But keep in mind how database information has reached into all areas of the business – customer information, billing information, and inventory management information – and how vital it is that this information be protected. Hidden costs associated with protecting database information can escalate quickly.

Installation and cabling
The old trend was a standalone unit in which the processor and storage were one system. Now the trend is to build a separate networked storage system that can be accessed by many users and servers. In general, there are two types of separate storage systems: the storage area network (SAN) and network attached storage (NAS).

A separate storage system offers a number of advantages, including easier expansion. The consideration, however, is that you will need the network infrastructure to support it. For example, if your storage system is in a separate building, you will need fast network connectivity to avoid a “bottleneck” in communication between the server and the storage device.

Disaster recovery
A disaster recovery plan encompasses everything that could happen if there is a system failure due to destruction, natural disaster, fire, theft, or equipment failure. Part of a good disaster recovery plan is a business continuation plan, that is, how to keep the business going despite the disaster. When planning a data storage expansion, the disaster recovery plan should be reviewed to make sure the company’s data remains accessible in a contingency, and it should be closely aligned with business continuity planning and efforts.

Data recovery
Data recovery can become a hidden cost if not planned for. Every business continuity plan and disaster plan should include professional data recovery services as part of their overall solution.

As you can see, there is much more to scalable growth than just adding storage space. Even when prudent planning and every precaution have gone into instituting an effective storage management solution, failures and unforeseen circumstances can and do occur. Simply put, despite the best preparation, disasters happen.

Avoid the Backup Tape Graveyard

Many businesses could find that, when they need a data recovery, their data is not retrievable because it is stored on old tape formats.

Many organizations have electronic data dating back decades, and the chances of it becoming unretrievable or inaccessible over time are fairly high. Furthermore, retrieving data that is stored on out-of-date tapes can be costly and may require special equipment.

It is important for a company to look at its past as well as its future. Information that may need to be accessed must be transferred to modern media formats in order to be compliant with current legislation and recoverable in the event of data loss. When up-to-date records and data are maintained on modern media formats, extraction can be quick and painless. Furthermore, storage costs will decrease and the organization will be better aligned with compliance regulations.

In addition to ensuring backups are stored on modern media formats, the following tips may help minimize the chance of backup failure/data disaster.

1. Verify your backups. Backups, regardless of age, can fail or simply not work, and this often goes undiscovered until they are needed.

2. Store backup tapes off site. This will ensure your files are preserved if your site experiences a fire, flood, or other disaster.

3. Create a safe “home” for your backup tapes. Keep backup tapes stored in a stable environment, without extreme temperatures, humidity, or electromagnetism.

4. Track the “expiration date.” Backup tapes are typically rated for 5,000 to 500,000 uses, depending on the type of tape. Tape backup software will typically keep track of the tapes regardless of the rotation system (a sketch of that bookkeeping follows these tips).

5. Maintain your equipment. Clean your tape backup drive periodically, following the directions in its manual regarding frequency. Most businesses just send the drive back to the manufacturer when it begins to have problems, but if a drive has problems, so can the backup tapes.
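
Backup software normally does the expiration bookkeeping for you. As a hedged illustration of what it tracks, here is a sketch that counts uses per tape against an assumed rated cycle count and flags tapes nearing retirement; the rating and usage numbers are hypothetical.

```python
# Minimal tape-usage ledger: flag tapes approaching their rated write cycles.
RATED_CYCLES = 5000          # assumed rating; check your tape's specification
WARN_AT = 0.8                # warn at 80% of rated life

usage = {"TAPE-001": 4100, "TAPE-002": 350, "TAPE-003": 4900}  # uses so far

for tape, count in sorted(usage.items()):
    if count >= RATED_CYCLES:
        print(f"{tape}: RETIRE ({count} uses, rated {RATED_CYCLES})")
    elif count >= RATED_CYCLES * WARN_AT:
        print(f"{tape}: nearing end of life ({count}/{RATED_CYCLES})")
    else:
        print(f"{tape}: OK ({count}/{RATED_CYCLES})")
```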

Protecting Data from Severe Weather

You can protect your data by following some simple precautions. With that said, even the most well-protected hard drives can crash, fail, quit, click, die… you get the picture. So here are a few tips for protecting your equipment from extreme weather, followed by advice on how to respond when extreme weather does damage your computer equipment.

1. Summer heat can be a significant problem, as overheating can lead to drive failures. Keep your computer in a cool, dry area to prevent overheating.

2. Make sure your servers have adequate air conditioning. Increases in computer processor speed have resulted in more power requirements, which in turn require better cooling – especially important during the summer months.

3. To prevent damage caused by lightning strikes, install a surge protector between the power source and the computer’s power cable to handle any power spikes or surges.

4. Invest in some form of Uninterruptible Power Supply (UPS), which uses batteries to keep computers running during power outages. UPS systems also help manage an orderly shutdown of the computer – unexpected shutdowns from power surge problems can cause data loss.

5. Check protection devices regularly: At least once a year you should inspect your power protection devices to make sure that they are functioning properly.

Responding to Data Loss Caused by Severe Weather

1. Do not attempt to operate visibly damaged computers or hard drives.

2. Do not shake, disassemble or attempt to clean any hard drive or server that has been damaged – improper handling can make recovery operations more difficult, which can lead to valuable information being lost.

3. Never attempt to dry water-damaged media by opening it or exposing it to heat – such as that from a hairdryer. In fact, keeping a water-damaged drive damp can improve your chances for recovery.

4. Do not use data recovery software to attempt recovery on a physically damaged hard drive. Data recovery software is only designed for use on a drive that is fully functioning mechanically.

5. Choose a professional data recovery company to help you.

Computer Virus

How do you protect your computer from getting a virus?
In today’s world, having anti-virus software is not optional. A good anti-virus program will perform real-time and on-demand virus checks on your system and warn you if it detects a virus. The program should also provide a way for you to update its virus definitions, or signatures, so that your protection stays current (new viruses are discovered all the time). It is important to keep your virus definitions as current as possible.

Once you have purchased an anti-virus program, use it to scan new programs before you execute or install them, and to scan new diskettes (even if you think they are blank) before you use them.

You can also take the following precautions to protect your computer from getting a virus:

  • Always be very careful about opening attachments you receive in an email — particularly if the mail comes from someone you do not know. Avoid accepting programs (EXE or COM files) from USENET news group postings. Be careful about running programs that come from unfamiliar sources or have come to you unrequested. Be careful about using Microsoft Word or Excel files that originate from an unknown or insecure source.
  • Avoid booting off a diskette by never leaving a floppy disk in your system when you turn it off.
  • Write-protect all your system and software diskettes when you obtain them. This will stop a computer virus from spreading to them if your system becomes infected.
  • Change your system’s CMOS Setup configuration to prevent it from booting from the diskette drive. If you do this a boot sector virus will be unable to infect your computer during an accidental or deliberate reboot while an infected floppy is in the drive. If you ever need to boot off your Rescue Disk, remember to change the CMOS back to allow you to boot from diskette!
  • Configure Microsoft Word and Excel to warn you whenever you open a document or spreadsheet that contains a macro (in Microsoft Word check the appropriate box in the Tools | Options | General tab).
  • Write-protect your system’s NORMAL.DOT file. By making this file read-only, you will hopefully notice if a macro virus attempts to write to it.
  • When you need to distribute a Microsoft Word file, send an RTF (Rich Text Format) file instead. RTF files do not support macros, so you can be sure you won’t inadvertently send a macro-infected file.
  • Rename your C:\AUTOEXEC.BAT file to C:\AUTO.BAT. Then edit C:\AUTOEXEC.BAT so it contains only the single line “auto”, which runs the renamed file. By doing this you can easily notice any viruses or trojans that try to add to, or replace, your AUTOEXEC.BAT file. Additionally, if a virus attempts to add code to the bottom of the file, it will not be executed.
  • Finally, always make regular backups of your computer files. That way, if your computer becomes infected, you can be confident of having a clean backup to help you recover from the attack.

What types of files should you scan and auto-protect?
Here’s a list of file extensions that you should make sure your anti-virus software scans and auto-protects:

386, ADT, BIN, CBT, CLA, COM, CPL, CSC, DLL, DOC, DOT, DRV, EXE, HTM, HTT, JS, MDB, MSO, OV?, POT, PPT, RTF, SCR, SHS, SYS, VBS, XL?
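
If you want to inventory which files on a volume fall into these scan-worthy categories, here is a hedged sketch that matches the list above (including the OV? and XL? single-character wildcards) against a directory tree. The root path is a placeholder.

```python
import fnmatch
import os

# Extension patterns from the list above; "?" matches a single character.
PATTERNS = ["386", "ADT", "BIN", "CBT", "CLA", "COM", "CPL", "CSC", "DLL",
            "DOC", "DOT", "DRV", "EXE", "HTM", "HTT", "JS", "MDB", "MSO",
            "OV?", "POT", "PPT", "RTF", "SCR", "SHS", "SYS", "VBS", "XL?"]

ROOT = "C:\\"   # hypothetical volume to inventory

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        ext = os.path.splitext(name)[1].lstrip(".").upper()
        if any(fnmatch.fnmatch(ext, pat) for pat in PATTERNS):
            print(os.path.join(dirpath, name))
```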

What are some good indications that the computer has a virus?
A very good indicator is having anti-virus software tell you that it found several files on a disk infected with the same virus (sometimes if the software reports just one file is infected, or if the file is not a program file — an EXE or COM file — it is a false report).

Another good indicator is if the reported virus was found in an EXE or COM file or in a boot sector on the disk.

If Windows cannot start in 32-bit disk or file access mode, your computer may have a virus.

If several executable files (EXE and COM) on your system are suddenly and mysteriously larger than they were previously, you may have a virus.

If you get a warning that a Microsoft Word document or Excel spreadsheet contains a macro but you know that it should not have a macro, that is another good indicator (you must first have the auto-warn feature activated in Word/Excel).

What are the most common ways to get a virus?
One of the most common ways to get a computer virus is by booting from an infected diskette.  Another way is to receive an infected file (such as an EXE or COM file, or a Microsoft Word document or Excel spreadsheet) through file sharing, by downloading it off the Internet, or as an attachment in an email message.

What should you do when you get a virus?
First, don’t panic! Resist the urge to reformat or erase everything in sight. Write down everything you do in the order that you do it.  This will help you to be thorough and not duplicate your efforts.  Your main actions will be to contain the virus, so it does not spread elsewhere, and then to eradicate it.

If you work in a networked environment, where you share information and resources with others, do not be silent.  If you have a system administrator, tell her what has happened.  It is possible that the virus has infected more than one machine in your workgroup or organization.  If you are on a local area network, remove yourself physically from it immediately.

Once you have contained the virus, you will need to disinfect your system, and then work carefully outwards to deal with any problems beyond your system itself (for example, you should meticulously and methodically look at your system backups and any removable media that you use). If you are on a network, any networked computers and servers will also need to be checked.

Any good anti-virus software will help you to identify the virus and then remove it from your system. Viruses are designed to spread, so don’t stop at the first one you find; continue looking until you are sure you’ve checked every possible source. It is entirely possible that you could find several hundred copies of the virus throughout your system and media!

To disinfect your system, shut down all applications and shut down your computer right away. Then, if you have Fix-It Utilities 99, boot off your System Rescue Disk and use the virus scanner on that disk to scan your system for viruses. Because the virus definitions on your Rescue Disk may be out of date and are not as comprehensive as the full Virus Scanner in Fix-It, once it has cleared your system of known viruses, boot into Windows and use the full Virus Scanner to do an “On Demand” scan set to scan all files. If you haven’t run Easy Update recently to get the most current virus definition files, do so now.

If the virus scanner can remove the virus from an infected file, go ahead and clean the file. If the cleaning operation fails, or the virus software cannot remove the virus, either delete the file or isolate it. The best way to isolate such a file is to put it on a clearly marked floppy disk and then delete it from your system.

Once you have dealt with your system, you will need to look beyond it at things like floppy disks, backups and removable media.  This way you can make sure that you won’t accidentally re-infect your computer.  Check all of the diskettes, zip disks, and CD-ROMs that may have been used on the system.

Finally, ask yourself who has used the computer in the last few weeks.  If there are others, they may have inadvertently carried the infection to their computer, and be in need of help.  Viruses can also infect other computers through files you may have shared with other people.  Ask yourself if you have sent any files as email attachments, or copied any files from your machine to a server, web site or FTP site recently.  If so, scan them to see if they are infected, and if they are, inform other people who may now have a copy of the infected file on their machine.

Dealing with the Complexity of Storage Systems

Even with all the advancements in storage technology, only about 20% of backup jobs are successful (according to the Enterprise Strategy Group).

Each year hundreds of new data storage products and technologies meant to make the job faster and easier are introduced, but with so many categories and options to consider, the complexity of storage instead causes confusion – which ultimately leads to lost time and to the very data loss such enhancements are meant to avoid.

Hence the question for most IT professionals who have invested hundreds of thousands of dollars in state-of-the-art storage technology remains, “How can data loss still happen and what am I supposed to do about it?”

Why Backups Still Fail
In a perfect world, a company would build its storage infrastructure from scratch using any of the new storage solutions and standardize on certain vendors or options. If everything remained unchanged, some incredibly powerful, rock-solid results could be achieved.

However, in the real world storage is messy. Nothing remains constant – newly created data is added at an unyielding pace while new regulations, such as Sarbanes-Oxley, mandate changes in data retention procedure. Since companies can rarely justify starting over from scratch, most tend to add storage in incremental stages – introducing new elements from different vendors at different times – hence the complexity of storage.

All this complexity can lead to a variety of backup failures that catch companies unprepared to deal with the ramifications of data loss. One reason backups fail is bad media: if a company’s backup tapes have been sitting on a shelf for years, the tapes can become damaged and unreadable, a common occurrence when backup tapes are not stored properly. Another reason backups fail is that companies lose track of the software with which the backups were created; for a restore to be successful, most software packages require that the exact environment still be available. Finally, backups fail due to corruption in the backup process. Companies often change their data footprint but do not change their backup procedure to keep up – so they are not backing up what they think they are. Without regular testing, all of these are likely sources of failure.

What to Do When Your Backup Fails
No matter how much a company tries to speed operations and guard against problems with new products and technology, the threat of data loss remains, and backup and storage techniques do not always provide the necessary recovery. When an hour of downtime can result in millions of dollars lost, including data recovery in your overall disaster plan is critical, and it may be the only way to restore business continuity quickly and efficiently. When a data loss situation occurs, time is the most critical component. Decisions about the most prudent course of action must be made quickly, which is why administrators must understand when to repair, when to restore, and when to recover data.

When to Repair
This can be as simple as running file repair tools (such as fsck or CHKDSK – tools that attempt to repair broken links in the file system through very specific knowledge of how that file system is supposed to look) in read-only mode first, since running an actual repair on a system with many errors could overwrite data and make the problem worse. Depending on the results of the read-only diagnosis, the administrator can make an informed decision to repair or recover. If the diagnosis finds a limited number of errors, it is probably fine to go ahead and fix them, as the repair tool will yield good results.
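
On a Unix-style system, the read-only diagnosis might look like the sketch below: run fsck with -n (which answers “no” to every repair prompt) and inspect the summary exit status before deciding on a real repair. The device name is a placeholder and the volume must be unmounted; CHKDSK run without /F plays the same read-only role on Windows.

```python
import subprocess

DEVICE = "/dev/sdb1"   # hypothetical volume to diagnose; must be unmounted

# -n: open the file system read-only and answer "no" to every repair prompt.
result = subprocess.run(["fsck", "-n", DEVICE], capture_output=True, text=True)
print(result.stdout)

# fsck's exit status is a bit mask: 0 = clean, 1 = errors corrected,
# 4 = errors left uncorrected; 4 or higher suggests repair or recovery.
if result.returncode == 0:
    print("Clean: no repair needed.")
elif result.returncode < 4:
    print("Minor errors: a normal repair run will likely succeed.")
else:
    print("Serious errors: consider professional recovery before repairing.")
```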

Note: if your hard drive makes strange noises at any point, immediately skip to the recovery option.

When to Restore
The first question an admin should ask is how fresh the last backup is and whether a restore will get them to the point where they can effectively continue normal operations. There can be a significant difference between data from the last backup and data from the point of failure, so it is important to make that distinction right away; only a recovery can help if critical data has never been backed up. Another important question is how long it will take to complete the restore – if the necessary time is too long, they might need to look at other options. A final consideration is how much data they are trying to restore. Restoring several terabytes of data from tape backups, for example, will take a long time.

When to Recover
The decision to recover comes down to whether or not a company’s data loss situation is critical and how much downtime they can afford. If they don’t have enough time to schedule the restore process, it is probably best to move forward with recovery. Recovery is also the best method if backups turn out to be too old or there is some type of corruption. The bottom line is, if other options are attempted and those options fail, it is best to contact a recovery company immediately. Some administrators will try multiple restores or repairs before trying recovery and will actually cause more damage to the data.

Data Protection Schemes to Storage System

Storage system manufacturers are pursuing unique ways of processing large amounts of data while still providing redundancy in case of disaster. Some large SAN units incorporate intricate device block-level organization, essentially creating a low-level file system from the RAID perspective. Other SAN units keep an internal block-level transaction log, so that the SAN’s control processor tracks all of the block-level writes to the individual disks. Using this transaction log, the SAN unit can recover from unexpected power failures or shutdowns.
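
To illustrate the idea behind such a block-level transaction log (this is a greatly simplified sketch, not any vendor’s actual design), here is a minimal write-ahead journal in Python: each write is recorded durably before it touches the “disk”, so a replay after an unclean shutdown brings the blocks back to a consistent state.

```python
import json
import os

JOURNAL = "journal.log"    # intent log, written before the data
DISK = {}                  # in-memory stand-in for the block device

def write_block(block_no: int, data: str) -> None:
    # 1. Record the intent durably before touching the "disk".
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"block": block_no, "data": data}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the write. A crash between steps is repaired by replay().
    DISK[block_no] = data

def replay() -> None:
    # After an unclean shutdown, re-apply every journaled write in order.
    if not os.path.exists(JOURNAL):
        return
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            DISK[entry["block"]] = entry["data"]

replay()
write_block(7, "customer record")
print(DISK)
```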

Some computer scientists specializing in storage systems are proposing adding more intelligence to the RAID array controller card so that it is ‘file system aware.’ This technology would provide more recoverability if disaster struck, the goal being a storage array that is more self-healing.

Another idea along these lines is a heterogeneous storage pool where multiple computers can access information without being dependent on a specific system’s file system. In organizations with multiple hardware and system platforms, a transparent file system provides access to data regardless of which system wrote it.

Other computer scientists are approaching the redundancy of the storage array quite differently. The RAID concept is in use on a vast number of systems, yet computer scientists and engineers are looking for new ways to provide better data protection in case of failure. The goals that drive this type of RAID development are data protection and redundancy without sacrificing performance.

The University of California, Berkeley report on the amount of digital information produced in 2003 is staggering. Your or your client’s site may not hold terabytes or petabytes of information, yet during a data disaster, every file is critically important.

Avoiding Storage System Failures
There are many ways to reduce or eliminate the impact of storage system failures. You may not be able to prevent a disaster from happening, but you may be able to minimize the disruption of service to your clients.

There are many ways to add redundancy to primary storage systems. Some of the options can be quite costly, and only large business organizations can afford the investment. These options include duplicate storage systems or identical servers, known as ‘mirror sites’. Additionally, elaborate backup processes or file-system ‘snapshots’ that always provide a checkpoint to restore to offer another level of data protection.

Experience has shown there are usually multiple or rolling failures that happen when an organization has a data disaster. Therefore, to rely on just one restoration protocol is shortsighted. A successful storage organization will have multiple layers of restoration pathways.

Here are several risk mitigation policies that storage administrators can adopt that will help minimize data loss when a disaster happens:

Offline storage system — Avoid forcing an array or drive back on-line. There is usually a valid reason for a controller card to disable a drive or array; forcing an array back on-line may expose the volume to file system corruption.

Rebuilding a failed drive — When rebuilding a single failed drive, it is important to allow the controller card to finish the process. If a second drive fails or goes off-line during the rebuild, stop and get professional data recovery services involved. Replacing a second failed drive during a rebuild will change the data on the other drives.

Storage system architecture — Plan the storage system’s configuration carefully. We have seen many cases in which multiple configurations were used on a single storage array; for example, three RAID 5 arrays (each holding six drives) striped in a RAID 0 configuration and then spanned. Keep the storage configuration simple and document each aspect of it.

During an outage — If the problem escalates to OEM technical support, always ask “Is the data integrity at risk?” or “Will this damage my data in any way?” If the technician says that there may be a risk to the data, stop and get professional data recovery services involved.

Avoiding Storage System Failure

Reduce or Eliminate the Impact of Storage System Failures
Storage systems have become their own unique and complex computer field and can mean different things to different people. So what is the definition of these systems? Storage systems are the hardware that stores data.

For example, this may be a small business server supporting an office of ten users or less—the storage system would be the hard drives inside that server where user information is located. In large business environments, the storage system can be a large SAN cabinet full of hard drives whose space has been sliced and diced in different ways to provide redundancy and performance.

The Ever-Changing Storage System Technology
Today’s storage technology encompasses all sorts of storage media. These could include WORM systems, tape library systems and virtual tape library systems. Over the past few years, SAN and NAS systems have provided excellent reliability. What is the difference between the two?

  • SAN (Storage Area Network) units can be massive cabinets—some with 240 hard drives in them! These large 50+ terabyte storage systems do more than just power up hundreds of drives. They are incredibly powerful data warehouses with versatile software utilities behind them that manage multiple arrays and various storage architecture configurations and provide constant system monitoring.
  • NAS (Network Attached Storage) units are self-contained units that have their own operating system, file system, and manage their attached hard drives. These units come in all sorts of different sizes to fit most needs and operate as file servers.

For some time, large-scale storage has been out of reach for the small business. Serial ATA (SATA) hard disk drive-based SAN systems are becoming a cost-effective way of providing large amounts of storage space. These array units are also becoming mainstream for virtual tape backup systems—literally RAID arrays that are presented as tape machines, thereby removing the tape media element completely.

Other storage technologies such as SCSI, DAS (Direct Attached Storage), Near-Line Storage (data that is attached to removable media), and CAS (Content Attached Storage) are all methods for providing data availability. Storage architects know that just having a ‘backup’ is not enough. In today’s high-information environments, a normal nightly incremental or weekly full backup is obsolete hours or even minutes after creation. In large data warehouse environments, backing up data that constantly changes is not even an option; the only method for those massive systems is to have storage system mirrors—literally identical servers with the exact same storage space.

How does one decide which system is best? Careful analysis of the operating environment is required. Most would say that having no failures at all is the best environment—that is true for users and administrators alike! The harsh truth is that data disasters happen every day despite the implementation of risk mitigation policies and plans.

When reviewing your own or your client’s storage needs, consider these questions:

  • What is the recovery turn-time? What is your or your client’s maximum allowable time to be back to the data—in other words, how long can you or your client survive without it? This will help establish performance requirements for equipment.
  • Quality of data restored: Is the original data required, or will older, backed-up data suffice? This relates to the backup scheme in use. If the data on your or your client’s storage system changes rapidly, then the original data is what is most valuable.
  • How much data are you or your client archiving? Restoring large amounts of data takes time to move through a network. On DAS (Direct Attached Storage) configurations, restoration time will depend on the equipment and the I/O performance of the hardware.

The Ever-Growing Challenges of Data Storage

Electronic data storage needs continue to grow. As your client’s organization produces more information in electronic format, storage space is becoming increasingly important.

Managing data storage for performance, integrity, and scalability is the next summit in Information Technology management and planning.

It wasn’t long ago that a single volume in the terabyte range was rare even for extremely large organizations. With the advent of IDE RAID and SATA RAID capabilities, large storage systems are within reach of medium to small businesses.

Let’s put things into perspective – how much space is 1TB?

Number of bytes                            What that relates to
1 Byte                                     One character (letter or number)
1KB (Kilobyte), 1,000 bytes                3 or 4 typed manuscript-style pages
1MB (Megabyte), 1,000,000 bytes            Average size of a novel (300-400 pages); 1 diskette
1GB (Gigabyte), 1,000,000,000 bytes        Approximately 20 sets of encyclopedias
1TB (Terabyte), 1,000,000,000,000 bytes    A small library (approx. 5,000 books)

To get the best performance and reliability from any storage space, strategic storage planning is essential. This month’s technical article will review the importance of the file system and planning considerations.

The File System’s Role

The file system is a layer above the storage device(s) themselves. The file system manages the individual allocation units of the volume and provides hierarchical organization for the files. Managing the allocation units of files requires algorithms that know where to write file data, along with a method of verifying that the data was written correctly.

Hierarchical organization is the logical formation of directories and underlying structures. For instance, a storage volume with millions of files on it will have specific data describing the directory or folder structure those files belong to. This directory or folder structure has integrity checks and balances to ensure that the indices reliably point to the user data.

Today’s file systems track more than just the name of the file or the directory structure. Additional information, called metadata, is also stored. Metadata is data about data: essentially, the file system saves more details about your files and stores them along with the file’s attributes. Some file systems record only the minimum of metadata (file name, size, time and date, start address), while others record more (file name, size, multiple times and dates, and security details such as Read/Write/Execute/Delete privileges).
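
You can inspect a slice of this metadata from any program. Here is a small Python sketch using os.stat, which surfaces the size, timestamps, and permission bits the file system keeps alongside a file’s contents (the file name is a placeholder):

```python
import os
import stat
import time

path = "example.txt"                 # placeholder file for the demo
open(path, "a").close()              # ensure it exists

st = os.stat(path)
print("size (bytes):", st.st_size)
print("modified:    ", time.ctime(st.st_mtime))
print("accessed:    ", time.ctime(st.st_atime))
print("mode bits:   ", stat.filemode(st.st_mode))  # e.g. -rw-r--r--
```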

Some file systems are designed for specific hardware and storage media. For instance, the file systems used for CD-ROMs are quite different from those used for floppy diskettes. Forcing these file systems onto other media may be possible, but it is not practical. So while specific storage media, such as CD-ROM, DVD-ROM, magneto-optical disks, and tape, have unique file systems, hard disks and hard disk storage systems can work with many different file systems.

Understanding these extra features of file systems will help in choosing the best one for the needs of the volume.

File System Considerations

During server planning, more time and research is spent on hardware, data space requirements, and application specifications than on how the data will be stored. The file system can become a low priority during the planning stages of a file or data server because the file system is inherent to the operating system, and it is sometimes assumed that this default is the best fit. However, your storage requirements may call for a more robust method of organizing data on the hard disk(s). Investigate whether the operating system you are planning to use allows other file systems to be used.

If you have a choice of file systems, here are some requirements to consider:

• Volume size
• Estimated number of files on the volume
• Estimated size of files on the volume
• Shared volume requirements
• Backup requirements

Volume Size

Volume size is an important place to start planning. However, it is only the start, since strategic planning involves scalability—can the volume grow as the need arises without interruption of service to the users? The axiom that data expands to fill free space is all too true for data volumes. It is not uncommon to add a terabyte of storage and find it half full six months later.

Two terabytes (2TB) has become the initial hurdle for many file systems. This limit comes from the SCSI command set being limited to 32-bit logical block addressing: a single SCSI LUN using a 512-byte block size cannot address more than 2TB. File systems used on these systems have been ‘adjusted’ to handle extremely large volumes, but volumes nearing the 2TB limit may be stressing the limits of the file system.
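
The 2TB figure falls straight out of the addressing arithmetic: 2^32 addressable blocks times 512 bytes per block. A quick check in Python:

```python
# 32-bit logical block addressing with 512-byte blocks:
limit = 2**32 * 512                    # 2,199,023,255,552 bytes
print(limit / 2**40, "TiB")            # exactly 2.0 binary terabytes
print(round(limit / 10**12, 2), "TB")  # about 2.2 decimal terabytes
```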

Estimated Number of Files on the Volume

The next item to plan for is the number of files that could potentially be stored on the volume. Earlier we discussed metadata and how the file system uses it to describe the files it stores. This means a certain amount of volume space will be consumed by the file system just to manage the files that are there.

File systems that are not built for excessively large directories will slow down the applications that access them. This can adversely affect users who have thousands of files on a volume that holds millions of files.
Estimated Size of Files on the Volume

The next consideration is the sizes of the files that will be on the volume. Organizations that run large database servers usually need to pre-allocate very large files, in the gigabyte range. The file system and operating system need to be able to handle this level of input and output. For these types of enterprise systems, expectations for performance and integrity are high. Will the file system be able to handle those extremely large files?

Shared Volume Requirements

Many organizations today have mixed environments. Some may have three or four different computing platforms, from mainframe systems to 64-bit Sun machines, from Apple desktops to Intel-based machines, and some of these systems may share storage space. Will the volume support mixed data types? Additionally, will the operating system that manages the file system allow different types of data streams to be accessed simultaneously?

Backup Requirements

Large volumes present a challenge for backup procedures. Due to the amount of data, restorations can take days. Some file systems have ‘snapshot’ technology incorporated into the backup software. This technology saves critical file system metadata and, along with incremental file backups, is part of an overall data archiving scheme.

These considerations should be matched with hardware specifications to get the best performance, integrity, and growth capability.

What Are the Risks of Using Portable Storage?

As far as reliability goes, USB flash drives are very durable. They are “hot-swappable” (that is, removable without shutting down the computer) and “solid-state” (that is, they have no moving parts), and they’re great for transferring data between computers. A British television program (“The Gadget Show”) decided to put flash drives to the test: they ran over them with a car, blasted them out of a cannon, and baked them in a soufflé at 400° F! The result? The flash drives shot out of the cannon suffered because they were broken into little pieces; the rest worked just fine and retained their data.

For the majority of USB flash drive users, their drives will never go through that type of punishment. The biggest risk in using these devices seems to be simply losing them! They are so small and compact that it would be easy to misplace one. Most come with a neck strap or keychain clip that lets you keep them with you constantly.

Most industry sites define a flash drive as a compact storage and transport device, whereas most dictionaries define flash memory as a computer chip with a read-only memory that can be electronically erased and reprogrammed without being removed from the circuit board. By that definition, using a flash drive as an active storage area could pose a risk. For instance, one user treated a flash drive like a second document folder, creating and editing documents on the device with a word processor that re-saved active documents every five minutes. This constant writing wore out the flash memory. Just like EEPROM chips, flash devices have a lifespan that depends on the number of write cycles (check with the manufacturer to find out the expected rating for your particular model); there is, however, no limit to the number of times data can be read.

Security is the final risk. A common use for a flash drive is transferring files between work and home. If the flash drive were lost or stolen in transit, proprietary company information could be compromised. In fact, most small to large companies have strict policies about what types of information can leave the premises. This highlights the importance of data encryption.

There are a number of software encryption products that will maintain data security even if the flash device falls into the wrong hands. In fact, most USB flash drives come with some sort of free encryption software; however, the free software may not meet your data protection requirements. If you use your flash drive for your company’s information or for your own personal information, be sure to purchase quality encryption software. The manufacturer of the flash device should have software recommendations on its Web site.
