10 Storage Technology Forecasts From Toshiba for 2012

Toshiba has developed at the forefront of digital data storage and protection since the start of the microcomputer revolution in the eighties. It practically invented NAND flash, which grows more pervasive in the IT world every day thanks to its ubiquitous use in mobile phones, tablet PCs, portable storage drives, servers, storage arrays, hubs and a variety of other connected products.


Toshiba is also a major maker of disk drives and data center systems. The Japan-based company offers a firsthand perspective on the IT world from the Pacific Rim, a viewpoint that is growing in importance.

HDDs Still the Mainstay Media
Spinning-disk hard drives will remain the mainstay of the storage market for the foreseeable future, in line with the growth of data and the petabytes of capacity only HDDs can offer.

Total Data Storage Rising to Zettabyte Scales in 2012
The amount of content produced and replicated in 2012 will exceed 2 zettabytes (2 trillion gigabytes).

Greater Understanding of Disaster Recovery Needs
Recent disasters in Southeast Asia (namely, flooding in Thailand, Cambodia and Vietnam) will drive increased awareness of the need for backup and disaster recovery among companies and consumers.

Even Thinner Hard Disk Drives on the Way
Laptop makers will incorporate more 7mm z-height HDDs to support thin-and-light notebook PCs.

More Storage Tiering Coming to Data Centers
Enterprise data centers will expand their use of storage tiers to support large data growth, control operating costs, and build more flexible and resilient IT infrastructures.

Solid-State Drives Getting More Play in 2012
Consumer SSD adoption will continue to grow, earning a place in the highest-performance segments of consumer PCs.

More Devices, More Storage Needs
External storage demand will continue to grow, supplying backup solutions for many devices and serving as "off-board" storage for the range of limited-storage devices such as media tablets.

Self-Encryption Becomes More Common
Safeguarding private data remains a major concern, and the use of self-encrypting storage will expand to enable a more seamless approach to protecting privacy throughout the data's lifecycle.

Cloud IT Focuses on Scalability, Flexibility
The still-emerging uses of cloud computing in IT will continue to focus on flexibility and scalability, with virtualized shared storage at the core of cloud architecture and value delivery.

SMBs Look Closer at Shared Storage Environments
Shared storage will play a growing role in small and midsize business environments; SMBs will implement shared storage to help access data faster and keep their companies running more efficiently.


Dealing with the Complexity of Storage Systems

In fact, even with all the advancements in storage technology, only about 20% of backup jobs are successful, according to Enterprise Strategy Group.

Each year hundreds of new data storage products and technologies meant to make the job faster and easier are introduced, but with so many categories and options to consider, the complexity of storage instead causes confusion, which ultimately leads to lost time and the loss of the very data those new enhancements are meant to protect.

Hence the question for most IT professionals who have invested hundreds of thousands of dollars in state-of-the-art storage technology remains, “How can data loss still happen and what am I supposed to do about it?”

Why Backups Still Fail
In a perfect world, a company would build its storage infrastructure from scratch using any of the new storage solutions and standardize on certain vendors or options. If everything remained unchanged, some incredibly powerful, rock-solid results could be achieved.

However, in the real world storage is messy. Nothing remains constant: newly created data is added at an unyielding pace, while new regulations such as Sarbanes-Oxley mandate changes in data retention procedures. Since companies can rarely justify starting over from scratch, most tend to add storage in incremental stages, introducing new elements from different vendors at different times; hence the complexity of storage.

All this complexity can lead to a variety of backup failures that can catch companies unprepared to deal with the ramifications of data loss. One reason backups fail is bad media: if a company leaves its backup tapes sitting on a shelf for years, the tapes can become damaged and unreadable, a common occurrence when tapes are not stored properly. Another reason backups fail is that companies lose track of the software with which those backups were created; for a restore to be successful, most software packages require that the exact environment still be available. Finally, backups fail due to corruption in the backup process: companies often change their data footprint but not their backup procedure, so they are not backing up what they think they are. Without regular testing, all of these failure modes go undetected.
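One practical guard is exactly that regular testing. As a minimal sketch, assume the backup job writes a tab-separated manifest of SHA-256 checksums alongside the archive (the manifest path and format here are illustrative assumptions, not any vendor's layout); a periodic spot-check can then re-hash a sample of restored files and flag silent corruption before a real emergency:

```python
# Sketch of a periodic backup spot-check; the manifest layout
# ("relative-path<TAB>sha256-hex") and paths are assumptions.
import hashlib
import random
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def spot_check(backup_root: Path, manifest: Path, sample: int = 20) -> bool:
    """Re-hash a random sample of restored files against the manifest."""
    entries = [line.split("\t") for line in manifest.read_text().splitlines() if line]
    failures = 0
    for rel_path, expected in random.sample(entries, min(sample, len(entries))):
        target = backup_root / rel_path
        if not target.exists() or sha256_of(target) != expected:
            print(f"MISMATCH: {rel_path}")
            failures += 1
    return failures == 0

if __name__ == "__main__":
    ok = spot_check(Path("/mnt/restore-test"), Path("/mnt/restore-test/manifest.tsv"))
    print("backup spot-check passed" if ok else "backup spot-check FAILED")
```

A test restore to scratch space followed by a check like this, run on a schedule, is what turns "we have backups" into "we know our backups work."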

What to Do When Your Backup Fails
No matter how much a company tries to speed operations and guard against problems with new products and technology, the threat of data loss remains, and backup and storage techniques do not always provide the necessary recovery. When an hour of downtime can result in millions of dollars lost, including data recovery in the overall disaster plan is critical, and may be the only way to restore business continuity quickly and efficiently. When a data loss situation occurs, time is the most critical component. Decisions about the most prudent course of action must be made quickly, which is why administrators must understand when to repair, when to restore and when to recover data.

When to Repair
This can be as simple as running file repair tools such as fsck or CHKDSK (tools that attempt to repair broken links in the file system through very specific knowledge of how that file system is supposed to look) in read-only mode first, since running an actual repair on a system with many errors could overwrite data and make the problem worse. Depending on the results of the read-only diagnosis, the administrator can make an informed decision to repair or to recover. If the scan finds only a limited number of errors, it is probably fine to go ahead and fix them; the repair tool will yield good results.
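As a minimal sketch of that read-only-first workflow on a Linux host, the following assumes fsck is on the PATH and that the target device is unmounted; the /dev/sdb1 path is a placeholder. The -n flag tells fsck to answer "no" to every repair prompt, so nothing on disk is modified:

```python
# Read-only fsck diagnosis; /dev/sdb1 is an illustrative placeholder
# and must be an unmounted device on a Linux system.
import subprocess

def dry_run_fsck(device: str) -> int:
    """Run fsck with -n (answer 'no' to everything) so nothing is changed."""
    result = subprocess.run(["fsck", "-n", device], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    code = dry_run_fsck("/dev/sdb1")
    # fsck's exit status is a bitmask: 0 = clean, bit 4 = uncorrected errors.
    if code == 0:
        print("File system is clean; no repair needed.")
    elif code & 4:
        print("Errors found; weigh a real repair against recovery first.")
    else:
        print(f"fsck exited with status {code}; read its output before acting.")
```

Only after reviewing this dry-run output should an administrator rerun fsck in repair mode, and only if the error count is small.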

Note: if your hard drive makes strange noises at any point, immediately skip to the recovery option.

When to Restore
The first question an admin should ask is how fresh the last backup is, and whether a restore will get them to the point where they can effectively continue normal operations. There can be a significant difference between data from the last backup and data from the point of failure, so it is important to make that distinction right away; if critical data has never been backed up, only a recovery can help. Another important question is how long the restore will take to complete; if the necessary time is too long, other options may be needed. A final consideration is how much data is being restored: restoring several terabytes from tape, for example, will take a long time, as the rough estimate below illustrates.
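A back-of-envelope calculation makes the time question concrete. This sketch assumes a sustained tape read rate of 140 MB/s, roughly the native throughput of an LTO-5 drive of that era; both figures are illustrative, and real restores are usually slower once mounts, seeks and verification are counted:

```python
# Rough restore-time estimate; the 140 MB/s rate is an assumed
# sustained tape throughput, not a measured figure.
def restore_hours(data_tb: float, throughput_mb_s: float = 140.0) -> float:
    """Convert terabytes at a sustained MB/s rate into hours."""
    seconds = (data_tb * 1_000_000) / throughput_mb_s  # 1 TB = 1,000,000 MB
    return seconds / 3600

if __name__ == "__main__":
    for size_tb in (1, 5, 10):
        print(f"{size_tb} TB  ->  ~{restore_hours(size_tb):.1f} hours")
```

At that rate a 5 TB restore already runs close to ten hours, which may not fit the downtime window at all.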

When to Recover
The decision to recover comes down to whether or not a company's data loss situation is critical and how much downtime it can afford. If there is not enough time to schedule the restore process, it is probably best to move forward with recovery. Recovery is also the best method if backups turn out to be too old or there is some type of corruption. The bottom line: if other options are attempted and fail, contact a recovery company immediately. Some administrators will try multiple restores or repairs before turning to recovery, and in doing so cause further damage to the data.
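The decision flow above can be condensed into a small triage helper. This is only a sketch that encodes the article's reasoning; the field names and their ordering are assumptions drawn from the prose, not a formal procedure:

```python
# Triage helper encoding the repair / restore / recover flow described
# in this article; all fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LossSituation:
    drive_makes_noise: bool     # physical symptoms override everything else
    few_fs_errors: bool         # read-only check found only limited errors
    backup_is_fresh: bool       # last backup actually covers the lost data
    backup_readable: bool       # media intact and original software available
    restore_fits_window: bool   # restore time fits the downtime budget

def triage(s: LossSituation) -> str:
    if s.drive_makes_noise:
        return "recover"        # skip straight to a recovery company
    if s.few_fs_errors:
        return "repair"         # limited logical damage: repair tools do well
    if s.backup_is_fresh and s.backup_readable and s.restore_fits_window:
        return "restore"
    return "recover"            # stale, corrupt, or too-slow backups

if __name__ == "__main__":
    case = LossSituation(False, False, True, True, False)
    print(triage(case))  # -> "recover": the restore would exceed the window
```

The point of writing the logic down, even informally, is that it forces the repair-versus-restore-versus-recover decision to be made once, calmly, instead of under the pressure of an outage.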
