
Data Backup Glossary (Letter N)

August 29th, 2011

Near-line storage
Near-line storage is used by corporations, including data warehouses, as an inexpensive, scalable way to store large volumes of data. Near-line storage devices include DAT and DLT tapes (sequential access); optical storage such as CD-ROM, DVD, and Blu-ray; magneto-optical drives, which use magnetic heads with an optical reader; and slower P-ATA and SATA hard disk drives. Retrieval of data is slower than from SCSI hard disks, which are usually connected directly to servers or deployed in a SAN environment.

Near-line implies that whatever medium the information is stored on can be accessed electronically, via a tape library or some other mechanism, as opposed to off-line, which signifies that human intervention is required, such as retrieving and mounting a tape. Near-line storage is slower, but the types of data stored there (historical archives, backup data, video, and others) do not require the instant access and high throughput that SAN and SCSI can provide, and near-line storage is less expensive per byte.

Network-attached storage (NAS)
A network-attached storage (NAS) device is a server that is dedicated to nothing more than file sharing. NAS does not provide any of the activities that a server in a server-centric system typically provides, such as e-mail, authentication, or file management. NAS allows more hard disk storage space to be added to a network that already utilizes servers without shutting them down for maintenance and upgrades. With a NAS device, storage is not an integral part of the server. Instead, in this storage-centric design, the server still handles all of the processing of data but a NAS device delivers the data to the user. A NAS device does not need to be located within the server but can exist anywhere in a LAN and can be made up of multiple networked NAS devices.

Nyquist’s Law
Also called Nyquist’s Theorem. Before sound as acoustic energy can be manipulated on a computer, it must first be converted to electrical energy (using a transducer such as a microphone) and then transformed through an analog-to-digital converter into a digital representation. This is accomplished by sampling the continuous input waveform a certain number of times per second. The more often a wave is sampled, the more accurate the digital representation. Nyquist’s Law, named in 1933 after scientist Harry Nyquist, states that a sound must be sampled at a rate of at least twice its highest analog frequency in order to extract all of the information from the bandwidth and accurately represent the original acoustic energy. Sampling at slightly more than twice the frequency compensates for imprecisions in the filters and other components used for the conversion.
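As a rough illustration (the helper names are my own, not from any audio library), the sampling rule and the fold-back ("aliasing") arithmetic for tones above the Nyquist limit can be sketched in Python:

```python
def min_sample_rate(highest_freq_hz: float) -> float:
    """Nyquist's Law: sample at least twice the highest frequency."""
    return 2 * highest_freq_hz

def aliased_freq(signal_hz: float, sample_rate_hz: float) -> float:
    """Apparent frequency when a tone is sampled at sample_rate_hz.
    Tones above the Nyquist limit (sample_rate_hz / 2) fold back."""
    f = signal_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# A 20 kHz tone needs at least a 40 kHz sample rate; CD audio uses
# 44.1 kHz, slightly more than twice, to leave headroom for filters.
print(min_sample_rate(20_000))       # 40000
# An undersampled 30 kHz tone at 44.1 kHz folds down to 14.1 kHz:
print(aliased_freq(30_000, 44_100))  # 14100
```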

Categories: Data Backup

Data Backup Glossary (Letter L)

August 12th, 2011

Light archive
In reference to data storage, an archive that can be accessed by many authorized users. Access to the data is open to all the members of the “community” that have a need for the data.

Lightbox
In digital asset management (DAM) systems, an area within a web site (or web service) or other internal DAM where users can create and store a list of assets they want to reference or use at a later time. Lightboxes are common on stock photo web sites, where registered users can store images until they are ready to download them.

Linear tape open
Linear tape open (LTO) is a technology that was developed jointly by HP, IBM, and Certance (Seagate) to provide a clear and viable choice in an increasingly complex array of tape storage options. LTO technology is an “open format” technology, which means that users have multiple sources of product and media. The open nature of the technology also provides a means of enabling compatibility between different vendors’ offerings.

Local area network
A local area network (LAN) is a communications infrastructure—typically Ethernet—designed to use dedicated wiring over a limited distance (typically a diameter of less than five kilometers) to connect a large number of intercommunicating nodes.

Lost cluster
Also called a lost allocation unit, or a lost file fragment. A data fragment that does not belong to any file, according to the system’s file management system, and, therefore, is not associated with a file name in the file allocation table. Lost clusters can result from files not being closed properly, from shutting down a computer without first closing an application, or from ejecting a storage medium, such as a floppy disk, from the disk drive while the drive is reading or writing.

Low-level format
(n.) A formatting method that creates the tracks and sectors on a hard disk. Low-level formatting creates the physical format that dictates where data is stored on the disk. Modern hard drives are low-level formatted at the factory for the life of the drive. A PC cannot perform an LLF on a modern IDE/ATA or SCSI hard disk, and doing so would destroy the hard disk. A low-level format is also called a physical format.

(v.) The process of performing low-level formatting.

Categories: Data Backup

Data Backup Glossary (Letter I)

August 6th, 2011

Internet Fibre Channel protocol
The Internet Fibre Channel Protocol (iFCP) allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company’s TCP/IP infrastructure.


Image
  • In computer science an image is an exact replica of the contents of a storage device (a hard disk drive or CD-ROM, for example) stored on a second storage device.
  • Often used in place of the term digital image. (In optics, by contrast, an image is a duplicate or other reproduction of an object formed by a lens or mirror.)
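An image in the first sense is simply a byte-for-byte copy of the source device. A minimal sketch in Python (the function name is illustrative; real imaging tools read from raw device paths and handle bad sectors):

```python
def image_device(device_path: str, image_path: str, chunk: int = 1 << 20) -> None:
    """Copy a block device (or file) byte-for-byte into an image file,
    reading in 1 MB chunks so large devices do not exhaust memory."""
    with open(device_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:  # end of device reached
                break
            dst.write(block)
```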

Incremental backup
Any backup in which only the data objects that have been modified since the time of some previous backup are copied. Incremental backup is a collective term for cumulative incremental backups and differential incremental backups. Contrast with an archival, or full, backup, in which all files are backed up regardless of whether they have been modified since the last backup.
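The selection rule can be sketched as follows, assuming the file modification time is a good-enough change indicator (a simplification; real backup tools also track deletions and keep catalogs of previous runs):

```python
import os
import shutil

def incremental_backup(source_dir: str, backup_dir: str, last_backup_time: float):
    """Copy only files modified since last_backup_time (a Unix timestamp).
    Contrast with a full backup, which copies every file regardless."""
    copied = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:  # changed since last run
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(backup_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(rel)
    return copied
```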

Information classification and management
Information classification and management (ICM) is a class of application-independent software that uses advanced indexing, classification, policy, and data access capabilities to automate data management activities above the storage layer.

Infrastructure
The combined set of hardware, software, networks, facilities, and other components (including all of the information technology) necessary to develop, test, deliver, monitor, control, or support IT services. Associated people, processes, and documentation are not part of an infrastructure.

Intelligent information management
Intelligent information management (IIM) is a set of processes and underlying technology solutions that enable organizations to understand, organize, and manage all sorts of data types (for example, general files, databases, and e-mails).

Interrecord gap
The space between two consecutive physical blocks on a data recording medium, such as a hard drive or a magnetic tape. Interrecord gaps are used as markers for the end of data and also as safety margins for data overwrites. An interrecord gap is also referred to as an interblock gap.

Internet small computer systems interface
Internet small computer systems interface (iSCSI) is a transport protocol that enables the SCSI protocol to be carried over a TCP-based IP network. iSCSI was standardized by the Internet Engineering Task Force and described in RFC 3720.

IP storage
A technology being standardized under the IP Storage (IPS) IETF Working Group. Same as SoIP.

Categories: Data Backup

Data Backup Glossary (Letter H)

August 3rd, 2011

Heterogeneous environment
An IT environment that includes computers, operating systems, platforms, databases, applications, and other components from different vendors.

Hierarchical storage management
Hierarchical storage management (HSM) is a data storage system that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drives, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise’s data on slower devices, and then copy data to faster disk drives when needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage devices. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the hard disks.
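A toy version of such a tiering policy might look like the following (the tier paths and idle threshold are illustrative; a production HSM system would also leave a stub behind so the file can be recalled transparently when next accessed):

```python
import os
import shutil
import time

def migrate_cold_files(fast_tier: str, slow_tier: str, max_idle_days: int = 90):
    """Move files not accessed within max_idle_days from the fast
    (expensive) tier to the slow (cheap) tier, as an HSM policy might."""
    cutoff = time.time() - max_idle_days * 86_400
    moved = []
    for name in os.listdir(fast_tier):
        path = os.path.join(fast_tier, name)
        # st_atime is the last access time; "cold" files fall below the cutoff
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            shutil.move(path, os.path.join(slow_tier, name))
            moved.append(name)
    return moved
```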

High availability
The availability of resources in a computer system in the wake of component failures in the system. High availability can be achieved in a variety of ways, from solutions that use custom and redundant hardware to ensure availability, to software-based solutions that use off-the-shelf hardware components. The former class of solutions provides a higher degree of availability but is significantly more expensive than the latter. This high cost has led to the popularity of the latter class, with almost all vendors of computer systems offering various high-availability products. Typically, these products survive single points of failure in the system.

High-level format
(n.) A formatting method that initializes portions of the hard disk and creates the file system structures on the disk, such as the master boot record and the file allocation tables. High-level formatting is typically done to erase the hard disk and reinstall the operating system onto the disk drive.

(v.) The process of performing high-level formatting.

Holographic data storage
A mass storage technology that uses three-dimensional holographic images to enable more information to be stored in a much smaller space. In holographic storage, the hologram is recorded in the light-sensitive storage medium at the point where the reference beam and the data-carrying signal beam intersect.

Hosted service
A service in which day-to-day management responsibilities are transferred to the service provider. The person or organization that owns or has direct oversight of the organization or system being managed is referred to as the offerer, client, or customer. The person or organization that accepts and provides the hosted service is regarded as the service provider. Typically, the offerer remains accountable for the functionality and performance of a hosted service and does not relinquish the overall management responsibility of the organization or system.

Hot backup
A technique used in data storage and backup that enables a system to perform a routine backup of data, even if the data is being accessed by a user. Hot backups are a popular backup solution for multi-user systems as no downtime to perform the backup is required. If a user alters the data during the backup process (for example, makes changes at the exact moment the backup system is processing that data) the final version of the backup may not reflect those changes. Hot backup may also be called a dynamic backup or active backup.
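One crude approximation of the idea, checking that the file did not change mid-copy and retrying otherwise (a sketch only; real hot backups rely on filesystem snapshots or database-level mechanisms rather than retries):

```python
import os
import shutil

def hot_backup(path: str, dest: str, max_retries: int = 3) -> bool:
    """Copy a live file without stopping writers. If the file changed
    while being copied, the copy may be inconsistent, so try again."""
    for _ in range(max_retries):
        before = os.path.getmtime(path)
        shutil.copy2(path, dest)
        after = os.path.getmtime(path)
        if before == after:  # no writes landed mid-copy
            return True
    return False  # file still changing; caller must decide what to do
```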

Hot potato routing
A form of routing in which the nodes of a network have no buffer to store packets in before they are moved on to their final predetermined destination. In normal routing situations, when multiple packets contend for a single outgoing channel, packets that are not buffered are dropped to avoid congestion. But in hot potato routing, each packet that is routed is constantly transferred until it reaches its final destination because the individual communication links cannot support more than one packet at a time. The packet is bounced around like a “hot potato,” sometimes moving further away from its destination because it has to keep moving through the network. This technique allows multiple packets to reach their destinations without being dropped.
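A toy simulation of this deflection behavior on a ring network (the topology and the static busy-link model are simplifications I chose for illustration):

```python
def hot_potato_route(n_nodes, start, dest, busy_links=frozenset()):
    """Route one packet around a ring of n_nodes with no buffering.
    Each tick the packet must move somewhere: it prefers the direction
    with the shorter remaining distance, but deflects the other way if
    the preferred link is busy -- sometimes moving farther from dest."""
    node, hops = start, [start]
    while node != dest:
        cw = (node + 1) % n_nodes   # clockwise neighbor
        ccw = (node - 1) % n_nodes  # counter-clockwise neighbor
        if (dest - node) % n_nodes <= (node - dest) % n_nodes:
            preferred, other = cw, ccw
        else:
            preferred, other = ccw, cw
        # no buffer: if the preferred link is taken, bounce the other way
        node = other if (node, preferred) in busy_links else preferred
        hops.append(node)
    return hops

print(hot_potato_route(6, 0, 3))            # direct path: [0, 1, 2, 3]
print(hot_potato_route(6, 0, 3, {(0, 1)}))  # deflected:   [0, 5, 4, 3]
```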

Hot standby
A method of redundancy in which the primary and secondary (backup) systems run simultaneously. The data is mirrored to the secondary server in real time so that both systems contain identical information.

Categories: Data Backup

Data Backup Glossary (Letter G)

July 31st, 2011

Ghost imaging
Using ghosting software, a method of converting the contents of a hard drive—including its configuration settings and applications—into an image, and then storing the image on a server or burning it onto a CD. When the contents of the hard drive are needed again, the ghosting software converts the image back to its original form. Companies often use ghost imaging when they want to create identical configurations and install the same software on numerous machines.

Gigabyte
2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. Gigabyte is often abbreviated as G or GB.
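The arithmetic is easy to check directly:

```python
GIGABYTE = 2 ** 30  # 1,073,741,824 bytes
MEGABYTE = 2 ** 20
print(GIGABYTE)              # 1073741824
print(GIGABYTE // MEGABYTE)  # 1024 megabytes per gigabyte
```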

Giant magnetoresistive
Giant magnetoresistive (GMR) is a hard disk drive storage technology. The technology is named for the giant magnetoresistive effect, first discovered in the late 1980s. While working with large magnetic fields and thin layers of magnetic materials, researchers noticed very large resistance changes when these materials were subjected to magnetic fields. Disk drives that are based on GMR head technology use these properties to help control a sensor that responds to very small rotations on the disk. The magnetic rotation yields a very large change in sensor resistance, which in turn provides a signal that can be picked up by the electric circuits in the drive.

Categories: Data Backup