Reinstalling the system mistakenly formatted the hard disk and merged it into one partition

Case: After reinstalling the system, I found that the previous partitions and data were gone and the disk had become one large partition. If such a failure occurs, customers should not rush to attempt recovery or repair on their own; contact the company promptly. Solution: After testing by our engineers, this disk was found to have been mistakenly formatted during the system reinstall….

Read More

Glossary of Samsung Hard Disk Drive (Letter G, H, I)

Samsung Hard Disk Drive Glossary Gigabyte (GB)
1 Gigabyte = 1,073,741,824 bytes (binary) or 1,000,000,000 bytes (decimal); a unit of measure for data (see the arithmetic sketch at the end of this glossary)

Hard Error
a data error that cannot be overcome by rereading the disk; it occurs repeatedly at the same location on the media, indicating a defect in the surface

Hardware
computer equipment (as opposed to the computer programs and software)

HC
High-speed CMOS

HDA
Head Disk Assembly

HDD
Hard Disk Drive, magnetic storage device

Head crash
Occurs when a head and disk accidentally touch, thereby damaging the media (and subsequently the data)

Head Landing Zone
Area on the media dedicated to head takeoff and landing; no data is recorded in this area

Head positioner
(see Actuator)

Host
the computer system that the drive is integrated into

Hot swap
replacement of hard drives while the computer remains in operation, with zero downtime

Host adapter
a plug-in board or circuitry on the motherboard that acts as the interface between a computer system bus and the disk drive

IC
Integrated Circuits. ICs are arrays of electronic components such as transistors, diodes and resistors built on a single piece of semiconductor material. Huge numbers of components (millions) can be placed on a single chip, with the external connections made via the pins on the IC package

IDE
Integrated Drive Electronics, a hardware interface that allows peripheral devices to be attached to a PC

IDE Controller
The controller translates commands from the computer into something that the hard drive understands

Interface
transmitters/receivers that establish links between various parts of a computer and facilitate the exchange of data between these components (e.g. IDE, EIDE, SCSI and, for notebooks, PCMCIA); the way the drive and host communicate with each other, as defined by industry standards organisations

Interleave factor
the number of physical sectors that pass beneath the heads before the next logically numbered sector arrives (a means of optimising drive performance)

Internal Transfer rate
the speed at which data is transferred to and from the media, reflected in millions of bits per second (Mbits/second)

Inside Diameter
the recording track with the smallest radius on a platter

I/O
Input/Output

ISA
Industry Standard Architecture

ISO
International Organisation for Standardisation; sets standards for the computer industry

ISO-9660
a CD-ROM file system format standardised by the ISO, first published in 1988
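
As a footnote to the Gigabyte entry above: the binary/decimal difference is why a drive sold in decimal gigabytes reports a smaller figure when the operating system counts in binary units. Below is a minimal shell sketch of the arithmetic (the 500 GB drive is an illustrative figure, not one taken from this glossary):

echo "1 GiB = $((2**30)) bytes"     # 1073741824 (binary)
echo "1 GB  = $((10**9)) bytes"     # 1000000000 (decimal)
bytes=$((500 * 10**9))              # a drive marketed as 500 GB (decimal)
echo "$((bytes / 2**30)) GiB"       # about 465 GiB as reported in binary units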

Read More

How does HDD store data?

Hard disk drives store data on one or more metal oxide platters. These platters, which hold magnetic charges, spin at 3,600-10,000 revolutions per minute. A read-write head attached to an actuator arm floats on a cushion of air 1-2 micro-inches (millionths of an inch) above the surface of the platters. Data flows to and from these heads via electrical connections. Any force that disturbs this process may cause data loss.

Ten years ago hard drives stored 40 megabytes (MB) of data. Today's hard drives store up to 2,000 gigabytes (GB) on a smaller surface. Increasing storage capacities amplify the impact of data loss. As more and more data is stored in smaller and denser areas, mechanical precision becomes crucial.

As part of this advancing technology, the drive tolerance (the distance between the read/write head and the platter where data is stored) is steadily decreasing. A slight nudge, an unstable power surge or a dust particle introduced into the drive may cause the head to touch the platter, resulting in a head crash, a burnt PCB, bad sectors and so on. In some situations, the data residing in the area touched by the head may be permanently destroyed.

The current tolerance of drives is 1-2 micro-inches (millionths of an inch). By comparison, a speck of dust is 4-8 micro-inches and a human hair about 10 micro-inches. Contaminants of these sizes can cause serious data damage.

Read More

10 Trends in Cloud Data for 2012

When 2010 came to a close, the introduction of new private and hybrid cloud systems was on everybody's trending-up list. Now, as 2011 plays out, we can see that those forecasts were generally correct: it was indeed a year of cloud adoption. Thousands of new clouds were architected, built and put to use during the last 12 months, and they came online in markets of all sizes. Where is the trend headed for the next 12 months? Most of the IT business prognosticators agree on at least one thing: the curve for cloud-based IT buying will continue up and to the right. There are too many cost, deployment and monitoring benefits involved for companies to ignore it. With this as a backdrop, eWEEK presents here a few forecasts for 2012 in the cloud infrastructure space. Our source is TwinStrata CEO and co-founder Nicos Vekiarides. TwinStrata offers the CloudArray storage-area network (SAN), which comes either as a cloud service or on a commodity server and can be plugged into a data center. CloudArray finds data stores wherever they are and combines them.

Existing Storage Remains in Use
For many companies, the idea of moving all their data to the cloud isn't feasible. However, continuously growing data storage is fueling a need for more capacity. What better way to address this need than cloud storage? The benefits are access to a secure, unlimited pool of storage capacity that never requires upgrade or replacement and reduces capital expense.

Private Clouds Growing in Large Businesses
Businesses looking to leverage the economies, efficiencies and scale that cloud providers have achieved are implementing cloud models in-house, for example OpenStack for compute and storage environments. These private clouds offer scale, agility and price/performance typically unmatched by traditional infrastructure solutions and, at the same time, can reside behind the company's firewall.

Disaster Recovery to the Cloud Becomes a Viable Option
Traditionally, firms that need disaster recovery and business continuity have had to rely on dedicated, duplicated infrastructure at an off-site location to be able to recover from a physical disaster. That means paying capital expenses for often-idle hardware before a disaster ever strikes. A DR cloud service means not having to purchase this infrastructure except when it is needed. The trade-off? While zero-downtime disaster recovery will be unlikely, look for service-level agreements (SLAs) that offer recovery-time objectives (RTOs) within hours.

DR of Data in the Cloud Becomes Essential
What happens to all your data that resides in the cloud, held within a software-as-a-service (SaaS) application, in the event of a disaster? Most providers do have a disaster recovery strategy, but how do you create an additional level of protection that is under your control? Look for a new breed of solutions that back up SaaS data either locally or to another provider.

Simpler On-Boarding of Applications to the Cloud
Certain business applications can be moved entirely to the cloud, saving the administration and upkeep of their hardware/software platforms on-site. Companies are looking for tools to make this migration viable, especially the IT-strapped organizations that could benefit the most. Look for new, robust toolsets that can migrate applications to a range of cloud providers.

Nonrelational Databases for Big Data Workloads
NoSQL databases, for example Apache CouchDB, enable great scalability to meet the demands of terabytes and petabytes of data for millions of users. Big data workloads will push many companies to consider these alternatives to traditional databases, and cloud deployment models will simplify the rollout. Look for vendors providing supported NoSQL solutions.

Solid-State Disk Storage Tiers in the Cloud
Moving higher-performance applications to the cloud does not always guarantee that they will get the level of performance they require from their data storage. By offering high-performance tiers of storage that are SSD-based (mainly NAND flash), cloud providers will be able to address the requirements for predictable and faster application response times.

Data Reduction Gets Better
With data storage still commanding a per-GB operating expense in the cloud, deduplication and compression technologies have become rather ubiquitous in helping to minimize costs. Although some may argue that the capacity optimisation game has played out, there is still the challenge of capacity optimisation on a more global scale to reduce aggregate capacity usage across multiple tenants. Furthermore, there remains a challenge for rich media content, which does not fare particularly well with present-day technologies. Look for the introduction of new data reduction technology that addresses both needs.

More Use of the Cloud for Analytics
Analytics requires a scalable compute and storage environment that can be very costly to build from dedicated hardware (for example, Oracle's $1 million Exalogic database machine). Analytics software can also be a very costly part of the proposition. Much like hardware sitting idle for disaster recovery purposes, analytics for many companies may be a periodic exercise that runs only in short bursts and may not be suited or viable for dedicated environments. Analytics environments in the cloud turn the periodic expense into a "pay-per-use" bill, meeting business goals at a cheaper price point.

'Cloud-Envy' Becomes More Commonplace
Although many companies will adopt clouds in 2012, others may wait and ponder well past 2012. In most cases, the realization of the economics and efficiencies of the cloud is apparent, although the strategy may differ. In response, some of the laggards may find ideas in, and apply, proven cloud methods that improve IT efficiency on-premise or, regrettably, some may be taken in by cloud-washing, buying traditional IT infrastructure with a cloud label in a feeble attempt to satisfy their 'cloud-envy.'

Read More

2060-771940-002 WD PCB Circuit Board

HDD Printed circuit board (PCB) with board number 2060-771940-002 is usually used on these Western Digital hard disk drives: WD10SPCX-22HWST0, DCM HBVJBHC, Western Digital 1TB SATA 2.5 Hard Drive; WD10S12X-55JTET0, DCM HVVJBBO, Western Digital 1TB SATA 2.5 Hard Drive; WD7500L12X-55JTET0, DCM HVVJBHO, Western Digital 750GB SATA 2.5 Hard Drive; WD10S12X-55JTET0, DCM HHVJBHO, Western Digital 1TB…

Read More

Linux Help Commands, Job Management, Process management

1. Linux Help Commands

apropos        apropos keyword – Show all commands with the keyword in their description. The same as the “man -k” command.
help        Bash shell help for the bash builtin command list. The help command gets help for a particular command.
man        Get help from the manual for a command.
man        man -k keyword – Show all commands with the keyword in their description
“man 2 kill” – Display the kill entry from section 2 (system calls) of the manual
manpath        Determine the user's search path for man pages.
info        Documentation on Linux commands and programs, similar to the man pages but with navigation organized differently.
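
A few usage examples of the help commands above (output varies by distribution; the search keyword is illustrative only):

apropos partition        # search man page descriptions for "partition" (same as "man -k partition")
man kill                 # manual page for kill from the default section
man 2 kill               # the kill entry from manual section 2 (system calls)
help cd                  # bash help for the cd builtin
info coreutils           # Info documentation, navigated differently from man
manpath                  # show the search path used for man pages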

2. Linux Job Management

at        Similar to cron but run only once.
atq        Lists the user’s pending jobs. If the user is the superuser, everybody’s jobs are listed.
atrm        Deletes at jobs.
atrun        Run jobs queued for later execution
batch        Executes commands when the system load average drops below 0.8, or below the value specified in the atrun invocation.
cron        A daemon used to run commands at specific times. Starts the commands listed in the crontab file. Often used to clean up temporary files periodically in the /var/tmp and /tmp directories.
nice        Run a program with modified scheduling priority.
nohup        Run a command immune to hangups, with output to a non-tty.
watch       Execute a program periodically showing output full screen.
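
A short sketch of how these scheduling commands are commonly combined (the file names, times and crontab entry are illustrative only):

echo "tar czf /tmp/home-backup.tar.gz /home" | at 02:00   # one-off job at 2 AM
atq                      # list pending at jobs
atrm 1                   # remove at job number 1 (number taken from atq output)
echo "updatedb" | batch  # run when the load average drops low enough
crontab -e               # edit recurring cron jobs, e.g. 0 3 * * * find /tmp -type f -mtime +7 -delete
nohup nice -n 10 ./long-task.sh &   # low-priority job that survives logout (script name is hypothetical)
watch df -h              # rerun "df -h" every two seconds, full screen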

3. Linux Process management

bg        Resumes a suspended job in the background
fg         Brings a job to the foreground, resuming it if suspended
gitps        A graphical process viewer and killer program.
jobs        Lists the jobs running
kill        Ex: “kill 34” – Effect: Kill or stop the process with the process ID number 34.
killall        Kill processes by name. Can check for and restart processes.
pidof        Find the process ID of a running program
ps        Get the status of one or more processes. Options:

* u (more info)
* a (see all)
* -l (technical info)

Meanings:

* PPID-parent process ID
* PID-process ID

“ps ax | more” – See all processes, including daemons
pstree        Display the tree of running processes.
sa        Generates a summary of information about users’ processes that are stored in the /var/log/pacct file.
skill        Send a signal to processes selected by name, user or terminal.
snice        Change the scheduling priority of processes selected by name, user or terminal.
top        Display the processes that are using the most CPU resources.
CTRL-C        Kills the current job.
&         At the end of the command makes it run in the background.
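
A brief interactive-shell sketch combining several of the commands above (PID 34 and the process names are example values only):

sleep 600 &              # start a background job
jobs                     # list the shell's jobs
fg %1                    # bring job 1 to the foreground (Ctrl-Z suspends it, Ctrl-C kills it)
bg %1                    # resume the suspended job in the background
ps aux | more            # all processes, including daemons, with extra info
pstree -p                # tree of running processes with PIDs
pidof sshd               # PID(s) of a running program
kill 34                  # stop the process with PID 34 (example number)
killall -HUP syslogd     # signal processes by name (name is illustrative)
top                      # interactive view of the heaviest CPU users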

Read More

Work in Forensics: 5 Key Steps

Joseph Naghdi, an experienced computer technologist, transitioned to digital forensics in early 2000 because he was intrigued by how data is stored and discovered on computers. Today, he's a forensics analyst at Computer Forensics Lab, a U.K. consultancy specializing in computer forensic services and advanced data recovery. The high point of his work, he says, is when he solves tough cases, such as a recent phishing attack against a U.K. bank that almost led to the transfer of 3 million pounds.

With the rise in cyber-fraud and various breach incidents, digital forensics is becoming a growing field with plenty of opportunities. The job involves determining the cause, scope and impact of security incidents; stopping unwanted activity; limiting damage; preserving evidence and preventing other incidents. Digital forensics experts typically investigate networks, systems and data storage devices.

The average salary for digital forensic professionals is about $81,000 in the U.S., according to the salary research and data website PayScale, but specialization in mobile architecture, devices and cloud computing could lead to higher salaries.

Information security professionals interested in making a transition to a career in digital forensics, as Naghdi did, need to take five key steps, experts say.

1. Develop Windows Expertise
Because 90 percent of the systems that forensics experts investigate are Microsoft Windows-based, practitioners need to understand the core technology, says Rob Lee, director and IT forensics expert at Mandiant and a certified forensics instructor at the SANS Institute.

“Kind of like in the Army, you need to know how to shoot a rifle – Windows is the rifle of computer forensics,” Lee says. Information security professionals who want to specialize in forensics must understand all aspects of how Windows works, including how information is stored, he contends. He also suggests developing expertise in mobile devices and cloud computing.

2. Obtain Specialized Training
Greg Thompson, security manager at Canada's Scotia Bank, who is also an (ISC)2 advisory board member, believes the best way to learn about digital forensics is to obtain training at schools or certification bodies, including the International Association of Computer Investigative Specialists, the SANS Institute and the International Information Systems Forensics Association.

Thompson recently hired two professionals from community colleges in Canada who were trained in applying forensic investigative techniques and skills. “The main skill is developing a creative mind-set to think like an attacker in responding to the situation,” says Thompson, who oversees the forensics practice at Scotia Bank.

He also recommends security professionals take online courses, seek help from professionals with law enforcement backgrounds and learn on the job. In particular, he encourages developing expertise in forensic investigations of mobile devices, firewalls and malware.

3. Build a Broad Technical Background
When investigating unauthorized data access, for example, forensics experts must know how to recover lost data from systems, analyze log entries and correlate them across multiple systems to understand specific user activity. “This requires a solid understanding of networks, systems and new types of malware intrusions and analysis,” says Marcus Ranum, CSO at Tenable Network Security. “Only a broad IT exposure can help professionals understand the different types of data and what is most critical to capture.”

Naghdi emphasizes the need for good computer programming skills to understand how data is stored and how hard disks operate. “Strong programming skills often help the forensic expert in understanding and discovering the different ways of storing and recovering data,” he says.

4. Gain Legal Knowledge
Forensics specialists need to understand breach notification regulations as well as the legal implications of not maintaining a proper chain of data custody. They also need to understand, for example, how a cloud computing provider will identify, locate, preserve and provide access to information when the need arises, as well as how to legally preserve data for litigation purposes. “More and more practitioners need to understand the legality around data retrieval, storage and protection,” Lee says.

5. Understand Upstream Intelligence
Gathering upstream intelligence involves such steps as observing outgoing messaging patterns or filtering infrastructure for suspicious source rules or inappropriate user behavior. This may provide significant insights into the security posture of an organization.

Forensics goes far beyond relying on recovering pictures, data and e-mails in order to solve a case. “We now require professionals to be engaged in intelligence gathering and analysis and to work across multiple machines, different environments and devices, which could lead to investigating advanced hackers that are moving within the organization,” Lee says.

Complexity of Investigations
Digital forensic investigations are becoming far more complex.

For example, Lance Watson, chief operating officer and forensic investigator for Avensic, a forensics and e-discovery consulting company, tackles such challenges as locating information in the cloud or helping clients track and analyze e-mails and text messages on mobile devices. “It’s become harder to investigate user activity or discover digital evidence quickly because of remote locations and multiple storage devices used,” he says.

The growth in cloud computing and mobile devices has further strengthened the market for forensic pros by increasing demand for eDiscovery services, which involve preserving, collecting, managing and producing electronic evidence relevant for a court case.

The demand for eDiscovery services is leading many companies to establish an internal eDiscovery team rather than relying on an outsourcer. And this is creating new job opportunities. For example, Thompson of Scotia Bank recently transitioned from outsourced eDiscovery to an in-house forensics and data recovery team largely to gain cost savings and get better control of investigations and data.

Naghdi of Computer Forensics Lab says information security professionals can expect demand for forensics experts to grow. “There is definitely an uptake in hires for forensic experts, and this trend will continue,” he says. But to make a successful transition to a role in forensics, Naghdi says, security professionals must “have an inquisitive mindset to find new ways of exploring emerging areas and finding digital evidence.”

Read More