Information on hard-disks


About hard-disks

  • At present the following hard-disk form factors are in common use:
    • 3.5"
    • 2.5"
    • 1.8"
    Other, older form factors are now obsolete.
  • Data transfer rates: As of 2008, a typical 7200 rpm desktop hard drive has a sustained "disk-to-buffer" data transfer rate of about 70 MB/s. This rate depends on the track location: it is higher for data on the outer tracks (which hold more data sectors) and lower toward the inner tracks (which hold fewer), and it is generally somewhat higher for 10,000 rpm drives (a rough worked example of this effect follows this list). A widely used standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can move about 300 MB/s from the buffer to the computer and is thus still comfortably ahead of today's disk-to-buffer transfer rates.
  • The HDD's spindle system relies on air pressure inside the disk enclosure to support the heads at their proper flying height while the disk rotates. Hard disk drives require a certain range of air pressure in order to operate properly. The connection to the external environment and pressure equalization occur through a small hole in the enclosure (about 0.5 mm in diameter), usually with a filter on the inside (the breather filter). If the air pressure is too low, there is not enough lift for the flying head, the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about 3,000 m.
  • A 2007 study published by Google suggested very little correlation between failure rates and either high temperature or activity level; however, the correlation between manufacturer/model and failure rate was relatively strong. A common misconception is that a colder hard drive will last longer than a hotter one. The Google study seems to imply the reverse: "lower temperatures are associated with higher failure rates". Drives with S.M.A.R.T.-reported average temperatures below 27 °C (80.6 °F) had higher failure rates than drives with the highest reported average temperature of 50 °C (122 °F), and failure rates at least twice as high as those in the optimum S.M.A.R.T.-reported temperature range of 36 °C (96.8 °F) to 47 °C (116.6 °F).
  • The fastest enterprise HDDs spin at 10,000 or 15,000 rpm and can achieve sequential media transfer speeds above 1.6 Gbit/s, with sustained transfer rates of up to 1 Gbit/s.
  • Types of interface:
    • SATA
    • eSATA
    • eSATAp
    • PATA
    • FireWire (IEEE 1394)
    • USB
    • InfiniBand
    • Fibre Channel
    • SCSI
    • SAS (Serial Attached SCSI)
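
A rough sketch (in Python) of why the sustained disk-to-buffer rate mentioned above varies with track position: at a fixed rpm the head passes over one track per revolution, and outer tracks hold more sectors than inner ones, so more bytes pass under the head per second. The sector counts below are made-up illustrative values, not figures for any particular drive.

    # Sustained disk-to-buffer rate = bytes per track / time per revolution (sketch).
    # Sector counts are hypothetical; real zone layouts differ per drive model.
    RPM = 7200
    SECTOR_BYTES = 512
    seconds_per_rev = 60.0 / RPM

    for zone, sectors_per_track in [("outer track", 1200), ("inner track", 600)]:
        bytes_per_rev = sectors_per_track * SECTOR_BYTES
        rate_mb_s = bytes_per_rev / seconds_per_rev / 1e6
        print(f"{zone}: about {rate_mb_s:.0f} MB/s")
    # outer track: about 74 MB/s
    # inner track: about 37 MB/s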


For complete info refer to http://en.wikipedia.org/wiki/Hard_disk_drive



About SATA interface

  • Uses high-speed serial communication over differential signal pairs (one transmit pair and one receive pair) instead of parallel transmission over 16 or more lines as in the case of PATA.
  • The SATA standard supports hot swapping and NCQ (Native Command Queuing), which were not present in the earlier PATA standards. A special hot-plug receptacle, which has longer ground pins 4 and 12, is required for hot swapping.
  • SATA controllers use AHCI (Advanced Host Controller Interface). If the OS does not support AHCI, SATA drives work in IDE emulation mode. Windows XP does not support AHCI natively. Linux kernels 2.6.19 and later support AHCI.
  • SATA has had three revisions as of the time of this writing (Dec 2010):
    • SATA 1, supporting up to 1.5 Gbit/s
    • SATA 2, supporting up to 3 Gbit/s
    • SATA 3, supporting up to 6 Gbit/s
    Taking 8b/10b encoding into account (see the sketch after this list for the calculation):
    • SATA 1 supports 1.2 Gbit/s, or 150 MB/s, of useful data transfer
    • SATA 2 supports 2.4 Gbit/s, or 300 MB/s, of useful data transfer
    • SATA 3 supports 4.8 Gbit/s, or 600 MB/s, of useful data transfer
  • Usually only solid-state drives (SSDs) or flash disks saturate the original 1.5 Gbit/s SATA bandwidth; conventional hard disks usually do not transfer data faster than 1.5 Gbit/s.
  • SATA drives can be connected to a SAS domain, as SAS supports control and use of SATA drives through the Serial ATA Tunneled Protocol (STP).
  • Internal SATA supports cable lengths of up to 1 metre, and eSATA supports cable lengths of up to 2 metres.
  • The cables and sockets used for SATA and eSATA ports are different so that users do not mix them up by mistake. The eSATA cable has extra layers of shielding to reduce EMI.
  • Since eSATA ports do not supply power, the newer eSATAp standard has been defined so that drives connected to eSATAp ports can draw power from the host PC or laptop and do not require a separate power connection.
  • SATA uses a point-to-point architecture, so the controller is directly attached to the device.
  • SCSI, SAS, and fibre-channel (FC) drives are typically more expensive so they are traditionally used in servers and disk arrays where the added cost is justifiable. Inexpensive ATA and SATA drives evolved in the home-computer market, hence there is a view that they are less reliable. As those two worlds overlapped, the subject of reliability became somewhat controversial. Note that, in general, the failure rate of a disk drive is related to the quality of its heads, platters and supporting manufacturing processes, not to its interface.
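
A minimal sketch (in Python) of the useful-throughput figures listed above. With 8b/10b encoding, every 10 bits on the wire carry 8 bits of data, so the usable rate is 80% of the line rate; the helper function below is purely illustrative.

    # Effective SATA payload bandwidth after 8b/10b encoding (sketch).
    def usable_rate(line_rate_gbps):
        """Return (usable Gbit/s, usable MB/s) for a given SATA line rate."""
        usable_gbps = line_rate_gbps * 8 / 10      # 8 data bits per 10 line bits
        usable_mb_s = usable_gbps * 1000 / 8       # Gbit/s -> MB/s (8 bits per byte)
        return usable_gbps, usable_mb_s

    for name, line_rate in [("SATA 1", 1.5), ("SATA 2", 3.0), ("SATA 3", 6.0)]:
        gbps, mb_s = usable_rate(line_rate)
        print(f"{name}: {gbps:.1f} Gbit/s usable, about {mb_s:.0f} MB/s")
    # SATA 1: 1.2 Gbit/s usable, about 150 MB/s
    # SATA 2: 2.4 Gbit/s usable, about 300 MB/s
    # SATA 3: 4.8 Gbit/s usable, about 600 MB/s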


For detailed information visit http://en.wikipedia.org/wiki/Serial_ATA



Fibre Channel

  • Fibre Channel, or FC, is a gigabit-speed network technology primarily used for storage networking. Despite its name, Fibre Channel signaling can run on both twisted pair copper wire and fiber-optic cables.
  • Fibre Channel Protocol (FCP) is a transport protocol (similar to TCP used in IP networks) which predominantly transports SCSI commands over Fibre Channel networks.
  • Fibre Channel variants (a small calculation sketch follows this list):
    • 1GFC - 200 MB/s on duplex connections
    • 2GFC - 400 MB/s on duplex connections
    • 4GFC - 800 MB/s on duplex connections
    • 8GFC - 1600 MB/s on duplex connections
  • There are three major Fibre Channel topologies, describing how a number of ports are connected together. A port in Fibre Channel terminology is any entity that actively communicates over the network, not necessarily a hardware port. This port is usually implemented in a device such as disk storage, an HBA on a server or a Fibre Channel switch:
    • Point-to-Point (FC-P2P). Two devices are connected directly to each other. This is the simplest topology, with limited connectivity.
    • Arbitrated loop (FC-AL). In this design, all devices are in a loop or ring, similar to token ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted. The failure of one device causes a break in the ring. Fibre Channel hubs exist to connect multiple devices together and may bypass failed ports. A loop may also be made by cabling each port to the next in a ring.
      • A minimal loop containing only two ports, while appearing to be similar to FC-P2P, differs considerably in terms of the protocol.
      • Only one pair of ports can communicate concurrently on a loop.
      • Maximum speed of 8GFC.
    • Switched fabric (FC-SW). All devices or loops of devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. Advantages of this topology over FC-P2P or FC-AL include:
      • The switches manage the state of the fabric, providing optimized interconnections.
      • The traffic between two ports flows through the switches only; it is not transmitted to any other port.
      • Failure of a port is isolated and should not affect operation of other ports.
      • Multiple pairs of ports may communicate simultaneously in a fabric.
Attribute                     Point-to-Point   Arbitrated loop                    Switched fabric
Max ports                     2                127                                ~16777216 (2^24)
Address size                  N/A              8-bit ALPA                         24-bit port ID
Side effect of port failure   Link fails       Loop fails (until port bypassed)   N/A
Mixing different link rates   No               No                                 Yes
Frame delivery                In order         In order                           Not guaranteed
Access to medium              Dedicated        Arbitrated                         Dedicated


  • Products based on the 1GFC, 2GFC, 4GFC, 8GFC and 16GFC standards should be interoperable and backward compatible. The 1GFC, 2GFC, 4GFC and 8GFC designs all use 8b/10b encoding, while the 16GFC standard uses 64b/66b encoding.
  • The 10 Gbit/s standard and its 20 Gbit/s derivative, however, are not backward compatible with any of the slower-speed devices, as they differ considerably at the FC1 level in using 64b/66b encoding instead of 8b/10b encoding; they are primarily used as inter-switch links.
  • A fabric consisting entirely of one vendor is considered to be homogeneous. This is often referred to as operating in its "native mode" and allows the vendor to add proprietary features which may not be compliant with the Fibre Channel standard.
  • If switches from multiple vendors are used within the same fabric, it is heterogeneous; the switches may only achieve adjacency if all of them are placed into their interoperability modes. This is called "open fabric" mode, as each vendor's switch may have to disable its proprietary features to comply with the Fibre Channel standard.
  • Each HBA has a unique World Wide Name (WWN), which is similar to an Ethernet MAC address in that it uses an Organizationally Unique Identifier (OUI) assigned by the IEEE. However, WWNs are longer (8 bytes). There are two types of WWNs on a HBA; a node WWN (WWNN), which can be shared by some or all ports of a device, and a port WWN (WWPN), which is necessarily unique to each port.
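
A small Python sketch tying together the figures above for the 8b/10b generations: the usable rate is 80% of the raw line rate, and the duplex MB/s numbers in the variants list simply count both directions of the full-duplex link. The line rates below are the nominal signalling rates for each generation; treat the output as an illustration, not a definitive table.

    # Fibre Channel payload bandwidth for the 8b/10b generations (sketch).
    line_rates_gbaud = {"1GFC": 1.0625, "2GFC": 2.125, "4GFC": 4.25, "8GFC": 8.5}

    for name, gbaud in line_rates_gbaud.items():
        per_direction_mb_s = gbaud * 0.8 * 1000 / 8   # 8b/10b, then Gbit/s -> MB/s
        duplex_mb_s = 2 * per_direction_mb_s          # both directions of the link
        print(f"{name}: ~{per_direction_mb_s:.0f} MB/s per direction, "
              f"~{duplex_mb_s:.0f} MB/s duplex")
    # Close to the nominal 100/200/400/800 MB/s per direction
    # (200/400/800/1600 MB/s duplex) figures listed above; the small surplus
    # is taken up by frame and protocol overhead.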


For more detailed description visit http://en.wikipedia.org/wiki/Fibre_Channel



Solid state drives

  • A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. SSDs are distinguished from traditional hard disk drives (HDDs), which are electromechanical devices containing spinning disks and movable read/write heads. SSDs, in contrast, use microchips which retain data in non-volatile memory chips and contain no moving parts.
  • In 1995, M-Systems introduced flash-based solid-state drives. They had the advantage of not requiring batteries to maintain the data in the memory (required by the prior volatile memory systems), but were not as fast as the DRAM-based solutions. Since then, SSDs have been used successfully as HDD replacements by the military and aerospace industries, as well as other mission-critical applications. These applications require the exceptional mean time between failures (MTBF) rates that solid-state drives achieve, by virtue of their ability to withstand extreme shock, vibration and temperature ranges.
  • Enterprise Flash Drives (EFDs) are designed for applications requiring high I/O performance (IOPS), reliability, and energy efficiency. In most cases an EFD is an SSD with a higher set of specifications compared to SSDs which would typically be used in notebook computers. The term was first used by EMC in January 2008, to help them identify SSD manufacturers who would provide products meeting these higher standards. There are no standards bodies who control the definition of EFDs, so any SSD manufacturer may claim to produce EFDs when they may not actually meet the requirements. Likewise there may be other SSD manufacturers that meet the EFD requirements without being called EFDs.
  • The performance of an SSD can scale with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow, due to its narrow (8/16-bit) asynchronous I/O interface and the high latency of basic I/O operations (typical for SLC NAND: ~25 μs to fetch a 4 KB page from the array to the I/O buffer on a read, ~250 μs to commit a 4 KB page from the I/O buffer to the array on a write, ~2 ms to erase a 256 KB block). When multiple NAND devices operate in parallel inside an SSD, the bandwidth scales and the high latencies can be hidden, as long as enough outstanding operations are pending and the load is evenly distributed between devices (a rough sketch of this follows the list). Micron and Intel initially made faster SSDs by implementing data striping (similar to RAID 0) and interleaving in their architecture. This enabled the creation of ultra-fast SSDs with 250 MB/s effective read/write speeds.
  • Lower priced drives usually use multi-level cell (MLC) flash memory, which is slower and less reliable than single-level cell (SLC) flash memory. This can be mitigated or even reversed by the internal design structure of the SSD, such as interleaving, changes to writing algorithms, and higher over-provisioning (more excess capacity) with which the wear-leveling algorithms can work.
  • SSDs based on volatile memory such as DRAM are characterized by ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation (similar to the hibernate function used in modern operating systems).
  • A Remote Indirect Memory Access Disk (RIndMA Disk) uses a secondary computer with a fast network or (direct) Infiniband connection to act like a RAM-based SSD, but the new faster Flash memory based SSDs already available in 2009 are making this option not as cost effective.
  • DRAM-based solid-state drives are especially useful on computers that already have the maximum amount of supported RAM. For example, some computer systems built on the x86-32 architecture can effectively be extended beyond the 4 gigabyte (GB) limit by placing the paging file or swap file on a DRAM-based SSD. Since the paging file is used, at the expense of speed, when there is no space left in main memory, placing it on a DRAM-based SSD lets the computer take advantage of the SSD's speed whenever it runs out of main memory and resorts to the paging file.
  • A flash-based SSD typically uses a small amount of DRAM as a cache, similar to the cache in hard disk drives. A directory of block placement and wear-levelling data is also kept in the cache while the drive is operating. None of the user's data is typically stored in the cache. One particular SSD controller manufacturer, SandForce, is noted for not using an external DRAM cache in their designs, yet they still achieve very high performance. Eliminating the external DRAM allows a smaller footprint for the other flash memory components, so even smaller SSDs can be built.
  • Another component in higher performing SSDs is a capacitor or some form of battery. These are necessary to maintain data integrity such that the data in the cache can be flushed to the drive when power is dropped; some may even hold power long enough to maintain data in the cache until power is resumed.
  • The host interface is not specifically a component of the SSD, but it is a key part of the drive. The interface is usually incorporated into the controller discussed above. The interface is generally one of the interfaces found in HDDs. They include:
    • SATA interface
    • SAS interface (generally found on servers)
    • PCIe interface
    • FC interface (almost exclusively found on servers)
    • USB interface
    • PATA (IDE) interface (mostly replaced by SATA)
    • SCSI interface (generally found on servers; mostly replaced by SAS)
  • The performance of flash SSDs is difficult to benchmark. In a test done by Xssist using IOmeter (4 KB random 70/30 read/write, queue depth 4), the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000, dropped sharply after 8 minutes to 4,000, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around the 50th minute onwards for the rest of the 8+ hour test run.
  • Previously written data blocks that are no longer in use can be reclaimed with TRIM, so the OS must recognize that the underlying drive is an SSD and support TRIM in order to use the drive properly.
  • SSD drives are fully supported by the Linux OS starting with 2.6.33 kernel (available early 2010) which is advertised to fully support SSD detection & alignment as well as the TRIM function. Earlier versions of the Linux kernel do not have full SSD support.
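
A back-of-the-envelope Python sketch of the parallelism argument above: a single NAND die that needs ~25 μs to fetch a 4 KB page, plus the time to move that page over a slow asynchronous interface, can only deliver limited read bandwidth, but striping reads across several dies multiplies throughput as long as enough requests are outstanding. The 40 MB/s interface speed and the die counts are assumed illustrative values, not a specific product.

    # Why SSD read bandwidth scales with the number of NAND dies (sketch).
    PAGE = 4096              # bytes per page
    T_READ = 25e-6           # array -> I/O buffer latency per page, seconds
    BUS_BYTES_S = 40e6       # assumed async NAND interface speed, bytes/s

    t_transfer = PAGE / BUS_BYTES_S            # time to move one page off the die
    per_die = PAGE / (T_READ + t_transfer)     # sustained read bandwidth of one die

    for dies in (1, 4, 8):
        print(f"{dies} die(s): ~{dies * per_die / 1e6:.0f} MB/s "
              "(assuming enough outstanding requests to keep every die busy)")
    # 1 die: ~32 MB/s, 4 dies: ~129 MB/s, 8 dies: ~257 MB/s, roughly the
    # ~250 MB/s effective speeds mentioned above.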


Comparison of HDD and SSD

  • SSDs do not benefit from defragmentation, because fragmentation has minimal effect on them and any defragmentation process adds extra writes to the NAND flash, which already has a limited cycle life.
  • SSDs have no moving parts and make no sound
  • Solid-state drives that use flash memory have a limited number of writes over the life of the drive (see the rough estimate after this list). SSDs based on DRAM do not have a limited number of writes.
  • Flash memory and the circuit-board material are very light compared to HDDs.
  • Tolerance of shock, altitude, vibration, extreme temperatures and magnetic fields is much higher in SSDs than in normal HDDs.
  • As of October 2010, NAND flash SSDs cost about US$1.40-2.00 per GB, while HDDs cost about US$0.10 per GB for 3.5" drives and US$0.20 per GB for 2.5" drives.
  • Less expensive SSDs typically have write speeds significantly lower than their read speeds. Higher performing SSDs and those from particular manufacturers have a balanced read and write speed.
  • High performance Flash-based SSDs generally require 1/2 to 1/3 the power of HDDs; High performance DRAM SSDs generally require as much power as HDDs and consume power when the rest of the system is shut down.
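
A rough Python sketch of what the limited write endurance noted above means in practice: the total data that can be written is roughly capacity x program/erase cycles / write amplification. The cycle count, write-amplification factor, capacity and daily write volume below are assumed, order-of-magnitude values for MLC flash of this era, not a specification.

    # Rough flash-endurance estimate (sketch); all inputs are assumed values.
    capacity_gb = 128                # drive capacity, GB
    pe_cycles = 3000                 # assumed program/erase cycles per MLC cell
    write_amp = 1.5                  # assumed write amplification factor
    host_writes_gb_per_day = 20      # assumed daily write volume

    total_host_writes_gb = capacity_gb * pe_cycles / write_amp
    years = total_host_writes_gb / host_writes_gb_per_day / 365

    print(f"~{total_host_writes_gb / 1000:.0f} TB of host writes, "
          f"about {years:.0f} years at {host_writes_gb_per_day} GB/day")
    # ~256 TB of host writes, about 35 years at 20 GB/day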


For more detailed description on SSDs visit http://en.wikipedia.org/wiki/Solid-state_drive



SAS (Serial Attached SCSI)

  • Serial Attached SCSI (SAS) is a computer bus used to move data to and from computer storage devices such as hard drives and tape drives. SAS depends on a point-to-point serial protocol that replaces the parallel SCSI bus technology that first appeared in the mid 1980s in data centers and workstations, and it uses the standard SCSI command set.
  • 3.0 Gbit/s at introduction, 6.0 Gbit/s available February 2009
  • 10 m external cable
  • 255 device port expanders (>65k total devices)
  • Software-transparent with SCSI
  • A SAS domain, an I/O system, consists of a set of SAS devices that communicate with one another by means of a service delivery subsystem. Each SAS device in a SAS domain has a globally unique identifier (assigned by the device manufacturer and similar to an Ethernet device's MAC address) called a World Wide Name (WWN or SAS address); a decoding sketch follows this list. The WWN uniquely identifies the device in the SAS domain just as a SCSI ID identifies a device on a parallel SCSI bus. A SAS domain may contain up to a total of 65,535 devices.
  • The components known as Serial Attached SCSI Expanders (SAS Expanders) facilitate communication between large numbers of SAS devices. Expanders contain two or more external expander-ports. Each expander device contains at least one SAS Management Protocol target port for management and may contain SAS devices itself.
  • SAS 1 defined two different types of expander (edge and fanout). However, the SAS-2.0 standard has dropped the distinction between the two, as it created unnecessary topological limitations with no realized benefit.
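
A small Python sketch of the WWN structure mentioned above, assuming the common IEEE Registered (NAA 5) layout: a 4-bit NAA field, a 24-bit IEEE OUI identifying the manufacturer, and a 36-bit vendor-assigned portion. The example address is made up, not a real device.

    # Decoding a 64-bit World Wide Name / SAS address (sketch, NAA 5 layout assumed).
    wwn = 0x5000ABC123456789            # hypothetical SAS address

    naa    = wwn >> 60                  # 4-bit Name Address Authority field
    oui    = (wwn >> 36) & 0xFFFFFF     # 24-bit IEEE OUI (manufacturer)
    vendor = wwn & 0xFFFFFFFFF          # 36-bit vendor-assigned portion

    print(f"NAA={naa:X}  OUI={oui:06X}  vendor-specific={vendor:09X}")
    # NAA=5  OUI=000ABC  vendor-specific=123456789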



SAS vs SCSI

  • The SAS bus operates point-to-point while the SCSI bus is multidrop. Each SAS device is connected by a dedicated link to the initiator, unless an expander is used. If one initiator is connected to one target, there is no opportunity for contention; with parallel SCSI, even this situation could cause contention.
  • SAS has no termination issues and does not require terminator packs like parallel SCSI.
  • SAS eliminates clock skew.
  • SAS supports up to 65,535 devices through the use of expanders, while Parallel SCSI has a limit of 8 or 16 devices on a single channel.
  • SAS supports a higher transfer speed (3 or 6 Gbit/s) than most parallel SCSI standards. SAS achieves these speeds on each initiator-target connection, giving higher aggregate throughput, whereas parallel SCSI shares one bus's speed across the entire multidrop bus (see the sketch after this list).
  • SAS controllers may support connecting to SATA devices, either directly connected using native SATA protocol or through SAS expanders using SATA Tunneled Protocol (STP).
  • Both SAS and parallel SCSI use the SCSI command-set.
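
A small Python comparison illustrating the per-link versus shared-bus point above: each SAS link runs at its full rate to one device, while a parallel SCSI bus's bandwidth is shared by every device on it. The Ultra-320 figure (320 MB/s) is the fastest common parallel SCSI bus, the SAS payload rate assumes 8b/10b encoding, and the device count is arbitrary.

    # Aggregate bandwidth: SAS point-to-point links vs a shared parallel SCSI bus (sketch).
    devices = 8
    sas_link_mb_s = 3.0 * 0.8 * 1000 / 8     # 3 Gbit/s link, 8b/10b -> ~300 MB/s payload
    ultra320_bus_mb_s = 320                  # Ultra-320 SCSI, shared by the whole bus

    print(f"SAS:  {devices} devices x {sas_link_mb_s:.0f} MB/s "
          f"= {devices * sas_link_mb_s:.0f} MB/s aggregate")
    print(f"SCSI: {devices} devices share one {ultra320_bus_mb_s} MB/s bus")
    # SAS:  8 devices x 300 MB/s = 2400 MB/s aggregate
    # SCSI: 8 devices share one 320 MB/s bus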



SAS vs SATA

  • Systems identify SATA devices by their port number connected to the host bus adapter, while SAS devices are uniquely identified by their World Wide Name (WWN).
  • SAS protocol supports multiple initiators in a SAS domain, while SATA has no analogous provision.
  • Most SAS drives provide tagged command queuing, while most newer SATA drives provide native command queuing, each of which has its pros and cons.
  • SATA uses the ATA command set; SAS uses the SCSI command set. ATA directly supports only direct-access storage; however, SCSI commands may be tunneled through ATA (via ATAPI) for devices such as CD/DVD drives.
  • SAS hardware allows multipath I/O to devices while SATA (prior to SATA 3Gb/s) does not. Per specification, SATA 3Gb/s makes use of port multipliers to achieve port expansion. Some port multiplier manufacturers have implemented multipath I/O using port multiplier hardware.
  • SATA is marketed as a general-purpose successor to parallel ATA and has become common in the consumer market, whereas the more expensive SAS targets critical server applications.
  • SAS error-recovery and error-reporting use SCSI commands which have more functionality than the ATA SMART commands used by SATA drives.
  • SAS uses higher signaling voltages (800-1600 mV TX, 275-1600 mV RX) than SATA (400-600 mV TX, 325-600 mV RX). The higher voltage offers (among other features) the ability to use SAS in server backplanes.
  • Because of its higher signaling voltages, SAS can use cables up to 10 m (33 ft) long, whereas SATA has a cable-length limit of 1 m (3 ft), or 2 m (6.6 ft) for eSATA.


For detailed description on SAS visit http://en.wikipedia.org/wiki/Serial_attached_SCSI

