50 Gb/in² Magnetic Disk Drive System Design Project

Todd Leonhardt

Kroum Stoev

Shingo Tamaru

Xia Chen

Ming Ni

18-816 Design of Data Storage Systems

Professor M. H. Kryder

Data Storage Systems Center

Carnegie Mellon University

Pittsburgh, Pennsylvania

May 1, 1998


Introduction

Magnetic disk drives are currently commercially available with areal densities as high as
4.1 Gb/in² [1]. A recent laboratory spin stand test demonstrated the ability to achieve an
areal density of 11.6 Gb/in² using current technology [2]. The areal densities of hard disk
drives have been increasing at a rate of 60% per year since 1991 (see Fig. 1) [3,9]. At
this rate, hard drives will be able to store 100 Gb/in² by 2006 [4]. Hence, we can expect
to see 50 Gb/in² drives commercially available somewhere in the range 2004-2005. For
purposes of estimating parameters of drive components, we will assume the conservative
estimate of 2005.

Figure 1. IBM Areal Density Growth [9].

There are many factors to consider in designing a disk drive with an areal density of 50
Gb/in². One of the most important is the breakdown of areal density into linear density
and track density:

Da = Dl × Dt

where Da, Dl, and Dt are the areal, linear, and track densities, respectively. The current
state of the art in commercially available hard drives is a linear density of 256.4 kbpi and
a track density of 16 ktpi. This yields a linear-density-to-track-density ratio of 16 to 1.
Thus, a simple scaling of current properties to 50 Gb/in² would give a track density of 56
ktpi and a linear density of 895 kbpi. However, recently collected data and theoretical
models indicate that it is advantageous to have a squarer bit cell at higher densities [5].
Moreover, track density is projected to increase faster than linear density in the upcoming


years [6]. Therefore, we decided on a linear-to-track density ratio of near 8 to 1 and set
our initial design goals at 625 kbpi and 80 ktpi. This assumes that a relatively modest
increase in linear density by a factor of 2.4 will be achieved by the year 2005. Track
density will require a much more aggressive increase by a factor of 5 in the same time
period of 7 years. We explain later in this report why likely advances in actuator and
servo technology should make this a realistic commercial achievement by the target
release date.
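The split above can be checked with a short calculation: for a chosen ratio r = Dl/Dt, the identity Da = Dl·Dt gives Dt = sqrt(Da/r) and Dl = r·Dt. A minimal sketch (function name ours):

```python
import math

def split_areal_density(d_a_bits_per_in2, ratio):
    """Split areal density Da = Dl * Dt for a target Dl/Dt ratio."""
    d_t = math.sqrt(d_a_bits_per_in2 / ratio)  # track density (tpi)
    d_l = ratio * d_t                          # linear density (bpi)
    return d_l, d_t

# 50 Gb/in^2 at the ~8:1 linear-to-track ratio chosen in the text
d_l, d_t = split_areal_density(50e9, 8)
```

At exactly 8:1 this gives about 632 kbpi × 79 ktpi; the chosen 625 kbpi × 80 ktpi design point hits 50 Gb/in² exactly, at a ratio of roughly 7.8:1.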

The fundamental problem blocking a simple scaling approach to achieving 50 Gb/in² is
the superparamagnetic limit. This limit is thought to be near 40 Gb/in² for currently used
CoPtCr based media [7]. Many engineering solutions have been proposed to extend
magnetic recording beyond this “limit”. These include alternate high-anisotropy media,
perpendicular recording, patterned media, and keepered media. Patterned media, a
scheme in which the medium is lithographically patterned into an array of single-bit
magnetic islands, holds great promise for ultra-high density recording in the distant future.
However, no one expects patterned media to be available at a reasonable price and in
sufficient volume in the near future [3]. Another alternative involves adding a film of
soft magnetic material (the so-called “keeper layer”) on top of the magnetic layer to
stabilize the recorded data. The fact that Ampex, one of the key supporters of keepered
media, has recently halted research on this technology makes the future look grim for
keepered media [3]. Perpendicular recording, a long-championed yet never profitably
commercialized alternative to longitudinal recording, still holds much promise as a high-
density recording candidate. However, there is good reason to be skeptical that all of the
problems with this alternative will be ironed out by the time more conventional
technology could be used to achieve 50 Gb/in² [8]. A much more conservative design
approach to extending the superparamagnetic limit and reaching 50 Gb/in² in the least
possible time involves scaling the current technology and moving to alternate, high-
anisotropy media. Researchers at IBM believe that this materials strategy can push the
superparamagnetic limit to areal densities 10x higher [9]. Even if their estimate is a bit
optimistic, we decided that increasing the anisotropy and coercivity of the media would
be the most prudent approach to 50 Gb/in² magnetic storage.

Media

Thermal energy causes small random fluctuations in the magnetization of a particle, just
as it causes random Brownian motion of small particles. If the total anisotropy energy of
a single-domain particle, KuV, becomes on the order of the thermal energy, kT, then the
magnetization may be reversed as a statistical time-temperature effect. Here Ku is the
anisotropy energy density constant, V is the particle volume, k is Boltzmann's constant,
and T is the absolute temperature. There is a critical volume vc given by

vc = (kT/Ku) ln(2 t f0)

below which superparamagnetism exists [10]. Here t is the time period of observation
and f0 is the Larmor frequency (about 10⁹ Hz). In the case of magnetic data storage we
want to be able to reliably store information for many years. Rearranging the above


equation, we can solve for t in terms of the ratio KuV/kT. For a 100-year storage
lifetime with respect to thermal stability, a ratio KuV/kT of 43 is necessary. For a 5-year
storage lifetime, a ratio of 40 is required.
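The lifetime figures quoted here follow directly from rearranging the critical-volume relation to KuV/kT = ln(2 t f0). A quick check, with the constants as defined in the text:

```python
import math

F0 = 1e9                 # attempt (Larmor) frequency, Hz
SECONDS_PER_YEAR = 3.156e7

def stability_ratio(years):
    """Required Ku*V/kT for a storage lifetime t: Ku*V/kT = ln(2*t*f0)."""
    t = years * SECONDS_PER_YEAR
    return math.log(2 * t * F0)

r100 = stability_ratio(100)  # ~43, as quoted for a 100-year lifetime
r5 = stability_ratio(5)      # ~40, as quoted for a 5-year lifetime
```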

As we move to higher areal densities, the bit volumes become smaller. The signal to
noise ratio, SNR, is dependent on the number of magnetic grains per bit. Thus to
preserve a reasonable SNR at higher recording densities, the size of each grain must
shrink. Lower grain size results in less thermally stable media. The only ways to
compensate for this effect is to either lower the temperature or raise the media anisotropy.
Lowering the temperature is not economically feasible in the foreseeable future, though
this is an alternative that may resurface in the distant future. Therefore, we need to use a
media that has as large an anisotropy constant as possible while still retaining favorable
values for other magnetic properties.

Prominent candidates for high-anisotropy media materials are tetragonal FePtCr alloys,
tetragonal CoPtCr alloys, and SmCo alloys [9]. Of these, we think that the tetragonal
FePtCr alloys show the most promise for 50 Gb/in² magnetic recording. Tetragonal FePt
alloys can be fabricated with a Ku as high as 2×10⁸ erg/cm³, the largest anisotropy we are
aware of [9]. Another major benefit of using this alloy is the fact that all of its other
magnetic properties can be independently controlled. Experiments on multilayer thin
films have shown that coercivity, Hc, can be varied monotonically from around 2000 Oe
to near 25,000 Oe by increasing the annealing temperature [11]. These experiments show
that shorter annealing times produce smaller grain sizes and that coercivity is relatively
insensitive to anneal time. The experimenters found a grain size of around 50 nm for a
sample with coercivity larger than 20 kOe that had been annealed at a high temperature
for 30 min. Theoretically, we expect that the grain size will drop as the medium
coercivity drops, down to a minimum grain size of about 2 nm [9]. By using relatively low
annealing temperatures and short annealing times, we should be able to achieve grain
sizes in the 3-4 nm range. The film texture can be set as either longitudinal or
perpendicular, depending on the underlayer used. Finally, the magnetization of FePt
media can be tuned by varying the Fe/Pt ratio. Remanent magnetization as large as 1000
emu/cm³ can be reached.

By using FePt media, desirable values of all magnetic properties can be achieved.
However, this is not the whole picture. Iron tends to rust through an oxidation reaction.
Corrosion properties also have to be taken into account. Vast experience with CoPt
based media has shown that the addition of Cr to the alloy greatly retards corrosion. We
expect that it will be necessary to add Cr to the alloy in the 12-20 atomic percent range to
control corrosion [10]. The addition of Cr has a couple of secondary effects. First, it
dilutes the magnetization of the media, which lowers the signal but has the advantage of
lowering the transition parameter. Second, the addition of Cr helps reduce noise by
allowing other crystalline phases to precipitate at grain boundaries [6].

Another important consideration in media design is the substrate used. The kind of fly
height necessary for 50 Gb/in² makes it imperative that media, and therefore substrate,
roughness be kept to an absolute minimum. Due to the extreme sensitivity of MR/GMR
heads to thermal asperities, media manufacturers are moving toward super-polished
surfaces, with Ra values of 5 Å and lower [12]. Thus, a good choice for a substrate would
be a super-polished NiP-coated AlMg substrate. However, with the increasing number of
portable computers on the market, it might be better to base a design around a shock-
resistant substrate. Therefore, we chose a super-polished glass or ceramic substrate,
where an extremely smooth finish is the primary concern and shock resistance is
secondary. With advances in chemical mechanical polishing, we can reasonably expect a
surface roughness of Ra = 3 Å by 2005.

Grain size considerations are extremely important in designing high-density media. A
lower limit on grain size is set by the requirements of thermal stability, and an upper limit
is set by the requirement of having a large number of grains per bit cell to get a good
SNR. A thermal stability lifetime of 75 years, corresponding to KuV/kT = 43, was
chosen. This should greatly exceed the expected lifetime of the product and provide wide
safety margins for instigating factors such as use in a relatively high-temperature
environment. Assuming the grains are spherical, the minimum grain size (diameter) for
thermal stability is 2.6 nm. In actuality, the grains would probably be acicular cylinders,
in which case the minimum grain diameter depends on the cylinder aspect ratio. An upper
limit is set on grain size by the requirement of having a large signal-to-noise ratio (SNR).
The SNR increases with the number of grains per bit cell. Thus, if we want a 25
dB SNR, a minimum of 317 grains per bit cell is required, resulting in a maximum grain
size of 9.4 nm. Our media must therefore have grain sizes between 2.6 nm and 9.4 nm.
Using an average grain size midway between these two values (6 nm) guarantees an SNR
of greater than 25 dB. This requires extremely precise control of grain size. However, it
is reasonable to expect that with further optimization of the deposition process, such
control will be possible.
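Both bounds can be reproduced numerically. This sketch assumes the Ku of 2×10⁸ erg/cm³ quoted earlier for FePt, room temperature, spherical grains, and the common rule of thumb SNR ≈ 10 log10(N) for N grains per bit cell:

```python
import math

K_B = 1.381e-16   # Boltzmann constant, erg/K
K_U = 2e8         # anisotropy energy density, erg/cm^3 (FePt, from the text)
T = 300.0         # K, room temperature (assumed)

# Minimum spherical grain diameter for Ku*V/kT = 43 (thermal stability)
v_min = 43 * K_B * T / K_U                          # cm^3
d_min_nm = (6 * v_min / math.pi) ** (1 / 3) * 1e7   # cm -> nm

# Minimum grains per bit cell for a 25 dB SNR, using SNR ~ 10*log10(N)
n_grains = math.ceil(10 ** (25 / 10))
```

This reproduces the 2.6 nm lower bound and the 317-grain count quoted in the text.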

As mentioned above, the coercivity of this media can be varied over a wide range. We
chose to optimize the media deposition process for a coercivity of Hc = 5000 Oe. In the
next section, we will show that our head design is capable of writing on 5000 Oe media.
The transition parameter is calculated to be 8.1 nm from [10]:

a = δ/4 + [ δ²/16 + 4 Mr δ d_eff / (π Hc) ]^(1/2)

where δ is the medium thickness (10 nm), Mr is the remanent magnetization (760 emu/cm³),
d is the sensor-to-medium separation (13 nm), and d_eff = sqrt[d(d+δ)] = 17 nm. This
transition parameter is a small fraction of the minimum distance between magnetic
transitions (50.4 nm) for our given choice of linear density and channel.

We chose a FePtCr film thickness of 10 nm. This thickness is the result of a
compromise between a large readback signal and a small sensor-to-medium separation. It is
very likely that some form of Cr underlayer will be used in order to help foster the
desired properties in the magnetic film.
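As a numerical check of the reconstructed expression above (assuming this Williams-Comstock-style form is the one intended by [10]): with d_eff in the demagnetizing term the result is about 8.8 nm, while using the bare spacing d in its place reproduces the quoted 8.1 nm:

```python
import math

# All lengths in cm (cgs units)
DELTA = 10e-7   # medium thickness, 10 nm
D = 13e-7       # sensor-to-medium separation, 13 nm
M_R = 760.0     # remanent magnetization, emu/cm^3
H_C = 5000.0    # coercivity, Oe

d_eff = math.sqrt(D * (D + DELTA))   # ~17 nm

def transition_param(spacing_cm):
    """Williams-Comstock-style transition parameter (cm)."""
    return DELTA / 4 + math.sqrt(
        DELTA**2 / 16 + 4 * M_R * DELTA * spacing_cm / (math.pi * H_C)
    )

a_deff_nm = transition_param(d_eff) * 1e7   # ~8.8 nm
a_d_nm = transition_param(D) * 1e7          # ~8.1 nm, matching the report
```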


In order to realize 625 kbpi in the FePtCr recording medium, a (1,7) run-length limited
(RLL) code was used in addition to an error correcting code (ECC), as described in the
“Channel” section of this report. This corresponds to 504 kfci (50.4 nm minimum
separation between flux changes) in the recording medium. Using a sensor-to-medium
separation of 13 nm, the transition parameter for the medium was calculated to be 8.1 nm
(only a small fraction of the total flux reversal length).
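The 625 kbpi to 504 kfci conversion is consistent with the rate-2/3 (1,7) code plus a modest ECC overhead. The 7.5% overhead figure below is our assumption, chosen because it reproduces the quoted 504 kfci:

```python
# User bits -> flux changes for the (1,7) RLL channel described in the text.
USER_KBPI = 625.0
ECC_OVERHEAD = 1.075   # assumed ~7.5% ECC overhead (ours)
RATE = 2 / 3           # (1,7) code rate: 2 user bits -> 3 channel bits
D_CONSTRAINT = 1       # d=1: at least 2 channel cells between transitions

channel_kbpi = USER_KBPI * ECC_OVERHEAD / RATE
kfci = channel_kbpi / (D_CONSTRAINT + 1)       # ~504 kfci
spacing_nm = 25.4e6 / (kfci * 1000)            # minimum transition spacing, ~50.4 nm
```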

Write Head

In the past few years MR sensors, which have much higher flux sensitivity than
conventional inductive heads, have been introduced and adopted into actual commercial
drives. This MR technology has given drive designers the freedom to optimize the
reading sensor and the recording transducer independently and has resulted in faster
technological advancement.

Although the write head design is basically the same as that of conventional thin film
inductive read/write heads, the requirements imposed on write-only heads are somewhat
different. Moreover, the requirements will change as areal density continues to increase
in the future. These requirements are as follows:
1. Media coercivity is increasing to meet the demands for higher linear recording

densities. Head fields in the media need to be sufficiently larger than the coercivity
(by a factor of ~2.5) in order to switch the bit magnetization directions at the high
frequencies used. To have large head fields at the media, the deep-gap head field
must be maximized. Large gap fields call for future write heads with the largest
possible saturation magnetization.

2. The data transfer rate is also increasing steadily. The signal frequency of current

state-of-the-art hard disk drives exceeds 100 MHz. At such high operating frequencies,
eddy current loss is the biggest problem. To suppress eddy currents and
achieve high efficiency, the write head should be made of a material with high
resistivity and/or laminated with insulating layers.

3. Low coercivity along the flux path is desirable in order to reduce heat generation due

to hysteresis loss.

4. To avoid gradual data erasure, the core material should have small remanent

magnetization.

5. Magnetic heads are subject to stresses during the fabrication process, which can lead

to undesirable magnetic properties due to magnetostriction. Magnetic heads may also
experience large impact stresses due to collisions with particles or asperities during
operation. Such impact stress causes temporary magnetization changes and can erase data
accidentally. Thus, a zero magnetostriction constant, λ, is desirable.

6. The flying height has a great impact on the linear recording density. In this project,

we set the physical flying height at 8 nm. Therefore, the head is allowed only a
very small pole tip recession of 2 nm. To achieve such small pole tip recession, the core
material should be mechanically hard.

7. The core material is heated during the fabrication process, sometimes up to about

250 °C. Commercial products are expected to have a mean time to failure (MTF) of at
least several years. Therefore, the core material should be chemically stable:
unaffected by high temperature, moisture, oxidation, etc.

Properties of several core materials, including the currently used permalloy and future
candidates are summarized in Table 1 [13,14].

                | Ni80Fe20 | Ni45Fe55 | CoTaZr  | CoFeVB  | CoFeB   | FeAlN      | Fe16N2
Deposition      | plating  | plating  | plating | plating | plating | sputtering | sputtering
4πMs [kGauss]   | 10       | 16       | 14      | 18      | 16      | 19-20      | 22-24
Hc [Oe]         | 0.3      | 0.4      | 1.3     | 8.1     | 6.5     | 0.5        | 0.3*
λs (×10⁻⁶)      | -1       | 20       | 2       | -5      |         |            | *
Hk [Oe]         | 2.5      | 9.5      | 16.5    | 12.5    | 16.4    | 5.5        | 3.5*
ρ [µΩ-cm]       | 24       | 48       | 118     | 34      | 30      | 40         | *

(*values for the material with 4πMs = 22 kGauss)

Table 1: Magnetic properties of core materials.

In this list, α-Fe16N2 is reported to have the highest saturation magnetization
(4πMs = 24 kG) [14]. However, the properties of α-Fe16N2 are strongly dependent on the
deposition conditions and are quite unstable after deposition. Therefore, it is expected to be a
long time before this material is perfected enough to be used in commercial products.
For this reason, different compositions of FeN [15] and various compounds using Zr, Al,
and Ta as additional elements [16,17] have been studied for their ability to stabilize the
material's structure and properties. These other compounds have somewhat smaller
saturation magnetization (typically around 20 kG), but they can be tailored to have
reasonably good properties other than 4πMs by controlling the content of the additional
compound(s) and/or the deposition conditions. These alternative compounds have therefore
been studied intensively, and some researchers have already fabricated prototype
recording heads with these materials and obtained good results [13,18].
Therefore, we adopt FeAlN (which is studied in the DSSC) as our head core material.

As mentioned above, suppression of eddy current loss is one of the biggest issues for high
frequency operation. For this reason, the incorporation of a laminated film structure is
mandatory. Some non-magnetic insulation materials have been proposed as candidates
for the interlayer dielectric [18,19]. One of the most promising laminated structures
involves FeN based magnetic layers with SiN interlayers. A SiN interlayer provides high
thermal stability because it acts as a diffusion barrier – the metals cannot diffuse through
it and it will not diffuse into the metals until very high temperatures are reached.
Therefore, we adopt SiN as the interlayer material in our write head.

In this report, we set the track density at 80 ktpi, which corresponds to a 0.32 µm track-
to-track separation distance. Typically, about 10% of the track width is used for a guard
band. Therefore, the actual data track width, which is defined by the width of P2, is 0.29
µm. Most current head configurations have a much wider P1 than P2 to deal with
misregistration. However, if the P1 width is exactly the same as P2 and they are
completely aligned, the side fringe field is minimized and consequently the sharpest track
edge is obtained [20]. Therefore, we will define the P1 width to be the same as P2, using
focused ion beam (FIB) etching techniques, to achieve a high track density of 80 ktpi.

Next, we design the gap length to fulfill two requirements: 1) to give the sharpest field
gradient at the center plane of the media, and 2) to generate a strong enough field at the
bottom surface of the media to achieve good overwrite capability. For this purpose, we
found the optimum gap length using a more accurate expression for the field distribution
than the Karlqvist approximation [21], given as equation (1) below, where x is the
distance from the gap center, y is the distance from the ABS, and g is the gap length.
Using this expression, the gap-length dependence of the field gradient was calculated at
the point where the field is twice the media coercivity. The result is shown in Fig. 2.

According to this result, a gap length of about 80 nm gives the sharpest field gradient for
the parameters assumed in this report. The field strength at the bottom surface of the
magnetic layer of the media is calculated from (1) to be about 0.65 of the deep-gap field
(= 13,000 Oe), which is strong enough to fully magnetize the media. From these results,
an 80 nm gap length is expected to satisfy both requirements discussed above.
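As a sanity check on the 0.65 figure, the simpler Karlqvist on-axis field gives nearly the same number at the bottom of the magnetic layer (13 nm magnetic spacing + 10 nm media depth):

```python
import math

G = 80e-9     # gap length, m
Y = 23e-9     # depth of media bottom below the ABS: 13 nm spacing + 10 nm media
H0 = 20000.0  # deep gap field, Oe (from the Fig. 2 parameters)

# Karlqvist on-axis field: Hx(0, y)/H0 = (2/pi) * atan(g / (2*y))
ratio = (2 / math.pi) * math.atan(G / (2 * Y))   # ~0.67
h_bottom = ratio * H0                            # ~13.4 kOe
```

The more accurate expression of [21] used in the report gives 0.65, i.e. about 13,000 Oe; the Karlqvist estimate lands within a few percent of that.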

Hx(x, y) ≈ (H0/π) { tan⁻¹[(g/2 + x)/y] + tan⁻¹[(g/2 − x)/y] }     (1)

(shown here in its leading, Karlqvist-like form; the full expression of [21] adds gap-edge
correction terms in (g/2)², x², and y²).

Figure 2. Normalized field gradient at the center of the media at the point where H = 2Hc,
with the following parameters: total magnetic spacing = 13 nm, media thickness = 10 nm,
deep gap field = 20,000 Oe, Hc = 5000 Oe.

The fabrication process sets limits on some head properties. Throat height is largely
dependent on the tolerance of the lapping process, and a typical value for current MR
heads is 1.0 ± 0.5 µm. A shorter throat height gives higher efficiency, and we can expect
progress in lapping tolerance in the near future. Thus, we set the nominal value of this
dimension at 0.5 µm. Pole thickness is another parameter affected by the fabrication
process. A thicker pole is preferred for a couple of reasons. First, a thicker pole gives a
lower-reluctance flux path and, as a consequence, higher efficiency. Second, the vicinity
of the gap should be magnetized nearly to saturation so that full advantage is taken of the
saturation magnetization of the core material. If the pole is too thin, core saturation
occurs at the neck of the pole before enough flux reaches the gap. For this reason, the
pole thickness should be roughly equal to or greater than the throat height. Indeed, pole
thickness in thin film inductive heads has tended to increase over time [22]. However, as
stated before, a laminated film structure is mandatory in our design to suppress eddy
currents, and too thick a film leads to higher fabrication cost. Thus, we set the pole
thickness of our recording head at 1 µm.

From these parameters, some other parameters can be calculated. First, the head efficiency
is calculated to be 0.72 using the expression derived from the two-section transmission
line model [6] and assuming a relative permeability of 2000:

E = [ tanh(k1 l1) / (k1 l1) ] / [ cosh(k2 l2) + (k1/k2) sinh(k2 l2) tanh(k1 l1) ]

where µ0 is the permeability of vacuum, µr is the relative permeability of the head core,
w is the track width, p1, p2, g1, g2, l1, and l2 are the dimensions shown in Fig. 3, and
k1 = (µr p1 g1)^(-1/2), k2 = (µr p2 g2)^(-1/2).

Figure 3. Two-section transmission line model of a thin film head.

But this permeability is taken from the initial permeability measurement on a flat sheet of
laminated FeAlN. It has been reported that the sloping region of a thin film head made of
this material has significantly lower permeability, which leads to quite low
efficiency [13]. This problem is currently being studied and will likely be solved in the
near future [23].

We estimated the inductance of our write head to be 0.18 nH using the expression derived
from the two-section transmission line model [6].



In this expression, n is the number of turns and the other parameters are as defined above. This
extremely small value is mainly due to the small number of turns (n = 10), the narrow track width,
and the relatively thin poles. Due to approximations in the model, the actual
inductance may be somewhat higher, but still much lower than in currently available heads (50-
100 nH). The resonant frequency of the LC circuit formed by the head inductance and the
amplifier input capacitance is far beyond 1 GHz if the above value is used for the
calculation. Thus, it is safe to say that the inductance of the core, which is one factor that
limits the bandwidth, is not going to be any problem.
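The resonance claim is easy to verify. The 10 pF amplifier input capacitance below is an assumed, illustrative value:

```python
import math

L_HEAD = 0.18e-9   # write head inductance, H (from the text)
C_AMP = 10e-12     # amplifier input capacitance, F (assumed, illustrative)

# Self-resonance of the head-inductance / input-capacitance LC circuit
f_res = 1 / (2 * math.pi * math.sqrt(L_HEAD * C_AMP))   # ~3.8 GHz, well above 1 GHz
```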

Read Head

The best magnetic flux sensor for such a high areal density is a type of giant
magnetoresistive (GMR) device called a spin-valve head. This is a multilayered
device (see Fig. 4) in which the resistance varies with the direction of the
magnetization vector in the free layer [27]. Thus, a change in the direction of
magnetization in the media corresponds to a change in resistance in the spin valve. A
constant current is applied across the spin valve, so the change in resistance is effectively
detected as a change in voltage. Baibich et al. first discovered the GMR effect in 1988
[24]. The GMR effect was quickly developed into a product and introduced to the
market by IBM in 1997. Compared to an AMR head, the GMR head has two major
advantages: 1) it is intrinsically linear, and 2) it has a higher percentage ΔR/R. The
linearity of the sensor is of utmost importance because it simplifies the design of the
channel, and it is not necessary to apply a transverse bias, as is required for AMR sensors.
GMR heads in manufacturing today have a ΔR/R ratio of about 4%. However,
current laboratory research shows that a much higher ΔR/R can be achieved: values of
ΔR/R as large as 20% for a single GMR head and 25% for a symmetric GMR head have
been reported in the literature [25].

Figure 4. Multilayered spin valve structure.



In this project, we decided to use the technique of Egelhoff et al. [25], which produces a
GMR head with 20% ΔR/R at the wafer level. It is important to point out that this
technique requires extremely good control of the oxygen content in the gas during
sputtering (see Fig. 5). Oxygen plays the important role of a surfactant at the interfaces
of the deposited layers and increases the GMR effect. Therefore, these heads will require
manufacturing in an ultra-high vacuum (UHV) sputtering system. This will result in a
lower throughput of wafers. However, the miniaturization of head design and the
steadily increasing wafer size should result in a net gain in throughput of heads. Another
important consideration is the fact that, when incorporated in the head structure, spin
valves have a lower ΔR/R than at the wafer level. For the current generation of spin
valves, a 6% ΔR/R sensor at the wafer level results in 3.5% ΔR/R in the actual head.
Assuming similar scaling for more advanced spin valves, we expect the reported
20% ΔR/R at the wafer level to decrease to about 11% in our head. This is the
number we will use in the following calculations.
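The 11% figure follows from the proportional wafer-to-head scaling described above:

```python
# Wafer-to-head scaling of dR/R, assuming the same proportional loss the text
# cites for current spin valves (6% at the wafer -> 3.5% in the head).
wafer_current, head_current = 6.0, 3.5
wafer_future = 20.0

head_future = wafer_future * (head_current / wafer_current)   # ~11.7%, "about 11%"
```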

Figure 5. GMR effect dependence on oxygen exposure during sputtering.

We decided to use a shielded GMR head with a shield spacing of 70 nm, for an effective
gap of 35 nm (0.035 µm). We are aware that the scaling of a GMR head is
far from trivial, since if the layers become too thin, the magnetoresistance effect itself
may disappear. The structure that we chose is 3.0 nm Co / 2.0 nm Cu / 2.0 nm Co / 14.0 nm
IrMn. Here, we have replaced the NiO pinning layer suggested by ref. [25] because of the
superior pinning effect of antiferromagnetic IrMn [26], which requires a smaller
thickness than NiO. The rest of the gap, from the sensor to the shields, has to be filled
with a dielectric material. Manufacturing dielectric layers of 24.5 nm thickness with
no pinholes should not pose a major problem. If necessary, the gap could even be
widened a little. That would introduce some bit interference, but with an 11% ΔR/R effect and
good encoding we can achieve the required BER. Thus, we think that such a design will
be possible somewhere between 5 and 7 years from now.

Finally, we need to estimate the peak-to-peak voltage for this GMR sensor. We assume a
nominal current density of about 1.0×10⁷ A/cm² and a sensor resistivity of about 50 µΩ·cm.
Thus, given a read track width of 0.25 µm (the size of the sensor), a sensor height of
0.5 µm, and a sensor thickness of 21 nm, we calculate a peak-to-peak voltage of
397 µV using an equation for a shielded MR head [6],

where J is the current density, w is the read head width, g is half the gap between the
shields, a is the transition parameter, and d is the head-to-medium spacing. If we take
gap loss into account:

Gap loss factor = sin(kg/2) / (kg/2)

where k = 2π/λ. A loss factor of 0.81 is calculated, for a net output signal of 323 µV.
Current disk drives require a minimum peak-to-peak output voltage of 400 µV.
However, it is reasonable to expect that an output voltage of 323 µV will be sufficient
7 years from now, in 2005.
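The gap-loss numbers can be reproduced as follows. The wavelength is our assumption, taken as twice the 50.4 nm minimum transition spacing (the highest-frequency recorded pattern):

```python
import math

G_HALF = 35e-9      # half the shield-to-shield spacing, m (70 nm shields)
WAVELENGTH = 100.8e-9   # assumed: 2 x 50.4 nm minimum transition spacing
V_PP = 397.0        # peak-to-peak output before gap loss, uV (from the text)

k = 2 * math.pi / WAVELENGTH
x = k * G_HALF / 2
gap_loss = math.sin(x) / x     # ~0.81
v_net = V_PP * gap_loss        # ~323 uV
```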

Head-Disk Interface

An important part of building a hard drive is the design of the head-media interface. In
our design, we pay special attention to the flying height, the overcoat, and the lubrication.
The flying height ultimately determines our ability to read and write data. The lubrication
protects the disk and the slider from friction and wear.

The nominal flying height of the slider for such a high areal density has to be extremely
small. We chose a 13 nm head-to-medium separation in our design. This height is
composed of a 2 nm overcoat thickness, 1 nm of lubricant, 2 nm pole tip recession, and 8
nm nominal fly height (the physical distance between the ABS and the lubricant). Given
current progress, we project that these values can be reached within 7 years.
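The 13 nm separation is just the sum of the budget items listed above:

```python
# Head-to-medium magnetic spacing budget from the text (all values in nm)
budget = {
    "overcoat": 2,
    "lubricant": 1,
    "pole tip recession": 2,
    "physical fly height": 8,
}
total_nm = sum(budget.values())   # 13 nm head-to-medium separation
```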

The fly height in most current disk drives is about 50 nm, but state-of-the-art drives are
available with much lower flying heights. An enabling technology that will allow ultra-
low flying heights is the scaling down of sliders and suspensions. In the March issue of
Data Storage, Silmag (in collaboration with Hutchinson, MN) showed their new "atto"
slider, one quarter the size of the current pico slider (Fig. 6).

Figure 6. Triangular atto slider.

It is manufactured using



advanced photolithographic techniques. To ensure that drives using the new sliders can
withstand a 900-G shock, they must match the 3.5-gram load of pico sliders. To match
this load, a taper is etched into the leading edge of the air-bearing surface
(usable area of only 0.25 mm²). This taper creates negative pressure, and the flying head
is in pseudo-contact with the disk. The air-bearing surface is optimized for a 0.5 µin fly
height, which is about 12 nm. On this manufacturing success we base our
prediction that in 5 years a slider will be able to fly at about 8 nm from the disk surface.
Negative pressure slider design will still be necessary in order to guarantee uniform fly
height in the inner and outer radii of the disk. If necessary, the industry may even move
from pseudo contact to contact mode. In that case, a head spacing controller may be
necessary. The head-and-disk spacing could be measured with a capacitance bridge and
the position information could be fed into a feedback circuit that controls a piezoelectric
transducer. Such a system has been used to record at a linear density greater than 2000
fc/mm [28].

Lubricants serve to reduce friction and wear between the carbon overcoat and the
recording head. At such low flying heights, we will need a good quality lubricant. Some
of the currently available perfluoropolyether polymers might suffice, or a new type could
be synthesized if necessary. Important properties for any lubricant used are chemical
inertness, low vapor pressure to prevent evaporative loss, a low contact angle allowing
uniform wetting of the carbon overcoat surface, and a chemical affinity for the overcoat,
preventing spin-off and desorption. In our design, we provide about 1.0 nm of lubricant.
A 1.0 nm (10 Å) lubricant film provides about 2 molecules lying flat, on top of each
other, on the surface of the disk. Lubricant thickness in current state of the art drives is
around 1 nm. Therefore, it is reasonable to project two layers of lubricant (about 10 Å in
thickness) for an advanced lubricant available in 2005.

Due to their many favorable properties, primarily hardness and density, diamond-like
carbon (DLC) films are used as the overcoat in today's drives. Today, continuous DLC
films can be precisely deposited at thicknesses below 5 nm using ion beam deposition
technology. With advances in this process, and possibly the use of chemical vapor
deposition, we expect that 2 nm continuous DLC films will be available by our target
release date. We believe that such a thin overcoat will still be able to fully
protect the magnetic layer because of the much smaller size and mass of the atto slider
compared to today's pico sliders.

In order to minimize noise due to thermal asperities, it is necessary to displace the spin
valve sensor from the ABS. The pole tip recession was chosen to be 2 nm. This
represents a very tight tolerance in manufacturing. The major problem is that the
materials of the slider and the gap (SiN) are very hard but the pole materials and the
overcoat (Al2O3) are softer. This is a serious engineering challenge but we consider it
achievable within the next 5-7 years.

When the slider is not flying over the disk, it may rest on a laser-textured zone (about 0.4
inches in radius, at the inner diameter of the platter). If contact start-stop presents a
danger of head wear, then we could resort to dynamic loading of the head. For dynamic loading,


the head normally flies at a high flying height and is controllably lowered shortly before
read/write operations. In this case, we would not need a textured zone.

One of the most difficult engineering challenges for the HDI is realizing extremely low
roughness. Due to the extreme sensitivity of GMR heads to thermal asperities, media
manufacturers are already moving to super-polished surfaces, with Ra values of 5 Å and
lower. Since atoms are about 1 Å in size, we will not be able to lower these average
roughness values much more (perhaps to 3 Å). However, the relevant
parameter is the peak roughness, which today is as low as 20 Å (2 nm). A 2 nm peak
roughness could very well prove disastrous for a flying height as low as we are using.
Fortunately, there are some other factors that make us feel comfortable with our design
decisions. First, the extremely rapid progress in chemical mechanical polishing (CMP)
technology should make surfaces with a peak roughness of 1 nm or less achievable by
2005. Secondly, self-healing lubricant films should be able to “fill-in the holes” and
create a smooth surface.

Channel

RLL Coding

In this part, we discuss several codes and equalization/detection methods and determine
which is most suitable for our system. First, we use NRZI coding to convert user bits to
the switching pattern of magnetic transitions.

Currently, there are several commonly used codes, such as 8/9(0,4,4), 2/3(1,7), and
1/2(2,7). We compare the performance of these codes in Table 2.

Parameter                   8/9 (0,4,4)   2/3 (1,7)   1/2 (2,7)   Unit

Code Rate                   0.889         0.667       0.500
Total Rate                  0.827         0.620       0.465
Channel Bit Density         755.74        1008.06     1344.09     kbpi
Channel Bit Length          33.60         25.18       18.90       nm
Flux Change Density         756.05        504.03      448.03      kfci
Flux Change Length          33.60         50.39       56.69       nm
PW50                        61.97         61.97       61.97       nm
PW50/Flux Change Length     1.85          1.23        1.09
PW50/Channel Bit Length     1.85          2.46        3.28
PW50/User Bit Length        1.52          1.52        1.52
Recording Frequency         321.64        214.43      190.60      MHz
Total SNR                   20.62         22.82       23.15       dB

Table 2. Comparison of candidate RLL codes.

The formulas used above are:

Total rate = code rate × ECC rate
Channel bit density = user bit density / total rate                    (1)
Flux change density = user bit density / [(d+1) × total rate]          (2)
Flux change length = 1 / flux change density                           (3)
Channel bit length = 1 / channel bit density                           (4)
Flux changes per PW50 = PW50 / flux change length                      (5)
Channel bits per PW50 = PW50 / channel bit length                      (6)

We determine the total SNR = 20 log(V_0-p / E_nt),                     (7)

using

E_nt² = E_nh² + E_np² + E_nm²                                          (8)

where V_0-p is the zero-to-peak voltage of the readout signal, E_nt is the total noise
voltage, and E_nh, E_np, and E_nm are the noise voltages due to the head, pre-amp, and
media, respectively.

The thermal head noise (Johnson noise) is given by

E_nh = √(4 k T R Δf)                                                   (9)

where R is the read head resistance and Δf is the bandwidth. We choose a state-of-the-art
IC preamplifier for our design, with a noise value of 0.5 nV/Hz^1/2.

Another important noise source is media noise. Our media SNR is 25 dB (316 grains sensed
by the head at an instant in time), and the readout voltage of the head is 198.68 µV (zero
to peak), so the noise voltage due to the media can be calculated.

Using equation (8), we calculate the total SNR for the various coding schemes that are
listed in Table 2.
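The Table 2 arithmetic can be checked with a short sketch. The densities, rates, readout voltage, and media SNR below come from this report; the head resistance, temperature, and the choice of the recording frequency as the noise bandwidth are illustrative assumptions of ours.

```python
import math

# Evaluate Eqs. (1)-(9) for the 2/3 (1,7) column of Table 2.
USER_DENSITY_KBPI = 625.0     # user linear density (from the servo section)
CODE_RATE = 2.0 / 3.0         # 2/3 (1,7) RLL code
ECC_RATE = 0.93               # two-level Reed-Solomon scheme
D = 1                         # minimum zero-run constraint of the (1,7) code
PW50_NM = 61.97
NM_PER_INCH = 25.4e6

total_rate = CODE_RATE * ECC_RATE
cbd = USER_DENSITY_KBPI / total_rate              # Eq. (1), kbpi
fcd = USER_DENSITY_KBPI / ((D + 1) * total_rate)  # Eq. (2), kfci
fcl = NM_PER_INCH / (fcd * 1000)                  # Eq. (3), nm
cbl = NM_PER_INCH / (cbd * 1000)                  # Eq. (4), nm
pw50_per_fcl = PW50_NM / fcl                      # Eq. (5)

# Noise budget, Eqs. (7)-(9): E_nt^2 = E_nh^2 + E_np^2 + E_nm^2
V0P = 198.68e-6               # zero-to-peak readout voltage (from the text)
MEDIA_SNR_DB = 25.0           # media SNR (from the text)
R_HEAD = 50.0                 # ohms -- assumed
TEMP = 300.0                  # kelvin -- assumed
K_B = 1.38e-23
BANDWIDTH = 214.43e6          # Hz; Table 2 recording frequency, as an assumption

e_nh = math.sqrt(4 * K_B * TEMP * R_HEAD * BANDWIDTH)  # Johnson noise, Eq. (9)
e_np = 0.5e-9 * math.sqrt(BANDWIDTH)                   # 0.5 nV/sqrt(Hz) preamp
e_nm = V0P / 10 ** (MEDIA_SNR_DB / 20)                 # media noise from media SNR
e_nt = math.sqrt(e_nh**2 + e_np**2 + e_nm**2)          # Eq. (8)
total_snr_db = 20 * math.log10(V0P / e_nt)             # Eq. (7)
```

With these assumed electrical parameters, the computed total SNR lands within a few dB of the 22.82 dB Table 2 entry.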

From the parameters calculated in Table 2, we see that the 8/9(0,4,4) code has the
highest flux change density and PW50/flux change length, which are desirable for high
density recording. However, it has the lowest SNR, and its required recording frequency
is the highest (321.64 MHz), which would greatly increase the difficulty of head design.
On the other hand, the 1/2(2,7) code has the highest SNR, which would help to reduce the
bit error rate (BER). However, its flux change density is too low and its PW50/flux
change length is only 1.09. Most importantly, the required sampling frequency for the
1/2(2,7) code is greater than 1 GHz, which would dramatically increase the complexity of
the electronics in the read channel; this alone makes the 1/2(2,7) code impractical.
Therefore, we choose the 2/3(1,7) code as the optimum one for our system. It has a
comparatively high PW50/flux change length, a high data rate, and a reasonably high
SNR. Note that the SNR calculated here is the value before equalization/detection and
ECC. Although the SNR of the 2/3(1,7) code is not as high as that of the 1/2(2,7) code,
we can further improve it, and the consequent final BER, by using an appropriate
detection method and error correction code (ECC), as discussed shortly.

Equalization and Detection

The analog readback signal from the read head first goes through the equalizer, during
which the signal is formed to a specific shape (target), aiming at high reliability and low
BER. Then this waveform is sampled (digitized) and detected. Finally, after ECC
decoding, the data is converted back to its original form in which it can be used by the


computer. All of this signal processing is very important because of its influence on the
final BER.

Researchers have mostly focused on partial response maximum likelihood (PRML)
detection using the Viterbi algorithm (VA) [29-36], decision feedback equalization
(DFE), and fixed-delay tree search with decision feedback (FDTS/DF)[29,31,37]. Thapar
et al [38] suggested equalizing a digital recording channel to one of a class of partial
response (PR) systems of the form (1-D)(1+D)

n

, in which D is the delay operator. An

optimal value of n exists for a given recording density and the optimum n increases with
increasing density. The common names associated with the various partial response
polynomials are: PR4 for n=1, EPR4 for n=2, and EEPR4 for n=3. The implementation
complexity of a Viterbi detector matched to the PR channel grows exponentially with n
and becomes impractical for larger n [30,32,35]. Moreover, in order to meet
future requirements for a high data transfer rate, an ultra-high-speed analog-to-digital
(A/D) converter is also needed. Furthermore, PRML channels exhibit inferior
performance compared to decision feedback channels in the presence of nonlinear
distortion and channel response variations [29, 39]. Some researchers [40] have also
found that DFE exceeds the detection ability of ML in high density recording with RLL
(1,7), since the ten taps of DFE can remove more inter-symbol interference (ISI) than
ML’s eight taps.

Based on the facts mentioned above, we choose a DFE channel instead of a VA/PR
channel. The decision feedback equalizer makes decisions on a symbol-by-symbol basis,
unlike the Viterbi algorithm which uses a sequence detection method. In DFE, the
channel ISI is almost totally eliminated by a combination of forward and feedback
equalizers. The forward equalizer removes the precursor ISI, but allows a long tail of
postcursor ISI. The flexibility of this tail is exploited in the forward equalizer design to
minimize the noise variance at its output. The feedback equalizer uses the past decisions
to locally generate the postcursor ISI. This is then subtracted from the output of the
forward equalizer to almost perfectly cancel the actual postcursor ISI. The resulting ISI-
free signal is simply hard-limited to remove the noise and to make binary decisions on
the sequence b_k, in which the -1 and +1 levels represent the polarities of the magnetized
bit cells on the recording track.

We choose a zero-forcing decision feedback equalizer as our detector. The output and
input SNR relation is [41]:

SNR_ZF-DFE = S_h / N_0                                                 (10)

where 1/N_0 is the input SNR to the equalizer, which is equal to the total SNR in Table
2, so 1/N_0 = 22.82 dB. S_h is determined by an inequality (Eq. (11)) given in [41],
which bounds S_h as a function of S.

where S is the number of flux changes per PW50, 1.23 in our case. Using this value, we
obtain that S_h is between 0.4809 and 0.5003. The probability of an error in a DFE
channel is given by Eq. (12) [41].

We calculate that the probability of error, i.e. the raw bit error rate (RBER), is on the
order of 10^-4. The final BER will be greatly reduced by the ECC.
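A small sketch of Eqs. (10) and (12) follows. The 22.82 dB input SNR and the S_h bounds come from the text; treating the dB figure as a power ratio is our assumption, so the resulting error rate should be read as illustrative rather than as a reproduction of the 10^-4 figure.

```python
import math

snr_in_db = 22.82                            # input SNR 1/N0, from Table 2
n0_inv = 10 ** (snr_in_db / 10)              # 1/N0 as a power ratio -- assumed

error_rates = []
for s_h in (0.4809, 0.5003):                 # bounds on S_h from Eq. (11)
    snr_dfe = s_h * n0_inv                   # Eq. (10): SNR_ZF-DFE = S_h / N0
    p_err = 0.5 * math.erfc(math.sqrt(snr_dfe / 2))   # Eq. (12)
    error_rates.append(p_err)
```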

Error Correction Coding (ECC)

Error correction is necessary after equalization to further reduce the BER in order to meet
the design requirement. Contemporary magnetic disk drives use error correction coding
schemes that correct one burst of errors in a variable-length block of up to one full
track. Earlier products used the Fire code, whose bit-at-a-time processing is slow. More
recent products use byte-oriented structures based on the algebra of Reed-Solomon codes
[42,43].

We choose a Reed-Solomon two-level coding architecture for our design with some
modifications in order to improve its performance. The first-level capability is designed
to provide a specific reliability performance with on-the-fly processing. The second-level
capability is the “reserve” error protection that increases the performance by providing
additional error correction in the case of a weaker (or failing) device.

The data format of a disk track is designed around Reed-Solomon code and a two-level
coding arrangement consisting of subblocks within a block (see Fig. 7). The data is
stored in the form of user-defined variable-length blocks (records). Each block is
partitioned into fixed-length subblocks, except that the last subblock may be shorter with
fewer user bytes or may include pad bytes.

Figure 7. Two-level coding data format.

Pr(E) = Q(√SNR_ZF-DFE) = (1/2) erfc(√(SNR_ZF-DFE / 2))                 (12)


In our design, each block consists of 50 subblocks plus 10 additional check bytes at the
end of the block, which are able to correct 5 errors in any one of the subblocks provided
that no errors occur in the other subblocks. Each subblock consists of 95 user bytes and
another 7 first-level check bytes. Therefore, the ECC rate = (95×50)/(102×50+10) = 0.93.

The error correction ability is represented by k1 (k1 = 3). The probability of k error
bytes in a subblock (95 + 7 = 102 bytes) can be determined by:

P_subblock(k) = C(102, k) p^k (1 - p)^(102-k)                          (13)

where p is the byte error rate, given by:

p = 1 - (1 - BER)^8                                                    (14)

In our design, the BER is on the order of 10^-4, as calculated above.

Then the probability that each of the 50 subblocks has no errors or only correctable
errors (k ≤ k1 = 3) can be obtained:

P_block = [ Σ_{k=0..3} P_subblock(k) ]^50                              (15)

So the probability of uncorrectable error is:

P'_block = 1 - P_block                                                 (16)

In the second level, there are two possible cases. The first case is that the number of
error bytes in all subblocks is less than or equal to k1 = 3, except for one subblock in
which the number of error bytes is less than k1 + q + 1 = 5 = k2. In this case, these
errors can be detected in the first level and corrected in the second level. The other
case occurs when one subblock has more error bytes than k1 + q = 4 but fewer than
k2 + 1 = 6, while all other subblocks may have error bytes up to 2k1 + q - k2 = 2. These
errors can be corrected in the second level by means of the 10 additional check bytes
appended at the end of the block.

After the first level of correction, the BER will be reduced to the order of 10^-12.
After the second level of correction, the BER will decrease further to the order of
10^-16, which meets the design requirement of a final BER less than 10^-15.
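The first-level reliability computation, Eqs. (13)-(16), is short enough to check directly; the sketch below uses only values stated in the text.

```python
from math import comb

BER = 1e-4                     # raw bit error rate from the DFE channel
SUBBLOCK_BYTES = 102           # 95 user + 7 first-level check bytes
SUBBLOCKS = 50
K1 = 3                         # first-level correction power per subblock

p = 1 - (1 - BER) ** 8                                   # Eq. (14): byte error rate

def p_subblock(k):
    """Eq. (13): probability of exactly k byte errors in one subblock."""
    return comb(SUBBLOCK_BYTES, k) * p**k * (1 - p) ** (SUBBLOCK_BYTES - k)

p_correctable = sum(p_subblock(k) for k in range(K1 + 1))
p_block = p_correctable ** SUBBLOCKS                      # Eq. (15)
p_uncorrectable = 1 - p_block                             # Eq. (16)
```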

Actuators and Tracking

In order to achieve a recording density of 50 Gb/in², we chose a linear density of 625
kbpi and a track density of 80 ktpi. Such a high track density has not yet been achieved.
However, looking at the literature and demonstrations of current technology, we can say


that in the near future, certainly by 2005 (perhaps as early as 2002), we can achieve such
a track density. Here we give a sample design of a servo system for 80 ktpi and discuss
the enabling technologies needed to reach this goal.

For a track density of 80 ktpi, we need a 40 nm 3σ TMR (track misregistration), and the
servo system needs a bandwidth of around 5 kHz.

We choose the embedded (sector) servo method to control head position because sector
servo avoids the thermal track shift present in a dedicated-servo system. The major
contributions to track misregistration are mechanical vibrations in the actuator, disk,
and spindle assembly; windage friction between head and disk; servowriting errors; and
spacing error between the write and read elements of the head. To reduce the TMR, we
need a more accurate servo and actuator system, improved mechanics in the spindle and
disk system, and more accurate and uniform servo sectors.

To reduce the mechanical vibrations, we choose fluid film bearing spindles. By using
fluid bearing spindles, we can get a very small NRRO (non-repeatable runout) of around
1.0 µin [44]. The unbalanced radial force can be largely reduced by optimizing the
pole-arc-to-pitch ratio [45].

As head track-width is reduced to the submicron range, many difficulties with head
positioning control will be encountered. In order to achieve good positioning control for
submicron track-width HDDs, it is important to write precise servo patterns in terms of
both position accuracy and magnetization uniformity. The side writing phenomenon
deteriorates the servo data quality, especially in the narrow track-width HDDs. In order
to minimize the side fringe effects during servo writing, we should use a narrow-gap,
high-Bs head to write the servo sectors. Toshiba Corp. performed a head positioning
experiment using a narrow-gap (0.17 µm), high-Bs (1.6 T) head and achieved a standard
deviation of the PES (position error signal) of 47 nm for 18 ktpi recording [46]. In our
design, we use the write head as the STW (servo track writer) to write the servo tracks.
It is reasonable to expect that such a narrow-gap, high-Bs write head will largely reduce
the fringe effects.

As to the PES detection method, using a single PES di-bit can cause non-linearities at
very high bit densities. In order to improve linearity and reduce sensitivity to disk
surface effects, we employ a quadrature technique [43]. The essence of a quadrature
system is that two position error signals, often called normal and quadrature, are used.
The signals are derived from two sets of patterns which, when demodulated, produce
position error signals that are in space quadrature to each other. Having two signals
allows us to use only the most linear part of each di-bit, yielding a more accurate
position error signal.
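As an illustration of the quadrature idea, the sketch below models the two demodulated signals as sinusoids of cross-track position. The sinusoidal burst model and the 317.5 nm pitch (80 ktpi) are idealizing assumptions of ours, not the report's actual servo pattern; combining the two signals recovers position while relying only on the linear region of each.

```python
import math

TRACK_PITCH = 317.5e-9  # meters, one track at 80 ktpi -- assumed ideal

def pes_normal(x):
    """Demodulated 'normal' position error signal vs. cross-track offset x."""
    return math.sin(2 * math.pi * x / TRACK_PITCH)

def pes_quadrature(x):
    """'Quadrature' signal, derived from bursts offset by a quarter track."""
    return math.cos(2 * math.pi * x / TRACK_PITCH)

def decoded_position(x):
    """Combine both signals so only the linear region of each is ever used."""
    return math.atan2(pes_normal(x), pes_quadrature(x)) * TRACK_PITCH / (2 * math.pi)
```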

The actuator is the key factor in achieving very high track density. It is difficult for
a conventional VCM to achieve positioning accuracy as fine as 40 nm, because its
bandwidth limit makes it difficult to increase the track density by improving the
accuracy of head positioning. Thus, a dual-stage actuator will be a must. In the
dual-stage actuator, a VCM rotary actuator is used as the coarse actuator and a linear
micro actuator is used as the fine actuator. For 80 ktpi, we need a servo bandwidth of
about 5 kHz. This should easily be achieved with a dual-stage actuator in the near
future. Fujitsu Ltd. developed a dual-stage actuator with a fine-actuator bandwidth of
3.3 kHz and a 3σ total position error of 77 nm, and was able to achieve 25 ktpi [47].
This actuator has a mechanical resonant frequency of 12.7 kHz, which is more than high
enough for our fine-actuator bandwidth of 5 kHz. Y. Tang et al. also demonstrated a
micro-electrostatic dual-stage actuator with a 10 kHz bandwidth [48]. This suggests that
it is reasonable to achieve an actuator system with 5 kHz bandwidth and a tracking
accuracy of 40 nm by 2005.

Among various designs of the fine actuator [47-50], we choose a piezoelectric device as
our fine actuator element. This is because piezoelectric fine actuators have been
extensively investigated and have already been successfully demonstrated in the
laboratory. The structure of the dual-stage actuator is shown in Fig. 8.

Figure 8. Dual-stage actuator.

The fine piezoelectric actuator has a stroke of 3 µm for an applied voltage of 0 to 65 V,
which covers 9 tracks (track pitch 317 nm). We expect a linear relationship between
displacement and applied voltage for the assembled actuator. Fig. 9 shows a block
diagram of the dual-stage actuator's servo system.

Figure 9. Block diagram of the dual-stage servo system.
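A minimal sketch of how the two stages might share a position error, assuming the 3 µm / 65 V piezo figures above. The saturation-based splitting rule and the split_command helper are our illustration, not the report's controller.

```python
PIEZO_STROKE = 3.0e-6              # 3 µm total stroke for 0-65 V (from the text)
PIEZO_GAIN = PIEZO_STROKE / 65.0   # meters per volt, assuming linearity

def split_command(error_m):
    """Return (vcm_command_m, piezo_voltage) for a given position error."""
    half = PIEZO_STROKE / 2
    fine = max(-half, min(half, error_m))   # what the piezo can absorb
    coarse = error_m - fine                 # remainder goes to the VCM
    volts = (fine + half) / PIEZO_GAIN      # map the ±1.5 µm range to 0-65 V
    return coarse, volts
```

With this split, the VCM handles large, slow excursions while the piezo cancels the residual within its stroke.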


From the above discussion, we can outline a servo system for a track density of 80 ktpi.
This system would employ a fluid-bearing spindle motor, a high-Bs and small-gap STW
(servo track writer) head, and a dual-stage actuator with high bandwidth. Similar dual-
stage actuators have already been demonstrated by Toshiba Corp. [46] and Fujitsu Ltd.
for use with track densities on the order of 25 ktpi [47]. It seems reasonable that
improvements on these designs within the next 7 years will enable a track density of 80
ktpi.

The relevant equations used in this section are given below:

1. Motor torque factor:

   K_t = l B N r

where l is the active coil length, B is the average gap flux density, N is the number of
turns in the coil, and r is the radius from the pivot bearing to the center of force.

2. Maximum current in the VCM:

   i = 4 Λ θ / (K_t τ_s²)

where Λ is the moment of inertia, θ is the angle rotated in a full seek, and τ_s is the
full seek time.

3. Maximum voltage:

   U_max = i R + i K_t² τ_s / (2Λ)

where R is the coil resistance.

4. Average power in a full seek:

   P_avg = (1/2) i U

5. Maximum force on the VCM:

   F_max = N i B l

System Integration

The total capacity of the drive we designed is estimated from the parameters discussed
above and a few more parameters considered here. We set the disk diameter at 2.5 inches,
the industry standard for portable PC storage applications. The available data area
typically ranges from a radius of 0.625” to the outer edge. We choose to divide the
usable area into 8 data zones of equal width. In this zone configuration, the total
capacity C_tot is given by

   C_tot = 2p [(r_2 - r_1)/k] D_tpi Σ_{n=0..k-1} 2π [r_1 + n (r_2 - r_1)/k] D_bpi

where p is the number of platters, r_1 is the inner radius of the data area, r_2 is the
outer disk radius, k is the number of zones, D_bpi is the linear density, and D_tpi is
the track density. By

plugging the parameters into this equation, the total capacity is estimated at about 79.4
GigaBytes for a 2-platter drive. We believe this capacity will be very attractive for
multimedia-enabled portable computers. If more storage space is required, we could
manufacture a 3-platter drive with a capacity of 119.1 GigaBytes.
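Evaluating the capacity formula with the report's geometry and densities gives the sketch below. Reading the leading factor of 2 as two recording surfaces per platter is our interpretation, and the raw figure it computes sits somewhat above the quoted 79.4 GB, the difference presumably being format overhead not modeled here.

```python
import math

PLATTERS = 2
R1, R2 = 0.625, 1.25        # inches: data-area inner radius, outer disk radius
ZONES = 8
D_BPI = 625e3               # user linear density (bits per inch)
D_TPI = 80e3                # track density (tracks per inch)

zone_width = (R2 - R1) / ZONES
tracks_per_zone = zone_width * D_TPI
bits = 0.0
for n in range(ZONES):
    r_inner = R1 + n * zone_width
    # every track in a zone holds what its innermost circumference allows
    bits += 2 * PLATTERS * tracks_per_zone * 2 * math.pi * r_inner * D_BPI

capacity_gb = bits / 8 / 1e9   # raw (unformatted) capacity in decimal GB
```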

The average access time estimated from the parameters shown in the servo section is
8.6 ms. Current state-of-the-art portable disk drives have an average access time of
around 19.1 ms. Thus, our drive represents an improvement of approximately a factor of
2. The maximum data transfer rate is estimated at about 60 MegaBytes/sec, which is
much faster than currently available drives. This is due to the fact that the data transfer
rate is directly proportional to the linear recording density supposing that the spindle
rotation speed is kept constant. Due to the ever-decreasing price of RAM, we can expect
to include a 1 MB data buffer to further improve the net performance of our drive.

In summary, we expect the drive described in this report to be economically competitive
in the portable disk drive market at the time of its expected release in 2004-2005.


References

[1]

IBM’s Travelstar 6GT 2.5” drive for portable computers. Information gained
from IBM storage web page:

http://www.storage.ibm.com/hardsoft/diskdrdl/travel/travstar6g/travstar6g.htm

[2]

IBM set the record, as usual.

http://www.ibm.com.au/storage/diskRecord.html

[3]

“Technology Update: Mission impossible? 100 Gb/in² or bust,” Data Storage (March 1998).

[4]

M. Kryder, “In high-density recording the medium’s the message,” Data Storage
(March 1998).

[5]

M. Kryder, “Achieving 100 Gb/sq. in.: Barriers and Opportunities,” Presentation
at Toyota Technical Institute, Japan (March 1998).

[6]

K. Ashar, Magnetic Disk Drive Technology, IEEE Press, New York (1997).

[7]

S. Charap, “Modeling of Thermal Stability in Magnetic Media,” NSIC quarterly
meeting on Extremely High Density Magnetic Recording (EHDR), San Diego, 16
January, 1997.

[8]

D. Thompson, “The Role of Perpendicular Recording in the Future of Hard Disk
Storage,” J. Mag. Soc. Jap., Vol 21, Sup. S2, p. 9.

[9]

D. Thompson and J. Best, “The Extendibility of Magnetic Recording for Data
Storage,” IBM Executive Briefing (January 1998).

[10]

C. Mee and E. Daniel, Magnetic Recording Technology, 2nd ed., McGraw-Hill,
New York (1996).

[11]

J. Liu, Y. Liu, C. Luo, Z. Shan, D. Sellmyer, “Magnetic hardening in FePt
nanostructured films,” J. Appl. Phys, Vol 81, p.5644.

[12]

D. Zipperian and S. Kurada, “Metrology Automation for the Data Storage
Industry,” IDEMA Insight, Vol 11, No. 2, p. 3.

[13] W. Jayasekara, “Inductive Write Heads Using High Moment Pole Materials”,

Ph.D Thesis, ECE Dept. Carnegie Mellon Univ. (1998).

[14]

O. Kohmoto, “Recent Development of Thin Film Materials for Magnetic Heads”,
IEEE Trans. Magn., Vol. 27, p. 3640 (1991).

[15]

K. Sin and S. Wang, “FeN/AlN Multilayer Films for High Moment Thin Film
Recording Heads”, IEEE Trans. Magn., Vol. 32, p. 3509 (1996).

[16]

Masatoshi Hayakawa, “ Characteristics of Soft Magnetic Thin Films for Magnetic
Head Core Application”, J. MMM, 134, p. 287 (1994)

[17]

M. Minor and J. Barnerd, “Thermal Stability of FeTaN as a Function of N and Ta
Content”, IEEE Trans. Magn., Vol 33, p. 3808 (1997).

[18] H. Hu, W. Weresin, D. Horne, T. Gallagher, N. Robertson, M. Re, “High

Frequency Characterization and Recording Performance of NiFe and Laminated
FeN Heads,” IEEE Trans. Magn., Vol. 32, p. 3530 (1996).

[19]

K. Katori, K. Hayashi, M. Hayakawa, K. Aso, J. Magn. Soc. Japan 13S1, p 335
(1989).

[20]

M. Kryder and W.-Y. Lai, “Modeling of Narrow Track Thin Film Write Head
Fields”, IEEE Trans. Magn., Vol. 30, p. 3873 (1994).

[21]

H. Bertram, “Theory of Magnetic Recording”, Cambridge Univ. Press (1994).


[22]

“Thin-Film Inductive Heads”, IBM Journal of Research and Development, Vol.
40, Num. 3, May 1996.

[23] K. Sin, C.-T. Wang, S. Wang, B. Clemens, “Effect of lamination on Soft

Magnetic Properties of FeN Films on Sloping Surfaces”, J. Appl. Phys., Vol. 81,
p. 4507 (1997).

[24]

Baibich et. al., Phys. Rev. Lett., Vol. 61, p. 2472 (1988).

[25] W. Egelhoff, Jr., P. Chen, C. Powell, M. Stiles, R. McMichael, J. Judy, K.

Takano, A. Berkowitz, “Oxygen as a surfactant in the growth of giant
magnetoresistance spin valves,” J. Appl. Phys., Vol. 82, p. 6142 (1997).

[26]

Adrian Devansahayam, personal communication, April 1998.

[27]

C. Tsang, R. Fontana, T. Lin, D. Heim, V. Speriosu, B. Gurney, M. Williams,
“Design, fabrication & testing of spin-valve read heads for high density
recording,” IEEE Trans. Mag., Vol. 30, No. 6, p. 3801 (1994).

[28]

Jefferson, “A Variable Head to Disk Spacing Controller for Magnetic
Recording on Rigid Disks,” IEEE Trans. Magn., MAG-24 (1988), p. 2736.

[29]

N. Zayed and L. Carley, “Comparison of equalization & detection for very high
density magnetic recording”, Conference paper 0-7803-3862-6, 4/97.

[30]

J. Mon and W. Zeng, “Equalization for Maximum Likelihood Detectors”, IEEE
Trans. Magn., Vol. 31, No.2, March 1995.

[31]

J. Fitzpatrick and X. Che, “An Evaluation of Partial Response Polynomials for
Magnetic Recording Systems”, IEEE Trans. Magn., Vol.31, NO.2, March 1996.

[32]

Y. Lin and C.-Y. Yeh, “A Generalized Viterbi Algorithm for Detection of Partial
Response Recording Systems”, IEEE Trans. on Magn., Vol.32, NO.5, Sept. 1996.

[33]

Y. Lin and C.-Y. Yeh, “Study of an extended partial response class IV channel for
digital magnetic recording”, IEEE Trans. Magn., Vol.33, NO.5, Sept. 1997.

[34]

K. Han and R. Spencer, “Comparison of different detection techniques for digital
magnetic recording channels”, IEEE Trans. Magn., Vol.31, NO.2, March 1995.

[35]

W. Ryan and B. Zafer, “A study of class I partial response signaling for magnetic
recording”, IEEE Trans. Magn., Vol.33, NO.6, Nov. 1997.

[36]

J.-G. Chern, C. Conroy, R. Contreras, “SA 19.4: An EPRML digital read/write
channel IC”, ISSCC97 Conference paper on disk-drive signal processing.

[37]

S. Nair, H. Shafiee, J. Moon, “Equalization and Detection in Storage Channels”,
IEEE Trans. Magn., Vol.32, NO.5, Sept. 1996.

[38]

H. Thapar, et al., “A class of partial response systems for increasing storage
density in magnetic recording”, IEEE Trans. Magn., Vol.23, NO.5, Sept. 1987.

[39] H. Sawaguchi and Y. Nishida, “Performance evaluation for FDTS/DF on MR

Nonlinear channels”, IEEE Trans. Magn., Vol.32, NO.5, Sept. 1996.

[40]

G. Silvus, “A comparison of detection methods in the presence of nonlinearities”,
IEEE Trans. Magn., Vol.34, NO.1, Jan.1998.

[41]

M. Fossorier, “Performance evaluation of decision feedback equalization for the
Lorentzian channel”, IEEE Trans. Magn., Vol.32, NO.2, March 1996.

[42] J. Proakis, Digital Communications, 2nd Ed., McGraw-Hill, 1989.

[43]

C. Mee and E. Daniel, Magnetic Storage Handbook, 2nd Ed., McGraw-Hill, 1996.

[44]

“New Challenge: Electromagnetic Design of BLDC Motors for High Speed Fluid
Film Bearing Spindles Used in Hard Disk Drives”, IEEE Trans. Magn., vol. 32,
No. 5, p. 3854, 1996.


[45]

“Design Trends of Spindle Motors for High Performance Hard Disk Drives”,
IEEE Trans. Magn., vol. 32, No. 5, p. 3848, 1996.

[46]

“An Experiment for Head Positioning System Using Submicron Track-width
GMR head”, IEEE Trans. Magn., vol. 32, No. 5, p. 3905, 1996.

[47]

“A Flexural Piggyback Milli-Actuator for Over 5 Gbit/in² Density Magnetic
Recording”, IEEE Trans. Magn., vol. 32, No. 5, p. 3908, 1996.

[48]

“Micro Electrostatic Actuators In Dual-Stage Disk Drives With High Track
Density”, IEEE Trans. Magn., vol. 32, No. 5, p. 3851, 1996.

[49]

“A Dual-Stage Magnetic Disk Drive Actuator using a piezoelectric device for a
high track density,” IEEE Trans. Magn., vol. 27, No. 6, p. 5298, 1991.

[50]

D. Miu, “Silicon microactuators for rigid disk drives”, Data Storage, July/Aug.
1995, p. 30.

