Atomic Scale Memory at A Silicon Surface
1. Abstract
2. Introduction
3. Conventional Storage Media
4. Silicon Memory Structure
   4.1 STM
5. Reliability and Speed
6. Outlook
7. Advantages and Disadvantages
8. Conclusion
1. Abstract
The limits of pushing storage density to the atomic scale are explored with a
memory that stores a bit by the presence or absence of one silicon atom. These
atoms are positioned at lattice sites along self-assembled tracks with a pitch of five
atom rows. The memory can be initialized and reformatted by controlled deposition
of silicon. The writing process involves the transfer of Si atoms to the tip of a
scanning tunneling microscope. The constraints on speed and reliability are
compared with data storage in magnetic hard disks and DNA.
2. Introduction
In 1959 physics icon Richard Feynman estimated that “all of the information that
man has carefully accumulated in all the books in the world, can be written in a cube
of material one two-hundredth of an inch wide”. In that estimate, one bit is stored in a
cube of 5 × 5 × 5 = 125 atoms, which is comparable to the 32 atoms that store one bit
in DNA. Such a simple, back-of-the-envelope calculation gave a first glimpse into
how much room there is for improving the density of stored data when going down
to the atomic level.
Now, a little more than 40 years after Feynman's prescient estimate, scientists
have created an atomic-scale memory using atoms of silicon in place of the 1s and 0s
that computers use to store data. The feat represents a first crude step toward a
practical atomic-scale memory where atoms would represent the bits of information
that make up the words, pictures and codes read by computers.
It is our goal to push the storage density to the atomic limit and to test
whether a single atom can be used to store a bit at room temperature. How closely
can the bits be packed without interacting? What are the drawbacks of pushing the
density to its limit while neglecting speed, reliability and ease of use?
Figure 1
The new memory was constructed on a silicon surface that automatically
forms furrows within which rows of silicon atoms are aligned and rest like tennis
balls in a gutter. This is shown in the picture below (figure 2).
In the following we begin with a description of conventional storage media and
the memory structure, move on to writing and reading, and eventually explore
reliability and speed. The outlook considers the fundamental limitations of a single
atom memory and makes a comparison to data storage in DNA.
Before discussing the atomic-scale memory at a silicon surface in detail, some
knowledge of conventional storage media will help us understand the atomic-scale
memory more deeply.
3. Conventional Storage Media
The density limit in magnetic data storage is largely determined by the
inhomogeneity of the magnetic particles that make up the storage medium.
Overcoming variations in particle size, shape, spacing, and magnetic switching
currently requires the use of about 100 particles per bit. The error limits are
extremely stringent (less than one error in 10^8 read/write cycles, which can be
reduced further to one error in 10^12 cycles by error-correcting codes). The
individual particles in today's media already approach the superparamagnetic limit
(about 10 nm), where thermal fluctuations flip the magnetization. For further
improvements one has to use fewer, but more homogeneous particles. These can be
synthesized with great perfection by growing them with a protective surfactant
shell. Our current research is aimed at depositing an ordered array of such
nanoparticles onto structured silicon surfaces. The ultimate goal is a
single-particle-per-bit memory, which would increase the storage density by a factor of 100.
Hard disk drives are the most commonly used data storage devices today.
Two main elements constitute every hard disk: the read/write head, usually
called the "slider", and the magnetic disk, which resembles a multilayer sandwich.
The head consists of closely spaced writing and reading modules. The writing
module is a sophisticated solenoid working on the inductive principle. The reading
module comprises a special giant magnetoresistive or magnetoimpedance sensor
(GMR or GMI sensor), a multilayer composition of magnetic and nonmagnetic layers.
The hard disk itself contains a number of seed layers, magnetic layers and
protective layers on an appropriate substrate. This is shown in the figure below
(figure 3).
Atomic force microscopy (AFM), along with other structure and surface
analysis techniques, copes easily with these tasks. It has been usefully applied to
measure the surface topography of the hard disk and the topography of the slider in
the region facing the disk surface. Reversible displacements of magnetic head layers
due to thermal expansion can even be observed via AFM under actual write
operation conditions.
Conversely, bits with remanent magnetization cause a measurable change in the
resistance of the GMR sensor of the reading module, enabling it to distinguish
between the two levels of the digital signal.
4. Silicon Memory Structure
The new memory was made without the lithography required for conventional
memory chips, in which light is used to etch patterns on a chemically treated
silicon surface. Using lithography to make chips denser than the best available
ones is prohibitively expensive and difficult.
Figure 5
In figure 4(b) the occupancy of the 5 × 4 sites has increased to 90 ± 3%, from
53 ± 4% in figure 4(a). Annealing at higher temperature allows the extra Si to
diffuse away to the nearest step and causes vacancies to reappear, confirming that
the half-filled structure is thermodynamically stable. Thus, an average code with
1 and 0 in equal proportion is particularly stable.
4.1 STM
We have a technology that lets us sharpen materials such as electrical wires so that
their end terminates in a single atom. We are also able to move this upper wire in
atomic-scale increments across the bottom surface. The closer the atomically sharp
tip is to the atoms on the bottom surface, the more current flows between the two
when we make them part of an electrical circuit. By measuring this current as the
tip and surface are moved relative to one another, we can see how the atoms are
arranged on the bottom surface. The instrument used for this measurement is
called the scanning tunneling microscope (STM).
Figure 6
The writing process consists of removing Si atoms from a nearly filled lattice,
such as that in figures 4(b) and (d). Figure 5 demonstrates one of two methods,
which is based on chemical attachment to the tip. The tip is brought down towards
the Si atom to be removed, typically by 0.6 nm for 30 ms without applying a voltage.
A less reliable method uses field desorption by a voltage pulse of −4 V on the sample
(30 ms long) with the tip hovering above the Si atom to be removed.
The readout is demonstrated in figure 4(e). A line scan along one of the tracks in
figure 4(c) (marked by an arrow) produces well-defined peaks for extra Si atoms
that protrude well beyond the noise level. Since the memory is self-formatted into
tracks it can be read by a simple, one-dimensional scan. There is no need to search in
two dimensions for the location of a bit. The signal is highly predictable since all
atoms have the same shape and occur on well-defined lattice sites. That allows for a
high level of filtering and error correction. After subtracting identical Gaussians at
the lattice sites one obtains a residual comparable to the noise (figure 4(e), bottom
trace). The height of the signal (z = 0.13 nm) exceeds the noise (δz = 0.005 nm rms)
by a factor of 26, using a dwell time of 500 µs/point. A highly reproducible pulse
shape allows sophisticated signal filtering techniques, because most noise signals do
not match the known shape and can be removed. An example is partial response
maximum likelihood (PRML) detection, which is widely used for the readout of
magnetic hard disks and in long-distance communications.
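To make the filtering idea concrete, here is a minimal readout simulation in Python (our sketch, not code from the original work). It assumes Gaussian pulses with the height, width, spacing and noise figures quoted above; the random bit pattern and the half-height threshold are illustrative choices.

```python
import numpy as np

# Illustrative readout simulation: Gaussian pulses on a self-formatted track.
# Figures from the text: peak height 0.13 nm, rms noise 0.005 nm,
# bit spacing B = 1.54 nm, pulse FWHM W = 0.55 nm.
rng = np.random.default_rng(0)
B, W = 1.54, 0.55                       # bit spacing and FWHM (nm)
sig = W / 2.355                         # Gaussian width from FWHM
bits = rng.integers(0, 2, size=16)      # hypothetical pattern of extra Si atoms
x = np.arange(0, len(bits) * B, 0.02)   # scan positions along the track (nm)

# Height profile: one Gaussian per occupied lattice site, plus readout noise.
z = sum(b * 0.13 * np.exp(-(x - (i + 0.5) * B) ** 2 / (2 * sig ** 2))
        for i, b in enumerate(bits))
z += rng.normal(0.0, 0.005, size=x.size)

# Readout: sample the known lattice sites and threshold at half the peak height.
sites = (np.arange(len(bits)) + 0.5) * B
read = (np.interp(sites, x, z) > 0.065).astype(int)
assert np.array_equal(read, bits)       # at SNR ~ 26, errors are vanishingly rare
```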
5. Reliability and Speed

Reliability becomes a key issue with such a small memory cell. The writing process is
too slow and error prone to be practical, but the readout deserves closer inspection.
The error rate for reading is related to the effective signal-to-noise ratio (SNR):

SNR = (2/π) · WB/σ²

where W is the full width at half maximum of a signal pulse, B is the bit spacing and
σ is the rms jitter of the pulse positions. Typical values for hard drives are
W ≈ 120 nm, B ≈ 50 nm, σ ≈ 4 nm, giving SNR ≈ 240 ≈ 24 dB and an error rate of
10^−8. For the atomic memory one can derive
analogous quantities W = 0.55 nm, B = 1.54 nm, and σ = 0.015 nm from figure 4(e)
by taking the peak width, the lattice spacing and the jitter of the peak positions
relative to the lattice points. These numbers can serve as input for designing filters
and codes that minimize the error rate. Such models will be different from those for
hard disks, where readout pulses alternate in sign and bits are less than a pulse width
apart. A closer analogue might be the unipolar soliton pulses that are used in long
distance communications through optical fibres.
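As a quick numerical check, the sketch below plugs the quoted figures into the SNR formula. The printed atomic-memory value is derived here from W, B and σ; it is not stated in the text.

```python
import math

def snr(W, B, sigma):
    """Effective signal-to-noise ratio SNR = (2/pi) * W * B / sigma**2."""
    return (2 / math.pi) * W * B / sigma ** 2

# Values quoted in the text (all lengths in nm).
for name, W, B, sigma in [("hard disk", 120, 50, 4),
                          ("atomic memory", 0.55, 1.54, 0.015)]:
    s = snr(W, B, sigma)
    print(f"{name:13s}: SNR = {s:7.0f} = {10 * math.log10(s):.0f} dB")
# hard disk    : SNR =     239 = 24 dB   (text: ~240, error rate ~10^-8)
# atomic memory: SNR =    2396 = 34 dB   (derived here, not quoted in the text)
```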
The long-term stability of a stored bit is limited by thermally activated hopping of
Si atoms, which occurs at a rate

ν(T) = ν0 exp(−E/kBT)

where E is the activation energy for moving an atom and ν0 is an attempt frequency.
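To get a feel for bit stability, this sketch evaluates the rate at room temperature using the 1.2 eV activation energy quoted below; the attempt frequency ν0 = 10^13 Hz is an assumed typical value, not one given in the text.

```python
import math

K_B = 8.617e-5                          # Boltzmann constant (eV/K)

def hop_rate(E_act, T, nu0=1e13):
    """Arrhenius hopping rate nu(T) = nu0 * exp(-E/kBT).
    nu0 = 1e13 Hz is an assumed typical attempt frequency (not from the text)."""
    return nu0 * math.exp(-E_act / (K_B * T))

rate = hop_rate(1.2, 300)               # 1.2 eV barrier at room temperature
print(f"hop rate ~ {rate:.1e} per second")                    # ~7e-8 /s
print(f"mean bit lifetime ~ {1 / rate / 3.15e7:.1f} years")   # ~0.5 years
```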
Figure 7
One of the fundamental limitations to devices operating on the atomic scale is speed.
By investigating a storage device at the single atom limit one can learn something
about how today's data storage might evolve in the future. The graph below shows
readout speed versus storage density (figure 8), two key properties of a memory.
Compared to traditional data storage in hard disks, the silicon atom memory has a
very impressive density (250 terabits per square inch), but its data rate is very low.

Figure 8

For example, the minimum switching time t is given by the uncertainty relation
t = h/E, where E is the switching energy. E has to be larger than the minimum energy
Emin = kBT ln 2 for switching one bit. In our case, the activation energy of 1.2 eV for
moving one Si atom is much larger than kBT = 25 meV.
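As a worked example (our illustration, using standard constants), the uncertainty-relation switching times for these two energies are:

```python
import math

H = 4.136e-15                           # Planck constant (eV s)
K_B_T = 0.025                           # kBT at room temperature (eV)

# Minimum switching time t = h/E for the two energies discussed above.
for label, E in [("E = 1.2 eV (move one Si atom)", 1.2),
                 ("Emin = kBT ln 2", K_B_T * math.log(2))]:
    print(f"{label}: t = {H / E:.1e} s")
# E = 1.2 eV (move one Si atom): t = 3.4e-15 s
# Emin = kBT ln 2 (~17 meV)    : t = 2.4e-13 s
```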
In principle, that would allow very fast switching and writing. The readout,
however, has to slow down for small bits, since the signal decreases and becomes
noisier. In our case, the tunneling current is affected by statistical fluctuations in the
number of electrons and by thermal noise. Their respective spectral densities are
Ss(ω) = 2eI and St(ω) = 4kBT/R, resulting in current fluctuations of 8 and 1.3 fA
Hz^−1/2 for our conditions (I = 0.2 nA, R = 10^10 Ω, T = 300 K).

Adding the two noise contributions and integrating over a dwell time of τ =
500 µs/point, one finds a current fluctuation δI = [(Ss + St)/τ]^1/2 = 3.6 × 10^−13 A. This
current fluctuation is translated into a height fluctuation δz via the exponential
dependence I(z) = I0 exp(−kz) of the tunnelling current on z: δz = δI/(∂I/∂z) =
δI/(−kI) = 9 × 10^−5 nm with k = 20 nm^−1 for a typical tunnel barrier of 4 eV. That is 55
times smaller than the actual noise δz = 0.005 nm.
Statistical and thermal noise would meet this level with a dwell time of only
160 ns (200 electrons/point). The corresponding readout speed would be 6 × 10^6
points s^−1, which is respectable but still slower than today's hard disks (figure 8).
High-speed STM amplifiers operating at rates up to 50 MHz exist.
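The noise budget above can be reproduced in a few lines (our sketch; the constants are standard values and the formulas are exactly those quoted):

```python
import math

Q_E = 1.602e-19                         # electron charge (C)
KB = 1.381e-23                          # Boltzmann constant (J/K)

I, R, T = 0.2e-9, 1e10, 300.0           # tunnel current (A), resistance, temp.
k = 20.0                                # decay constant of I(z), in nm^-1

S_shot = 2 * Q_E * I                    # shot-noise spectral density (A^2/Hz)
S_therm = 4 * KB * T / R                # thermal (Johnson) noise density

def height_noise(tau):
    """Height fluctuation dz = dI / (k I) for a dwell time tau per point."""
    dI = math.sqrt((S_shot + S_therm) / tau)
    return dI / (k * I)                 # in nm, since k is in nm^-1

print(f"dz at 500 us/point: {height_noise(500e-6):.1e} nm")  # ~9e-5 nm
print(f"dz at 160 ns/point: {height_noise(160e-9):.1e} nm")  # ~5e-3 nm, the actual noise
```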
A tool for enhancing speed is a high degree of parallelism. In fact, there has been a
substantial effort directed at producing large arrays of scanning probe tips by silicon
processing methods. An array of 32 × 32 = 1024 tips with 92 µm pitch is operational.
The atomic precision of the tracks ensures that the tip array follows the tracks after a
one-time adjustment of the tip positions and the scan direction.
6. Outlook
Compared to conventional storage media, both DNA and the silicon surface
excel in storage density (figure 8). The highest density achieved in hard disk
demos is about 100 Gbit inch^−2, whereas the Si atom memory exhibits 250 Tbit
inch^−2.
Figure 9
7. Advantages and Disadvantages

Obviously there are some drawbacks. The memory was constructed and
manipulated in vacuum, and a scanning tunneling microscope is needed to write the
memory, which makes the writing process very time consuming.
8. Conclusion
The push towards the atomic density limit requires a sacrifice in speed, as
demonstrated in figure 8. Practical data storage might evolve in a similar direction,
with the gain in speed slowing down as the density increases. Somewhere on the way
to the atomic scale there ought to be an optimum combination of density and speed.
If the reading and writing speed is improved and the memory is made
cost-effective, this will revolutionize the field of secondary storage devices.
Researchers are working on manufacturing STMs with multiple tips or heads that
can perform parallel read/write processes.
This type of memory may eventually become useful for storing vast amounts of
data, but because the stability of each bit of information depends on one or a few
atoms, it is likely to be limited to applications where a small number of errors can
be tolerated.