Atomic Scale Memory at a Silicon Surface

TABLE OF CONTENTS

1. Abstract
2. Introduction
3. Conventional Storage Media
4. Silicon Memory Structure
   4.1 STM
5. Reliability and Speed
6. Outlook
7. Advantages and Disadvantages
8. Conclusion
9. References
1. Abstract

The limits of pushing storage density to the atomic scale are explored with a
memory that stores a bit by the presence or absence of one silicon atom. These
atoms are positioned at lattice sites along self-assembled tracks with a pitch of five
atom rows. The memory can be initialized and reformatted by controlled deposition
of silicon. The writing process involves the transfer of Si atoms to the tip of a
scanning tunneling microscope. The constraints on speed and reliability are
compared with data storage in magnetic hard disks and DNA.

2. Introduction

In 1959, physics icon Richard Feynman estimated that "all of the information that
man has carefully accumulated in all the books in the world, can be written in a cube
of material one two-hundredth of an inch wide". In that estimate he allotted a cube of
5×5×5 = 125 atoms to store one bit, which is comparable to the 32 atoms that store one
bit in DNA. Such a simple back-of-the-envelope calculation gave a first glimpse of
how much room there is for improving the density of stored data by going down
to the atomic level.
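
Feynman's number is easy to check with the short Python sketch below; the atomic spacing of about 0.25 nm is an assumption made here for illustration, not a figure from the text:

    # Back-of-the-envelope check of Feynman's 1959 estimate.
    # Assumed (not from the text): a typical atomic spacing of ~0.25 nm.
    INCH = 2.54e-2                   # metres per inch
    cube_edge = INCH / 200           # "one two-hundredth of an inch"
    atom_spacing = 0.25e-9           # metres

    atoms_per_edge = cube_edge / atom_spacing
    total_atoms = atoms_per_edge ** 3
    bits = total_atoms / 125         # Feynman's 5 x 5 x 5 atoms per bit

    print(f"atoms per edge: {atoms_per_edge:.2g}")    # ~5.1e5
    print(f"total atoms:    {total_atoms:.2g}")       # ~1.3e17
    print(f"stored bits:    {bits:.2g}")              # ~1e15 bits, roughly 130 terabytes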

In the meantime, there has been great progress towards miniaturizing
electronic devices all the way down to single molecules or nanotubes as active
elements. Memory structures have been devised that consist of crossed arrays of
nanowires linked by switchable organic molecules, or crossed arrays of carbon
nanotubes with electrostatically switchable intersections.

Now, a little more than 40 years after Feynman's prescient estimate, scientists
have created an atomic-scale memory using atoms of silicon in place of the 1s and 0s
that computers use to store data. The feat represents a first crude step toward a
practical atomic-scale memory where atoms would represent the bits of information
that make up the words, pictures and codes read by computers.

It is our goal to push the storage density to the atomic limit and to test
whether a single atom can be used to store a bit at room temperature. How closely
can the bits be packed without interacting? What are the drawbacks of pushing the
density to its limit while neglecting speed, reliability and ease of use?

The result is a two-dimensional realization of the device envisaged by Feynman,
as shown in figure 1. A bit is encoded by the presence or absence of a Si atom inside
a unit cell of 5×4 = 20 atoms. The remaining 19 atoms are required to prevent
adjacent bits from interacting with each other, which is verified by measuring the
autocorrelation. A special feature of the structure in figure 1 is the array of self-assembled
tracks with a pitch of five atom rows that supports the extra atoms. Such regular
tracks are reminiscent of a conventional CD-ROM, but with the scale shrunk from
micrometres to nanometres. Although the memory created now is two-dimensional
rather than the three-dimensional cube envisioned by Feynman, it provides a storage
density a million times greater than a CD-ROM, today's conventional means of storing data.

Figure 1
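
A quick illustrative check of the "million times" claim, using the bit spacings quoted in section 5 (1.54 nm along a track, about 1.7 nm between tracks); the CD-ROM figures are typical values assumed here, not numbers from the text:

    # Areal density of the 5x4 atomic memory cell, from the bit spacings
    # quoted in section 5: 1.54 nm along the track, ~1.7 nm between tracks.
    INCH = 2.54e-2                        # metres per inch
    bit_area = 1.54e-9 * 1.7e-9           # area per bit, m^2

    atomic = INCH ** 2 / bit_area
    print(f"atomic memory: {atomic:.2g} bits/inch^2")   # ~2.5e14 (250 Tbit/inch^2)

    # CD-ROM for comparison (assumed typical values: ~650 MB of user data
    # spread over a ~86 cm^2 recordable area).
    cd = (650e6 * 8) / (86e-4 / INCH ** 2)
    print(f"CD-ROM:        {cd:.2g} bits/inch^2")       # ~4e8
    print(f"ratio:         {atomic / cd:.1g}")          # ~6e5, the 'million times'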

The new memory was constructed on a silicon surface that automatically
forms furrows within which rows of silicon atoms align and rest like tennis
balls in a gutter. This is shown in the picture below (figure 2).

Figure 2: Scattering gold atoms onto a silicon wafer

The readout of such a memory via scanning tunneling microscopy (STM) is
obvious, albeit slow. Writing is more difficult. While atoms can be moved
controllably at liquid-helium temperature, it is much harder to achieve that at room
temperature. In order to prevent diffusion, it is necessary to choose atoms that are
strongly bound to the surface. Moving them requires strong forces and a close
approach by the STM tip, which entails the risk of an atom jumping over to the tip.
We turn this effect into a virtue and use it to remove a silicon atom from the surface
when writing a 0. The memory is pre-formatted with a 1 everywhere by controlled
deposition of silicon onto vacant sites.

Like conventional memory, the atomic-scale device can be initialized,
formatted, written and read at room temperature.

In the following we begin with a description of conventional storage media and
the memory structure, move on to writing and reading, and eventually explore
reliability and speed. The outlook considers the fundamental limitations of a single
atom memory and makes a comparison to data storage in DNA.

3. Conventional Storage Media

We are going to discuss atomic-scale memory at a silicon surface, but some
knowledge of conventional storage media will help us understand the atomic-scale
memory more deeply.

The highest commercial storage density is achieved with magnetic hard
disks, whose areal density has increased by seven orders of magnitude since their
invention in Feynman's days. Currently, the storage density is approaching 100
Gigabits per square inch in commercial hard disks. Typical storage media consist of
a combination of several metals, which segregate into magnetic particles embedded
in a non-magnetic matrix that keeps them magnetically independent. A strip of
particles with parallel magnetic orientation makes up a bit. When such a
bit is imaged by a magnetic force microscope, the collection of these particles shows
up as a white or dark line, depending on the magnetic orientation.

The density limit in magnetic data storage is largely determined by the
inhomogeneity of the magnetic particles that make up the storage medium.
Overcoming variations in particle size, shape, spacing, and magnetic switching
currently requires the use of about 100 particles per bit. The error limits are
extremely stringent (less than one error in 10⁸ read/write cycles, which can be
reduced further to one error in 10¹² cycles by error-correcting codes). The
individual particles in today's media already approach the superparamagnetic limit
(about 10 nm), where thermal fluctuations flip the magnetization. For further
improvements one has to use fewer, but more homogeneous, particles. These can be
synthesized with great perfection by growing them with a protective surfactant
shell. Our current research is aimed at depositing an ordered array of such
nanoparticles onto structured silicon surfaces. The ultimate goal is a single-particle-per-bit
memory, which would increase the storage density by a factor of 100.

Hard disk drives are the most commonly used data storage devices today.
Two main elements constitute every hard disk: the read/write head, usually
called the "slider", and the magnetic disk, which resembles a multilayer sandwich.
The head consists of closely spaced writing and reading modules. The writing module
is a sophisticated solenoid working on the inductive principle. The reading module
comprises a giant magnetoresistive or magnetoimpedance sensor (GMR or GMI
sensor), which is a multilayer composition of magnetic and nonmagnetic layers.
The hard disk itself contains a number of seed layers, magnetic layers and
protective layers on an appropriate substrate. This is shown in the figure below
(figure 3).

Atomic force microscopy (AFM) copes easily with these tasks, along with other
structure and surface analysis techniques. It has been usefully applied to measure
the surface topography of hard disks and the topography of the slider in the region
facing the disk surface. Reversible displacements of the magnetic head layers due to
thermal expansion can be observed via AFM under actual write-operation conditions.

Figure 3: Schematic of magnetic recording and reading

As is well known, information on a hard disk is stored in terms of bits:
microscopic areas (300 nm or less in width) that either have or lack a local magnetic
moment, thus expressing the "high" or "low" level of a digital signal. The writing
module operates by inducing local magnetic moments in the bit areas of the hard
disk's magnetic layer. Conversely, bits carrying remnant magnetization cause a
measurable change in the resistance of the GMR sensor in the reading module,
making it possible to distinguish between the two levels of the digital signal.

Progress in hard disk engineering has been accompanied by an ongoing reduction
in the sizes of all the materials and modules involved in magnetic recording. Suffice
it to say that the thickness of each magnetic sublayer and of the protective carbon
coating amounts to merely a few nanometers, and the head hovers over the disk
surface at heights not exceeding 50 nm. Flying at such small heights requires
extraordinary perfection of the disk surface and the absence of any defects or
particles on it, since the smallest particle may result in severe damage to the surface
by the moving head. Moreover, the effects of non-uniform thermal expansion,
negligible in the early stages of hard disk development, nowadays interfere
significantly with these processes.

Thus, monitoring the roughness and defectiveness of the disk surface, as well as
the magnetic head topography, with nanometer accuracy is of vital importance in
magnetic storage technology.

4. Silicon Memory Structure

The new memory was made without the use of lithography, which is required to
make conventional memory chips. To make conventional memory chips, light is
used to etch patterns on a chemically treated silicon surface. Using lithography to
make chips denser than the best available ones is prohibitively expensive and
difficult.

The self-assembled memory structure shown in figures 1 and 2 is obtained by
depositing 0.4 monolayer of gold onto a Si(111) surface at 700 °C with a post-anneal
at 850 °C, thereby forming the well-known Si(111)5×2–Au structure. All images
are taken by STM with a tunneling current of 0.2 nA and a sample bias of −2 V. At
this bias the extra silicon atoms are enhanced compared to the underlying 5×2
lattice. A stepped Si(111) substrate tilted by 1° towards the azimuth is used to
obtain one of the three possible domain orientations exclusively. The surface
arranges itself into tracks that are exactly five atom rows wide (figure 1). They are
oriented parallel to the steps. Protrusions reside on top of the tracks on a 5×4
lattice. Only half of the possible sites are occupied in thermal equilibrium (figure
4(a)). When varying the Au coverage, the occupancy remains close to 50%. Excess Au
is taken up by patches of the Au-rich Si(111)√3×√3–Au phase, and Au deficiency
leads to patches of clean Si(111)7×7. In order to find out whether the protrusions
are Si or Au, we evaporate additional Si and Au at low temperature (300 °C). Silicon
fills the vacant sites (figures 4(b) and (d)), but gold does not.

Figure 4

In figure 4(b) the occupancy of the 5×4 sites has increased to 90±3%, from
53±4% in figure 4(a). Annealing at higher temperature allows the extra Si to diffuse
away to the nearest step and causes vacancies to reappear, confirming that the
half-filled structure is thermodynamically stable. Thus, an average code with 1 and 0
in equal proportion is particularly stable.

Writing is more difficult. While atoms can be positioned controllably at liquid-helium
temperature, this is much harder to achieve at room temperature. In order
to prevent them from moving around spontaneously, it is necessary to choose
atoms that are strongly bound to the surface. Pushing them around with the STM tip
requires a close approach, which entails the risk of an atom jumping over to the tip.
This problem can be turned into a solution by using the STM tip to remove silicon
atoms for writing zeros. The memory is pre-formatted with a 1 everywhere by
controlled deposition of silicon onto all vacant sites.

4.1 STM

We have a technology that lets us sharpen materials, such as electrical wires, so that
their end terminates in a single atom. We are also able to move this upper wire in
atomic-scale increments across the bottom surface. The closer the atomically sharp
tip is to the atoms of the bottom surface, the more electricity flows between the two
when we make them part of an electrical circuit. By measuring this electricity as the
tip and surface are moved relative to one another, we can see how the atoms are
arranged on the bottom surface. The instrument used for this measurement is called
the scanning tunneling microscope.
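
The distance sensitivity behind this comes from the exponential dependence of the tunneling current on the tip-surface gap, I(z) = I0 exp(−kz), which section 5 uses quantitatively. A minimal sketch, assuming the decay constant k ≈ 20 nm⁻¹ quoted there for a typical 4 eV barrier:

    import math

    # STM tunneling current falls off exponentially with the gap z:
    # I(z) = I0 * exp(-k*z), with k ~ 20 nm^-1 (value used in section 5).
    K = 20.0  # nm^-1

    def relative_current(dz_nm):
        """Current relative to the initial gap after retracting the tip by dz_nm."""
        return math.exp(-K * dz_nm)

    for dz in (0.01, 0.035, 0.1):
        print(f"retract {dz:.3f} nm -> current x {relative_current(dz):.2f}")
    # The current halves every ln(2)/k = 0.035 nm, a tenth of an atomic
    # diameter, which is why single atoms stand out so clearly.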

Figure 6

The writing process consists of removing Si atoms from a nearly filled lattice,
such as that in figures 4(b) and (d). Figure 5 demonstrates one of two methods,
which is based on chemical attachment to the tip: the tip is brought down towards
the Si atom to be removed, typically by 0.6 nm for 30 ms, without applying a voltage.
A less reliable method uses field desorption by a voltage pulse of −4 V on the sample
(30 ms long), with the tip hovering above the Si atom to be removed.

The readout is demonstrated in figure 4(e). A line scan along one of the tracks in
figure 4(c) (marked by an arrow) produces well-defined peaks for extra Si atoms
that protrude well beyond the noise level. Since the memory is self-formatted into
tracks, it can be read by a simple, one-dimensional scan. There is no need to search in
two dimensions for the location of a bit. The signal is highly predictable, since all
atoms have the same shape and occur on well-defined lattice sites. That allows for a
high level of filtering and error correction. After subtracting identical Gaussians at
the lattice sites, one obtains a residual comparable to the noise (figure 4(e), bottom
trace). The height of the signal (Δz = 0.13 nm) exceeds the noise (δz = 0.005 nm rms)
by a factor of 26, using a dwell time of 500 µs/point. A highly reproducible pulse
shape allows sophisticated signal-filtering techniques, because most noise signals do
not match the known shape and can be removed. An example is partial-response
maximum-likelihood (PRML) detection, which is widely used for the readout of
magnetic hard disks and in long-distance communications.

In PRML the signal is filtered to produce a standard line shape, sampled at
regular intervals and processed in real time by the Viterbi algorithm, which selects
the most likely bit sequence.
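
To make this template-based readout concrete, the following sketch is an illustration of the idea rather than the actual signal chain used in the experiment: it synthesizes a noisy line scan with Gaussian peaks at occupied lattice sites (using the pulse parameters from section 5), correlates it with the known pulse shape at each site, and thresholds the result to recover the bits.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pulse parameters taken from section 5 of this report: bit pitch
    # 1.54 nm, pulse FWHM 0.55 nm, peak height 0.13 nm, noise 0.005 nm rms.
    PITCH, FWHM, HEIGHT, NOISE = 1.54, 0.55, 0.13, 0.005
    SIGMA = FWHM / 2.355                   # Gaussian sigma from FWHM
    STEP = 0.01                            # scan step in nm

    bits = rng.integers(0, 2, size=16)     # random pattern to "write"
    x = np.arange(-2.0, len(bits) * PITCH + 2.0, STEP)

    # Synthesize the line scan: one Gaussian per stored '1', plus noise.
    z = NOISE * rng.standard_normal(x.size)
    for i, b in enumerate(bits):
        if b:
            z += HEIGHT * np.exp(-0.5 * ((x - (i + 0.5) * PITCH) / SIGMA) ** 2)

    # Template readout: correlate the scan with the known pulse shape at
    # each lattice site and threshold at half of the expected response.
    template = np.exp(-0.5 * (np.arange(-1.0, 1.0, STEP) / SIGMA) ** 2)
    expected = HEIGHT * np.sum(template ** 2)
    read = []
    for i in range(len(bits)):
        centre = np.searchsorted(x, (i + 0.5) * PITCH)
        window = z[centre - 100:centre + 100]      # +-1 nm around the site
        read.append(int(np.dot(window, template) > expected / 2))

    print("written:", bits.tolist())
    print("read:   ", read)                # matches at this signal-to-noise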

5. Reliability and Speed

Reliability becomes a key issue with such a small memory cell. The writing process is
too slow and error-prone to be practical, but the readout deserves closer inspection.
The error rate for reading is related to the effective signal-to-noise ratio (SNR):

SNR = (2/π) · W·B/σ²

where W is the full width at half maximum of a signal pulse, B the bit spacing and σ
the rms jitter of the pulse positions.

Typical values for hard drives are W ≈ 120 nm, B ≈ 50 nm, σ ≈ 4 nm, giving SNR ≈
240 (≈ 24 dB) and an error rate of 10⁻⁸. For the atomic memory one can derive
analogous quantities W = 0.55 nm, B = 1.54 nm and σ = 0.015 nm from figure 4(e),
by taking the peak width, the lattice spacing and the jitter of the peak positions
relative to the lattice points. These numbers can serve as input for designing filters
and codes that minimize the error rate. Such models will be different from those for
hard disks, where readout pulses alternate in sign and bits are less than a pulse width
apart. A closer analogue might be the unipolar soliton pulses that are used in long-distance
communications through optical fibres.
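
Plugging both sets of numbers into the SNR formula above gives the following (an illustrative check; the report itself only quotes the hard-disk value):

    import math

    def snr(W, B, sigma):
        """Effective signal-to-noise ratio, SNR = (2/pi) * W * B / sigma^2."""
        return (2 / math.pi) * W * B / sigma ** 2

    # Hard disk: W ~ 120 nm, B ~ 50 nm, sigma ~ 4 nm.
    hd = snr(120, 50, 4)
    print(f"hard disk:     SNR = {hd:.0f} ({10 * math.log10(hd):.0f} dB)")    # ~240, 24 dB

    # Atomic memory: W = 0.55 nm, B = 1.54 nm, sigma = 0.015 nm (figure 4(e)).
    atom = snr(0.55, 1.54, 0.015)
    print(f"atomic memory: SNR = {atom:.0f} ({10 * math.log10(atom):.0f} dB)")  # ~2400, 34 dB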

Pulse-amplitude variation also plays a role for solitons; in our case it is 0.005 nm,
or about 4% of the peak height.

The thermal stability of Si atoms on the tracks can be estimated by fitting
high-temperature STM data to a simple model of activated diffusion. The
temperature dependence of the jump rate ν(T) is determined by a Boltzmann
factor containing an activation energy E:

ν(T) = ν0 exp(−E/kBT).

The attempt frequency ν0 is comparable to the frequency of lattice vibrations, with a
typical value ν0 = 10¹³ s⁻¹. Estimating a jump rate of about 1 s⁻¹ at a temperature
T = 475 K, one obtains an activation energy E = kBT ln(ν0/ν) ≈ 1.2 eV. That gives a
jump rate of 10⁻⁸ s⁻¹ at room temperature (kBT = 25 meV), i.e. one jump in 2–3 years.
Thus, thermal stability is not an issue compared to less fundamental limits, such as
surface contamination.

Correlations between adjacent bits come into play at high storage density. In
magnetic storage one has to be concerned about magnetic coupling between
adjacent particles at a bit spacing of 10 nm or less. The bits are much closer
than that on the Si surface (1.5 nm along the track and 1.7 nm between tracks). In
order to detect interactions between the Si atoms, we have determined the
autocorrelation function of their equilibrium distribution (figure 7). Along the tracks,
the nearest site of the underlying 5×2 lattice is almost completely excluded, with an
occupancy of only 0.04 relative to the average.

Figure 7

Therefore, a bit spacing of two lattice sites would discriminate a "1 1" pair against a
"1 0" by a factor of 25. All other correlations are much closer to the average of 1, with
the largest deviation (1.33) occurring at the second 5×2 site in figure 7. Thus, the
5×4 cell represents the smallest viable cell on the underlying 5×2 lattice that keeps bit
interactions under control. Feynman's proposed spacing of five atoms between bits
was correct.
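
As a numerical cross-check of the thermal-stability estimate above, a short sketch using the report's own numbers:

    import math

    KB_EV = 8.617e-5                  # Boltzmann constant, eV/K

    # Fit: a jump rate of ~1 per second observed at 475 K, with an
    # attempt frequency of 1e13 Hz.
    nu0, nu_obs, T_fit = 1e13, 1.0, 475.0
    E = KB_EV * T_fit * math.log(nu0 / nu_obs)
    print(f"activation energy: {E:.2f} eV")            # ~1.2 eV

    # Extrapolate to room temperature with nu(T) = nu0 * exp(-E/(kB*T)).
    nu_room = nu0 * math.exp(-E / (KB_EV * 300.0))
    print(f"room-T jump rate:  {nu_room:.1e} per s")   # ~3e-8, same order as 1e-8
    print(f"mean time to jump: {1 / nu_room / 3.15e7:.1f} years")
    # ~1 year here; the rounded values in the text (E = 1.2 eV, kBT = 25 meV)
    # give 2-3 years. The exponential makes the result very sensitive to
    # small rounding in E and T.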

One of the fundamental limitations of devices operating on the atomic scale is speed.
By investigating a storage device at the single-atom limit, one can learn something
about how today's data storage might evolve in the future. The graph below (figure 8)
shows readout speed versus storage density, two key properties of a memory.
Compared to traditional data storage in hard disks, the silicon atom memory has a
very impressive density (250 Terabits per square inch), but its data rate is extremely
low. For example, the minimum switching time t is given by the uncertainty relation
t = h/E, where E is the switching energy.

Figure 8

E has to be larger than the minimum energy Emin = kBT ln 2 for switching one bit.
In our case, the activation energy of 1.2 eV for moving one Si atom is much larger
than kBT = 25 meV.
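
A worked check of these two limits, using standard constants:

    import math

    H_EV = 4.136e-15                 # Planck constant, eV*s
    KT_EV = 0.025                    # kB*T at room temperature, eV

    # Landauer minimum energy for switching one bit:
    E_min = KT_EV * math.log(2)
    print(f"E_min = kT ln 2 = {E_min * 1e3:.0f} meV")      # ~17 meV

    # Uncertainty-relation switching time for E = 1.2 eV (one Si atom):
    t_min = H_EV / 1.2
    print(f"t = h/E = {t_min:.1e} s")                      # ~3.4e-15 s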

In principle, that would allow very fast switching and writing. The readout,
however, has to slow down for small bits, since the signal decreases and becomes
noisier. In our case, the tunneling current is affected by statistical fluctuations in the
number of electrons and by thermal noise. Their respective spectral densities are
Ss(ω) = 2eI and St(ω) = 4kBT/R, resulting in current fluctuations of 8 and 1.3 fA/√Hz
for our conditions (I = 0.2 nA, R = 10¹⁰ Ω, T = 300 K).

Adding the two noise contributions and integrating over a dwell time of τ =
500 µs/point, one finds a current fluctuation δI = √[(Ss + St)/τ] = 3.6×10⁻¹³ A. This
current fluctuation translates into a height fluctuation δz via the exponential
dependence I(z) = I0 exp(−kz) of the tunneling current on z: δz = δI/|∂I/∂z| =
δI/(kI) = 9×10⁻⁵ nm, with k = 20 nm⁻¹ for a typical tunnel barrier of 4 eV. That is 55
times smaller than the actual noise δz = 0.005 nm.

Statistical and thermal noise would meet this level with a dwell time of only
160 ns (200 electrons/point). The corresponding readout speed would be 6×10⁶
points s⁻¹, which is respectable but still slower than today's hard disks (figure 8).
High-speed STM amplifiers operating at rates up to 50 MHz already exist.
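
The noise budget above can be reproduced in a few lines (a sketch following the report's formulas):

    import math

    E_CHARGE = 1.602e-19             # C
    KB = 1.381e-23                   # J/K

    I, R, T = 0.2e-9, 1e10, 300.0    # current (A), resistance (ohm), temperature (K)
    K, NOISE_Z = 20.0, 0.005         # decay constant (nm^-1), measured noise (nm)

    Ss = 2 * E_CHARGE * I            # shot noise spectral density, A^2/Hz
    St = 4 * KB * T / R              # thermal (Johnson) noise, A^2/Hz
    print(f"shot {math.sqrt(Ss) * 1e15:.1f}, thermal {math.sqrt(St) * 1e15:.1f} fA/sqrt(Hz)")

    tau = 500e-6                     # dwell time per point, s
    dI = math.sqrt((Ss + St) / tau)  # integrated current fluctuation, A
    dz = dI / (K * I)                # height fluctuation, nm
    print(f"dz = {dz:.1e} nm, {NOISE_Z / dz:.0f}x below the actual noise")   # 9e-5 nm, 55x

    # Dwell time at which this statistical limit reaches the 0.005 nm level:
    tau_min = (Ss + St) / (NOISE_Z * K * I) ** 2
    print(f"tau_min = {tau_min * 1e9:.0f} ns -> {1 / tau_min:.1e} points/s")  # ~160 ns, ~6e6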

A tool for enhancing speed is a high degree of parallelism. In fact, there has been a
substantial effort directed at producing large arrays of scanning probe tips by silicon
processing methods. An array of 32 × 32 = 1024 tips with 92 µm pitch is operational.
The atomic precision of the tracks ensures that the tip array follows the tracks after a
one-time adjustment of the tip positions and the scan direction.

6. Outlook

An interesting yardstick is the storage and transcription of data in biological
systems: 5×4 = 20 surface atoms store one bit on silicon, compared with the 32 atoms
per bit used by DNA (64 atoms for an AT base pair plus backbone, 68 atoms for CG,
with each base pair coding the four combinations AT, TA, CG, GC, i.e. 2 bits). The
transcription rate from DNA to RNA is ≈ 60 nucleotides s⁻¹ for E. coli at 37 °C, and
DNA replication is 10 times faster. The STM acquisition rate on silicon is comparable
(120 bits s⁻¹ for the line scan in figure 4(e)); it could be as high as 10⁷ bit s⁻¹ at the
statistical noise limit. Parallel readout can be used in both cases, with ≈ 10¹
subsections of DNA being replicated simultaneously and an array of 10³ tips
scanning in parallel. Cells use a similar parallelism of 10³–10⁴ for protein synthesis,
where speed is more important. The error rate achieved in DNA replication is as low
as 10⁻⁷–10⁻¹¹, with error correction by DNA polymerase.
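
The atoms-per-bit comparison is simple arithmetic, verified by the sketch below:

    # Atoms needed to store one bit: silicon surface memory vs DNA.
    si = 5 * 4                                 # one 5x4 unit cell per bit
    print(f"silicon: {si} atoms/bit")

    # Each base pair (including backbone) encodes one of AT, TA, CG, GC,
    # i.e. 2 bits.
    for pair, atoms in (("AT", 64), ("CG", 68)):
        print(f"DNA {pair}: {atoms} atoms -> {atoms / 2:.0f} atoms/bit")
    # 32 and 34 atoms/bit, comparable to 20 atoms/bit on silicon.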

Compared to conventional storage media, both DNA and the silicon surface
excel in storage density (figure 8). The highest density achieved in hard disk
demonstrations is about 100 Gbit inch⁻², whereas the Si atom memory exhibits
250 Tbit inch⁻².

Figure 9

7. Advantages and Disadvantages

An intriguing aspect of atomic-scale memory is that its density is comparable
to the way nature stores data in DNA molecules. The Wisconsin atomic-scale silicon
memory uses 20 atoms to store one bit of information, including the space around
the single-atom bits. DNA uses 32 atoms to store one bit in one half of the
chemical base pair that is the fundamental unit of genetic information.
Compared to conventional storage media, both DNA and the silicon surface excel in
storage density.

Obviously there are some drawbacks. The memory was constructed and
manipulated in a vacuum, and a scanning tunneling microscope is needed to
write it, which makes the writing process very time-consuming.

Moreover, there is a trade-off between memory density and speed. As density
increases, the ability to read the memory goes down, because less and less signal
is obtained. As things are made smaller, they get slower.

8. Conclusion

The push towards the atomic density limit requires a sacrifice in speed, as
demonstrated in figure 8. Practical data storage might evolve in a similar direction,
with gains in density coming at the expense of speed. Somewhere on the way to
the atomic scale there ought to be an optimum combination of density and speed.

If the reading and writing speed is improved and the memory is made cost-effective,
this will revolutionize the field of secondary storage devices. Researchers
are working on manufacturing STMs with multiple tips or heads that can perform
parallel read/write operations.

This type of memory may eventually become useful for storing vast amounts of
data, but because the stability of each bit depends on one or a few atoms, it is
likely to be restricted to applications where a small number of errors can be
tolerated.

9. References

1. http://www.iop.org/EJ/abstract/0957-4484/13/4/312/
2. http://news.bbc.co.uk/1/hi/sci/tech/2290707.stm
3. http://www.sciencedaily.com/releases
4. The images are from: http://www.di.com/movies/movies_inhance/appnotes/odt/odtmain.html
5. http://www.wisc.edu/
6. http://uw.physics.wisc.edu/~himpsel/memory.html
7. http://www.scienceagogo.com/news/20020805010233data_trunc_sys.shtml
8. http://www.boingboing.net/2002/09/06/atomicscale_memory.html
9. http://www.jefallbright.net/node/744
10. http://www.newsfactor.com/perl/story/19405.html

