
Cathode

Diagram of a copper cathode in a galvanic cell (e.g., a battery). Positively charged cations move
towards the cathode allowing a positive current i to flow out of the cathode.

A cathode is the electrode from which a conventional current leaves a polarized electrical device. This definition can be recalled by using the mnemonic CCD for
Cathode Current Departs. A conventional current describes the direction in which
positive charges move. Electrons have a negative electrical charge, so the movement of
electrons is opposite to that of the conventional current flow. Consequently, the
mnemonic cathode current departs also means that electrons flow into the device's
cathode from the external circuit. For example, the end of a household battery marked
with a + (plus) is the cathode.

The electrode through which conventional current flows the other way, into the device, is
termed an anode.

Inverse-square law

S represents the light source, while r represents the measured points. The lines represent the flux emanating from the source. The total number of flux lines depends on the strength of the light source and is constant with increasing distance, where a greater density of flux lines (lines per unit area) means a stronger energy field. The density of flux lines is inversely
proportional to the square of the distance from the source because the surface area of a sphere
increases with the square of the radius. Thus the field intensity is inversely proportional to the
square of the distance from the source.

In science, an inverse-square law is any scientific law stating that the observed
"intensity" of a specified physical quantity is inversely proportional to the square of the
distance from the source of that physical quantity. The fundamental cause for this can
be understood as geometric dilution corresponding to point-source radiation into
three-dimensional space.

Radar energy expands during both the signal transmission and the reflected return, so
the inverse square for both paths means that the radar will receive energy according to
the inverse fourth power of the range.
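As a rough numerical sketch of this two-way spreading loss (the lumped constant and the ranges below are arbitrary illustrative values, not taken from any radar specification):

```python
# Illustrative sketch: two-way spreading makes received radar power scale as 1/r**4.
# The constant k lumps together transmit power, antenna gain and target cross-section
# (arbitrary value chosen only for illustration).
k = 1.0

def received_power(range_m: float) -> float:
    # One inverse-square factor out to the target, another back to the receiver.
    return k / range_m**4

for r in (1_000.0, 2_000.0, 4_000.0):
    print(f"range {r:>7.0f} m -> relative received power {received_power(r):.3e}")
# Doubling the range reduces the received power by a factor of 2**4 = 16.
```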

To prevent dilution of energy while propagating a signal, certain methods can be used
such as a waveguide, which acts like a canal does for water, or how a gun barrel
restricts hot gas expansion to one dimension in order to prevent loss of energy transfer
to a bullet.

Formula[edit]
In mathematical notation the inverse square law can be expressed as an intensity (I) varying as a function of distance (d) from some centre. The intensity is proportional (see ∝) to the reciprocal of the square of the distance, thus:

intensity ∝ 1/distance²

It can also be expressed mathematically as:

intensity₁ / intensity₂ = distance₂² / distance₁²

or as the formulation of a constant quantity:

intensity₁ × distance₁² = intensity₂ × distance₂²
The divergence of a vector field which is the resultant of radial inverse-square law fields
with respect to one or more sources is proportional to the strength of the local sources,
and hence zero outside sources. Newton's law of universal gravitation follows an
inverse-square law, as do the effects of electric, light, sound, and radiation phenomena.
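A minimal sketch of these three equivalent statements, assuming an isotropic point source and arbitrary example numbers for the power and the two distances:

```python
from math import pi

# Sketch of the relations above: an isotropic point source of power P,
# evaluated at two distances d1 and d2 (all values illustrative).
def intensity(power: float, distance: float) -> float:
    """Intensity (power per unit area) at a given distance from a point source."""
    return power / (4 * pi * distance**2)

P = 100.0            # watts, illustrative
d1, d2 = 2.0, 6.0    # metres, illustrative

I1, I2 = intensity(P, d1), intensity(P, d2)
print(I1 / I2, (d2 / d1) ** 2)   # both ratios are 9: intensity1/intensity2 = distance2^2/distance1^2
print(I1 * d1**2, I2 * d2**2)    # the product intensity x distance^2 is the same at both points
```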

Justification[edit]
The inverse-square law generally applies when some force, energy, or other conserved
quantity is evenly radiated outward from a point source in three-dimensional space.
Since the surface area of a sphere (which is 4πr²) is proportional to the square of the
radius, as the emitted radiation gets farther from the source, it is spread out over an
area that is increasing in proportion to the square of the distance from the source.
Hence, the intensity of radiation passing through any unit area (directly facing the point
source) is inversely proportional to the square of the distance from the point source.
Gauss's law for gravity is similarly applicable, and can be used with any physical
quantity that acts in accordance with the inverse-square relationship.

Occurrences[edit]
Gravitation[edit]

Gravitation is the attraction between objects that have mass. Newton's law states:

The gravitational attraction force between two point masses is directly proportional to the product of their masses and inversely proportional to the square of their separation distance. The force is always attractive and acts along the line joining them.[citation needed]

If the distribution of matter in each body is spherically symmetric, then the objects can
be treated as point masses without approximation, as shown in the shell theorem.
Otherwise, if we want to calculate the attraction between massive bodies, we need to
add all the point-point attraction forces vectorially and the net attraction might not be
exact inverse square. However, if the separation between the massive bodies is much
larger compared to their sizes, then to a good approximation, it is reasonable to treat
the masses as a point mass located at the object's center of mass while calculating the
gravitational force.
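A minimal numerical sketch of the point-mass form of this law, using the standard gravitational constant and approximate Earth-Moon values purely for illustration:

```python
# Newton's law of universal gravitation for two point masses: F = G*m1*m2/r^2.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    return G * m1 * m2 / r**2

m_earth = 5.972e24   # kg (approximate)
m_moon = 7.348e22    # kg (approximate)
r = 3.844e8          # mean Earth-Moon distance, m (approximate)

print(f"{gravitational_force(m_earth, m_moon, r):.3e} N")   # roughly 2e20 N
# Doubling the separation reduces the force to one quarter:
print(gravitational_force(m_earth, m_moon, 2 * r) / gravitational_force(m_earth, m_moon, r))
```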

As the law of gravitation, this law was suggested in 1645 by Ismaël Bullialdus. But Bullialdus did not accept Kepler's second and third laws, nor did he appreciate Christiaan Huygens's solution for circular motion (motion in a straight line pulled aside by the central force). Indeed, Bullialdus maintained the sun's force was attractive at aphelion and repulsive at perihelion. Robert Hooke and Giovanni Alfonso Borelli both expounded gravitation in 1666 as an attractive force.[1] Hooke's lecture "On gravity" was at the Royal Society, in London, on 21 March.[2] Borelli's "Theory of the Planets" was published later in 1666.[3] Hooke's 1670 Gresham lecture explained that gravitation applied to "all celestiall bodys" and added the principles that the gravitating power decreases with distance and that in the absence of any such power bodies move in straight lines. By 1679, Hooke thought gravitation had inverse square dependence and communicated this in a letter to Isaac Newton:[4] "my supposition is that the attraction always is in duplicate proportion to the distance from the center reciprocall."[5]

Hooke remained bitter about Newton claiming the invention of this principle, even though Newton's 1686 Principia acknowledged that Hooke, along with Wren and Halley, had separately appreciated the inverse square law in the solar system,[6] as well as giving some credit to Bullialdus.[7]

Electrostatics[edit]

Main article: Electrostatics

The force of attraction or repulsion between two electrically charged particles, in addition to being directly proportional to the product of the electric charges, is inversely proportional to the square of the distance between them; this is known as Coulomb's law. The deviation of the exponent from 2 is less than one part in 10¹⁵.[8]

F = kₑq₁q₂/r²
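A short sketch of Coulomb's law as written above; the Coulomb constant and elementary charge are standard values, and the Bohr-radius separation is chosen only for illustration:

```python
# Coulomb's law, F = k_e * q1 * q2 / r^2, mirroring the formula above.
k_e = 8.988e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1: float, q2: float, r: float) -> float:
    return k_e * q1 * q2 / r**2

q = 1.602e-19   # elementary charge, C
r = 5.29e-11    # Bohr radius, m (illustrative separation)

print(f"{coulomb_force(q, q, r):.3e} N")   # magnitude of about 8e-8 N at this separation
# Tripling the separation reduces the force to one ninth:
print(coulomb_force(q, q, 3 * r) / coulomb_force(q, q, r))
```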

Light and other electromagnetic radiation[edit]

The intensity (or illuminance or irradiance) of light or other linear waves radiating from a
point source (energy per unit of area perpendicular to the source) is inversely
proportional to the square of the distance from the source, so an object (of the same
size) twice as far away receives only one-quarter the energy (in the same time period).

More generally, the irradiance, i.e., the intensity (or power per unit area in the direction
of propagation), of a spherical wavefront varies inversely with the square of the distance
from the source (assuming there are no losses caused by absorption or scattering).

For example, the intensity of radiation from the Sun is 9126 watts per square meter at
the distance of Mercury (0.387 AU); but only 1367 watts per square meter at the
distance of Earth (1 AU)—an approximate threefold increase in distance results in an
approximate ninefold decrease in intensity of radiation.
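The quoted figures can be checked directly from the inverse-square relation; this sketch takes the 1367 W/m² value from the text and scales it to Mercury's distance:

```python
# Check the solar-intensity figures quoted above using the inverse-square law.
intensity_earth = 1367.0   # W/m^2 at 1 AU (value quoted in the text)
d_earth = 1.0              # AU
d_mercury = 0.387          # AU

intensity_mercury = intensity_earth * (d_earth / d_mercury) ** 2
print(round(intensity_mercury))     # about 9127 W/m^2, close to the 9126 W/m^2 quoted
print((d_earth / d_mercury) ** 2)   # distance ratio squared, roughly 6.7
```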

For non-isotropic radiators such as parabolic antennas, headlights, and lasers, the
effective origin is located far behind the beam aperture. If you are close to the origin,
you don't have to go far to double the radius, so the signal drops quickly. When you are
far from the origin and still have a strong signal, like with a laser, you have to travel very
far to double the radius and reduce the signal. This means you have a stronger signal or
have antenna gain in the direction of the narrow beam relative to a wide beam in all
directions of an isotropic antenna.
In photography and stage lighting, the inverse-square law is used to determine the “fall off” or the difference in illumination on a subject as it moves closer to or further from the light source. For quick approximations, it is enough to remember that doubling the distance reduces illumination to one quarter;[9] or similarly, to halve the illumination increase the distance by a factor of 1.4 (the square root of 2), and to double illumination, reduce the distance to 0.7 (square root of 1/2). When the illuminant is not a point source, the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%.[10]
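A small sketch of those quick approximations (pure arithmetic, no photometric assumptions beyond the inverse-square rule itself):

```python
# Quick photographic fall-off approximations from the paragraph above.
def relative_illumination(distance_ratio: float) -> float:
    """Illumination relative to the starting position after the light's distance
    is multiplied by distance_ratio."""
    return 1.0 / distance_ratio**2

print(relative_illumination(2.0))    # doubling the distance -> 0.25 (one quarter)
print(relative_illumination(1.4))    # 1.4x the distance -> ~0.51 (about half)
print(relative_illumination(0.7))    # 0.7x the distance -> ~2.04 (about double)
```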

The fractional reduction in electromagnetic fluence (Φ) for indirectly ionizing radiation with increasing distance from a point source can be calculated using the inverse-square law. Since emissions from a point source have radial directions, they intercept at a perpendicular incidence. The area of such a shell is 4πr², where r is the radial distance from the center. The law is particularly important in diagnostic radiography and radiotherapy treatment planning, though this proportionality does not hold in practical situations unless source dimensions are much smaller than the distance. As stated in Fourier's theory of heat, "as the point source is magnified by distances, its radiation is diluted proportional to the sine of the angle, of the increasing circumference arc from the point of origin".

Example[edit]

Let P be the total power radiated from a point source (for example, an omnidirectional
isotropic radiator). At large distances from the source (compared to the size of the
source), this power is distributed over larger and larger spherical surfaces as the
distance from the source increases. Since the surface area of a sphere of radius r is A = 4πr², the intensity I (power per unit area) of radiation at distance r is

I = P/A = P/(4πr²).

The energy or intensity decreases (divided by 4) as the distance r is doubled; if measured in dB it would decrease by 6.02 dB per doubling of distance. When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value.
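A minimal sketch of this example, with an arbitrary illustrative power P:

```python
from math import pi, log10

# Sketch of the example above: an isotropic source of total power P (illustrative).
P = 100.0   # watts

def intensity(r: float) -> float:
    return P / (4 * pi * r**2)

r = 10.0
I1, I2 = intensity(r), intensity(2 * r)
print(I1 / I2)               # 4: doubling the distance quarters the intensity
print(10 * log10(I2 / I1))   # about -6.02 dB per doubling of distance
```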

Sound in a gas[edit]

In acoustics, the sound pressure of a spherical wavefront radiating from a point source decreases by 50% as the distance r is doubled; measured in dB, the decrease is still 6.02 dB, since dB represents an intensity ratio. The pressure ratio (as opposed to power ratio) is not inverse-square, but is inverse-proportional (inverse distance law):

p ∝ 1/r

The same is true for the component of particle velocity v that is in-phase with the instantaneous sound pressure p:

v ∝ 1/r

In the near field is a quadrature component of the particle velocity that is 90° out of phase with the sound pressure and does not contribute to the time-averaged energy or the intensity of the sound. The sound intensity is the product of the RMS sound pressure and the in-phase component of the RMS particle velocity, both of which are inverse-proportional. Accordingly, the intensity follows an inverse-square behaviour:

I = pv ∝ 1/r².
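A short sketch contrasting the two scalings; it assumes only the standard decibel conventions of 20·log10 for field (pressure) ratios and 10·log10 for power (intensity) ratios:

```python
from math import log10

# Sketch: sound pressure falls as 1/r while intensity falls as 1/r^2,
# yet both correspond to the same 6.02 dB drop per doubling of distance.
r1, r2 = 1.0, 2.0            # metres, illustrative
p_ratio = r1 / r2            # pressure ratio p2/p1 = 1/2  (inverse distance law)
i_ratio = (r1 / r2) ** 2     # intensity ratio I2/I1 = 1/4 (inverse-square law)

print(20 * log10(p_ratio))   # field (pressure) quantities use 20*log10 -> -6.02 dB
print(10 * log10(i_ratio))   # power (intensity) quantities use 10*log10 -> -6.02 dB
```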

Field theory interpretation[edit]


For an irrotational vector field in three-dimensional space, the inverse-square law corresponds to the property that the divergence is zero outside the source. This can be generalized to higher dimensions. Generally, for an irrotational vector field in n-dimensional Euclidean space, the intensity I of the vector field falls off with the distance r following the inverse (n − 1)th power law

I ∝ 1/r^(n−1),

given that the space outside the source is divergence free.[citation needed]
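A minimal sketch of this generalization, evaluating the fall-off for a few dimensions n:

```python
# Sketch of the general statement above: in n-dimensional Euclidean space the
# field of a point source falls off as 1/r**(n-1) (n = 3 gives the inverse square).
def falloff(r: float, n: int) -> float:
    return 1.0 / r ** (n - 1)

for n in (2, 3, 4):
    print(n, falloff(2.0, n) / falloff(1.0, n))   # 1/2, 1/4, 1/8 when r doubles
```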

Non-Euclidean implications[edit]
The inverse-square law, fundamental in Euclidean spaces, also applies to non-Euclidean geometries, including hyperbolic space. The inherent curvature in these spaces impacts physical laws, underpinning various fields such as cosmology, general relativity, and string theory.[11]

John D. Barrow, in his 2020 paper "Non-Euclidean Newtonian Cosmology," elaborates on the behavior of force (F) and potential (Φ) within hyperbolic 3-space (H3). He illustrates that F and Φ obey the formulas F ∝ 1/(R^2 sinh^2(r/R)) and Φ ∝ coth(r/R), where R and r represent the curvature radius and the distance from the focal point, respectively.[11]
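A small numerical sketch of Barrow's force expression, with the proportionality constant set to 1 and an arbitrary curvature radius, showing that the hyperbolic law approaches the Euclidean 1/r² behaviour when r is much smaller than R:

```python
from math import sinh

# Barrow's hyperbolic-space force law, F proportional to 1/(R^2 * sinh^2(r/R)),
# with the proportionality constant set to 1 for illustration.
def force_hyperbolic(r: float, R: float) -> float:
    return 1.0 / (R**2 * sinh(r / R) ** 2)

def force_euclidean(r: float) -> float:
    return 1.0 / r**2

R = 100.0   # curvature radius, arbitrary units
for r in (0.1, 1.0, 10.0, 100.0):
    # Ratio tends to 1 for r << R and drops below 1 as r approaches R.
    print(r, force_hyperbolic(r, R) / force_euclidean(r))
```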

The concept of the dimensionality of space, first proposed by Immanuel Kant, is an ongoing topic of debate in relation to the inverse-square law.[12] Dimitria Electra Gatzia and Rex D. Ramsier, in their 2021 paper, argue that the inverse-square law pertains more to the symmetry in force distribution than to the dimensionality of space.[12]

Within the realm of non-Euclidean geometries and general relativity, deviations from the inverse-square law might not stem from the law itself but rather from the assumption that the force between bodies depends instantaneously on distance, contradicting special relativity. General relativity instead interprets gravity as a distortion of spacetime, causing freely falling particles to traverse geodesics in this curved spacetime.[13]

History[edit]
John Dumbleton of the 14th-century Oxford Calculators was one of the first to express functional relationships in graphical form. He gave a proof of the mean speed theorem stating that "the latitude of a uniformly difform movement corresponds to the degree of the midpoint" and used this method to study the quantitative decrease in intensity of illumination in his Summa logicæ et philosophiæ naturalis (ca. 1349), stating that it was not linearly proportional to the distance, but was unable to expose the inverse-square law.[14]

German astronomer Johannes Kepler discussed the inverse-square law and how it affects the
intensity of light.

In proposition 9 of Book 1 in his book Ad Vitellionem paralipomena, quibus astronomiae pars optica traditur (1604), the astronomer Johannes Kepler argued that the spreading of light from a point source obeys an inverse square law:[15][16]
Sicut se habent spharicae superificies, quibus origo lucis pro centro est, amplior ad angustiorem: ita se habet fortitudo seu densitas lucis radiorum in angustiori, ad illamin in laxiori sphaerica, hoc est, conversim. Nam per 6. 7. tantundem lucis est in angustiori sphaerica superficie, quantum in fusiore, tanto ergo illie stipatior & densior quam hic.

Translation: Just as [the ratio of] spherical surfaces, for which the source of light is the center, [is] from the wider to the narrower, so the density or fortitude of the rays of light in the narrower [space], towards the more spacious spherical surfaces, that is, inversely. For according to [propositions] 6 & 7, there is as much light in the narrower spherical surface, as in the wider, thus it is as much more compressed and dense here than there.
In 1645, in his book Astronomia Philolaica ..., the French astronomer Ismaël Bullialdus (1605–1694)[17] refuted Johannes Kepler's suggestion that "gravity" weakens as the inverse of the distance; instead, Bullialdus argued, "gravity" weakens as the inverse square of the distance:[18][19]
Virtus autem illa, qua Sol prehendit seu harpagat planetas, corporalis quae ipsi pro manibus est, lineis rectis in omnem mundi amplitudinem emissa quasi species solis cum illius corpore rotatur: cum ergo sit corporalis imminuitur, & extenuatur in maiori spatio & intervallo, ratio autem huius imminutionis eadem est, ac luminus, in ratione nempe dupla intervallorum, sed eversa.

Translation: As for the power by which the Sun seizes or holds the planets, and which, being corporeal, functions in the manner of hands, it is emitted in straight lines throughout the whole extent of the world, and like the species of the Sun, it turns with the body of the Sun; now, seeing that it is corporeal, it becomes weaker and attenuated at a greater distance or interval, and the ratio of its decrease in strength is the same as in the case of light, namely, the duplicate proportion, but inversely, of the distances [that is, 1/d²].

In England, the Anglican bishop Seth Ward (1617–1689) publicized the ideas of
Bullialdus in his critique In Ismaelis Bullialdi astronomiae philolaicae fundamenta
inquisitio brevis (1653) and publicized the planetary astronomy of Kepler in his book
Astronomia geometrica (1656).

In 1663–1664, the English scientist Robert Hooke was writing his book Micrographia
(1666) in which he discussed, among other things, the relation between the height of
the atmosphere and the barometric pressure at the surface. Since the atmosphere
surrounds the Earth, which itself is a sphere, the volume of atmosphere bearing on any
unit area of the Earth's surface is a truncated cone (which extends from the Earth's
center to the vacuum of space; obviously only the section of the cone from the Earth's
surface to space bears on the Earth's surface). Although the volume of a cone is
proportional to the cube of its height, Hooke argued that the air's pressure at the Earth's
surface is instead proportional to the height of the atmosphere because gravity
diminishes with altitude. Although Hooke did not explicitly state so, the relation that he proposed would be true only if gravity decreases as the inverse square of the distance from the Earth's center.[20][21]

d together and supplied from the same current source, even though the cathodes they
heat may be at different potentials.
In order to improve electron emission, cathodes are treated with chemicals, usually compounds of metals with a low work function. Treated cathodes require less surface area, lower temperatures and less power to supply the same cathode current. The untreated tungsten filaments used in early tubes (called "bright emitters") had to be heated to 1,400 °C (2,550 °F), white-hot, to produce sufficient thermionic emission for use, while modern coated cathodes produce far more electrons at a given temperature so they only have to be heated to 425–600 °C (797–1,112 °F).[4][9][10] There are two main types of treated cathodes:[4][8]

Cold cathode (lefthand electrode) in neon lamp

● Coated cathode – In these the cathode is covered with a coating of alkali metal oxides, often barium and strontium oxide. These are used in low-power tubes.
● Thoriated tungsten – In high-power tubes, ion bombardment can destroy the coating on a coated cathode. In these tubes a directly heated cathode consisting of a filament made of tungsten incorporating a small amount of thorium is used. The layer of thorium on the surface, which reduces the work function of the cathode, is continually replenished as it is lost by diffusion of thorium from the interior of the metal.[11]

Cold cathode[edit]

Main article: Cold cathode

This is a cathode that is not heated by a filament. They may emit electrons by field electron emission, and in gas-filled tubes by secondary emission. Some examples are electrodes in neon lights, cold-cathode fluorescent lamps (CCFLs) used as backlights in laptops, thyratron tubes, and Crookes tubes. They do not necessarily operate at room temperature; in some devices the cathode is heated by the electron current flowing through it to a temperature at which thermionic emission occurs. For example, in some fluorescent tubes a momentary high voltage is applied to the electrodes to start the current through the tube; after starting, the electrodes are heated enough by the current to keep emitting electrons to sustain the discharge.[citation needed]

Cold cathodes may also emit electrons by photoelectric emission. These are often called photocathodes and are used in phototubes used in scientific instruments and image intensifier tubes used in night vision goggles.[citation needed]

Diodes[edit]

In a semiconductor diode, the cathode is the N-doped layer of the PN junction with a high density of free electrons due to doping, and an equal density of fixed positive charges, which are the dopants that have been thermally ionized. In the anode, the converse applies: it features a high density of free "holes" and consequently fixed negative dopants which have captured an electron (hence the origin of the holes).[citation needed]

When P and N-doped layers are created adjacent to each other, diffusion ensures that
electrons flow from high to low density areas: That is, from the N to the P side. They
leave behind the fixed positively charged dopants near the junction. Similarly, holes
diffuse from P to N leaving behind fixed negative ionised dopants near the junction.
These layers of fixed positive and negative charges are collectively known as the
depletion layer because they are depleted of free electrons and holes. The depletion
layer at the junction is at the origin of the diode's rectifying properties. This is due to the resulting internal field and corresponding potential barrier, which inhibit current flow under reverse applied bias (which increases the internal depletion layer field) and, conversely, allow it under forward applied bias, where the applied bias reduces the field and the barrier.

Aircraft[edit]
Early aircraft had a few sensors.[7] "Steam gauges" converted air pressures into needle
deflections that could be interpreted as altitude and airspeed. A magnetic compass
provided a sense of direction. The displays to the pilot were as critical as the
measurements.

A modern aircraft has a far more sophisticated suite of sensors and displays, which are
embedded into avionics systems. The aircraft may contain inertial navigation systems,
global positioning systems, weather radar, autopilots, and aircraft stabilization systems.
Redundant sensors are used for reliability. A subset of the information may be
transferred to a crash recorder to aid mishap investigations. Modern pilot displays now
include computer displays including head-up displays.

Air traffic control radar is a distributed instrumentation system. The ground part sends
an electromagnetic pulse and receives an echo (at least). Aircraft carry transponders
that transmit codes on reception of the pulse. The system displays an aircraft map
location, an identifier and optionally altitude. The map location is based on sensed
antenna direction and sensed time delay. The other information is embedded in the
transponder transmission.
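The map location described above combines the sensed antenna bearing with the sensed round-trip time delay; a minimal sketch of the range part of that calculation (the delay value is an arbitrary illustration):

```python
# Sketch: estimating target range from the round-trip delay of a radar pulse.
C = 299_792_458.0   # speed of light, m/s

def range_from_delay(round_trip_seconds: float) -> float:
    # The pulse travels out and back, so divide the total path by two.
    return C * round_trip_seconds / 2.0

delay = 500e-6   # 500 microseconds, illustrative
print(f"{range_from_delay(delay) / 1000:.1f} km")   # about 75 km
```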

Laboratory instrumentation[edit]

Among the possible uses of the term is a collection of laboratory test equipment controlled by a computer through an IEEE-488 bus (also known as GPIB for General Purpose Interface Bus or HPIB for Hewlett-Packard Interface Bus). Laboratory
equipment is available to measure many electrical and chemical quantities. Such a
collection of equipment might be used to automate the testing of drinking water for
pollutants.

Instrumentation engineering[edit]

The instrumentation part of a piping and instrumentation diagram will be developed by an instrumentation engineer.

Instrumentation engineering is the engineering specialization focused on the principles and operation of measuring instruments that are used in the design and configuration of automated systems in areas such as electrical and pneumatic domains, and the control of quantities being measured. They typically work for industries

devices did not become standard in meteorology for two centuries.[3] The concept has
remained virtually unchanged as evidenced by pneumatic chart recorders, where a
pressurized bellows displaces a pen. Integrating sensors, displays, recorders, and
controls was uncommon until the industrial revolution, limited by both need and
practicality.

Early industrial[edit]

The evolution of analogue control loop signalling from the pneumatic era to the electronic era

Early systems used direct process connections to local control panels for control and
indication, which from the early 1930s saw the introduction of pneumatic transmitters
and automatic 3-term (PID) controllers.

The ranges of pneumatic transmitters were defined by the need to control valves and actuators in the field. Typically, a signal of 3 to 15 psi (20 to 100 kPa or 0.2 to 1.0 kg/cm²) was used as a standard, with 6 to 30 psi occasionally being used for larger valves. Transistor electronics enabled wiring to replace pipes, initially with a range of 20 to 100 mA at up to 90 V for loop powered devices, reducing to 4 to 20 mA at 12 to 24 V in more modern systems. A transmitter is a device that produces an output signal, often in the form of a 4–20 mA electrical current signal, although many other options using voltage, frequency, pressure, or ethernet are possible. The transistor was commercialized by the mid-1950s.[4]
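A sketch of how such a 4–20 mA loop signal is typically scaled to an engineering value; the 0–200 °C calibration span and the out-of-range check are illustrative assumptions, not taken from the text:

```python
# Sketch: converting a 4-20 mA transmitter current into an engineering value.
# The calibrated span (here 0-200 degrees C) is an arbitrary illustrative choice.
SPAN_LOW, SPAN_HIGH = 0.0, 200.0     # engineering units at 4 mA and 20 mA

def scale_4_20_ma(current_ma: float) -> float:
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal outside the 4-20 mA live-zero range (possible fault)")
    return SPAN_LOW + (current_ma - 4.0) * (SPAN_HIGH - SPAN_LOW) / 16.0

print(scale_4_20_ma(4.0))    # 0.0   (bottom of range)
print(scale_4_20_ma(12.0))   # 100.0 (mid-scale)
print(scale_4_20_ma(20.0))   # 200.0 (top of range)
```

The 4 mA live zero also lets a broken loop (0 mA) be distinguished from a legitimate bottom-of-range reading.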

Instruments attached to a control system provided signals used to operate solenoids, valves, regulators, circuit breakers, relays and other devices. Such devices could control a desired output variable, and provide either remote monitoring or automated control capabilities.

Each instrument company introduced their own standard instrumentation signal, causing confusion until the 4–20 mA range was used as the standard electronic instrument signal for transmitters and valves. This signal was eventually standardized as ANSI/ISA S50, "Compatibility of Analog Signals for Electronic Industrial Process Instruments", in the 1970s. The transformation of instrumentation from mechanical pneumatic transmitters, controllers, and valves to electronic instruments reduced maintenance costs, as electronic instruments were more dependable than mechanical instruments. This also increased efficiency and production due to their increase in accuracy. Pneumatics enjoyed some advantages, being favored in corrosive and explosive atmospheres.[5]

Automatic process control[edit]

Example of a single industrial control loop, showing continuously modulated control of process
flow

In the early years of process control, process indicators and control elements such as valves were monitored by an operator who walked around the unit adjusting the valves to obtain the desired temperatures, pressures, and flows. As technology evolved, pneumatic controllers were invented and mounted in the field that monitored the process and controlled the valves. This reduced the amount of time process operators needed to monitor the process. In later years, the actual controllers were moved to a central room and signals were sent into the control room to monitor the process, and output signals were sent to the final control element, such as a valve, to adjust the process as needed. These controllers and indicators were mounted on a wall called a control board. The operators stood in front of this board, walking back and forth to monitor the process indicators. This again reduced the number of process operators needed and the time they spent walking around the units. The most standard pneumatic signal level used during these years was 3–15 psig.[6]

Large integrated computer-based systems[edit]

Pneumatic "three term" pneumatic PID controller, widely used before electronics became reliable
and cheaper and safe to use in hazardous areas (Siemens Telepneu Example)

A pre-DCS/SCADA era central control room. Whilst the controls are centralised in one place, they
are still discrete and not integrated into one system.
A DCS control room where plant information and controls are displayed on computer graphics
screens. The operators are seated and can view and control any part of the process from their
screens, whilst retaining a plant overview.

Process control of large industrial plants has evolved through many stages. Initially,
control would be from panels local to the process plant. However, this required a large
manpower resource to attend to these dispersed panels, and there was no overall view
of the process. The next logical development was the transmission of all plant
measurements to a permanently staffed central control room. Effectively this was the
centralization of all the localized panels, with the advantages of lower manning levels
and easy overview of the process. Often the controllers were behind the control room
panels, and all automatic and manual control outputs were transmitted back to plant.

However, whilst providing a central control focus, this arrangement was inflexible as
each control loop had its own controller hardware, and continual operator movement
within the control room was required to view different parts of the process. With the coming
of electronic processors and graphic displays it became possible to replace these
discrete controllers with computer-based algorithms, hosted on a network of
input/output racks with their own control processors. These could be distributed around
plant, and communicate with the graphic display in the control room or rooms. The
distributed control concept was born.

The introduction of DCSs and SCADA allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy
interfacing with other production computer systems. It enabled sophisticated alarm
handling, introduced automatic event logging, removed the need for physical records
such as chart recorders, allowed the control racks to be networked and thereby located
locally to plant to reduce cabling runs, and provided high level overviews of plant status
and production levels.

Application[edit]
In some cases, the sensor is a very minor element of the mechanism. Digital cameras
and wristwatches might technically meet the loose definition of instrumentation because
they record and/or display sensed information. Under most circumstances neither would
be called instrumentation, but when used to measure the elapsed time of a race and to
document the winner at the finish line, both would be called instrumentation.

Household[edit]

A very simple example of an instrumentation system is a mechanical thermostat, used to control a household furnace and thus to control room temperature. A typical unit
senses temperature with a bi-metallic strip. It displays temperature by a needle on the
free end of the strip. It activates the furnace by a mercury switch. As the switch is
rotated by the strip, the mercury makes physical (and thus electrical) contact between
electrodes.
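The mechanical thermostat described above behaves as an on/off controller with a small dead band; a minimal software sketch of that behaviour, with an assumed setpoint and hysteresis:

```python
# Sketch: on/off (bang-bang) furnace control with hysteresis, the behaviour a
# mechanical thermostat implements with a bi-metallic strip and mercury switch.
SETPOINT = 20.0      # desired room temperature, degrees C (illustrative)
HYSTERESIS = 0.5     # dead band to avoid rapid switching (illustrative)

def furnace_command(temperature: float, furnace_on: bool) -> bool:
    if temperature <= SETPOINT - HYSTERESIS:
        return True                # too cold: switch the furnace on
    if temperature >= SETPOINT + HYSTERESIS:
        return False               # warm enough: switch it off
    return furnace_on              # inside the dead band: keep the current state

state = False
for reading in (19.0, 19.8, 20.3, 20.6, 20.1, 19.4):
    state = furnace_command(reading, state)
    print(reading, "furnace on" if state else "furnace off")
```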

Another example of an instrumentation system is a home security system. Such a system consists of sensors (motion detection, switches to detect door openings), simple
algorithms to detect intrusion, local control (arm/disarm) and remote monitoring of the
system so that the police can be summoned. Communication is an inherent part of the
design.

Kitchen appliances use sensors for control.

● A refrigerator maintains a constant temperature by actuating the cooling system when the temperature becomes too high.
● An automatic ice machine makes ice until a limit switch is thrown.
● Pop-up bread toasters allow the time to be set.
● Non-electronic gas ovens will regulate the temperature with a thermostat
controlling the flow of gas to the gas burner. These may feature a sensor bulb
sited within the main chamber of the oven. In addition, there may be a safety
cut-off flame supervision device: after ignition, the burner's control knob must
be held for a short time in order for a sensor to become hot, and permit the
flow of gas to the burner. If the safety sensor becomes cold, this may indicate
the flame on the burner has become extinguished, and to prevent a
continuous leak of gas the flow is stopped.
● Electric ovens use a temperature sensor and will turn on heating elements
when the temperature is too low. More advanced ovens will actuate fans in
response to temperature sensors, to distribute heat or to cool.
● A common toilet refills the water tank until a float closes the valve. The float is
acting as a water level sensor.
Automotive[edit]
Modern automobiles have complex instrumentation. In addition to displays of engine
rotational speed and vehicle linear speed, there are also displays of battery voltage and
current, fluid levels, fluid temperatures, distance traveled, and feedback of various
controls (turn signals, parking brake, headlights, transmission position). Cautions may
be displayed for special problems (fuel low, check engine, tire pressure low, door ajar,
seat belt unfastened). Problems are recorded so they can be reported to diagnostic
equipment. Navigation systems can provide voice commands to reach a destination.
Automotive instrumentation must be cheap and reliable over long periods in harsh
environments. There may be independent airbag systems that contain sensors, logic
and actuators. Anti-skid braking systems use sensors to control the brakes, while cruise
control affects throttle position. A wide variety of services can be provided via
communication links on the OnStar system. Autonomous cars (with exotic
instrumentation) have been shown.


Control engineering[edit]

To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.[66] It also plays an important role in industrial automation.

Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's power output accordingly.[67] Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
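A toy sketch of that feedback loop: a proportional controller acting on a crude first-order vehicle model. All constants are invented for illustration and are not taken from any real cruise-control system:

```python
# Sketch: proportional feedback for cruise control on a crude vehicle model.
TARGET = 30.0     # desired speed, m/s
KP = 0.8          # proportional gain
DRAG = 0.05       # crude speed-proportional drag term
DT = 0.1          # time step, s

speed, throttle = 25.0, 0.0
for _ in range(200):
    error = TARGET - speed               # feedback: compare measured speed with target
    throttle = max(0.0, KP * error)      # adjust power output from the error
    accel = throttle - DRAG * speed      # toy dynamics: thrust minus drag
    speed += accel * DT

print(round(speed, 2))   # settles near 28 m/s, a little below the 30 m/s target
```

With proportional-only feedback the speed settles slightly below the target; practical controllers add integral action to remove that steady-state offset.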

Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.[68]

Electronics[edit]

Main article: Electronic engineering

... transistors, although all main electronic components (resistors, capacitors, etc.) can be created at a microscopic level.

Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.[72]

Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.[73]

Signal processing[edit]

... instruments measure variables such as wind speed and altitude to enable pilots the control of aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.[79]

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant.[80] For this reason, instrumentation engineering is often viewed as the counterpart of control.

... generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics.

... equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.

Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, and radio-frequency engineering, among others.

Weather radar


Weather radar in Norman, Oklahoma with rainshaft

Weather (WF44) radar dish

University of Oklahoma OU-PRIME C-band, polarimetric, weather radar during construction

Weather radar, also called weather surveillance radar (WSR) and Doppler weather
radar, is a type of radar used to locate precipitation, calculate its motion, and estimate
its type (rain, snow, hail etc.). Modern weather radars are mostly pulse-Doppler radars,
capable of detecting the motion of rain droplets in addition to the intensity of the
precipitation. Both types of data can be analyzed to determine the structure of storms
and their potential to cause severe weather.

During World War II, radar operators discovered that weather was causing echoes on
their screens, masking potential enemy targets. Techniques were developed to filter
them, but scientists began to study the phenomenon. Soon after the war, surplus radars
were used to detect precipitation. Since then, weather radar has evolved and is used by
national weather services, research departments in universities, and in television
stations' weather departments. Raw images are routinely processed by specialized
software to make short term forecasts of future positions and intensities of rain, snow,
hail, and other weather phenomena. Radar output is even incorporated into numerical
weather prediction models to improve analyses and forecasts.

History[edit]

Typhoon Cobra as seen on a ship's radar screen in December 1944.

During World War II, military radar operators noticed noise in returned echoes due to rain, snow, and sleet. After the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes.[1] In the United States, David Atlas, at first working for the Air Force and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group" in Montreal.[2][3] Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which rainwater is falling. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimeters. By 1950 the UK company EKCO was demonstrating its airborne 'cloud and collision warning search radar equipment'.[4]
1960s radar technology detected tornado producing supercells over the Minneapolis-Saint Paul
metropolitan area.

Between 1950 and 1980, reflectivity radars, which measure the position and intensity of precipitation, were incorporated by weather services around the world. The early meteorologists had to watch a cathode ray tube. In 1953 Donald Staggs, an electrical engineer working for the Illinois State Water Survey, made the first recorded radar observation of a "hook echo" associated with a tornadic thunderstorm.[5]

The first use of weather radar on television in the United States was in September 1961. As Hurricane Carla was approaching the state of Texas, local reporter Dan Rather, suspecting the hurricane was very large, took a trip to the U.S. Weather Bureau WSR-57 radar site in Galveston in order to get an idea of the size of the storm. He convinced the bureau staff to let him broadcast live from their office and asked a meteorologist to draw him a rough outline of the Gulf of Mexico on a transparent sheet of plastic. During the broadcast, he held that transparent overlay over the computer's black-and-white radar display to give his audience a sense both of Carla's size and of the location of the storm's eye. This made Rather a national name, and his report helped the alerted population accept the evacuation of an estimated 350,000 people ordered by the authorities, which was the largest evacuation in US history at that time. Just 46 people were killed thanks to the warning, and it was estimated that the evacuation saved several thousand lives, as the smaller 1900 Galveston hurricane had killed an estimated 6,000–12,000 people.[6]

During the 1970s, radars began to be standardized and organized into networks. The first devices to capture radar images were developed. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical cross-sections could be performed. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and National Severe Storms Laboratory (NSSL) in the US in particular.

The NSSL, created in 1964, began experimentation on dual polarization signals and on Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City. For the first time, a Dopplerized 10 cm wavelength radar from NSSL documented the entire life cycle of the tornado.[7] The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground – the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool.[7] The Super Outbreak of tornadoes on 3–4 April 1974 and their devastating destruction might have helped to get funding for further developments.[citation needed]

NEXRAD in South Dakota with a supercell in the background.

Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which in addition to position and intensity could track the relative velocity of the particles in the air. In the United States, the construction of a network consisting of 10 cm radars, called NEXRAD or WSR-88D (Weather Surveillance Radar 1988 Doppler), was started in 1988 following NSSL's research.[7][8] In Canada, Environment Canada constructed the King City station,[9] with a 5 cm research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993.[10] This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries had switched to Doppler networks by the early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather, and many applications for media outlets and researchers.

After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment was done by the end of the decade or the beginning of the next in some countries such as the United States, France, and Canada.[11] In April 2013, all United States National Weather Service NEXRADs were completely dual-polarized.[12]

Since 2003, the U.S. National Oceanic and Atmospheric Administration has been
experimenting with phased-array radar as a replacement for conventional parabolic
antenna to provide more time resolution in atmospheric sounding. This could be
significant with severe thunderstorms, as their evolution can be better evaluated with
more timely data.

Also in 2003, the National Science Foundation established the Engineering Research
Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a
multidisciplinary, multi-university collaboration of engineers, computer scientists,
meteorologists, and sociologists to conduct fundamental research, develop enabling
technology, and deploy prototype engineering systems designed to augment existing
radar systems by sampling the generally undersampled lower troposphere with
inexpensive, fast scanning, dual polarization, mechanically scanned and phased array
radars.

In 2023, the private American company Tomorrow.io launched a Ka-band space-based radar for weather observation and forecasting.[13][14]

Principle[edit]
Sending radar pulses[edit]
A radar beam spreads out as it moves away from the radar station, covering an increasingly
large volume.

Weather radars send directional pulses of microwave radiation, on the order of one microsecond long, using a cavity magnetron or klystron tube connected by a waveguide to a parabolic antenna. The wavelengths of 1–10 cm are approximately ten times the diameter of the droplets or ice particles of interest, because Rayleigh scattering occurs at these frequencies. This means that part of the energy of each pulse will bounce off these small particles, back towards the radar station.[15]
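A small sketch of the geometry this implies: the pulse duration fixes the radial depth of the sampled volume, while the beam widens with range. The one-microsecond pulse comes from the text; the 1° beamwidth is an assumed typical value used only for illustration:

```python
from math import pi, radians

# Sketch: size of the volume sampled by one radar pulse at a given range.
C = 299_792_458.0          # speed of light, m/s
PULSE = 1e-6               # pulse duration, s (from the text)
BEAMWIDTH = radians(1.0)   # angular beam width, assumed typical value

radial_depth = C * PULSE / 2          # about 150 m, independent of range

for range_km in (50, 100, 200):
    r = range_km * 1000.0
    beam_diameter = r * BEAMWIDTH                       # small-angle approximation
    volume = pi * (beam_diameter / 2) ** 2 * radial_depth
    print(range_km, "km:", round(beam_diameter), "m wide,", f"{volume:.2e} m^3")
# The sampled volume grows with the square of the range, so resolution drops with distance.
```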


radar
28 languages

​ Article
​ Talk
​ Read
​ Edit
​ View history
Tools

Appearance hide

Text
​ Small
​ Standard
​ Large
Width
​ Standard
​ Wide
Color (beta)
​ Automatic
​ Light
​ Dark
​ Report an issue with dark mode
From Wikipedia, the free encyclopedia

Weather radar in Norman, Oklahoma with rainshaft

Weather (WF44) radar dish


University of Oklahoma OU-PRIME C-band, polarimetric, weather radar during construction

Weather radar, also called weather surveillance radar (WSR) and Doppler weather
radar, is a type of radar used to locate precipitation, calculate its motion, and estimate
its type (rain, snow, hail etc.). Modern weather radars are mostly pulse-Doppler radars,
capable of detecting the motion of rain droplets in addition to the intensity of the
precipitation. Both types of data can be analyzed to determine the structure of storms
and their potential to cause severe weather.

During World War II, radar operators discovered that weather was causing echoes on
their screens, masking potential enemy targets. Techniques were developed to filter
them, but scientists began to study the phenomenon. Soon after the war, surplus radars
were used to detect precipitation. Since then, weather radar has evolved and is used by
national weather services, research departments in universities, and in television
stations' weather departments. Raw images are routinely processed by specialized
software to make short term forecasts of future positions and intensities of rain, snow,
hail, and other weather phenomena. Radar output is even incorporated into numerical
weather prediction models to improve analyses and forecasts.

History[edit]
Typhoon Cobra as seen on a ship's radar screen in December 1944.

During World War II, military radar operators noticed noise in returned echoes due to
rain, snow, and sleet. After the war, military scientists returned to civilian life or
continued in the Armed Forces and pursued their work in developing a use for those
[1]
echoes. In the United States, David Atlas at first working for the Air Force and later for
MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H.
[2][3]
Douglas formed the "Stormy Weather Group" in Montreal. Marshall and his doctoral
student Walter Palmer are well known for their work on the drop size distribution in
mid-latitude rain that led to understanding of the Z-R relation, which correlates a given
radar reflectivity with the rate at which rainwater is falling. In the United Kingdom,
research continued to study the radar echo patterns and weather elements such as
stratiform rain and convective clouds, and experiments were done to evaluate the
potential of different wavelengths from 1 to 10 centimeters. By 1950 the UK company
EKCO was demonstrating its airborne 'cloud and collision warning search radar
[4]
equipment'.
1960s radar technology detected tornado producing supercells over the Minneapolis-Saint Paul
metropolitan area.

Between 1950 and 1980, reflectivity radars, which measure the position and intensity of
precipitation, were incorporated by weather services around the world. The early
meteorologists had to watch a cathode ray tube. In 1953 Donald Staggs, an electrical
engineer working for the Illinois State Water Survey, made the first recorded radar
[5]
observation of a "hook echo" associated with a tornadic thunderstorm.

The first use of weather radar on television in the United States was in September 1961.
As Hurricane Carla was approaching the state of Texas, local reporter Dan Rather,
suspecting the hurricane was very large, took a trip to the U.S. Weather Bureau
WSR-57 radar site in Galveston in order to get an idea of the size of the storm. He
convinced the bureau staff to let him broadcast live from their office and asked a
meteorologist to draw him a rough outline of the Gulf of Mexico on a transparent sheet
of plastic. During the broadcast, he held that transparent overlay over the computer's
black-and-white radar display to give his audience a sense both of Carla's size and of
the location of the storm's eye. This made Rather a national name and his report helped
in the alerted population accepting the evacuation of an estimated 350,000 people by
the authorities, which was the largest evacuation in US history at that time. Just 46
people were killed thanks to the warning and it was estimated that the evacuation saved
several thousand lives, as the smaller 1900 Galveston hurricane had killed an estimated
[6]
6000-12000 people.

During the 1970s, radars began to be standardized and organized into networks. The
first devices to capture radar images were developed. The number of scanned angles
was increased to get a three-dimensional view of the precipitation, so that horizontal
cross-sections (CAPPI) and vertical cross-sections could be performed. Studies of the
organization of thunderstorms were then possible for the Alberta Hail Project in Canada
and National Severe Storms Laboratory (NSSL) in the US in particular.

The NSSL, created in 1964, began experimentation on dual polarization signals and on
Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west
of Oklahoma City. For the first time, a Dopplerized 10 cm wavelength radar from NSSL
[7]
documented the entire life cycle of the tornado. The researchers discovered a
mesoscale rotation in the cloud aloft before the tornado touched the ground – the
tornadic vortex signature. NSSL's research helped convince the National Weather
[7]
Service that Doppler radar was a crucial forecasting tool. The Super Outbreak of
tornadoes on 3–4 April 1974 and their devastating destruction might have helped to get
[citation needed]
funding for further developments.

NEXRAD in South Dakota with a supercell in the background.

Between 1980 and 2000, weather radar networks became the norm in North America,
Europe, Japan and other developed countries. Conventional radars were replaced by
Doppler radars, which in addition to position and intensity could track the relative
velocity of the particles in the air. In the United States, the construction of a network
consisting of 10 cm radars, called NEXRAD or WSR-88D (Weather Surveillance Radar
[7][8]
1988 Doppler), was started in 1988 following NSSL's research. In Canada,
[9]
Environment Canada constructed the King City station, with a 5 cm research Doppler
radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar
[10]
Observatory) in 1993. This led to a complete Canadian Doppler network between
1998 and 2004. France and other European countries had switched to Doppler
networks by the early 2000s. Meanwhile, rapid advances in computer technology led to
algorithms to detect signs of severe weather, and many applications for media outlets
and researchers.

After 2000, research on dual polarization technology moved into operational use,
increasing the amount of information available on precipitation type (e.g. rain vs. snow).
"Dual polarization" means that microwave radiation which is polarized both horizontally
and vertically (with respect to the ground) is emitted. Wide-scale deployment took place by the end of the decade or the beginning of the next in some countries, such as the United States, France, and Canada.[11] In April 2013, all United States National Weather Service NEXRADs were completely dual-polarized.[12]

Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for conventional parabolic antennas, to provide better time resolution in atmospheric sounding. This could be significant with severe thunderstorms, as their evolution can be better evaluated with more timely data.

Also in 2003, the National Science Foundation established the Engineering Research
Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a
multidisciplinary, multi-university collaboration of engineers, computer scientists,
meteorologists, and sociologists to conduct fundamental research, develop enabling
technology, and deploy prototype engineering systems designed to augment existing
radar systems by sampling the generally undersampled lower troposphere with
inexpensive, fast scanning, dual polarization, mechanically scanned and phased array
radars.

In 2023, the private American company Tomorrow.io launched a Ka-band space-based radar for weather observation and forecasting.[13][14]

Principle
Sending radar pulses

A radar beam spreads out as it moves away from the radar station, covering an increasingly
large volume.

Weather radars send directional pulses of microwave radiation, on the order of one microsecond long, using a cavity magnetron or klystron tube connected by a waveguide to a parabolic antenna. The wavelengths of 1–10 cm are approximately ten times the diameter of the droplets or ice particles of interest, because Rayleigh scattering occurs at these frequencies. This means that part of the energy of each pulse will bounce off these small particles, back towards the radar station.[15]
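
As a rough, illustrative check of the Rayleigh condition described above (a sketch under the common assumption that Rayleigh scattering holds when the size parameter x = πD/λ is well below 1; the band wavelengths and particle diameters below are assumed example values, not figures from this text):

# Sketch: is a hydrometeor of diameter D in the Rayleigh regime at wavelength lambda?
# Uses the size parameter x = pi*D/lambda and a conventional x < ~0.3 threshold.
import math

BANDS_CM = {"S": 10.0, "C": 5.0, "X": 3.0, "Ka": 1.0}             # radar wavelengths, cm
PARTICLES_MM = {"drizzle": 0.2, "rain": 2.0, "small hail": 10.0}  # diameters, mm (illustrative)

for band, wl_cm in BANDS_CM.items():
    for kind, d_mm in PARTICLES_MM.items():
        x = math.pi * (d_mm / 10.0) / wl_cm       # mm -> cm, then size parameter
        regime = "Rayleigh" if x < 0.3 else "Mie/other"
        print(f"{band}-band ({wl_cm} cm), {kind} ({d_mm} mm): x = {x:.3f} -> {regime}")

The output illustrates why centimetre wavelengths keep typical raindrops comfortably in the Rayleigh regime, while large particles at short wavelengths do not.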

Shorter wavelengths are useful for smaller particles, but the signal is more quickly attenuated. Thus 10 cm (S-band) radar is preferred but is more expensive than a 5 cm C-band system. 3 cm X-band radar is used only for short-range units, and 1 cm Ka-band weather radar is used only for research on small-particle phenomena such as drizzle and fog.[15] W-band (3 mm) weather radar systems have seen limited university use, but due to quicker attenuation, most data are not operational.

Radar pulses diverge as they move away from the radar station. Thus the volume of air that a radar pulse is traversing is larger for areas farther away from the station, and smaller for nearby areas, decreasing resolution at farther distances. At the end of a 150–200 km sounding range, the volume of air scanned by a single pulse is correspondingly large.
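
A minimal sketch of the resolution argument above, using the standard pulse-volume approximation V ≈ π (rθ/2)² (cτ/2) for a conical beam of width θ and pulse duration τ (the 1° beamwidth and 1 µs pulse length are assumed example values, not taken from this text):

# Sketch: approximate volume of air sampled by one pulse at range r.
# The sampled cell is modelled as a disc of radius r*theta/2 (beam spread)
# and depth c*tau/2 (range-gate length), so its volume grows as r^2.
import math

C = 3.0e8                    # speed of light, m/s
THETA = math.radians(1.0)    # assumed 1-degree beamwidth
TAU = 1.0e-6                 # assumed 1-microsecond pulse

def pulse_volume_km3(range_km: float) -> float:
    r_m = range_km * 1000.0
    radius = r_m * THETA / 2.0                 # beam radius at this range, m
    depth = C * TAU / 2.0                      # range-gate depth, m
    return math.pi * radius**2 * depth / 1e9   # m^3 -> km^3

for rng in (10, 50, 100, 200):
    print(f"{rng:>4} km: ~{pulse_volume_km3(rng):.3f} km^3 per pulse")

Doubling the range quadruples the sampled volume, which is exactly the loss of spatial resolution at long range described above.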

Cavity magnetron
From Wikipedia, the free encyclopedia

"Magnetron" redirects here. Not to be confused with Megatron, Metatron, or Magneton


(disambiguation).
Magnetron with section removed to exhibit the cavities. The cathode in the center is not visible.
The antenna emitting microwaves is at the left. The magnets producing a field parallel to the long
axis of the device are not shown.

A similar magnetron with a different section removed. Central cathode is visible; antenna
conducting microwaves at the top; magnets are not shown.

Obsolete 9 GHz magnetron tube and magnets from a Soviet aircraft radar. The tube is embraced
between the poles of two horseshoe-shaped alnico magnets (top, bottom), which create a
magnetic field along the axis of the tube. The microwaves are emitted from the waveguide
aperture (top) which in use is attached to a waveguide conducting the microwaves to the radar
antenna. Modern tubes use rare-earth magnets, electromagnets or ferrite magnets which are
much less bulky.

The cavity magnetron is a high-power vacuum tube used in early radar systems and
subsequently in microwave ovens and in linear particle accelerators. A cavity
magnetron generates microwaves using the interaction of a stream of electrons with a
magnetic field, while moving past a series of cavity resonators, which are small, open
cavities in a metal block. Electrons pass by the cavities and cause microwaves to
oscillate within, similar to the functioning of a whistle producing a tone when excited by
an air stream blown past its opening. The resonant frequency of the arrangement is
determined by the cavities' physical dimensions. Unlike other vacuum tubes, such as a
klystron or a traveling-wave tube (TWT), the magnetron cannot function as an amplifier
for increasing the intensity of an applied microwave signal; the magnetron serves solely
as an electronic oscillator generating a microwave signal from direct current electricity
supplied to the vacuum tube.

The use of magnetic fields as a means to control the flow of an electric current was
spurred by the invention of the Audion by Lee de Forest in 1906. Albert Hull of General
Electric Research Laboratory, USA, began development of magnetrons to avoid de Forest's patents, but these were never completely successful.[1] Other experimenters picked up on Hull's work and a key advance, the use of two cathodes, was introduced by Habann in Germany in 1924. Further research was limited until Okabe's 1929 Japanese paper noting the production of centimeter-wavelength signals, which led to worldwide interest. The development of magnetrons with multiple cathodes was proposed by A. L. Samuel of Bell Telephone Laboratories in 1934, leading to designs by Postumus in 1934 and Hans Hollmann in 1935. Production was taken up by Philips, General Electric Company (GEC), Telefunken and others, limited to perhaps 10 W output. By this time the klystron was producing more power and the magnetron was not widely used, although a 300 W device was built by Aleksereff and Malearoff in the USSR in 1936 (published in 1940).[1]

The cavity magnetron was a radical improvement introduced by John Randall and Harry Boot at the University of Birmingham, England in 1940.[2]: 24–26 [3] Their first working example produced hundreds of watts at 10 cm wavelength, an unprecedented achievement.[4][5] Within weeks, engineers at GEC had improved this to well over a kilowatt, and within months to 25 kilowatts, over 100 kW by 1941, pushing towards a megawatt by 1943. The high-power pulses were generated from a device the size of a small book and transmitted from an antenna only centimeters long, reducing the size of practical radar systems by orders of magnitude.[6] New radars appeared for night-fighters, anti-submarine aircraft and even the smallest escort ships,[6] and from that point on the Allies of World War II held a lead in radar that their counterparts in Germany and Japan were never able to close. By the end of the war, practically every Allied radar was based on the magnetron.
The magnetron continued to be used in radar in the post-war period but fell from favour
in the 1960s as high-power klystrons and traveling-wave tubes emerged. A key
characteristic of the magnetron is that its output signal changes from pulse to pulse, both in frequency and phase. This renders it less suitable for pulse-to-pulse comparisons for performing moving target indication and removing "clutter" from the radar display.[7] The magnetron remains in use in some radar systems, but has become much more common as a low-cost source for microwave ovens. In this form, over one billion magnetrons are in use today.[7][8]

Construction and operation

Conventional tube design

In a conventional electron tube (vacuum tube), electrons are emitted from a negatively
charged, heated component called the cathode and are attracted to a positively charged
component called the anode. The components are normally arranged concentrically,
placed within a tubular-shaped container from which all air has been evacuated, so that
the electrons can move freely (hence the name "vacuum" tubes, called "valves" in
British English).

If a third electrode (called a control grid) is inserted between the cathode and the anode, the flow of electrons between the cathode and anode can be regulated by varying the voltage on this third electrode. This allows the resulting electron tube (called a "triode" because it now has three electrodes) to function as an amplifier, because small variations in the charge applied to the control grid result in corresponding variations in the much larger current of electrons flowing between the cathode and anode.[9]

Hull or single-anode magnetron

The idea of using a grid for control was invented by Philipp Lenard, who received the Nobel Prize for Physics in 1905. In the USA it was later patented by Lee de Forest, resulting in considerable research into alternate tube designs that would avoid his patents. One concept used a magnetic field instead of an electrical charge to control current flow, leading to the development of the magnetron tube. In this design, the tube was made with two electrodes, typically with the cathode in the form of a metal rod in the center, and the anode as a cylinder around it. The tube was placed between the poles of a horseshoe magnet[10][better source needed] arranged such that the magnetic field was aligned parallel to the axis of the electrodes.

With no magnetic field present, the tube operates as a diode, with electrons flowing
directly from the cathode to the anode. In the presence of the magnetic field, the
electrons will experience a force at right angles to their direction of motion (the Lorentz
force). In this case, the electrons follow a curved path between the cathode and anode.
The curvature of the path can be controlled by varying either the magnetic field using an
electromagnet, or by changing the electrical potential between the electrodes.

At very high magnetic field settings the electrons are forced back onto the cathode,
preventing current flow. At the opposite extreme, with no field, the electrons are free to
flow straight from the cathode to the anode. There is a point between the two extremes,
the critical value or Hull cut-off magnetic field (and cut-off voltage), where the electrons
just reach the anode. At fields around this point, the device operates similarly to a triode.
However, magnetic control, due to hysteresis and other effects, results in a slower and
less faithful response to control current than electrostatic control using a control grid in a
conventional triode (not to mention greater weight and complexity), so magnetrons saw
limited use in conventional electronic designs.
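
The Hull cut-off condition has a standard textbook form for a cylindrical magnetron whose cathode is thin compared with the anode, B_c ≈ √(8 m_e V / e) / r_a. The sketch below simply evaluates that expression; the 1 kV anode voltage and 5 mm anode radius are assumed example values, not parameters from this text:

# Sketch: Hull cut-off magnetic flux density for a single-anode (cylindrical)
# magnetron, using the thin-cathode approximation B_c = sqrt(8*m_e*V/e) / r_a.
# Above B_c electrons curl back before reaching the anode; below it they arrive.
import math

E_CHARGE = 1.602e-19   # elementary charge, C
M_E = 9.109e-31        # electron mass, kg

def hull_cutoff_tesla(anode_voltage_v: float, anode_radius_m: float) -> float:
    return math.sqrt(8.0 * M_E * anode_voltage_v / E_CHARGE) / anode_radius_m

# Assumed example values: 1 kV anode potential, 5 mm anode radius.
print(f"B_c ~ {hull_cutoff_tesla(1000.0, 5e-3) * 1000:.1f} mT")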

It was noticed that when the magnetron was operating at the critical value, it would emit
energy in the radio frequency spectrum. This occurs because a few of the electrons,
instead of reaching the anode, continue to circle in the space between the cathode and
the anode. Due to an effect now known as cyclotron radiation, these electrons radiate
radio frequency energy. The effect is not very efficient. Eventually the electrons hit one
of the electrodes, so the number in the circulating state at any given time is a small
percentage of the overall current. It was also noticed that the frequency of the radiation
depends on the size of the tube, and even early examples were built that produced
signals in the microwave regime.
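
The scaling of that emitted frequency with the applied field can be illustrated with the standard electron cyclotron relation f_c = eB/(2π m_e); the flux densities below are assumed example values rather than figures from this text:

# Sketch: electron cyclotron frequency, f_c = e*B / (2*pi*m_e), showing how the
# radiated frequency of an early single-anode magnetron scales with its field.
import math

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg

def cyclotron_ghz(b_tesla: float) -> float:
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_E) / 1e9

for b in (0.01, 0.05, 0.1, 0.35):        # assumed example flux densities, T
    print(f"B = {b:0.2f} T -> f_c ~ {cyclotron_ghz(b):5.2f} GHz")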

Early conventional tube systems were limited to the high frequency bands, and although
very high frequency systems became widely available in the late 1930s, the ultra high
frequency and microwave bands were well beyond the ability of conventional circuits.
The magnetron was one of the few devices able to generate signals in the microwave
band and it was the only one that was able to produce high power at centimeter
wavelengths.

Split-anode magnetron
Split-anode magnetron (c. 1935). (left) The bare tube, about 11 cm high. (right) Installed for use
between the poles of a strong permanent magnet

The original magnetron was very difficult to keep operating at the critical value, and
even then the number of electrons in the circling state at any time was fairly low. This
meant that it produced very low-power signals. Nevertheless, as one of the few devices
known to create microwaves, interest in the device and potential improvements was
widespread.

The first major improvement was the split-anode magnetron, also known as a
negative-resistance magnetron. As the name implies, this design used an anode that
was split in two—one at each end of the tube—creating two half-cylinders. When both
were charged to the same voltage the system worked like the original model. But by
slightly altering the voltage of the two plates, the electrons' trajectory could be modified
so that they would naturally travel towards the lower voltage side. The plates were
connected to an oscillator that reversed the relative voltage of the two plates at a given frequency.[10]

At any given instant, the electron will naturally be pushed towards the lower-voltage side
of the tube. The electron will then oscillate back and forth as the voltage changes. At the
same time, a strong magnetic field is applied, stronger than the critical value in the
original design. This would normally cause the electron to circle back to the cathode, but
due to the oscillating electrical field, the electron instead follows a looping path that continues toward the anodes.[10]

Since all of the electrons in the flow experienced this looping motion, the amount of RF
energy being radiated was greatly improved. And as the motion occurred at any field
level beyond the critical value, it was no longer necessary to carefully tune the fields
and voltages, and the overall stability of the device was greatly improved. Unfortunately,
the higher field also meant that electrons often circled back to the cathode, depositing
their energy on it and causing it to heat up. As this normally causes more electrons to be released, it could sometimes lead to a runaway effect, damaging the device.[10]

Cavity magnetron

The great advance in magnetron design was the resonant cavity magnetron or
electron-resonance magnetron, which works on entirely different principles. In this
design the oscillation is created by the physical shape of the anode, rather than external
circuits or fields.

A cross-sectional diagram of a resonant cavity magnetron. Magnetic lines of force are parallel to
the geometric axis of this structure.

Mechanically, the cavity magnetron consists of a large, solid cylinder of metal with a
hole drilled through the centre of the circular face. A wire acting as the cathode is run
down the center of this hole, and the metal block itself forms the anode. Around this
hole, known as the "interaction space", are a number of similar holes ("resonators")
drilled parallel to the interaction space, connected to the interaction space by a short
channel. The resulting block looks something like the cylinder on a revolver, with a somewhat larger central hole. Early models were cut using Colt pistol jigs.[11] Since in an AC circuit the electrons travel along the surface of a conductor rather than through its core, the parallel sides of the slot act as a capacitor while the round hole forms an inductor: an LC circuit made of solid copper, with the resonant frequency defined entirely by its dimensions.
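
As a crude numerical illustration of that point (not a model of any real tube), the sketch below treats one hole-and-slot resonator as a parallel-plate capacitor in parallel with a one-turn solenoid and evaluates f = 1/(2π√(LC)); all dimensions are assumed example values:

# Sketch: rough resonant frequency of one hole-and-slot resonator.
# Slot as parallel-plate capacitor: C = eps0 * (slot_depth * block_height) / slot_width
# Hole as one-turn solenoid:        L = mu0 * pi * hole_radius^2 / block_height
import math

EPS0 = 8.854e-12           # vacuum permittivity, F/m
MU0 = 4.0e-7 * math.pi     # vacuum permeability, H/m

# Assumed example dimensions (metres), centimetre-scale hardware:
slot_depth = 2.0e-3        # radial length of the slot channel
slot_width = 1.0e-3        # gap between the slot's parallel walls
hole_radius = 4.0e-3       # radius of the circular resonator hole
block_height = 20.0e-3     # axial height of the anode block

C = EPS0 * (slot_depth * block_height) / slot_width
L = MU0 * math.pi * hole_radius**2 / block_height
f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"C ~ {C*1e12:.2f} pF, L ~ {L*1e9:.2f} nH, f ~ {f/1e9:.2f} GHz")

Halving every linear dimension roughly doubles the resonant frequency, which is the sense in which the frequency is defined entirely by the geometry.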

The magnetic field is set to a value well below the critical, so the electrons follow arcing
paths towards the anode. When they strike the anode, they cause it to become
negatively charged in that region. As this process is random, some areas will become
more or less charged than the areas around them. The anode is constructed of a highly
conductive material, almost always copper, so these differences in voltage cause
currents to appear to even them out. Since the current has to flow around the outside of
the cavity, this process takes time. During that time additional electrons will avoid the
hot spots and be deposited further along the anode, as the additional current flowing around it arrives too. This causes an oscillating current to form as the current tries to equalize one spot, then another.[12]

The oscillating currents flowing around the cavities, and their effect on the electron flow
within the tube, cause large amounts of microwave radiofrequency energy to be
generated in the cavities. The cavities are open on one end, so the entire mechanism
forms a single, larger, microwave oscillator. A "tap", normally a wire formed into a loop,
extracts microwave energy from one of the cavities. In some systems the tap wire is
replaced by an open hole, which allows the microwaves to flow into a waveguide.

As the oscillation takes some time to set up and is inherently random at the start, subsequent startups will have different output parameters. Phase is almost never preserved, which makes the magnetron difficult to use in phased-array systems. Frequency also drifts from pulse to pulse, a more difficult problem for a wider range of radar systems. Neither of these presents a problem for continuous-wave radars, nor for microwave ovens.
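
To make the coherence point concrete: pulse-to-pulse Doppler processing compares the echo phase of successive pulses, expecting a shift of Δφ = 4π v_r T / λ from a target with radial speed v_r. If the transmitter starts each pulse at an essentially random phase, as a free-running magnetron does, that small systematic shift is buried. The wavelength and pulse interval below are assumed example values, not figures from this text:

# Sketch: expected pulse-to-pulse phase shift for a moving target,
# delta_phi = 4*pi*v_r*T / lambda. A coherent source preserves phase between
# pulses so this shift is measurable; a free-running magnetron does not.
import math

WAVELENGTH = 0.10    # assumed 10 cm (S-band) wavelength, m
PRI = 1.0e-3         # assumed pulse repetition interval, s

def pulse_pair_phase_deg(radial_speed_ms: float) -> float:
    return math.degrees(4.0 * math.pi * radial_speed_ms * PRI / WAVELENGTH)

for v in (1.0, 10.0, 25.0):     # radial speeds, m/s
    print(f"v_r = {v:>5.1f} m/s -> delta_phi ~ {pulse_pair_phase_deg(v):7.2f} deg")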

Common features
Cutaway drawing of a cavity magnetron of 1984. Part of the righthand magnet and copper anode block is cut away to show the cathode and cavities. This older magnetron uses two horseshoe-shaped alnico magnets; modern tubes use rare-earth magnets.

All cavity magnetrons consist of a heated cylindrical cathode at a high (continuous or pulsed) negative potential created by a high-voltage, direct-current power supply. The
cathode is placed in the center of an evacuated, lobed, circular metal chamber. The
walls of the chamber are the anode of the tube. A magnetic field parallel to the axis of
the cavity is imposed by a permanent magnet. The electrons initially move radially
outward from the cathode attracted by the electric field of the anode walls. The
magnetic field causes the electrons to spiral outward in a circular path, a consequence
of the Lorentz force. Spaced around the rim of the chamber are cylindrical cavities.
Slots are cut along the length of the cavities that open into the central, common cavity
space. As electrons sweep past these slots, they induce a high-frequency radio field in
each resonant cavity, which in turn causes the electrons to bunch into groups. A portion
of the radio frequency energy is extracted by a short coupling loop that is connected to
a waveguide (a metal tube, usually of rectangular cross section). The waveguide directs
the extracted RF energy to the load, which may be a cooking chamber in a microwave
oven or a high-gain antenna in the case of radar.

The sizes of the cavities determine the resonant frequency, and thereby the frequency
of the emitted microwaves. However, the frequency is not precisely controllable. The
operating frequenc
