Env Issues
An electrostatic precipitator (ESP), or electrostatic air cleaner, is a particulate collection device that removes particles from a flowing gas (such as air) using the force of an induced electrostatic charge. Electrostatic precipitators are highly efficient filtration devices that minimally impede the flow of gases through the device, and can easily remove fine particulate matter such as dust and smoke from the air stream.[1] In contrast to wet scrubbers, which apply energy directly to the flowing fluid medium, an ESP applies energy only to the particulate matter being collected and is therefore very efficient in its consumption of energy (in the form of electricity).
Invention of the electrostatic precipitator
The first use of corona discharge to remove particles from an aerosol was by Hohlfeld in 1824. However, the technique was not commercialized until almost a century later. In 1907 Dr. Frederick G. Cottrell, then a professor of chemistry at the University of California, Berkeley, applied for a patent on a device for charging particles and then collecting them through electrostatic attraction: the first electrostatic precipitator. Cottrell first applied the device to the collection of sulfuric acid mist and lead oxide fume emitted from various acid-making and smelting activities; vineyards in northern California were being adversely affected by the lead emissions. At the time of Cottrell's invention, the theoretical basis for operation was not understood. The operational theory was developed later, in the 1920s, in Germany.

Cottrell used proceeds from his invention to fund scientific research through the creation of a foundation called Research Corporation in 1912, to which he assigned the patents. The intent of the organization was to bring inventions made by educators (such as Cottrell) into the commercial world for the benefit of society at large. The operation of Research Corporation is perpetuated by royalties paid by commercial firms after commercialization occurs. Research Corporation has provided vital funding to many scientific projects: Goddard's rocketry experiments, Lawrence's cyclotron, and production methods for vitamins A and B1, among many others. By a decision of the U.S. Supreme Court, the Corporation had to be split into the Research Corporation and two commercial firms making the hardware: Research-Cottrell Inc., operating east of the Mississippi River, and Western Precipitation, operating in the western states. The Research Corporation continues to be active to this day, and the two companies formed to commercialize the invention for industrial and utility applications are still in business as well.

Electrophoresis is the term used for the migration of gas-suspended charged particles in a direct-current electrostatic field. If dust accumulates on the face of a television set, it is because of this phenomenon (a CRT is a direct-current machine operating at about 35 kV).
The plate precipitator
The most basic precipitator contains a row of thin vertical wires followed by a stack of large flat metal plates oriented vertically, with the plates typically spaced about 1 cm to 18 cm apart, depending on the application. The air or gas stream flows horizontally through the spaces between the wires and then passes through the stack of plates. A negative voltage of several thousand volts is applied between wire and plate. If the applied voltage is high enough, an electric (corona) discharge ionizes the gas around the electrodes. Negative ions flow to the plates and charge the particles in the gas flow. The charged particles, following the negative electric field created by the power supply, move to the grounded plates. Particles build up on the collection plates and form a layer. The layer does not collapse, thanks to electrostatic pressure (arising from the layer resistivity, the electric field, and the current flowing in the collected layer).
Collection efficiency (R)
Precipitator performance is highly sensitive to two particulate properties: 1) resistivity and 2) particle size distribution. These properties can be determined economically and accurately in the laboratory. A widely taught way to calculate the collection efficiency is the Deutsch model, which assumes infinite remixing of the particles perpendicular to the gas stream.
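As a concrete illustration, the widely taught Deutsch (Deutsch-Anderson) form of this model relates efficiency to equipment size and gas flow (a standard textbook result, not specific to any particular unit):

```latex
% Deutsch-Anderson collection efficiency (standard textbook form)
\eta = 1 - \exp\!\left(-\frac{w A}{Q}\right)
```

Here \eta is the fractional collection efficiency, w the effective migration velocity of the particles (which depends on their resistivity and size), A the total collecting plate area, and Q the volumetric gas flow rate.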
Resistivity can be determined as a function of temperature in accordance with IEEE Standard 548. This test is conducted in an air environment containing a specified moisture concentration. The test is run as a function of ascending or descending temperature, or both. Data are acquired using an average ash layer electric field of 4 kV/cm. Since a relatively low applied voltage is used and no sulfuric acid vapor is present in the environment, the values obtained indicate the maximum ash resistivity.

Usually the descending temperature test is suggested when no unusual circumstances are involved. Before the test, the ash is thermally equilibrated in dry air at 454 °C (850 °F) for about 14 hours. It is believed that this procedure anneals the ash and restores the surface to its pre-collection condition. If there is a concern about the effect of combustibles, the residual effect of a conditioning agent other than sulfuric acid vapor, or the effect of some other agent that inhibits the reaction of the ash with water vapor, the combination of the ascending and descending test modes is recommended. The thermal treatment that occurs between the two test modes is capable of eliminating the foregoing effects. This results in ascending and descending temperature resistivity curves that show a hysteresis related to the presence and removal of some effect, such as a significant level of combustibles.

With particles of high resistivity (cement dust, for example), sulfur trioxide is sometimes injected into a flue gas stream to lower the resistivity of the particles and so improve the collection efficiency of the electrostatic precipitator.
Modern industrial electrostatic precipitators
ESPs continue to be excellent devices for control of many industrial particulate emissions, including smoke from electricity-generating utilities (coal- and oil-fired), salt cake collection from black liquor boilers in pulp mills, and catalyst collection from fluidized bed catalytic cracker units in oil refineries, to name a few. These devices treat gas volumes from several hundred thousand ACFM to 2.5 million ACFM (1,180 m³/s) in the largest coal-fired boiler applications. For a coal-fired boiler the collection is usually performed downstream of the air preheater at about 160 °C (320 °F), which provides optimal resistivity of the coal-ash particles. For some difficult applications with low-sulfur fuel, hot-end units have been built operating above 371 °C (700 °F).

The original parallel plate, weighted-wire design (described above) has evolved as more efficient (and robust) discharge electrode designs were developed, today focusing on rigid (pipe-frame) discharge electrodes to which many sharpened spikes are attached (barbed wire), maximizing corona production. Transformer-rectifier systems apply voltages of 50 to 100 kV at relatively high current densities. Modern controls, such as automatic voltage control, minimize electric sparking and prevent arcing (sparks are quenched within half a cycle of the TR set), avoiding damage to the components. Automatic plate-rapping systems and hopper-evacuation systems remove the collected particulate matter while on line, theoretically allowing ESPs to stay in operation for years at a time.
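As a quick sanity check on the quoted gas volume (a sketch assuming ACFM denotes actual cubic feet per minute):

```python
# Convert 2.5 million actual cubic feet per minute (ACFM) to cubic metres per second.
CUBIC_FEET_PER_CUBIC_METRE = 35.3147

acfm = 2.5e6
m3_per_s = acfm / CUBIC_FEET_PER_CUBIC_METRE / 60.0
print(round(m3_per_s))  # -> 1180, matching the figure quoted above
```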
Wet electrostatic precipitator
A wet electrostatic precipitator (WESP or wet ESP) operates with saturated air streams (100% relative humidity). WESPs are commonly used to remove liquid droplets such as sulfuric acid mist from industrial process gas streams. The WESP is also commonly used where the gases are high in moisture content, contain combustible particulate, or have particles that are sticky in nature. The preferred and most modern type of WESP is a downflow tubular design. This design allows the collected moisture and particulate to form a slurry that helps to keep the collection surfaces clean. Plate-style and upflow-design WESPs are very unreliable and should not be used in applications where the particulate is sticky in nature.
Consumer-oriented electrostatic air cleaners
Plate precipitators are commonly marketed to the public as air purifier devices or as a permanent replacement for furnace filters, but all have the undesirable attribute of being somewhat messy to clean. A negative side effect of electrostatic precipitation devices is the production of toxic ozone and NOx. However, electrostatic precipitators offer benefits over other air purification technologies, such as HEPA filtration, which require expensive filters and can become "production sinks" for many harmful forms of bacteria. The two-stage design (charging section ahead of collecting section) has the benefit of minimizing ozone production, which would adversely affect the health of personnel working in enclosed spaces. In shipboard engine rooms where gearboxes generate an oil fog, two-stage ESPs are used to clean the air, improving the operating environment and preventing buildup of flammable oil fog accumulations. Collected oil is returned to the gear lubricating system.

With electrostatic precipitators, if the collection plates are allowed to accumulate large amounts of particulate matter, the particles can sometimes bond so tightly to the metal plates that vigorous washing and scrubbing may be required to completely clean the collection plates. The close spacing of the plates can make thorough cleaning difficult, and the stack of plates often cannot be easily disassembled for cleaning. One solution, suggested by several manufacturers, is to wash the collector plates in a dishwasher. Some consumer precipitation filters are sold with special soak-off cleaners, where the entire plate array is removed from the precipitator and soaked in a large container overnight, to help loosen the tightly bonded particulates. A study by the Canada Mortgage and Housing Corporation testing a variety of forced-air furnace filters found that ESP filters provided the best, and most cost-effective, means of cleaning air using a forced-air system.[2]
Air Pollution Control Act of 1955

Before the Air Pollution Control Act of 1955, air pollution was not considered a national environmental problem.
The Air Pollution Control Act of 1955 (Pub.L. 84-159, ch. 360, 69 Stat. 322) was the first United States Clean Air Act, enacted by Congress on July 14, 1955, to address the national environmental problem of air pollution. It was "an act to provide research and technical assistance relating to air pollution control."[1] The act "left states principally in charge of prevention and control of air pollution at the source."[2] The act declared that air pollution was a danger to public health and welfare, but preserved the "primary responsibilities and rights of the states and local government in controlling air pollution."[3] The act put the federal government in a purely informational role, authorizing the United States Surgeon General to conduct research, investigate, and pass out information "relating to air pollution and the prevention and abatement thereof."[4] Therefore, the Air Pollution Control Act contained no provisions for the federal government to actively combat air pollution by punishing polluters. The next Congressional statement on air pollution would come with the Clean Air Act of 1963.

The Air Pollution Control Act was the culmination of much research done on fuel emissions by the federal government in the 1930s and 1940s. Additional legislation was passed in 1963 to more fully define air quality criteria and to give the Secretary of Health, Education, and Welfare more power in defining what air quality was. This additional legislation provided grants to both local and state agencies. A replacement, the United States Clean Air Act (CAA), was enacted to substitute for the Air Pollution Control Act of 1955. A decade later the Motor Vehicle Air Pollution Control Act was enacted to focus more specifically on automotive emission standards. A mere two years later, the Federal Air Quality Act was established to define "air quality control regions" scientifically, based on topographical and meteorological facets of air pollution.

California was the first state to act against air pollution, when the metropolis of Los Angeles began to notice deteriorating air quality. The location of Los Angeles furthered the problem, as several geographical and meteorological factors unique to the area exacerbated the air pollution problem.[5]
Prior to 1955
Prior to the Air Pollution Control Act of 1955, little headway was made toward air pollution reform. The U.S. cities of Chicago and Cincinnati first established smoke ordinances in 1881. In 1904, Philadelphia passed an ordinance limiting the amount of smoke in flues, chimneys, and open spaces, and imposing a penalty if smoke inspections were not passed. It was not until 1947 that California authorized the creation of Air Pollution Control Districts in every county of the state.[6]
Amendments to the Air Pollution Control Act of 1955
There have been several amendments to the Air Pollution Control Act of 1955. The first amendment came in 1960 and extended research funding for four years. The next amendment came in 1962 and essentially reinforced the principal provisions of the original act. In addition, this amendment called for research to be done by the U.S. Surgeon General to determine the health effects of various motor vehicle exhaust substances.[7]

In 1963, the Senate Subcommittee on Air and Water Pollution was created and was soon overwhelmed with public concern over air and water pollution. It was then that the Clean Air Act of 1963 was passed. This act directed $95 million over the next three years to state and local governments to better develop "air pollution criteria." Amendments were made to the act in 1965 regarding emissions standards for new automobiles. This amendment also recognized the problem of transborder air pollution and began research on the effects of air pollution to and from Mexico and Canada.[8] In 1966, another federal amendment was made, expanding local air pollution control programs. These changes initiated the creation of the National Air Pollution Control Administration (NAPCA) and designated Air Quality Control Regions across the U.S. to monitor ambient air.[9]

In 1967, the Air Quality Act of 1967 was passed. This amendment allowed states to enact federal automobile emissions standards. Senator Edmund Muskie (D-Maine) said that this was the first comprehensive federal air pollution control. The National Air Pollution Control Administration provided technical information to the states, which the states used to develop air quality standards. The NAPCA then had the power to veto any of the states' proposed emission standards. This amendment was not as effective as initially hoped, with only 36 air quality regions designated and no state having a fully developed pollution control program.[10] In 1969, another amendment was made to the act, further expanding research on low emissions, fuels, and automobiles.[11]
The 1970 amendments completely rewrote the 1967 act. In particular, the 1970 amendments required the newly created United States Environmental Protection Agency to set National Ambient Air Quality Standards to protect public health and welfare. In addition, the 1970 amendments required states to submit state implementation plans for attaining and maintaining the National Ambient Air Quality Standards. The amendments also gave citizens the ability to sue polluters or government agencies for failure to abide by the act. Finally, the amendments required that by 1975 the entire United States would attain clean air status.[12]

The most recent amendments to the act came in 1990, under President George H. W. Bush. The 1990 amendments granted significantly more authority to the federal government than any prior air quality legislation. Nine subjects were identified in this amendment, with smog, acid rain, motor vehicle emissions, and toxic air pollution among them. Five severity classifications were identified to measure smog. To better control acid rain, new regulatory programs were created. New and stricter emission standards were created for motor vehicles beginning with the 1995 model year. The National Emission Standards for Hazardous Air Pollutants program was expanded to cover a much broader range of industries and activities.[13]
National Air Pollution Symposium
The first National Air Pollution Symposium in the United States was held in 1949. At first, smaller governments were responsible for the passage and enforcement of such legislation.[14] The main purpose of the Air Pollution Control Act of 1955 was to provide research assistance toward finding a way to control air pollution at its source. A total of $5 million was granted to the Public Health Service for a five-year period to conduct this research.[15][16] According to a private website, the amount was $3 million allotted per year for the five-year period of research.[17]
Effects of the Act
This was the first act from the federal government that made U.S. citizens and policy makers aware of the air pollution problem. Unfortunately, the act did little to prevent air pollution, but it at least made the government aware that this was a national problem, with Congress reserving the right to control the growing problem.[18] The Air Pollution Control Act of 1955 was the first federal law regarding air pollution. It began to inform the public about the hazards of air pollution and detailed new emissions standards. Public opinion polls showed that the percentage of Americans who regarded air pollution as a serious problem almost doubled, from 28% in 1965 to 55% in 1968, as amendments were added to the original Air Pollution Control Act of 1955.[19]

Despite having the term "control" in its title, this legislation had no regulatory component.[20] In the early 1950s Congress did not want to interfere with states' rights; as such, the early provisions of the act were not strong. The act established the role that the federal government would play in research on air pollution effects and control.[21] As such, the act was at the forefront of the air pollution movement that continues to this day. Amendments were frequently added to the Air Pollution Control Act of 1955, as well as to the Clean Air Act, as the government continued to further research on the topic and improve air quality.[22]
Sewage treatment
The objective of sewage treatment is to produce a disposable effluent without causing harm to the surrounding environment, and to prevent pollution.[1]
Sewage treatment, or domestic wastewater treatment, is the process of removing contaminants from wastewater and household sewage, both runoff (effluents) and domestic. It includes physical, chemical, and biological processes to remove physical, chemical and biological contaminants. Its objective is to produce an environmentally safe fluid waste stream (or treated effluent) and a solid waste (or treated sludge) suitable for disposal or reuse (usually as farm fertilizer). Using advanced technology it is now possible to reuse sewage effluent for drinking water, although Singapore is the only country to implement such technology on a production scale, in its production of NEWater.[2]
Origins of sewage
Sewage is created by residential, institutional, commercial, and industrial establishments. It includes household waste liquid from toilets, baths, showers, kitchens, sinks, and so forth that is disposed of via sewers. In many areas, sewage also includes liquid waste from industry and commerce. The separation and draining of household waste into greywater and blackwater is becoming more common in the developed world, with greywater being permitted to be used for watering plants or recycled for flushing toilets.

Sewage may include stormwater runoff. Sewerage systems capable of handling stormwater are known as combined systems. Combined sewer systems are usually avoided now because precipitation causes widely varying flows, reducing sewage treatment plant efficiency. Combined sewers require much larger and more expensive treatment facilities than sanitary sewers. Heavy storm runoff may overwhelm the sewage treatment system, causing a spill or overflow. Sanitary sewers are typically much smaller than combined sewers, and they are not designed to transport stormwater. Backups of raw sewage can occur if excessive infiltration/inflow is allowed into a sanitary sewer system.

Modern sewered developments tend to be provided with separate storm drain systems for rainwater.[3] As rainfall travels over roofs and the ground, it may pick up various contaminants, including soil particles and other sediment, heavy metals, organic compounds, animal waste, and oil and grease. (See urban runoff.)[4] Some jurisdictions require stormwater to receive some level of treatment before being discharged directly into waterways. Examples of treatment processes used for stormwater include retention basins, wetlands, buried vaults with various kinds of media filters, and vortex separators (to remove coarse solids).
Process overview
Sewage can be treated close to where it is created (a decentralised system, using septic tanks, biofilters or aerobic treatment systems), or collected and transported via a network of pipes and pump stations to a municipal treatment plant (a centralised system; see sewerage and pipes and infrastructure). Sewage collection and treatment is typically subject to local, state and federal regulations and standards. Industrial sources of wastewater often require specialized treatment processes (see Industrial wastewater treatment).

Sewage treatment generally involves three stages, called primary, secondary and tertiary treatment. Primary treatment consists of temporarily holding the sewage in a quiescent basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface. The settled and floating materials are removed, and the remaining liquid may be discharged or subjected to secondary treatment. Secondary treatment removes dissolved and suspended biological matter. Secondary treatment is typically performed by indigenous, water-borne micro-organisms in a managed habitat. Secondary treatment may require a separation process to remove the micro-organisms from the treated water prior to discharge or tertiary treatment. Tertiary treatment is sometimes defined as anything more than primary and secondary treatment, applied to allow discharge into a highly sensitive or fragile ecosystem (estuaries, low-flow rivers, coral reefs, and so on).
Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, green way or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes.
Figure: Process flow diagram for a typical treatment plant using subsurface flow constructed wetlands (SFCW).
Pre-treatment
Pre-treatment removes materials that can be easily collected from the raw wastewater, such as trash, tree limbs, and leaves, before they damage or clog the pumps and skimmers of the primary treatment clarifiers.
Screening
The influent sewage water is screened to remove all large objects, such as cans, rags, sticks, and plastic packets, carried in the sewage stream.[5] This is most commonly done with an automated, mechanically raked bar screen in modern plants serving large populations, whilst in smaller or less modern plants a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or the flow rate. The solids are collected and later disposed of in a landfill or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant and can cause substantial damage and inefficiency in the process.[6]:9
Grit removal
Pre-treatment may include a sand or grit channel or chamber where the velocity of the incoming wastewater is adjusted to allow the settlement of sand, grit, stones, and broken glass. These particles are removed because
they may damage pumps and other equipment. For small sanitary sewer systems, the grit chambers may not be necessary, but grit removal is desirable at larger plants.[6]:10
Flow equalization
Clarifiers and mechanized secondary treatment are more efficient under uniform flow conditions. Equalization basins may be used for temporary storage of diurnal or wet-weather flow peaks. Basins provide a place to temporarily hold incoming sewage during plant maintenance and a means of diluting and distributing batch discharges of toxic or high-strength waste which might otherwise inhibit biological secondary treatment (including portable toilet waste, vehicle holding tanks, and septic tank pumpers). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators. Cleaning may be easier if the basin is downstream of screening and grit removal.[7]
Primary treatment
In the primary sedimentation stage, sewage flows through large tanks, commonly called "primary clarifiers" or "primary sedimentation tanks." The tanks are used to settle sludge while grease and oils rise to the surface and are skimmed off. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank, from which it is pumped to sludge treatment facilities.[6]:9-11 Grease and oil from the floating material can sometimes be recovered for saponification.
The dimensions of the tank should be designed to effect removal of a high percentage of the floatables and sludge. A typical sedimentation tank may remove from 50 to 70 percent of suspended solids, and from 30 to 35 percent of biochemical oxygen demand (BOD) from the sewage.
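For illustration, those quoted removal ranges can be applied to assumed influent concentrations; the raw-sewage values below are typical textbook assumptions, not figures from this article:

```python
# Effluent concentration after primary settling, given a removal fraction.
def primary_effluent(influent_mg_per_l: float, removal: float) -> float:
    return influent_mg_per_l * (1.0 - removal)

raw_tss = 220.0  # assumed raw suspended solids, mg/L
raw_bod = 200.0  # assumed raw BOD, mg/L

# 50-70% suspended solids removal and 30-35% BOD removal, as quoted above.
print(primary_effluent(raw_tss, 0.50), primary_effluent(raw_tss, 0.70))  # 110.0 66.0
print(primary_effluent(raw_bod, 0.30), primary_effluent(raw_bod, 0.35))  # 140.0 130.0
```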
Secondary treatment
Secondary treatment is designed to substantially degrade the biological content of the sewage, which is derived from human waste, food waste, soaps and detergent. The majority of municipal plants treat the settled sewage liquor using aerobic biological processes. To be effective, the biota require both oxygen and food to live. The bacteria and protozoa consume biodegradable soluble organic contaminants (e.g. sugars, fats, organic short-chain carbon molecules, etc.) and bind much of the less soluble fractions into floc.

Secondary treatment systems are classified as fixed-film or suspended-growth systems. Fixed-film or attached-growth systems include trickling filters, moving bed biofilm reactors (MBBR), and rotating biological contactors, where the biomass grows on media and the sewage passes over its surface. Suspended-growth systems include activated sludge, where the biomass is mixed with the sewage, and which can be operated in a smaller space than fixed-film systems that treat the same amount of water. However, fixed-film systems are better able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended-growth systems.[6]:11-13

Roughing filters are intended to treat particularly strong or variable organic loads, typically industrial, to allow them to then be treated by conventional secondary treatment processes. They consist of filters filled with media to which wastewater is applied, designed to allow high hydraulic loading and a high level of aeration. On larger installations, air is forced through the media using blowers. The resultant wastewater is usually within the normal range for conventional treatment processes.
A filter removes a small percentage of the suspended organic matter, while the majority of the organic matter undergoes a change of character owing to the biological oxidation and nitrification taking place in the filter. With this aerobic oxidation and nitrification, the organic solids are converted into coagulated suspended mass, which is heavier and bulkier and can settle to the bottom of a tank. The effluent of the filter is therefore passed through a sedimentation tank, called a secondary clarifier, secondary settling tank or humus tank.
Activated sludge
Main article: Activated sludge

In general, activated sludge plants encompass a variety of mechanisms and processes that use dissolved oxygen to promote the growth of biological floc that substantially removes organic material.[6]:12-13 The process traps particulate material and can, under ideal conditions, convert ammonia to nitrite and nitrate, and ultimately to nitrogen gas. (See also denitrification.)
Surface-aerated basins (lagoons)

Surface aerators do not provide as thorough mixing as is normally achieved in activated sludge systems, and therefore aerated basins do not achieve the same performance level as activated sludge units.[9] Biological oxidation processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface-aerated vessels operate at between 4 °C and 32 °C.[9]
Constructed wetlands
Constructed wetlands (which can be surface flow or subsurface flow, horizontal or vertical flow) include engineered reedbeds and belong to the family of phytorestoration and ecotechnologies; they provide a high degree of biological improvement and, depending on design, act as a primary, secondary and sometimes tertiary treatment (see also phytoremediation). One example is a small reedbed used to clean the drainage from the elephants' enclosure at Chester Zoo in England; numerous constructed wetlands are used to recycle the water of the city of Honfleur in France and numerous other towns in Europe, the US, Asia and Australia. They are known to be highly productive systems, as they copy natural wetlands, called the "kidneys of the earth" for their fundamental recycling capacity in the hydrological cycle of the biosphere. Robust and reliable, their treatment capacities improve as time goes by, unlike conventional treatment plants, whose machinery ages with time. They are being increasingly used, although adequate and experienced design is more critical than for other systems, and space limitations may impede their use.
Soil Bio-Technology
Main article: Soil Bio-Technology

A new process called Soil Bio-Technology (SBT) developed at IIT Bombay has shown tremendous improvements in process efficiency, enabling total water reuse, due to extremely low operating power requirements of less than 50 joules per kg of treated water.[10] Typically, SBT systems can achieve chemical oxygen demand (COD) levels of less than 10 mg/L from sewage input with a COD of 400 mg/L.[11] SBT plants exhibit high reductions in COD values and bacterial counts as a result of the very high microbial densities available in the media. Unlike conventional treatment plants, SBT plants produce insignificant amounts of sludge, precluding the need for the sludge disposal areas required by other technologies.[12]

In the Indian context, conventional sewage treatment plants fall into systemic disrepair due to 1) high operating costs, 2) equipment corrosion due to methanogenesis and hydrogen sulphide, 3) non-reusability of treated water due to high COD (>30 mg/L) and high fecal coliform (>3000 NFU) counts, 4) lack of skilled operating personnel, and 5) equipment replacement issues. Examples of such systemic failures have been documented by the Sankat Mochan Foundation in the Ganges basin, where a massive cleanup effort by the Indian government in 1986, which set up sewage treatment plants under the Ganga Action Plan, failed to improve river water quality.
Rotating biological contactors

Figure: Schematic diagram of a typical rotating biological contactor (RBC). The treated effluent clarifier/settler is not included in the diagram.
Rotating biological contactors (RBCs) consist of a series of closely spaced disks mounted on a rotating shaft and partially submerged in the wastewater; the attached micro-organisms obtain oxygen from the atmosphere as the disks rotate. As the micro-organisms grow, they build up on the media until they are sloughed off due to shear forces provided by the rotating discs in the sewage. Effluent from the RBC is then passed through final clarifiers, where the micro-organisms in suspension settle as a sludge. The sludge is withdrawn from the clarifier for further treatment.

A functionally similar biological filtering system has become popular as part of home aquarium filtration and purification. The aquarium water is drawn up out of the tank and then cascaded over a freely spinning corrugated fiber-mesh wheel before passing through a media filter and back into the aquarium. The spinning mesh wheel develops a biofilm coating of microorganisms that feed on the suspended wastes in the aquarium water and are also exposed to the atmosphere as the wheel rotates. This is especially good at removing the waste urea and ammonia excreted into the aquarium water by the fish and other animals.
Membrane bioreactors
Membrane bioreactors (MBR) combine activated sludge treatment with a membrane liquid-solid separation process. The membrane component uses low-pressure microfiltration or ultrafiltration membranes and eliminates the need for clarification and tertiary filtration. The membranes are typically immersed in the aeration tank; however, some applications utilize a separate membrane tank. One of the key benefits of an MBR system is that it effectively overcomes the limitations associated with poor settling of sludge in conventional activated sludge (CAS) processes. The technology permits bioreactor operation with a considerably higher mixed liquor suspended solids (MLSS) concentration than CAS systems, which are limited by sludge settling. The process is typically operated at MLSS in the range of 8,000–12,000 mg/L, while CAS is operated in the range of 2,000–3,000 mg/L. The elevated biomass concentration in the MBR process allows for very effective removal of both soluble and particulate biodegradable materials at higher loading rates. The resulting increased sludge retention times, usually exceeding 15 days, ensure complete nitrification even in extremely cold weather.

The cost of building and operating an MBR is usually higher than conventional wastewater treatment. Membrane filters can be blinded with grease or abraded by suspended grit, and they lack a clarifier's flexibility to pass peak flows. However, the technology has become increasingly popular for reliably pretreated waste streams, has gained wider acceptance where infiltration and inflow have been controlled, and life-cycle costs have been steadily decreasing. The small footprint of MBR systems, and the high quality of the effluent produced, make them particularly useful for water reuse applications.[13]
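One way to see why the higher MLSS permits longer sludge retention is the standard definition of solids retention time, SRT = V·X / (Qw·Xw) (reactor solids inventory divided by solids wasted per day). The sketch below uses assumed, illustrative numbers, not values from this article:

```python
# Solids retention time (SRT) in days: solids held in the reactor divided by
# solids wasted per day. All numeric inputs below are assumptions.
def srt_days(volume_m3: float, mlss_mg_per_l: float,
             waste_flow_m3_per_d: float, waste_solids_mg_per_l: float) -> float:
    return (volume_m3 * mlss_mg_per_l) / (waste_flow_m3_per_d * waste_solids_mg_per_l)

# MBR: compact tank, MLSS in the 8,000-12,000 mg/L range quoted above.
print(srt_days(1000, 10000, 50, 10000))  # -> 20.0 days
# CAS: larger tank, MLSS limited to roughly 2,000-3,000 mg/L by settling.
print(srt_days(4000, 2500, 200, 8000))   # -> 6.25 days
```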
Secondary sedimentation
The final step in the secondary treatment stage is to settle out the biological floc or filter material through a secondary clarifier and to produce sewage water containing low levels of organic material and suspended matter.
Tertiary treatment
The purpose of tertiary treatment is to provide a final treatment stage to raise the effluent quality before it is discharged to the receiving environment (sea, river, lake, ground, etc.). More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called "effluent polishing."
Filtration
Sand filtration removes much of the residual suspended matter.[6]:22-23 Filtration over activated carbon, also called carbon adsorption, removes residual toxins.[6]:19
Lagooning
Lagooning provides settlement and further biological improvement through storage in large man-made ponds or lagoons. These lagoons are highly aerobic and colonization by native macrophytes, especially reeds, is often encouraged. Small filter feeding invertebrates such as Daphnia and species of Rotifera greatly assist in treatment by removing fine particulates.
Nutrient removal
Wastewater may contain high levels of the nutrients nitrogen and phosphorus. Excessive release to the environment can lead to a buildup of nutrients, called eutrophication, which can in turn encourage the overgrowth of weeds, algae, and cyanobacteria (blue-green algae). This may cause an algal bloom, a rapid growth in the population of algae. The algae numbers are unsustainable and eventually most of them die. The decomposition of the algae by bacteria uses up so much of the oxygen in the water that most or all of the animals die, which creates more organic matter for the bacteria to decompose. In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies. Different treatment processes are required to remove nitrogen and phosphorus.
Nitrogen removal
The removal of nitrogen is effected through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water.

Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonia (NH3) to nitrite (NO2−) is most often facilitated by Nitrosomonas spp. ("nitroso" referring to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. ("nitro" referring to the formation of a nitro functional group), is now known to be facilitated in the environment almost exclusively by Nitrospira spp.

Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. It is facilitated by a wide diversity of bacteria. Sand filters, lagooning and reed beds can all be used to reduce nitrogen, but the activated sludge process (if designed well) can do the job most easily.[6]:17-18 Since denitrification is the reduction of nitrate to dinitrogen gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from faeces), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (a mixture of recirculated mixed liquor, return activated sludge [RAS], and raw influent), e.g. by using submersible mixers, in order to achieve the desired denitrification.

Sometimes the conversion of toxic ammonia to nitrate alone is referred to as tertiary treatment. Many sewage treatment plants use axial flow pumps to transfer the nitrified mixed liquor from the aeration zone to the anoxic zone for denitrification. These pumps are often referred to as internal mixed liquor recycle (IMLR) pumps.
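In simplified stoichiometric form, the two nitrification steps and a denitrification reaction (shown here with methanol as the added electron donor) are standard textbook equations:

```latex
% Nitrification, step 1 (e.g. Nitrosomonas) and step 2 (e.g. Nitrospira)
2\,\mathrm{NH_4^+} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{NO_2^-} + 4\,\mathrm{H^+} + 2\,\mathrm{H_2O}
2\,\mathrm{NO_2^-} + \mathrm{O_2} \rightarrow 2\,\mathrm{NO_3^-}

% Denitrification with methanol as the electron donor
6\,\mathrm{NO_3^-} + 5\,\mathrm{CH_3OH} \rightarrow 3\,\mathrm{N_2} + 5\,\mathrm{CO_2} + 7\,\mathrm{H_2O} + 6\,\mathrm{OH^-}
```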
Phosphorus removal
Each person excretes between 200 and 1,000 grams of phosphorus annually. Studies of United States sewage in the late 1960s estimated mean per capita contributions of 500 grams in urine and feces, 1,000 grams in synthetic detergents, and lesser variable amounts used as corrosion and scale control chemicals in water supplies.[14] Source control via alternative detergent formulations has subsequently reduced the largest contribution, but the content of urine and feces will remain unchanged.

Phosphorus removal is important, as phosphorus is a limiting nutrient for algae growth in many fresh water systems. (For a description of the negative effects of algae, see Nutrient removal.) It is also particularly important for water reuse systems, where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis.

Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass). When the biomass enriched in these bacteria is separated from the treated water, these biosolids have a high fertilizer value.

Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride), aluminum (e.g. alum), or lime.[6]:18 This may lead to excessive sludge production as hydroxides precipitate, and the added chemicals can be expensive. Chemical phosphorus removal requires a significantly smaller equipment footprint than biological removal, is easier to operate, and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite. Once removed, phosphorus, in the form of a phosphate-rich sludge, may be stored in a landfill or resold for use in fertilizer.
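For the ferric chloride route mentioned above, the simplified precipitation reaction is as follows (a textbook idealization; actual dosing must also account for competing hydroxide formation):

```latex
% Simplified phosphate precipitation with ferric chloride
\mathrm{FeCl_3} + \mathrm{PO_4^{3-}} \rightarrow \mathrm{FePO_4}\!\downarrow + 3\,\mathrm{Cl^-}
```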
Disinfection
The purpose of disinfection in the treatment of wastewater is to substantially reduce the number of microorganisms in the water to be discharged back into the environment for later use for drinking, bathing, irrigation, etc. The effectiveness of disinfection depends on the quality of the water being treated (e.g., cloudiness, pH), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Cloudy water will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light or if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite.[6]:16 Chloramine, which is used for drinking water, is not used in wastewater treatment because of its persistence. After multiple steps of disinfection, the treated water is ready to be released back into the water cycle via the nearest body of water, or to be used for agriculture. Afterwards, the water can be transferred to reserves for everyday human uses.

Chlorination remains the most common form of wastewater disinfection in North America due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated organic compounds that may be carcinogenic or harmful to the environment. Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment.

Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement, and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In the United Kingdom, UV light is becoming the most common means of disinfection because of concerns about the impacts of chlorine in chlorinating residual organics in the wastewater and in chlorinating organics in the receiving water. Some sewage treatment systems in Canada and the US also use UV light for their effluent water disinfection.[15][16]

Ozone (O3) is generated by passing oxygen (O2) through a high-voltage potential, resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine, which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated onsite as needed. Ozonation also produces fewer
disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators.
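The dependence of disinfection on dosage (concentration and time) noted above is often summarized by the Chick-Watson relation, a standard empirical model not named in this article:

```latex
% Chick-Watson disinfection kinetics
\frac{N}{N_0} = e^{-k\,C^{\,n}\,t}
```

where N/N_0 is the surviving fraction of organisms, C the disinfectant concentration, t the contact time, and k and n empirically fitted constants.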
Odour control
Odours emitted by sewage treatment are typically an indication of an anaerobic or "septic" condition.[17] Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odours with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the obnoxious gases.[18] Other methods of odour control exist, including the addition of iron salts, hydrogen peroxide, calcium nitrate, etc. to manage hydrogen sulfide levels.
Package plants and batch reactors
To use less space, treat difficult waste, and handle intermittent flows, a number of designs of hybrid treatment plants have been produced. Such plants often combine at least two stages of the three main treatment stages into one combined stage. In the UK, where a large number of wastewater treatment plants serve small populations, package plants are a viable alternative to building a large structure for each process stage. In the US, package plants are typically used in rural areas, highway rest stops and trailer parks.[19]

One type of system that combines secondary treatment and settlement is the sequencing batch reactor (SBR). Typically, activated sludge is mixed with raw incoming sewage, and then mixed and aerated. The settled sludge is run off and re-aerated before a proportion is returned to the headworks.[20] SBR plants are now being deployed in many parts of the world. The disadvantage of the SBR process is that it requires precise control of timing, mixing and aeration. This precision is typically achieved with computer controls linked to sensors. Such a complex, fragile system is unsuited to places where controls may be unreliable or poorly maintained, or where the power supply may be intermittent. Extended aeration package plants use separate basins for aeration and settling, and are somewhat larger than SBR plants, with reduced timing sensitivity.[21]

Package plants may be referred to as high charged or low charged. This refers to the way the biological load is processed. In high charged systems, the biological stage is presented with a high organic load, and the combined floc and organic material is then oxygenated for a few hours before being charged again with a new load. In the low charged system, the biological stage contains a low organic load and is combined with flocculate for longer times.
Sludge treatment and disposal
Main article: Sewage sludge treatment

The sludges accumulated in a wastewater treatment process must be treated and disposed of in a safe and effective manner. The purpose of digestion is to reduce the amount of organic matter and the number of disease-causing microorganisms present in the solids. The most common treatment options include anaerobic digestion, aerobic digestion, and composting. Incineration is also used, albeit to a much lesser degree.[6]:19-21 Sludge treatment depends on the amount of solids generated and other site-specific conditions. Composting is most often applied to small-scale plants, aerobic digestion to mid-sized operations, and anaerobic digestion to larger-scale operations.
Anaerobic digestion
Main article: Anaerobic digestion

Anaerobic digestion is a bacterial process that is carried out in the absence of oxygen. The process can be either thermophilic digestion, in which sludge is fermented in tanks at a temperature of 55 °C, or mesophilic, at a temperature of around 36 °C. Though allowing shorter retention times (and thus smaller tanks), thermophilic digestion is more expensive in terms of energy consumption for heating the sludge. Anaerobic digestion is the most common (mesophilic) treatment of domestic sewage in septic tanks, which normally retain the sewage for one to two days, reducing the BOD by about 35 to 40 percent. This reduction can be increased with a combination of anaerobic and aerobic treatment by installing Aerobic Treatment Units (ATUs) in the septic tank. One major feature of anaerobic digestion is the production of biogas (with the most useful component being methane), which can be used in generators for electricity production and/or in boilers for heating purposes.
Aerobic digestion
Aerobic digestion is a bacterial process occurring in the presence of oxygen. Under aerobic conditions, bacteria rapidly consume organic matter and convert it into carbon dioxide. The operating costs used to be characteristically much greater for aerobic digestion because of the energy used by the blowers, pumps and motors needed to add oxygen to the process. Aerobic digestion can also be achieved by using diffuser systems or jet aerators to oxidize the sludge. Fine-bubble diffusers are typically the more cost-efficient diffusion method; however, plugging is typically a problem due to sediment settling into the smaller air holes. Coarse-bubble diffusers are more commonly used in activated sludge tanks (generally a side process in wastewater management) or in the flocculation stages. A key consideration in selecting a diffuser type is to ensure it will produce the required oxygen transfer rate.
Composting
Composting is also an aerobic process that involves mixing the sludge with sources of carbon such as sawdust, straw or wood chips. In the presence of oxygen, bacteria digest both the wastewater solids and the added carbon source and, in doing so, produce a large amount of heat.[6]:20
Incineration
Incineration of sludge is less common because of air emissions concerns and the supplemental fuel (typically natural gas or fuel oil) required to burn the low-calorific-value sludge and vaporize residual water. Stepped multiple-hearth incinerators with high residence time and fluidized bed incinerators are the most common systems used to combust wastewater sludge. Co-firing in municipal waste-to-energy plants is occasionally done; this option is less expensive assuming the facilities already exist for solid waste and there is no need for auxiliary fuel.[6]:20-21
Sludge disposal
When a liquid sludge is produced, further treatment may be required to make it suitable for final disposal. Typically, sludges are thickened (dewatered) to reduce the volumes transported off-site for disposal. There is no process which completely eliminates the need to dispose of biosolids. There is, however, an additional step some cities are taking to superheat sludge and convert it into small pelletized granules that are high in nitrogen and other organic materials. In New York City, for example, several sewage treatment plants have dewatering facilities that use large centrifuges along with the addition of chemicals such as polymer to further remove liquid from the sludge. The removed fluid, called centrate, is typically reintroduced into the wastewater process. The product which is left is called "cake," and that is picked up by companies which turn it into fertilizer pellets. This product is then sold to local farmers and turf farms as a soil amendment or fertilizer, reducing the amount of space required to dispose of sludge in landfills. Much sludge originating from commercial or industrial areas is contaminated with toxic materials that are released into the sewers from industrial processes.[22] Elevated concentrations of such materials may make the sludge unsuitable for agricultural use, and it may then have to be incinerated or disposed of to landfill.
Treatment in the receiving environment
Figure: The outlet of the Karlsruhe sewage treatment plant flows into the Alb.
Many processes in a wastewater treatment plant are designed to mimic the natural treatment processes that occur in the environment, whether that environment is a natural water body or the ground. If not overloaded, bacteria in the environment will consume organic contaminants, although this will reduce the levels of oxygen in the water and may significantly change the overall ecology of the receiving water. Native bacterial populations feed on the organic contaminants, and the numbers of disease-causing microorganisms are reduced by natural
environmental conditions such as predation or exposure to ultraviolet radiation. Consequently, in cases where the receiving environment provides a high level of dilution, a high degree of wastewater treatment may not be required. However, recent evidence has demonstrated that very low levels of specific contaminants in wastewater, including hormones (from animal husbandry and residue from human hormonal contraception methods) and synthetic materials such as phthalates that mimic hormones in their action, can have an unpredictable adverse impact on the natural biota and potentially on humans if the water is re-used for drinking water.[23] In the US and EU, uncontrolled discharges of wastewater to the environment are not permitted under law, and strict water quality requirements are to be met, as clean drinking water is essential. (For requirements in the US, see Clean Water Act.) A significant threat in the coming decades will be the increasing uncontrolled discharges of wastewater within rapidly developing countries.
Effects on biology
Sewage treatment plants can have multiple effects on nutrient levels in the water that the treated sewage flows into. These effects on nutrients can have large effects on the biological life in the water in contact with the effluent. Stabilization ponds (or treatment ponds) can include any of the following:

- Oxidation ponds, which are aerobic bodies of water usually 1–2 meters in depth that receive effluent from sedimentation tanks or other forms of primary treatment. Dominated by algae.
- Polishing ponds, which are similar to oxidation ponds but receive effluent from an oxidation pond or from a plant with extended mechanical treatment. Dominated by zooplankton.
- Facultative lagoons, raw sewage lagoons, or sewage lagoons, which are ponds where sewage is added with no primary treatment other than coarse screening. These ponds provide effective treatment when the surface remains aerobic, although anaerobic conditions may develop near the layer of settled sludge on the bottom of the pond.[24]
- Anaerobic lagoons, which are heavily loaded ponds. Dominated by bacteria.
- Sludge lagoons, which are aerobic ponds, usually 2–5 meters in depth, that receive anaerobically digested primary sludge or activated secondary sludge under water. Upper layers are dominated by algae.[25]

Phosphorus limitation is a possible result of sewage treatment and results in flagellate-dominated plankton, particularly in summer and fall.[26] At the same time, a different study found high nutrient concentrations linked to sewage effluents. High nutrient concentration leads to high chlorophyll a concentrations, which is a proxy for primary production in marine environments. High primary production means high phytoplankton populations and most likely high zooplankton populations, because
zooplankton feed on phytoplankton. However, effluent released into marine systems also leads to greater population instability.[27]

A study done in Britain found that the quality of effluent affected the planktonic life in the water in direct contact with the wastewater effluent. Turbid, low-quality effluents either did not contain ciliated protozoa or contained only a few species in small numbers. On the other hand, high-quality effluents contained a wide variety of ciliated protozoa in large numbers. Given these findings, it seems unlikely that any particular component of the industrial effluent has, by itself, any harmful effects on the protozoan populations of activated sludge plants.[28]

The planktonic trend of high populations close to the input of treated sewage is contrasted by the bacterial trend. In a study of Aeromonas spp. at increasing distance from a wastewater source, greater change in seasonal cycles was found farthest from the effluent. This trend is so strong that the farthest location studied actually had an inversion of the Aeromonas spp. cycle in comparison to that of fecal coliforms. Since a main pattern in the cycles occurred simultaneously at all stations, this indicates that seasonal factors (temperature, solar radiation, phytoplankton) control the bacterial population. The dominant species in the effluent changes from Aeromonas caviae in winter to Aeromonas sobria in the spring and fall, while the dominant species in the inflow is Aeromonas caviae, which is constant throughout the seasons.[29]
Sewage treatment around the world
Few reliable figures exist on the share of the wastewater collected in sewers that is being treated in the world. In many developing countries the bulk of domestic and industrial wastewater is discharged without any treatment or after primary treatment only. In Latin America about 15% of collected wastewater passes through treatment plants (with varying levels of actual treatment). In Venezuela, a below-average country in South America with respect to wastewater treatment, 97 percent of the country's sewage is discharged raw into the environment.[30] In a relatively developed Middle Eastern country such as Iran, the majority of Tehran's population has totally untreated sewage injected into the city's groundwater.[31] However, construction of the major parts of Tehran's sewage system, covering both collection and treatment, is now nearly complete, with full completion due by the end of 2012. In Israel, about 50 percent of agricultural water usage (total use was 1 billion cubic metres in 2008) is provided through reclaimed sewer water. Future plans call for increased use of treated sewer water as well as more desalination plants.[32] Most of sub-Saharan Africa is without wastewater treatment.
Eutrophication
The eutrophication of the Potomac River is evident from its bright green water, caused by a dense bloom of cyanobacteria.
Eutrophication (Greek: eutrophia, "healthy, adequate nutrition, development"; German: Eutrophie) is the movement of a body of water's trophic status in the direction of increasing plant biomass, by the addition of artificial or natural substances, such as nitrates and phosphates, through fertilizers or sewage, to an aquatic system.[1] In other terms, it is the "bloom" or great increase of phytoplankton in a water body. Negative environmental effects include hypoxia, the depletion of oxygen in the water, which induces reductions in specific fish and other animal populations. Other species (such as Nemopilema nomurai jellyfish in Japanese waters) may experience an increase in population that negatively affects other species.
Contents
1 Lakes and rivers 2 Ocean waters 3 Terrestrial ecosystems 4 Ecological effects 4.1 Decreased biodiversity 4.2 New species invasion 4.3 Toxicity
5 Sources of high nutrient runoff 5.1 Point sources 5.2 Nonpoint sources 5.2.1 Soil retention 5.2.2 Runoff to surface water and leaching to groundwater 5.2.3 Atmospheric deposition
6 Prevention and reversal 6.1 Effectiveness 6.2 Minimizing nonpoint pollution: future work 6.2.1 Riparian buffer zones 6.2.2 Prevention policy 6.2.3 Nitrogen testing and modeling 6.2.4 Organic farming
Lakes and rivers
Eutrophication can be human-caused or natural. Untreated sewage effluent and agricultural run-off carrying fertilizers are examples of human-caused eutrophication. However, it also occurs naturally in situations where nutrients accumulate (e.g. depositional environments), or where they flow into systems on an ephemeral basis. Eutrophication generally promotes excessive plant growth and decay, favouring simple algae and plankton over other more complicated plants, and causes a severe reduction in water quality. Enhanced growth of aquatic vegetation or phytoplankton and algal blooms disrupts normal functioning of the ecosystem, causing a variety of problems such as a lack of oxygen needed for fish and shellfish to survive. The water becomes cloudy, typically coloured a shade of green, yellow, brown, or red. Eutrophication also decreases the value of rivers, lakes, and estuaries for recreation, fishing, hunting, and aesthetic enjoyment. Health problems can occur where eutrophic conditions interfere with drinking water treatment.[2]

Eutrophication was recognized as a pollution problem in European and North American lakes and reservoirs in the mid-20th century.[3] Since then, it has become more widespread. Surveys showed that 54% of lakes in Asia are eutrophic; in Europe, 53%; in North America, 48%; in South America, 41%; and in Africa, 28%.[4]

Although eutrophication is commonly caused by human activities, it can also be a natural process, particularly in lakes. Eutrophy occurs in many lakes in temperate grasslands, for instance. Paleolimnologists now recognise that climate change, geology, and other external influences are critical in regulating the natural productivity of lakes. Some lakes also demonstrate the reverse process (meiotrophication), becoming less nutrient rich with time.[5][6] Eutrophication can also be a natural process in seasonally inundated tropical floodplains. In the Barotse Floodplain of the Zambezi River, the first floodwaters of the rainy season are usually hypoxic because of material such as cattle manure and previous decay of vegetation which grew during the dry season. These so-called "red waters" kill many fish.[7] The process can be made worse by the use of fertilizers in crops such as maize, rice, and sugarcane grown on the floodplain.
Human activities can accelerate the rate at which nutrients enter ecosystems. Runoff from agriculture and development, pollution from septic systems and sewers, and other human-related activities increase the flow of both inorganic nutrients and organic substances into ecosystems. Elevated levels of atmospheric compounds of nitrogen can increase nitrogen availability. Phosphorus is often regarded as the main culprit in cases of eutrophication in lakes subjected to "point source" pollution from sewage pipes. The concentration of algae and the trophic state of lakes correspond well to phosphorus levels in water. Studies conducted in the Experimental Lakes Area in Ontario have shown a relationship between the addition of phosphorus and the rate of eutrophication. Humankind has increased the rate of phosphorus cycling on Earth by four times, mainly due to agricultural fertilizer production and application. Between 1950 and 1995, an estimated 600,000,000 tonnes of phosphorus were applied to Earth's surface, primarily on croplands.[8] Policy changes to control point sources of phosphorus have resulted in rapid control of eutrophication.
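The scale of the phosphorus figure above is easier to grasp as an annual rate. The following back-of-envelope sketch (our own arithmetic on the cited total, not a figure from the source; variable names are illustrative) divides the 1950–1995 quantity by the interval:

```python
# Hypothetical back-of-envelope check: average annual phosphorus application
# implied by the 1950-1995 figure cited above [8].
total_p_tonnes = 600_000_000   # tonnes applied between 1950 and 1995
years = 1995 - 1950            # 45-year interval

avg_annual_tonnes = total_p_tonnes / years
print(f"Average application: {avg_annual_tonnes / 1e6:.1f} million tonnes/year")
# -> roughly 13.3 million tonnes of phosphorus per year, mostly on croplands
```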
Ocean waters
Eutrophication is a common phenomenon in coastal waters. In contrast to freshwater systems, nitrogen is more commonly the key limiting nutrient of marine waters; thus, nitrogen levels have greater importance to understanding eutrophication problems in salt water. Estuaries tend to be naturally eutrophic because land-derived nutrients are concentrated where run-off enters a confined channel. Upwelling in coastal systems also promotes increased productivity by conveying deep, nutrient-rich waters to the surface, where the nutrients can be assimilated by algae. The World Resources Institute has identified 375 hypoxic coastal zones in the world, concentrated in coastal areas in Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly Japan.[9] In addition to runoff from land, atmospheric fixed nitrogen can enter the open ocean. A study in 2008 found that this could account for around one third of the ocean's external (non-recycled) nitrogen supply, and up to 3% of the annual new marine biological production.[10] It has been suggested that accumulating reactive nitrogen in the environment may prove as serious as putting carbon dioxide in the atmosphere.[11]
Terrestrial ecosystems
Terrestrial ecosystems are subject to similarly adverse impacts from eutrophication.[12] Increased nitrates in soil are frequently undesirable for plants. Many terrestrial plant species are endangered as a result of soil eutrophication, such as the majority of orchid species in Europe.[13] Meadows, forests, and bogs are characterized by low nutrient content and slowly growing species adapted to those levels, so they can be overgrown by faster growing and more competitive species. In meadows, tall grasses that can take advantage of higher nitrogen levels may change the area so that natural species may be lost. Species-rich fens can be overtaken by reed or reedgrass species. Forest undergrowth affected by run-off from a nearby fertilized field can be turned into a nettle and bramble thicket. Chemical forms of nitrogen are most often of concern with regard to eutrophication because plants have high nitrogen requirements, so additions of nitrogen compounds stimulate plant growth. Nitrogen is not readily available in soil because N2, a gaseous form of nitrogen, is very stable and unavailable directly to higher
plants. Terrestrial ecosystems rely on microbial nitrogen fixation to convert N2 into other forms such as nitrates. However, there is a limit to how much nitrogen can be utilized. Ecosystems receiving more nitrogen than the plants require are called nitrogen-saturated. Saturated terrestrial ecosystems then can contribute both inorganic and organic nitrogen to freshwater, coastal, and marine eutrophication, where nitrogen is also typically a limiting nutrient.[14] This is also the case with increased levels of phosphorus. However, because phosphorus is generally much less soluble than nitrogen, it is leached from the soil at a much slower rate than nitrogen. Consequently, phosphorus is much more important as a limiting nutrient in aquatic systems.[15]
Ecological effects
Eutrophication is apparent as increased turbidity in the northern part of the Caspian Sea, imaged from orbit.
Many ecological effects can arise from stimulating primary production, but there are three particularly troubling ecological impacts: decreased biodiversity, changes in species composition and dominance, and toxicity effects. Documented effects include:
Increased biomass of phytoplankton
Toxic or inedible phytoplankton species
Increases in blooms of gelatinous zooplankton
Decreased biomass of benthic and epiphytic algae
Changes in macrophyte species composition and biomass
Decreases in water transparency (increased turbidity)
Colour, smell, and water treatment problems
Dissolved oxygen depletion
Increased incidences of fish kills
Loss of desirable fish species
Reductions in harvestable fish and shellfish
Decreases in perceived aesthetic value of the water body
Decreased biodiversity
When an ecosystem experiences an increase in nutrients, primary producers reap the benefits first. In aquatic ecosystems, species such as algae experience a population increase (called an algal bloom). Algal blooms limit the sunlight available to bottom-dwelling organisms and cause wide swings in the amount of dissolved oxygen in the water. Oxygen is required by all aerobically respiring plants and animals, and it is replenished in daylight by photosynthesizing plants and algae. Under eutrophic conditions, dissolved oxygen greatly increases during the day, but is greatly reduced after dark by the respiring algae and by microorganisms that feed on the increasing mass of dead algae. When dissolved oxygen levels decline to hypoxic levels, fish and other marine animals suffocate. As a result, creatures such as fish, shrimp, and especially immobile bottom dwellers die off.[16] In extreme cases, anaerobic conditions ensue, promoting growth of bacteria such as Clostridium botulinum that produces toxins deadly to birds and mammals. Zones where this occurs are known as dead zones.
New species invasion
Eutrophication may cause competitive release by making abundant a normally limiting nutrient. This process causes shifts in the species composition of ecosystems. For instance, an increase in nitrogen might allow new, competitive species to invade and out-compete original inhabitant species. This has been shown to occur[17] in New England salt marshes.
Toxicity
Some algal blooms, otherwise called "nuisance algae" or "harmful algal blooms", are toxic to plants and animals. Toxic compounds they produce can make their way up the food chain, resulting in animal mortality.[18] Freshwater algal blooms can pose a threat to livestock. When the algae die or are eaten, neurotoxins and hepatotoxins are released which can kill animals and may pose a threat to humans.[19][20] An example of algal toxins working their way into humans is the case of shellfish poisoning.[21] Biotoxins created during algal blooms are taken up by shellfish (mussels, oysters), leading to these human foods acquiring the toxicity and poisoning humans. Examples include paralytic, neurotoxic, and diarrhoetic shellfish poisoning. Other marine animals can be vectors for such toxins, as in the case of ciguatera, where it is typically a predator fish that accumulates the toxin and then poisons humans.
Sources of high nutrient runoff
Characteristics of point and nonpoint sources of chemical inputs (modified from Novotny and Olem 1994)[8]

Point sources:
Wastewater effluent (municipal and industrial)
Runoff and leachate from waste disposal systems
Runoff and infiltration from animal feedlots
Runoff from mines, oil fields, and unsewered industrial sites
Overflows of combined storm and sanitary sewers
Runoff from construction sites smaller than 20,000 m² (220,000 sq ft)
Untreated sewage

Nonpoint sources:
Runoff from agriculture/irrigation
Runoff from pasture and range
Urban runoff from unsewered areas
Septic tank leachate
Runoff from construction sites larger than 20,000 m²
Runoff from abandoned mines
Atmospheric deposition over a water surface
Other land activities generating contaminants
In order to gauge how to best prevent eutrophication from occurring, specific sources that contribute to nutrient loading must be identified. There are two common sources of nutrients and organic matter: point and nonpoint sources.
Point sources
Point sources are directly attributable to one influence. In point sources the nutrient waste travels directly from source to water. Point sources are relatively easy to regulate.
Nonpoint sources
Nonpoint source pollution (also known as 'diffuse' or 'runoff' pollution) is that which comes from ill-defined and diffuse sources. Nonpoint sources are difficult to regulate and usually vary spatially and temporally (with season, precipitation, and other irregular events). It has been shown that nitrogen transport is correlated with various indices of human activity in watersheds,[22][23] including the amount of development.[17] Agriculture and development are the activities that contribute most to nutrient loading. There are three reasons that nonpoint sources are especially troublesome:[15]
Soil retention
Nutrients from human activities tend to accumulate in soils and remain there for years. It has been shown[24] that the amount of phosphorus lost to surface waters increases linearly with the amount of phosphorus in the soil. Thus much of the nutrient loading in soil eventually makes its way to water. Nitrogen, similarly, has a turnover time of decades or more.
Atmospheric deposition
Nitrogen is released into the air because of ammonia volatilization and nitrous oxide production. The combustion of fossil fuels is a large human-initiated contributor to atmospheric nitrogen pollution. Atmospheric deposition (e.g., in the form of acid rain) can also affect nutrient concentration in water,[26] especially in highly industrialized regions.
Other causes
Any factor that causes increased nutrient concentrations can potentially lead to eutrophication. In modeling eutrophication, the rate of water renewal plays a critical role; stagnant water can accumulate more nutrients than bodies with regularly replenished water supplies. It has also been shown that the drying of wetlands causes an increase in nutrient concentration and subsequent eutrophication blooms.[27]
Prevention and reversal
Eutrophication poses a problem not only to ecosystems, but to humans as well. Reducing eutrophication should be a key concern when considering future policy, and a sustainable solution for everyone, including farmers and ranchers, seems feasible. While eutrophication does pose problems, natural runoff (which causes algal blooms in the wild) is common in ecosystems, so remediation should not push nutrient concentrations below normal levels.
Effectiveness
Cleanup measures have been mostly, but not completely, successful. Finnish phosphorus removal measures started in the mid-1970s and have targeted rivers and lakes polluted by industrial and municipal discharges. These efforts have had a 90% removal efficiency.[28] Still, some targeted point sources did not show a decrease in runoff despite reduction efforts.
Minimizing nonpoint pollution: future work
Nonpoint pollution is the most difficult source of nutrients to manage. The literature suggests, though, that when these sources are controlled, eutrophication decreases. The following steps are recommended to minimize the amount of pollution that can enter aquatic ecosystems from diffuse sources.
Prevention policy
Laws regulating the discharge and treatment of sewage have led to dramatic nutrient reductions to surrounding ecosystems,[15] but it is generally agreed that a policy regulating agricultural use of fertilizer and animal waste must also be imposed. In Japan, the amount of nitrogen produced by livestock is adequate to serve the fertilizer needs of the agriculture industry.[30] Thus, it is not unreasonable to require livestock owners to clean up animal waste, which, when left stagnant, will leach into ground water.

Policy concerning the prevention and reduction of eutrophication can be broken down into four sectors: technologies, public participation, economic instruments, and cooperation.[31] The term technology is used loosely, referring to a more widespread use of existing methods rather than an appropriation of new technologies. As mentioned before, nonpoint sources of pollution are the primary contributors to eutrophication, and their effects can be minimized through common agricultural practices. Reducing the amount of pollutants that reach a watershed can be achieved through the protection of its forest cover, reducing the amount of eroded material leaching into the watershed. Also, through the efficient, controlled use of land using sustainable agricultural practices to minimize land degradation, the amount of soil runoff and nitrogen-based fertilizers reaching a watershed can be reduced.[32]

Waste disposal technology constitutes another factor in eutrophication prevention. Because a major contributor to the nonpoint source nutrient loading of water bodies is untreated domestic sewage, it is necessary to provide treatment facilities to highly urbanized areas, particularly those in underdeveloped nations where treatment of domestic wastewater is scarce.[33] The technology to safely and efficiently reuse waste water, both from domestic and industrial sources, should be a primary concern for policy regarding eutrophication.

The role of the public is a major factor for the effective prevention of eutrophication. For a policy to have any effect, the public must be aware of their contribution to the problem and of ways in which they can reduce their effects. Programs instituted to promote participation in the recycling and elimination of wastes, as well as education on the issue of rational water use, are necessary to protect water quality within urbanized areas and adjacent water bodies.

Economic instruments, "which include, among others, property rights, water markets, fiscal and financial instruments, charge systems and liability systems, are gradually becoming a substantive component of the management tool set used for pollution control and water allocation decisions."[31] Incentives for those who practice clean, renewable water-management technologies are an effective means of encouraging pollution prevention. By internalizing the costs associated with the negative effects on the environment, governments are able to encourage cleaner water management.

Because a body of water can have an effect on a range of people reaching far beyond that of the watershed, cooperation between different organizations is necessary to prevent the intrusion of contaminants that can lead to eutrophication. Agencies ranging from state governments to water resource management bodies and nongovernmental organizations, down to the local population, are responsible for preventing eutrophication of water bodies.
Organic farming
One study found that organically fertilized fields "significantly reduce harmful nitrate leaching" compared with conventionally fertilized fields.[35] However, a more recent study found that eutrophication impacts are in some cases higher from organic production than from conventional production.[36]
"Municipal waste" redirects here. For other uses, see Municipal waste (disambiguation).
Public Infrastructure
Airport Bridge Broadband Canal Critical Dams Electricity Energy Freight Hazardous waste Hospitals Levees Lighthouses Parks Port Mass transit Public housing Public schools Public Space Rail Road Sewage Shipment Solid Waste Telecommunications Utilities Water locks Water system Wastewater
Concepts
Asset Management Appropriations Bank Benefit tax Build-Operate-Transfer Design-Build Earmark Fixed cost Engineering Contracts Externality Government debt Life cycle assessment Logistics Maintenance Monopoly Project Management Property Tax Public-private partnerships Public capital Public finance Public good Public sector Renovation Replacement Spillover effect Supply chain Taxation
Fields of Study
Architecture Civil, Electrical, Mechanical engineering Public economics Public policy Urban planning
Infrastructure Portal
This box: view talk edit
Achievements 1%5[show]
Municipal solid waste (MSW), commonly known as trash or garbage, is a waste type consisting of everyday items that are consumed and discarded. It predominantly includes food wastes, yard wastes, containers and product packaging, and other miscellaneous wastes from residential, commercial, institutional, and industrial sources.[1] Examples include appliances, newspapers, clothing, food scraps, boxes, disposable tableware, office and classroom paper, furniture, wood pallets, rubber tires, and cafeteria wastes. Municipal solid waste does not include industrial wastes, agricultural wastes, or sewage sludge.[2] Collection is performed by the municipality within a given area. The waste is in either solid or semi-solid form. The term residual waste relates to waste left from household sources containing materials that have not been separated out or sent for reprocessing.[3]

The different types of waste are as follows:
Biodegradable waste: food and kitchen waste, green waste, paper (can also be recycled).
Recyclable material: paper, glass, bottles, cans, metals, certain plastics, etc.
Inert waste: construction and demolition waste, dirt, rocks, debris.
Composite wastes: waste clothing, Tetra Paks, waste plastics such as toys.
Domestic hazardous waste (also called "household hazardous waste") and toxic waste: medication, e-waste, paints, chemicals, light bulbs, fluorescent tubes, spray cans, fertilizer and pesticide containers, batteries, shoe polish.
Contents
1 The functional elements of solid waste 1.1 Waste generation 1.2 Collection 1.3 Waste handling and separation, storage and processing at the source 1.4 Separation and processing and transformation of solid wastes 1.5 Transfer and transport 1.6 Disposal 1.7 Energy Generation
The functional elements of solid waste
The municipal solid waste industry has four components: recycling, composting, landfilling, and waste-to-energy via incineration.[4] The primary steps are generation, collection, sorting and separation, transfer, and disposal.
Waste generation
Waste generation encompasses activities in which materials are identified as no longer being of value and are either thrown out or gathered together for disposal.
Collection
The functional element of collection includes not only the gathering of solid waste and recyclable materials, but also the transport of these materials, after collection, to the location where the collection vehicle is emptied. This location may be a materials processing facility, a transfer station or a landfill disposal site.
Waste handling and separation, storage and processing at the source
Waste handling and separation involves activities associated with waste management until the waste is placed in storage containers for collection. Handling also encompasses the movement of loaded containers to the point of collection. Separating different types of waste components is an important step in the handling and storage of solid waste at the source.
Separation and processing and transformation of solid wastes
The types of means and facilities that are now used for the recovery of waste materials that have been separated at the source include curbside collection, drop off and buy back centers. The separation and processing of wastes that have been separated at the source and the separation of commingled wastes usually occur at a materials recovery facility, transfer stations, combustion facilities and disposal sites.
Transfer and transport
This element involves two main steps. First, the waste is transferred from a smaller collection vehicle to larger transport equipment. The waste is then transported, usually over long distances, to a processing or disposal site.
Disposal
Today, the disposal of wastes by land filling or land spreading is the ultimate fate of all solid wastes, whether they are residential wastes collected and transported directly to a landfill site, residual materials from materials recovery facilities (MRFs), residue from the combustion of solid waste, compost, or other substances from
various solid waste processing facilities. A modern sanitary landfill is not a dump; it is an engineered facility used for disposing of solid wastes on land without creating nuisances or hazards to public health or safety, such as the breeding of insects and the contamination of ground water.
Energy generation
Municipal solid waste can be used to generate energy. Several technologies have been developed that make the processing of MSW for energy generation cleaner and more economical than ever before, including landfill gas capture, combustion, pyrolysis, gasification, and plasma arc gasification.[5] While older waste incineration plants emitted high levels of pollutants, recent regulatory changes and new technologies have significantly reduced this concern. EPA regulations in 1995 and 2000 under the Clean Air Act have succeeded in reducing emissions of dioxins from waste-to-energy facilities by more than 99 percent below 1990 levels, while mercury emissions have been reduced by over 90 percent.[6] The EPA noted these improvements in 2003, citing waste-to-energy as a power source with less environmental impact than almost any other source of electricity.[7]
Agrochemical
Agrochemical (or agrichemical), a contraction of agricultural chemical, is a generic term for the various chemical products used in agriculture. In most cases, agrichemical refers to the broad range of pesticides, including insecticides, herbicides, and fungicides. It may also include synthetic fertilizers, hormones and other chemical growth agents, and concentrated stores of raw animal manure.[1][2][3] Many agrichemicals are toxic, and agrichemicals in bulk storage may pose significant environmental and/or health risks, particularly in the event of accidental spills. In many countries, use of agrichemicals is highly regulated. Government-issued permits for purchase and use of approved agrichemicals may be required. Significant penalties can result from misuse, including improper storage resulting in spillage. On farms, proper storage facilities and labeling, emergency clean-up equipment and
procedures, and safety equipment and procedures for handling, application and disposal are often subject to mandatory standards and regulations. Usually, the regulations are carried out through the registration process. According to Agrow, Bayer CropScience led the agrichemical industry in sales in 2007. Syngenta was second, followed by BASF, Dow Agrosciences, Monsanto, and DuPont.[4]
Radioactive waste
Radioactive wastes are wastes that contain radioactive material. Radioactive wastes are usually by-products of nuclear power generation and other applications of nuclear fission or nuclear technology, such as research and medicine. Radioactive waste is hazardous to human health and the environment, and is regulated by government agencies in order to limit that hazard. Radioactivity diminishes over time, so waste is typically isolated and stored for a period of time until it no longer poses a hazard. The period of time waste must be stored depends on the type of waste. Low-level waste with low levels of radioactivity per mass or volume (such as some common medical or industrial radioactive wastes) may need to be stored for only hours, days, or months, while high-level wastes (such as spent nuclear fuel or by-products of nuclear reprocessing) must be stored for thousands of years. Current major approaches to managing radioactive waste have been segregation and storage for short-lived wastes, near-surface disposal for low and some intermediate level wastes, and deep burial or transmutation for the long-lived, high-level wastes. A summary of the amounts of radioactive wastes and management approaches for most developed countries is presented and reviewed periodically as part of the International Atomic Energy Agency (IAEA) Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management.[1]
Contents
1 The nature and significance of radioactive waste 1.1 Physics 1.2 Pharmacokinetics
2 Sources of waste 2.1 Nuclear fuel cycle 2.1.1 Front end 2.1.2 Back end 2.1.3 Fuel composition and long term radioactivity 2.1.4 Proliferation concerns
2.2 Nuclear weapons decommissioning 2.3 Legacy waste 2.4 Medical 2.5 Industrial 2.6 Naturally occurring radioactive material (NORM) 2.6.1 Coal 2.6.2 Oil and gas
3 Classification of radioactive waste 3.1 Uranium tailings 3.2 Low-level waste 3.3 Intermediate-level waste 3.4 High-level waste 3.5 Transuranic waste
4 Prevention of waste 5 Management of waste 5.1 Initial treatment of waste 5.1.1 Vitrification 5.1.2 Ion exchange 5.1.3 Synroc
5.2 Long term management of waste 5.2.1 Above-ground disposal 5.2.2 Geologic disposal 5.2.3 Transmutation 5.2.4 Re-use of waste 5.2.5 Space disposal
10 External links
The nature and significance of radioactive waste
Radioactive waste typically comprises a number of radioisotopes: unstable configurations of elements that decay, emitting ionizing radiation which can be harmful to humans and the environment. Those isotopes emit different types and levels of radiation, which last for different periods of time.
Physics
Main article: fission product yield
Medium-lived fission products
Isotope   Half-life (years)   Fission yield (%)   Decay energy Q (keV)
Eu-155    4.76                0.0803              252
Kr-85     10.76               0.2180              687
Cd-113m   14.1                0.0008              316
Sr-90     28.9                4.505               2826
Cs-137    30.23               6.337               1176
Sn-121m   43.9                0.00005             390
Sm-151    90                  0.5314              77

Long-lived fission products
Isotope   Half-life (million years)   Fission yield (%)   Decay energy Q (keV)
Tc-99     0.211                       6.1385              294
Sn-126    0.230                       0.1084              4050
Se-79     0.327                       0.0447              151
Zr-93     1.53                        5.4575              91
Cs-135    2.3                         6.9110              269
Pd-107    6.5                         1.2499              33
I-129     15.7                        0.8410              194
The radioactivity of all nuclear waste diminishes with time. All radioisotopes contained in the waste have a half-life (the time it takes for any radionuclide to lose half of its radioactivity), and eventually all radioactive waste decays into non-radioactive elements (i.e., stable isotopes). Certain radioactive elements (such as plutonium-239) in spent fuel will remain hazardous to humans and other creatures for hundreds of thousands of years. Other radioisotopes remain hazardous for millions of years. Thus, these wastes must be shielded for centuries and isolated from the living environment for millennia.[2] Some elements, such as iodine-131, have a short half-life (around 8 days in this case) and thus will cease to be a problem much more quickly than other, longer-lived decay products, but their activity is correspondingly much greater initially. The two tables show some of the major radioisotopes, their half-lives, and their radiation yield as a proportion of the yield of fission of uranium-235.
The shorter a radioisotope's half-life, the more radioactive a sample of it will be. The opposite also applies; for instance, 96% of natural indium is the radioisotope In-115, yet it is considered non-toxic in pure metal form and behaves essentially like a stable element, because its multi-trillion-year half-life means that only a minuscule fraction of its atoms decay per unit of time.[3] The energy and the type of the ionizing radiation emitted by a radioactive substance are also important factors in determining its threat to humans.[4] The chemical properties of the radioactive element will determine how mobile the substance is and how likely it is to spread into the environment and contaminate humans.[5] This is further complicated by the fact that many radioisotopes do not decay immediately to a stable state but rather to radioactive decay products within a decay chain before ultimately reaching a stable state.
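The half-life relationships described above can be made concrete with a small calculation. This is an illustrative sketch (the function names are ours; decay follows the standard law N(t) = N0 * 0.5^(t/T)), contrasting iodine-131 with plutonium-239:

```python
def remaining_fraction(t: float, half_life: float) -> float:
    """Fraction of atoms still undecayed after time t (same units as half_life)."""
    return 0.5 ** (t / half_life)

def specific_activity_ratio(half_life_a: float, half_life_b: float) -> float:
    """Per-atom activity scales as ln(2)/half-life, so this returns how many
    times more radioactive isotope A is than isotope B, atom for atom."""
    return half_life_b / half_life_a

# Iodine-131 (half-life ~8.02 days): about 0.04% of it remains after 90 days.
print(f"{remaining_fraction(90, 8.02):.2e}")
# Plutonium-239 (half-life 24,110 years): ~99.7% remains after a whole century.
print(f"{remaining_fraction(100, 24_110):.4f}")
# Atom for atom, I-131 is about a million times more active than Pu-239:
print(f"{specific_activity_ratio(8.02 / 365.25, 24_110):.2e}")
```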
Pharmacokinetics
[Table lost in extraction: a layout grouping the actinides by decay chain (4n, 4n+1, 4n+2, 4n+3) and half-life range, set against fission products of comparable half-life (e.g. Sm-151 at ~90 years; Sn-126, Se-79, and Tc-99 in the 0.2–0.3 million year range; Zr-93, Cs-135, Pd-107, and I-129 beyond 1 million years). The original also notes that no fission product has a half-life in the range of roughly 100 years to 200,000 years.]
Exposure to high levels of radioactive waste may cause serious harm or death. Treatment of an adult animal with radiation or some other mutation-causing effect, such as a cytotoxic anti-cancer drug, may cause cancer in the animal. In humans it has been calculated that a 5 sievert dose is usually fatal, and the lifetime risk of dying from radiation-induced cancer from a single dose of 0.1 sievert is 0.8%, increasing by the same amount for each additional 0.1 sievert increment of dosage.[6] Ionizing radiation causes deletions in chromosomes.[7] If a developing organism such as an unborn child is irradiated, it is possible a birth defect may be induced, but it is unlikely this defect will be in a gamete or a gamete-forming cell. The incidence of radiation-induced mutations in humans is undetermined, due to flaws in studies done to date.[8] Depending on the decay mode and the pharmacokinetics of an element (how the body processes it and how quickly), the threat due to exposure to a given activity of a radioisotope will differ. For instance, iodine-131 is a short-lived beta and gamma emitter, but because it concentrates in the thyroid gland, it is more able to cause injury than caesium-137, which, being water-soluble, is rapidly excreted in urine. In a similar way, the alpha-emitting actinides and radium are considered very harmful as they tend to have long biological half-lives and their radiation has a high linear energy transfer value. Because of such differences, the rules determining biological injury differ widely according to the radioisotope, and sometimes also the nature of the chemical compound which contains the radioisotope.
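The dose-risk figure quoted above can be turned into a tiny calculator. This sketch assumes a strictly linear extrapolation, which is our simplification of the source's per-0.1 Sv statement; the names are illustrative:

```python
# Illustrative sketch of the linear dose-risk figure quoted above [6]:
# 0.8% lifetime risk of fatal radiation-induced cancer per 0.1 Sv of dose.
RISK_PER_SIEVERT = 0.008 / 0.1   # 8% per sievert, assuming linearity

def lifetime_cancer_risk(dose_sv: float) -> float:
    """Lifetime risk of fatal radiation-induced cancer for a given dose,
    assuming the risk adds linearly with each 0.1 Sv increment."""
    return dose_sv * RISK_PER_SIEVERT

print(f"{lifetime_cancer_risk(0.1):.1%}")  # 0.8% for 0.1 Sv
print(f"{lifetime_cancer_risk(0.5):.1%}")  # 4.0% for 0.5 Sv
```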
Sources of waste
Radioactive waste comes from a number of sources. The majority of waste originates from the nuclear fuel cycle and nuclear weapons reprocessing. However, other sources include medical and industrial wastes, as well as naturally occurring radioactive materials (NORM) that can be concentrated as a result of the processing or consumption of coal, oil and gas, and some minerals, as discussed below.
Nuclear fuel cycle
Main articles: Nuclear fuel cycle and Spent nuclear fuel
This article is about radioactive waste; for contextual information, see Nuclear power.
Front end
Waste from the front end of the nuclear fuel cycle is usually alpha-emitting waste from the extraction of uranium. It often contains radium and its decay products. Uranium dioxide (UO2) concentrate from mining is not very radioactive, only a thousand or so times as radioactive as the granite used in buildings. It is refined from yellowcake (U3O8), then converted to uranium hexafluoride gas (UF6). As a gas, it undergoes enrichment to increase the U-235 content from 0.7% to about 4.4% (low-enriched uranium, LEU). It is then turned into a hard ceramic oxide (UO2) for assembly as reactor fuel elements.[9] The main by-product of enrichment is depleted uranium (DU), principally the U-238 isotope, with a U-235 content of ~0.3%. It is stored, either as UF6 or as U3O8. Some is used in applications where its extremely high density makes it valuable, such as yacht keels and anti-tank shells.[10] It is also used with plutonium for making mixed oxide fuel (MOX) and to dilute, or downblend, highly enriched uranium from weapons stockpiles, which is now being redirected to become reactor fuel.
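The enrichment step described above obeys a simple two-stream mass balance. A minimal sketch, assuming the assay values quoted in the text (0.7% feed, ~4.4% product, ~0.3% tails); the function name is illustrative, not an industry API:

```python
# A minimal enrichment mass balance using the assays quoted above.
def feed_per_unit_product(x_feed=0.007, x_product=0.044, x_tails=0.003):
    """Tonnes of natural-uranium feed needed per tonne of enriched product,
    from the U-235 balance F*xf = P*xp + W*xw together with F = P + W."""
    return (x_product - x_tails) / (x_feed - x_tails)

f = feed_per_unit_product()
print(f"{f:.1f} t feed and {f - 1:.1f} t depleted uranium per t of LEU")
# -> roughly 10 t of feed, leaving ~9 t of DU for every tonne of 4.4% fuel
```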
Back end
See also: Nuclear reprocessing
The back end of the nuclear fuel cycle, mostly spent fuel rods, contains fission products that emit beta and gamma radiation, and actinides that emit alpha particles, such as uranium-234, neptunium-237, plutonium-238, and americium-241, and even sometimes some neutron emitters such as californium (Cf). These isotopes are formed in nuclear reactors. It is important to distinguish the processing of uranium to make fuel from the reprocessing of used fuel. Used fuel contains the highly radioactive products of fission (see high-level waste below). Many of these are neutron absorbers, called neutron poisons in this context. These eventually build up to a level where they absorb so many neutrons that the chain reaction stops, even with the control rods completely removed. At that point the fuel has to be replaced in the reactor with fresh fuel, even though there is still a substantial quantity of uranium-235 and plutonium present. In the United States, this used fuel is stored, while in countries such as Russia, the United Kingdom, France, Japan and India, the fuel is reprocessed to
remove the fission products, and the fuel can then be re-used. This reprocessing involves handling highly radioactive materials, and the fission products removed from the fuel are a concentrated form of high-level waste as are the chemicals used in the process. While these countries reprocess the fuel carrying out single plutonium cycles, India is the only country known to be planning multiple plutonium recycling schemes.[11]
Long-lived radioactive waste from the back end of the fuel cycle is especially relevant when designing a complete waste management plan for spent nuclear fuel (SNF). When looking at long-term radioactive decay, the actinides in the SNF have a significant influence due to their characteristically long half-lives. Depending on what a nuclear reactor is fueled with, the actinide composition in the SNF will be different. An example of this effect is the use of nuclear fuels with thorium. Th-232 is a fertile material that can undergo a neutron capture reaction and two beta-minus decays, resulting in the production of fissile U-233. The SNF of a cycle with thorium will contain U-233, whose radioactive decay strongly influences the long-term activity curve of the SNF around 1 million years. A comparison of the activity associated with U-233 for three different SNF types can be seen in the figure on the top right. The burnt fuels are thorium with reactor-grade plutonium (RGPu), thorium with weapons-grade plutonium (WGPu), and mixed oxide fuel (MOX). For RGPu and WGPu, the initial amount of U-233 and its decay around 1 million years can be seen. This has an effect on the total activity curve of the three fuel types. The
absence of U-233 and its daughter products in the MOX fuel results in a lower activity in region 3 of the figure on the bottom right, whereas for RGPu and WGPu the curve is maintained higher due to the presence of U-233 that has not fully decayed. The use of different fuels in nuclear reactors results in different SNF composition, with varying activity curves.
Proliferation concerns
See also: Nuclear proliferation and Reactor-grade plutonium
Since uranium and plutonium are nuclear weapons materials, there have been proliferation concerns. Ordinarily (in spent nuclear fuel), plutonium is reactor-grade plutonium. In addition to plutonium-239, which is highly suitable for building nuclear weapons, it contains large amounts of undesirable contaminants: plutonium-240, plutonium-241, and plutonium-238. These isotopes are difficult to separate, and more cost-effective ways of obtaining fissile material exist (e.g. uranium enrichment or dedicated plutonium production reactors).[12]

High-level waste is full of highly radioactive fission products, most of which are relatively short-lived. This is a concern since, if the waste is stored, perhaps in deep geological storage, over many years the fission products decay, decreasing the radioactivity of the waste and making the plutonium easier to access. The undesirable contaminant Pu-240 decays faster than Pu-239, so the quality of the bomb material increases with time (although its quantity decreases during that time as well). Thus, some have argued, as time passes, these deep storage areas have the potential to become "plutonium mines", from which material for nuclear weapons can be acquired with relatively little difficulty. Critics of the latter idea point out that the half-life of Pu-240 is 6,560 years and that of Pu-239 is 24,110 years, so the relative enrichment of one isotope over the other occurs with a half-life of about 9,000 years (that is, it takes 9,000 years for the fraction of Pu-240 in a sample of mixed plutonium isotopes to spontaneously decrease by half, a typical enrichment needed to turn reactor-grade into weapons-grade plutonium). Thus "weapons-grade plutonium mines" would be a problem for the very far future (more than 9,000 years from now), leaving a great deal of time for technology to advance before the issue arises.[13]

Pu-239 decays to U-235, which is suitable for weapons and has a very long half-life (roughly 10^9 years). Thus plutonium may decay and leave uranium-235. However, modern reactors are only moderately enriched with U-235 relative to U-238, so the U-238 continues to serve as a denaturation agent for any U-235 produced by plutonium decay. One solution to this problem is to recycle the plutonium and use it as a fuel, e.g. in fast reactors. In pyrometallurgical fast reactors, the separated plutonium and uranium are contaminated by actinides and cannot be used for nuclear weapons.
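The 9,000-year figure quoted above follows from the difference of the two decay constants. A short verification sketch (standard radioactive decay algebra; variable names are ours):

```python
import math

# Sketch of the "9,000 year" figure in the text: the Pu-240/Pu-239 ratio
# shrinks with an effective half-life set by the difference in decay rates.
T_PU240 = 6_560    # years, half-life of Pu-240
T_PU239 = 24_110   # years, half-life of Pu-239

lambda_240 = math.log(2) / T_PU240
lambda_239 = math.log(2) / T_PU239

# The ratio Pu-240/Pu-239 decays as exp(-(lambda_240 - lambda_239) * t):
t_half_ratio = math.log(2) / (lambda_240 - lambda_239)
print(f"Ratio halves every {t_half_ratio:,.0f} years")   # ~9,000 years
```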
Nuclear weapons decommissioning
Waste from nuclear weapons decommissioning is unlikely to contain much beta or gamma activity other than tritium and americium. It is more likely to contain alpha-emitting actinides such as Pu-239 which is a
fissile material used in bombs, plus some material with much higher specific activities, such as Pu-238 or polonium. In the past the neutron trigger for an atomic bomb tended to be beryllium and a high-activity alpha emitter such as polonium; an alternative to polonium is Pu-238. For reasons of national security, details of the design of modern bombs are normally not released to the open literature. Some designs might contain a radioisotope thermoelectric generator using Pu-238 to provide a long-lasting source of electrical power for the electronics in the device. It is likely that the fissile material of an old bomb which is due for refitting will contain decay products of the plutonium isotopes used in it; these are likely to include U-236 from Pu-240 impurities, plus some U-235 from decay of the Pu-239. Due to the relatively long half-life of these Pu isotopes, these wastes from radioactive decay of bomb core material would be very small, and in any case far less dangerous (even in terms of simple radioactivity) than the Pu-239 itself. The beta decay of Pu-241 forms Am-241; the in-growth of americium is likely to be a greater problem than the decay of Pu-239 and Pu-240, as americium is a gamma emitter (increasing external exposure to workers) and an alpha emitter which can cause the generation of heat. The plutonium could be separated from the americium by several different processes, including pyrochemical processes and aqueous/organic solvent extraction; a truncated PUREX-type extraction process would be one possible method of making the separation. (Naturally occurring uranium is not usable for weapons, as it contains 99.3% U-238 and only 0.7% U-235.)
Legacy waste
Due to historic activities, typically related to the radium industry, uranium mining, and military programs, numerous sites contain or are contaminated with radioactivity. In the United States alone, the Department of Energy states there are "millions of gallons of radioactive waste" as well as "thousands of tons of spent nuclear fuel and material" and also "huge quantities of contaminated soil and water."[14] Despite copious quantities of waste, the DOE has stated a goal of cleaning all presently contaminated sites successfully by 2025.[14] The Fernald, Ohio site, for example, had "31 million pounds of uranium product", "2.5 billion pounds of waste", "2.75 million cubic yards of contaminated soil and debris", and a "223 acre portion of the underlying Great Miami Aquifer had uranium levels above drinking standards."[14] The United States has at least 108 sites designated as areas that are contaminated and unusable, sometimes covering many thousands of acres.[14][15] DOE wishes to clean or mitigate many or all of them by 2025; however, the task can be difficult, and the department acknowledges that some sites may never be completely remediated. In just one of these 108 larger designations, Oak Ridge National Laboratory, there were at least "167 known contaminant release sites" in one of the three subdivisions of the 37,000-acre (150 km2) site.[14] Some U.S. sites were smaller in nature with simpler cleanup issues, and DOE has successfully completed cleanup, or at least closure, of several of them.[14]
Medical
Radioactive medical waste tends to contain beta particle and gamma ray emitters. It can be divided into two main classes. In diagnostic nuclear medicine a number of short-lived gamma emitters such as technetium-99m are used. Many of these can be disposed of by leaving them to decay for a short time before disposal as normal waste. Other isotopes used in medicine, with half-lives in parentheses, include:
Y-90, used for treating lymphoma (2.7 days)
I-131, used for thyroid function tests and for treating thyroid cancer (8.0 days)
Sr-89, used for treating bone cancer, intravenous injection (52 days)
Ir-192, used for brachytherapy (74 days)
Co-60, used for brachytherapy and external radiotherapy (5.3 years)
Cs-137, used for brachytherapy and external radiotherapy (30 years)
Industrial
Industrial source waste can contain alpha, beta, neutron or gamma emitters. Gamma emitters are used in radiography while neutron emitting sources are used in a range of applications, such as oil well logging.[16]
Naturally occurring radioactive material (NORM)
Annual release of uranium and thorium radioisotopes from coal combustion is predicted by ORNL to amount cumulatively to 2.9 million tons over the 1937–2040 period, from the combustion of an estimated 637 billion tons of coal worldwide.[17]
Processing of substances containing natural radioactivity is often known as NORM. A lot of this waste is alpha-particle-emitting matter from the decay chains of uranium and thorium. The main source of radiation in the human body is potassium-40 (40K), typically 17 milligrams in the body at a time, with an intake of 0.4 milligrams/day.[18] Most rocks, due to their components, have a certain, but low, level of radioactivity. Usually ranging from 1 millisievert (mSv) to 13 mSv annually depending on location, average radiation exposure from natural radioisotopes is 2.0 mSv per person a year worldwide.[19] This makes up most of the typical total dosage (with mean annual exposure from other sources amounting to 0.4 mSv from cosmic rays, 0.007 mSv from the legacy of past atmospheric nuclear testing along with the Chernobyl disaster, 0.0002 mSv from the nuclear fuel cycle, and, averaged over the whole populace, 0.6 mSv from medical tests and 0.005 mSv from occupational exposure).[19]
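Summing the per-source figures quoted above reproduces the typical total dose. A small sketch (labels paraphrased from the text; the dictionary structure is ours):

```python
# Per-person annual dose components quoted above, in millisieverts (mSv) [19].
dose_components_msv = {
    "natural radioisotopes (rocks, K-40)": 2.0,
    "cosmic rays": 0.4,
    "atmospheric testing and Chernobyl legacy": 0.007,
    "nuclear fuel cycle": 0.0002,
    "medical tests": 0.6,
    "occupational exposure": 0.005,
}
total = sum(dose_components_msv.values())
print(f"Mean annual dose: {total:.2f} mSv")  # ~3.01 mSv, dominated by natural sources
```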
Coal
Coal contains a small amount of radioactive uranium, barium, thorium, and potassium, but, in the case of pure coal, this is significantly less than the average concentration of those elements in the Earth's crust. The surrounding strata, if shale or mudstone, often contain slightly more than average, and this may also be reflected in the ash content of 'dirty' coals.[17][20] The more active ash minerals become concentrated in the fly ash precisely because they do not burn well.[17] The radioactivity of fly ash is about the same as that of black shale, and is less than that of phosphate rocks, but is more of a concern because a small amount of the fly ash ends up in the atmosphere where it can be inhaled.[21] According to U.S. NCRP reports, population exposure from 1000-MWe power plants amounts to 490 person-rem/year for coal power plants, versus 4.8 person-rem/year for nuclear plants during normal operation, rising to 136 person-rem/year for the complete nuclear fuel cycle.[17]
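The person-rem figures above invite a direct comparison. A trivial sketch of the ratios (our own arithmetic on the quoted NCRP numbers):

```python
# Population-exposure figures quoted above [17], in person-rem per year
# for a 1000-MWe plant.
coal_plant = 490
nuclear_plant = 4.8          # nuclear plant during normal operation
nuclear_full_cycle = 136     # complete nuclear fuel cycle

print(f"Coal vs nuclear plant: {coal_plant / nuclear_plant:.0f}x")        # ~102x
print(f"Coal vs full fuel cycle: {coal_plant / nuclear_full_cycle:.1f}x")  # ~3.6x
```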
Classification of radioactive waste
Classification of nuclear waste varies by country. The IAEA, which publishes the Radioactive Waste Safety Standards (RADWASS), also plays a significant role.[23]
Uranium tailings
Uranium tailings are waste by-product materials left over from the rough processing of uranium-bearing ore. They are not significantly radioactive. Mill tailings are sometimes referred to as 11(e)2 wastes, from the section of the Atomic Energy Act of 1946 that defines them. Uranium mill tailings typically also contain chemically hazardous heavy metals such as lead and arsenic. Vast mounds of uranium mill tailings are left at many old mining sites, especially in Colorado, New Mexico, and Utah.
See also: Uranium Mill Tailings Remedial Action
Low-level waste
Low-level waste (LLW) is generated from hospitals and industry, as well as the nuclear fuel cycle. Low-level wastes include paper, rags, tools, clothing, filters, and other materials which contain small amounts of mostly short-lived radioactivity. Materials that originate from any region of an Active Area are commonly designated as LLW as a precautionary measure, even if there is only a remote possibility of their being contaminated with radioactive materials. Such LLW typically exhibits no higher radioactivity than one would expect from the same material disposed of in a non-active area, such as a normal office block. Some high-activity LLW requires shielding during handling and transport, but most LLW is suitable for shallow land burial. To reduce its volume, it is often compacted or incinerated before disposal. Low-level waste is divided into four classes: class A, class B, class C, and Greater Than Class C (GTCC).
Intermediate-level waste
Spent fuel flasks are transported by railway in the United Kingdom. Each flask is constructed of 14 in (360 mm) thick solid steel and weighs in excess of 50 tons.
Intermediate-level waste (ILW) contains higher amounts of radioactivity and in some cases requires shielding. Intermediate-level wastes include resins, chemical sludge, and metal reactor fuel cladding, as well as contaminated materials from reactor decommissioning. It may be solidified in concrete or bitumen for disposal. As a general rule, short-lived waste (mainly non-fuel materials from reactors) is buried in shallow repositories, while long-lived waste (from fuel and fuel reprocessing) is deposited in a geological repository. U.S. regulations do not define this category of waste; the term is used in Europe and elsewhere.
High-level waste
High-level waste (HLW) is produced by nuclear reactors. It contains fission products and transuranic elements generated in the reactor core. It is highly radioactive and often thermally hot. HLW accounts for over 95 percent of the total radioactivity produced in the process of nuclear electricity generation. The amount of HLW worldwide is currently increasing by about 12,000 metric tons every year, equivalent to about 100 double-decker buses or a two-storey structure with a footprint the size of a basketball court.[24] A 1000-MWe nuclear power plant produces about 27 tonnes of spent nuclear fuel (unreprocessed) every year.[25]
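The two quoted figures can be cross-checked against each other. A quick sketch (our own consistency check, not a sourced statistic):

```python
# Cross-checking the HLW growth figure quoted above [24][25]: worldwide
# annual growth divided by per-plant spent-fuel output gives an implied
# count of 1000-MWe plant-equivalents.
annual_hlw_tonnes = 12_000   # worldwide HLW growth per year
per_plant_tonnes = 27        # spent fuel per 1000-MWe plant per year

print(f"Implied plant-equivalents: {annual_hlw_tonnes / per_plant_tonnes:.0f}")
# -> ~444, of the same order as the world's operating power reactors
```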
Transuranic waste
Transuranic waste (TRUW) as defined by U.S. regulations is, without regard to form or origin, waste that is contaminated with alpha-emitting transuranic radionuclides with half-lives greater than 20 years and concentrations greater than 100 nCi/g (3.7 MBq/kg), excluding high-level waste. Elements that have an atomic number greater than that of uranium are called transuranic ("beyond uranium"). Because of their long half-lives, TRUW is disposed of more cautiously than either low- or intermediate-level waste. In the U.S., it arises mainly from weapons production, and consists of clothing, tools, rags, residues, debris, and other items contaminated with small amounts of radioactive elements (mainly plutonium).

Under U.S. law, transuranic waste is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation dose rate measured at the surface of the waste container. CH TRUW has a surface dose rate not greater than 200 rem per hour (2 Sv/h), whereas RH TRUW has a surface dose rate of 200 rem per hour (2 Sv/h) or greater. CH TRUW does not have the very high radioactivity of high-level waste, nor its high heat generation, but RH TRUW can be highly radioactive, with surface dose rates up to 1,000,000 rem per hour (10,000 Sv/h). The U.S. currently permanently disposes of TRUW generated from nuclear power plants and military facilities at the Waste Isolation Pilot Plant.[26]
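The CH/RH split described above is a simple threshold rule. A toy classifier sketch (the function and names are illustrative, not regulatory code; 1 Sv = 100 rem):

```python
REM_PER_SIEVERT = 100  # 1 Sv = 100 rem

def truw_handling_category(surface_dose_rem_per_h: float) -> str:
    """Classify TRUW by container-surface dose rate, per the U.S. split above."""
    return "remote-handled (RH)" if surface_dose_rem_per_h >= 200 else "contact-handled (CH)"

print(truw_handling_category(50))         # contact-handled (CH)
print(truw_handling_category(1_000_000))  # remote-handled (RH)
print(f"Threshold: {200 / REM_PER_SIEVERT:.0f} Sv/h")  # 200 rem/h = 2 Sv/h
```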
Prevention of waste
Further information: MYRRHA
Due to the many advances in reactor design, it is today possible to reduce radioactive waste by a factor of about 100. This reduction is possible because new reactor types, such as Generation IV reactors, are capable of burning the lower actinides.
Management of waste
See also: High-level radioactive waste management, List of nuclear waste treatment technologies, and Environmental effects of nuclear power
Of particular concern in nuclear waste management are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 17 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years).[27] Nuclear waste requires sophisticated treatment and management to successfully isolate it from the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving storage, disposal, or transformation of the waste into a non-toxic form.[28] Governments around the world are considering a range of waste management and disposal options, though there has been limited progress toward long-term solutions.[29]
Initial treatment of waste
Vitrification
Long-term storage of radioactive waste requires the stabilization of the waste into a form which will neither react nor degrade for extended periods of time. One way to do this is through vitrification.[30] Currently
at Sellafield the high-level waste (PUREX first-cycle raffinate) is mixed with sugar and then calcined. Calcination involves passing the waste through a heated, rotating tube. Its purposes are to evaporate the water from the waste and to de-nitrate the fission products, which assists the stability of the glass produced.[31] The 'calcine' generated is fed continuously into an induction-heated furnace with fragmented glass.[32] The resulting glass is a new substance in which the waste products are bonded into the glass matrix when it solidifies. This product, as a melt, is poured into stainless steel cylindrical containers ("cylinders") in a batch process. When cooled, the fluid solidifies ("vitrifies") into the glass. Such glass, after being formed, is highly resistant to water.[33] After filling a cylinder, a seal is welded onto it. The cylinder is then washed. After being inspected for external contamination, the steel cylinder is stored, usually in an underground repository. In this form, the waste products are expected to be immobilized for a long period of time (many thousands of years).[34] The glass inside a cylinder is usually a black glossy substance. All this work (in the United Kingdom) is done using hot cell systems.

The sugar is added to control the ruthenium chemistry and to stop the formation of the volatile RuO4, which contains radioactive ruthenium isotopes. In the West, the glass is normally a borosilicate glass (similar to Pyrex), while in the former Soviet bloc it is normal to use a phosphate glass. The amount of fission products in the glass must be limited because some (palladium, the other Pt-group metals, and tellurium) tend to form metallic phases which separate from the glass. Bulk vitrification uses electrodes to melt soil and wastes, which are then buried underground.[35] In Germany a vitrification plant is in use, treating the waste from a small demonstration reprocessing plant which has since been closed down.[31][36]
Ion exchange
It is common for medium-active wastes in the nuclear industry to be treated with ion exchange or other means to concentrate the radioactivity into a small volume. The much less radioactive bulk (after treatment) is often then discharged. For instance, it is possible to use a ferric hydroxide floc to remove radioactive metals from aqueous mixtures.[37] After the radioisotopes are absorbed onto the ferric hydroxide, the resulting sludge can be placed in a metal drum before being mixed with cement to form a solid waste form.[38] In order to get better long-term performance (mechanical stability) from such forms, they may be made from a mixture of fly ash, or blast furnace slag, and Portland cement, instead of normal concrete (made with Portland cement, gravel and sand).
Synroc
The Australian Synroc (synthetic rock) is a more sophisticated way to immobilize such waste, and this process may eventually come into commercial use for civil wastes (it is currently being developed for US military wastes). Synroc was invented by the late Prof Ted Ringwood (a geochemist) at the Australian National University.[39] The Synroc contains pyrochlore and cryptomelane type minerals. The original form of Synroc (Synroc C) was designed for the liquid high-level waste (PUREX raffinate) from a light water reactor. The main minerals in this Synroc are hollandite (BaAl2Ti6O16), zirconolite (CaZrTi2O7) and perovskite (CaTiO3). The zirconolite and perovskite are hosts for the actinides. The strontium and barium will be fixed in the perovskite. The caesium will be fixed in the hollandite.
Long-term management of waste
See also: Economics of new nuclear power plants#Waste disposal
The time frame in question when dealing with radioactive waste ranges from 10,000 to 1,000,000 years,[40] according to studies based on the effect of estimated radiation doses.[41] Researchers suggest that forecasts of health detriment for such periods should be examined critically.[42][43] Practical studies only consider up to 100 years as far as effective planning[44] and cost evaluations[45] are concerned. Long-term behavior of radioactive wastes remains a subject of ongoing research.[46]
Above-ground disposal
Dry cask storage typically involves taking waste from a spent fuel pool and sealing it (along with an inert gas) in a steel cylinder, which is placed in a concrete cylinder which acts as a radiation shield. It is a relatively inexpensive method which can be done at a central facility or adjacent to the source reactor. The waste can be easily retrieved for reprocessing.[47]
Geologic disposal
The process of selecting appropriate deep final repositories for high-level waste and spent fuel is now under way in several countries (Schacht Asse II and the Waste Isolation Pilot Plant), with the first expected to be commissioned some time after 2010. The basic concept is to locate a large, stable geologic formation and use mining technology to excavate a tunnel, or use large-bore tunnel boring machines (similar to those used to drill the Channel Tunnel from England to France) to drill a shaft 500–1,000 meters below the surface, where rooms or vaults can be excavated for disposal of high-level radioactive waste. The goal is to permanently isolate nuclear waste from the human environment. Many people remain uncomfortable with ceasing stewardship of this disposal system immediately, suggesting perpetual management and monitoring would be more prudent. Because some radioactive species have half-lives longer than one million years, even very low container leakage and radionuclide migration rates must be taken into account.[48] Moreover, it may require more than one half-life until some nuclear materials lose enough radioactivity to cease being lethal to living things. A 1983 review of the Swedish radioactive waste disposal program by the National Academy of Sciences found that country's estimate of several hundred thousand years (perhaps up to one million years) being necessary for waste isolation "fully justified".[49] Aside from dilution, chemically toxic stable elements in some waste, such as arsenic, remain toxic for up to billions of years or indefinitely.[50]

Sea-based options for disposal of radioactive waste[51] include burial beneath a stable abyssal plain, burial in a subduction zone that would slowly carry the waste downward into the Earth's mantle,[52][53] and burial beneath a remote natural or human-made island. While these approaches all have merit and would facilitate an international solution to the problem of disposal of radioactive waste, they would require an amendment of the Law of the Sea.[54] Article 1 (Definitions), 7., of the 1996 Protocol to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter (the London Dumping Convention) states: "'Sea' means all marine waters other than the internal waters of States, as well as the seabed and the subsoil thereof; it does not include sub-seabed repositories accessed only from land." The proposed land-based subductive waste disposal method disposes of nuclear waste in a subduction zone accessed from land,[55] and therefore is not prohibited by international agreement. This method has been described as the most viable means of disposing of radioactive waste,[56] and as the state of the art as of 2001 in nuclear waste disposal technology.[57]

Another approach, termed Remix & Return,[58] would blend high-level waste with uranium mine and mill tailings down to the level of the original radioactivity of the uranium ore, then replace it in inactive uranium mines. This approach has the merits of providing jobs for miners who would double as disposal staff, and of facilitating a cradle-to-grave cycle for radioactive materials, but would be inappropriate for spent reactor fuel in the absence of reprocessing, due to the presence in it of highly toxic radioactive elements such as plutonium.

Deep borehole disposal is the concept of disposing of high-level radioactive waste from nuclear reactors in extremely deep boreholes. Deep borehole disposal seeks to place the waste as much as five kilometers beneath the surface of the Earth and relies primarily on the immense natural geological barrier to confine the waste safely and permanently, so that it should never pose a threat to the environment. The Earth's crust contains 120 trillion tons of thorium and 40 trillion tons of uranium (primarily at relatively trace concentrations of parts per million each, adding up over the crust's 3 × 10^19 ton mass), among other natural radioisotopes.[59][60][61] Since the fraction of nuclides decaying per unit of time is inversely proportional to an isotope's half-life, the relative radioactivity of the lesser amount of human-produced radioisotopes (thousands of tons instead of trillions of tons) would diminish once the isotopes with far shorter half-lives than the bulk of natural radioisotopes decayed.
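To make that last proportionality concrete, the activity of a quantity of a single isotope is A = λN with λ = ln(2)/t_half. A minimal sketch (the half-lives are standard reference values; the per-ton framing and the code are illustrative, not from a cited source):

    import math

    AVOGADRO = 6.022e23
    SECONDS_PER_YEAR = 3.156e7

    def activity_per_ton_bq(half_life_years, atomic_mass_g_per_mol):
        """Decays per second (Bq) from one metric ton (1e6 g) of a pure isotope."""
        atoms = 1e6 / atomic_mass_g_per_mol * AVOGADRO
        decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
        return decay_constant * atoms

    print(activity_per_ton_bq(4.47e9, 238))  # U-238:  ~1.2e10 Bq per ton
    print(activity_per_ton_bq(24_000, 239))  # Pu-239: ~2.3e15 Bq per ton

Ton for ton, Pu-239 is roughly 190,000 times more radioactive than U-238, in line with the ratio of their half-lives; the flip side is that the short-lived material also burns itself out far sooner.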
Transmutation
Main article: Nuclear transmutation
There have been proposals for reactors that consume nuclear waste and transmute it to other, less-harmful nuclear waste. In particular, the Integral Fast Reactor was a proposed nuclear reactor with a nuclear fuel cycle that produced no transuranic waste and, in fact, could consume transuranic waste. It proceeded as far as large-scale tests, but was then canceled by the US Government. Another approach, considered safer but requiring more development, is to dedicate subcritical reactors to the transmutation of the left-over transuranic elements.

An isotope that is found in nuclear waste and that represents a concern in terms of proliferation is Pu-239. The estimated world total of plutonium in the year 2000 was 1,645 MT, of which 210 MT had been separated by reprocessing. The large stock of plutonium is a result of its production inside uranium-fueled reactors and of the reprocessing of weapons-grade plutonium during the weapons program. An option for getting rid of this plutonium is to use it as a fuel in a traditional light water reactor (LWR). Several fuel types with differing plutonium destruction efficiencies are under study. See Nuclear transmutation.

Transmutation was banned in the US in April 1977 by President Carter due to the danger of plutonium proliferation,[62] but President Reagan rescinded the ban in 1981.[63] Due to the economic losses and risks, construction of reprocessing plants during this time did not resume. Due to high energy demand, work on the method has continued in the EU. This has resulted in a practical nuclear research reactor called Myrrha, in which transmutation is possible. Additionally, a new research program called ACTINET has been started in the EU to make transmutation possible on a large, industrial scale. According to President Bush's Global Nuclear Energy Partnership (GNEP) of 2007, the US is now actively promoting research on transmutation technologies needed to markedly reduce the problem of nuclear waste treatment.[64]

There have also been theoretical studies involving the use of fusion reactors as so-called "actinide burners", where a fusion reactor plasma, such as in a tokamak, could be "doped" with a small amount of the "minor" transuranic atoms, which would be transmuted (meaning fissioned, in the actinide case) to lighter elements upon their successive bombardment by the very high energy neutrons produced by the fusion of deuterium and tritium in the reactor. A study at MIT found that only 2 or 3 fusion reactors with parameters similar to those of the International Thermonuclear Experimental Reactor (ITER) could transmute the entire annual minor actinide production from all of the light water reactors presently operating in the United States fleet, while simultaneously generating approximately 1 gigawatt of power from each reactor.[65]
Re-use of waste
Main article: Nuclear reprocessing
Another option is to find applications for the isotopes in nuclear waste so as to reuse them.[66] Already, caesium-137, strontium-90 and a few other isotopes are extracted for certain industrial applications such as food irradiation and radioisotope thermoelectric generators. While reuse does not eliminate the need to manage radioisotopes, it reduces the quantity of waste produced.

The Nuclear Assisted Hydrocarbon Production Method,[67] Canadian patent application 2,659,302, is a method for the temporary or permanent storage of nuclear waste materials comprising the placing of waste materials into one or more repositories or boreholes constructed into an unconventional oil formation. The thermal flux of the waste materials fractures the formation and alters the chemical and/or physical properties of hydrocarbon material within the subterranean formation to allow removal of the altered material. A mixture of hydrocarbons, hydrogen, and/or other formation fluids is produced from the formation. The radioactivity of high-level radioactive waste affords proliferation resistance to plutonium placed in the periphery of the repository or the deepest portion of a borehole.
Breeder reactors can run on U-238 and transuranic elements, which comprise the majority of spent fuel radioactivity in the 1,000 to 100,000-year time span.
Space disposal
Space disposal is an attractive notion because it permanently removes nuclear waste from the environment. It has significant disadvantages, not least of which is the potential for catastrophic failure of a launch vehicle, which would spread radioactive material into the atmosphere and around the world. The high number of launches that would be required (because no individual rocket would be able to carry very much of the material relative to the total which needs to be disposed of) makes the proposal impractical, for both economic and risk-based reasons, using current rockets,[68] resulting in some suggestions for developing a mass driver for disposal instead. To further complicate matters, international agreements on the regulation of such a program would need to be established.[69]
National management plans
See also: High-level radioactive waste management
Most countries are considerably ahead of the United States in developing plans for high-level radioactive waste disposal. Sweden and Finland are furthest along in committing to a particular disposal technology, while many others reprocess spent fuel or contract with France or Great Britain to do it, taking back the resulting plutonium and high-level waste. "An increasing backlog of plutonium from reprocessing is developing in many countries... It is doubtful that reprocessing makes economic sense in the present environment of cheap uranium."[70]

In many European countries (e.g., Britain, Finland, the Netherlands, Sweden and Switzerland) the risk or dose limit for a member of the public exposed to radiation from a future high-level nuclear waste facility is considerably more stringent than that suggested by the International Commission on Radiation Protection or proposed in the United States. European limits are often more stringent than the standard suggested in 1990 by the International Commission on Radiation Protection by a factor of 20, and more stringent by a factor of ten than the standard proposed by the US Environmental Protection Agency (EPA) for the Yucca Mountain nuclear waste repository for the first 10,000 years after closure.[71] The U.S. EPA's proposed standard for greater than 10,000 years is 250 times more permissive than the European limit.[71] The U.S. EPA proposed a legal limit of a maximum of 3.5 millisieverts (350 millirem) annually to local individuals after 10,000 years, which would be up to several percent of the exposure currently received by some populations in the highest natural background regions on Earth, though the U.S. DOE predicted that the received dose would be well below that limit.[72] Over a timeframe of thousands of years, after the most active short-half-life radioisotopes have decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2,000 feet of rock and soil in the United States (10 million km²) by 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, but the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.[73]
Illegal dumping
Main article: Radioactive waste dumping by the 'Ndrangheta
Authorities in Italy are investigating a 'Ndrangheta mafia clan accused of trafficking and illegally dumping nuclear waste. According to a turncoat, a manager of Italy's state energy research agency Enea paid the clan to get rid of 600 drums of toxic and radioactive waste from Italy, Switzerland, France, Germany, and the US, with Somalia as the destination, where the waste was buried after local politicians were bought off. Former employees of Enea are suspected of paying the criminals to take waste off their hands in the 1980s and 1990s. Shipments to Somalia continued into the 1990s, while the 'Ndrangheta clan also blew up shiploads of waste, including radioactive hospital waste, sending them to the sea bed off the Calabrian coast.[74] According to the environmental group Legambiente, former members of the 'Ndrangheta have said that they were paid to sink ships with radioactive material for the last 20 years.[75]
Accidents
Main article: Nuclear and radiation accidents
A number of incidents have occurred when radioactive material was disposed of improperly, shielding during transport was defective, or when it was simply abandoned or even stolen from a waste store.[76] In the Soviet Union, waste stored in Lake Karachay was blown over the area during a dust storm after the lake had partly dried out.[77] At Maxey Flat, a low-level radioactive waste facility located in Kentucky, containment trenches were covered with dirt instead of steel or cement; under heavy rainfall the covers collapsed into the trenches, which filled with water. The water that invaded the trenches became radioactive and had to be disposed of at the Maxey Flat facility itself. In other cases of radioactive waste accidents, lakes or ponds with radioactive waste accidentally overflowed into rivers during exceptional storms.[citation needed] In Italy, several radioactive waste deposits let material flow into river water, thus contaminating water for domestic use.[78] In France, in the summer of 2008, numerous incidents happened:[79] in one, at the Areva plant in Tricastin, it was reported that during a draining operation, liquid containing untreated uranium overflowed out of a faulty tank and about 75 kg of the radioactive material seeped into the ground and, from there, into two rivers nearby;[80] in another case, over 100 staff were contaminated with low doses of radiation.[81]

Scavenging of abandoned radioactive material has been the cause of several other cases of radiation exposure, mostly in developing nations, which may have less regulation of dangerous substances (and sometimes less general education about radioactivity and its hazards) and a market for scavenged goods and scrap metal. The scavengers and those who buy the material are almost always unaware that the material is radioactive, and it is selected for its aesthetics or scrap value.[82] Irresponsibility on the part of the radioactive material's owners, usually a hospital, university or military, and the absence of regulation concerning radioactive waste, or a lack of enforcement of such regulations, have been significant factors in radiation exposures. For an example of an accident involving radioactive scrap originating from a hospital, see the Goiânia accident.[82] Transportation accidents involving spent nuclear fuel from power plants are unlikely to have serious consequences due to the strength of the spent nuclear fuel shipping casks.[83]
Greenhouse effect
A representation of the exchanges of energy between the source (the Sun), the Earth's surface, the Earth's atmosphere, and the ultimate sink, outer space. The ability of the atmosphere to capture and recycle energy emitted by the Earth's surface is the defining characteristic of the greenhouse effect.
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and is re-radiated in all directions. Since part of this re-radiation is back towards the surface, energy is transferred to the surface and the lower atmosphere. As a result, the temperature there is higher than it would be if direct heating by solar radiation were the only warming mechanism.[1][2]

Solar radiation at the high frequencies of visible light passes through the atmosphere to warm the planetary surface, which then emits this energy at the lower frequencies of infrared thermal radiation. Infrared radiation is absorbed by greenhouse gases, which in turn re-radiate much of the energy to the surface and lower atmosphere. The mechanism is named after the effect of solar radiation passing through glass and warming a greenhouse, but the way it retains heat is fundamentally different: a greenhouse works by reducing airflow, isolating the warm air inside the structure so that heat is not lost by convection.[2][3][4] The greenhouse effect was discovered by Joseph Fourier in 1824, first reliably experimented on by John Tyndall in 1858, and first reported quantitatively by Svante Arrhenius in 1896.[5]

If an ideal thermally conductive blackbody were the same distance from the Sun as the Earth is, it would have a temperature of about 5.3 °C. However, since the Earth reflects about 30%[6] (or 28%[7]) of the incoming sunlight, the planet's effective temperature (the temperature of a blackbody that would emit the same amount of radiation) is about −18 or −19 °C,[8][9] about 33 °C below the actual surface temperature of about 14 °C or 15 °C.[10] The mechanism that produces this difference between the actual surface temperature and the effective temperature is due to the atmosphere and is known as the greenhouse effect. Global warming, a recent warming of the Earth's surface and lower atmosphere,[11] is believed to be the result of a strengthening of the greenhouse effect, mostly due to human-produced increases in atmospheric greenhouse gases.[12]
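The effective-temperature figure can be checked with a one-line energy balance: absorbed sunlight, (1 − albedo) × S/4, must equal blackbody emission, σT⁴. A minimal sketch (the solar constant value is an assumed standard figure, not from the article):

    S = 1361.0        # solar constant, W/m^2 (assumed value)
    ALBEDO = 0.30     # fraction of sunlight reflected, as stated above
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    # Absorbed sunlight (averaged over the sphere) = emitted blackbody flux:
    # (1 - ALBEDO) * S / 4 = SIGMA * T**4
    t_eff = ((1 - ALBEDO) * S / (4 * SIGMA)) ** 0.25
    print(t_eff, t_eff - 273.15)  # ~254.6 K, i.e. about -18.5 C

This reproduces the roughly −18 °C effective temperature quoted above; the 33 °C gap up to the observed surface temperature is the greenhouse effect itself.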
Contents
1 Basic mechanism 2 Greenhouse gases 3 Role in climate change 4 Real greenhouses 5 Bodies other than Earth 6 Literature 7 References
Basic mechanism
The Earth receives energy from the Sun in the form of UV, visible, and near-IR radiation, most of which passes through the atmosphere without being absorbed. Of the total amount of energy available at the top of the atmosphere (TOA), about 50% is absorbed at the Earth's surface. Because it is warm, the surface radiates far-IR thermal radiation at wavelengths that are predominantly much longer than the wavelengths that were absorbed. Most of this thermal radiation is absorbed by the atmosphere and re-radiated both upwards and downwards; the part radiated downwards is absorbed by the Earth's surface. This trapping of long-wavelength thermal radiation leads to a higher equilibrium temperature than if the atmosphere were absent. This highly simplified picture of the basic mechanism needs to be qualified in a number of ways, none of which affect the fundamental process.
The solar radiation spectrum for direct light at both the top of the Earth's atmosphere and at sea level
The incoming radiation from the Sun is mostly in the form of visible light and nearby wavelengths, largely in the range 0.2–4 μm, corresponding to the Sun's radiative temperature of 6,000 K.[13] Almost half the radiation is in the form of "visible" light, which our eyes are adapted to use.[14]
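Wien's displacement law makes the separation between incoming and outgoing wavelength bands explicit; as a worked aside (standard physics, not taken from the article's cited sources):

\lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}

so \lambda_{\max} \approx 0.48\ \mu\mathrm{m} at the Sun's 6,000 K, but \lambda_{\max} \approx 11\ \mu\mathrm{m} for a surface near 255 K, squarely in the thermal infrared. The two radiation streams therefore occupy almost disjoint parts of the spectrum, which is what lets the atmosphere treat them so differently.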
About 50% of the Sun's energy is absorbed at the Earth's surface and the rest is reflected or absorbed by the atmosphere. The reflection of light back into space, largely by clouds, does not much affect the basic mechanism; this light, effectively, is lost to the system.
The absorbed energy warms the surface. Simple presentations of the greenhouse effect, such as the idealized greenhouse model, show this heat being lost as thermal radiation. The reality is more complex: the atmosphere near the surface is largely opaque to thermal radiation (with important exceptions for "window" bands), and most heat loss from the surface is by sensible heat and latent heat transport. Radiative energy losses become increasingly important higher in the atmosphere largely because of the decreasing concentration of water vapor, an important greenhouse gas. It is more realistic to think of the greenhouse effect as applying to a "surface" in the mid-troposphere, which is effectively coupled to the surface by a lapse rate.
Within the region where radiative effects are important, the description given by the idealized greenhouse model becomes realistic: the surface of the Earth, warmed to a temperature around 255 K, radiates long-wavelength infrared heat in the range 4–100 μm.[13] At these wavelengths, greenhouse gases that were largely transparent to incoming solar radiation are more absorbent.[13] Each layer of atmosphere with greenhouse gases absorbs some of the heat being radiated upwards from lower layers. To maintain its own equilibrium, it re-radiates the absorbed heat in all directions, both upwards and downwards. This results in more warmth below, while still radiating enough heat back out into deep space from the upper layers to maintain overall thermal equilibrium. Increasing the concentration of the gases increases the amount of absorption and re-radiation, and thereby further warms the layers and ultimately the surface below.[9]
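The simplest version of this layer picture, a single fully IR-absorbing layer over the surface (a textbook idealization, not a claim from the article's sources), already shows the warming. The layer emits \sigma T_a^4 both up and down; the top-of-atmosphere balance forces T_a = T_e, and the surface balance then gives

\sigma T_s^4 = \sigma T_e^4 + \sigma T_a^4 = 2\,\sigma T_e^4 \quad\Longrightarrow\quad T_s = 2^{1/4} T_e \approx 1.19 \times 255\ \mathrm{K} \approx 303\ \mathrm{K},

an overestimate of the real 288 K mean precisely because the actual atmosphere is only partially absorbing and heat also leaves the surface non-radiatively, as described above.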
Greenhouse gases, including most diatomic gases with two different atoms (such as carbon monoxide, CO) and all gases with three or more atoms, are able to absorb and emit infrared radiation. Though more than 99% of the dry atmosphere is IR-transparent (because the main constituents, N2, O2, and Ar, are not able to directly absorb or emit infrared radiation), intermolecular collisions cause the energy absorbed and emitted by the greenhouse gases to be shared with the other, non-IR-active, gases.
The simple picture assumes equilibrium. In the real world there is the diurnal cycle as well as seasonal cycles and weather. Solar heating only applies during daytime. During the night, the atmosphere cools somewhat, but not greatly, because its emissivity is low, and during the day the atmosphere warms. Diurnal temperature changes decrease with height in the atmosphere.
Greenhouse gases
Main article: Greenhouse gas
By their percentage contribution to the greenhouse effect on Earth the four major gases are:[15][16]
water vapor, 36–70%
carbon dioxide, 9–26%
methane, 4–9%
ozone, 3–7%
The major non-gas contributor to the Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on radiative properties of the atmosphere.[16]
Strengthening of the greenhouse effect through human activities is known as the enhanced (or anthropogenic) greenhouse effect.[17] This increase in radiative forcing from human activity is attributable mainly to increased atmospheric carbon dioxide levels.[18] CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation.[19] Measurements of CO2 from the Mauna Loa observatory show that concentrations have increased from about 313 ppm[20] in 1960 to about 389 ppm in 2010. The current observed amount of CO2 exceeds the geological record maxima (~300 ppm) from ice core data.[21] The effect of combustion-produced carbon dioxide on the global climate, a special case of the greenhouse effect first described in 1896 by Svante Arrhenius, has also been called the Callendar effect.

Because it is a greenhouse gas, elevated CO2 levels contribute to additional absorption and emission of thermal infrared in the atmosphere, which produce net warming. According to the latest Assessment Report from the Intergovernmental Panel on Climate Change, "most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations".[22] Over the past 800,000 years,[23] ice core data shows unambiguously that carbon dioxide has varied from values as low as 180 parts per million (ppm) to the pre-industrial level of 270 ppm.[24] Paleoclimatologists consider variations in carbon dioxide to be a fundamental factor in controlling climate variations over this time scale.[25][26]
Real greenhouses
The "greenhouse effect" is named by analogy to greenhouses. The greenhouse effect and a real greenhouse are similar in that they both limit the rate of thermal energy flowing out of the system, but the mechanisms by which heat is retained are different.[27] A greenhouse works primarily by preventing absorbed heat from leaving the structure through convection, i.e. sensible heat transport. The greenhouse effect heats the earth because greenhouse gases absorb outgoing radiative energy and re-emit some of it back towards earth. A greenhouse is built of any material that passes sunlight, usually glass, or plastic. It mainly heats up because the Sun warms the ground inside, which then warms the air in the greenhouse. The air continues to heat because it is confined within the greenhouse, unlike the environment outside the greenhouse where warm air near the surface rises and mixes with cooler air aloft. This can be demonstrated by opening a small window near the roof of a greenhouse: the temperature will drop considerably. It has also been demonstrated experimentally (R. W. Wood, 1909) that a "greenhouse" with a cover of rock salt (which is transparent to infra red) heats up an enclosure similarly to one with a glass cover.[3] Thus greenhouses work primarily by preventing convective cooling.[4][28] In the greenhouse effect, rather than retaining (sensible) heat by physically preventing movement of the air, greenhouse gases act to warm the Earth by re-radiating some of the energy back towards the surface. This process may exist in real greenhouses, but is comparatively unimportant there.
Global warming
For scientific and political disputes, see Global warming controversy, Scientific opinion on climate change and Public opinion on climate change. For past climate change see Paleoclimatology and Geologic temperature record. For the Sonny Rollins album see Global Warming (album).
Global mean land-ocean temperature change from 1880-2010, relative to the 1951-1980 mean. The black line is the annual mean and the red line is the 5-year running mean. The green bars show uncertainty estimates. Source: NASA GISS
The map shows the 10-year average (2000-2009) global mean temperature anomaly relative to the 1951-1980 mean. The largest temperature increases are in the Arctic and the Antarctic Peninsula. Source: NASA Earth Observatory [1]
Fossil fuel related CO2 emissions compared to five of IPCC's emissions scenarios. The dips are related to global recessions. Data from IPCC SRES scenarios; the data spreadsheet included with the International Energy Agency's "CO2 Emissions from Fuel Combustion 2010 - Highlights"; and supplemental IEA data. Image source: Skeptical Science
Global warming is the continuing rise in the average temperature of Earth's atmosphere and oceans. Global warming is caused by increased concentrations of greenhouse gases in the atmosphere, resulting from human activities such as deforestation and burning of fossil fuels.[2][3] This finding is recognized by the national science academies of all the major industrialized countries and is not disputed by any scientific body of national or international standing.[4][5][A]
The instrumental temperature record shows that the average global surface temperature increased by 0.74 °C (1.33 °F) during the 20th century.[6] Climate model projections are summarized in the 2007 Fourth Assessment Report (AR4) by the Intergovernmental Panel on Climate Change (IPCC). They indicate that during the 21st century the global surface temperature is likely to rise a further 1.5 to 1.9 °C (2.7 to 3.4 °F) for their lowest emissions scenario and 3.4 to 6.1 °C (6.1 to 11 °F) for their highest.[7] The ranges of these estimates arise from the use of models with differing sensitivity to greenhouse gas concentrations.[8][9]

An increase in global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, with a probable expansion of subtropical deserts.[10] Warming is expected to be strongest in the Arctic and would be associated with continuing retreat of glaciers, permafrost and sea ice. Other likely effects of the warming include more frequent occurrence of extreme weather events including heatwaves, droughts and heavy rainfall events, species extinctions due to shifting temperature regimes, and changes in agricultural yields. Warming and related changes will vary from region to region around the globe, though the nature of these regional changes is uncertain.[11] In a 4 °C world, the limits for human adaptation are likely to be exceeded in many parts of the world, while the limits for adaptation for natural systems would largely be exceeded throughout the world. Hence, the ecosystem services upon which human livelihoods depend would not be preserved.[12]

Proposed responses to global warming include mitigation to reduce emissions, adaptation to the effects of global warming, and geoengineering to remove greenhouse gases from the atmosphere or reflect incoming solar radiation back to space. The main international mitigation effort is the Kyoto Protocol, which seeks to stabilize greenhouse gas concentrations to prevent a "dangerous anthropogenic interference".[13] As of May 2010, 192 states had ratified the protocol.[14] The only members of the UNFCCC that were asked to sign the treaty but have not yet ratified it are the USA and Afghanistan.
Contents
1 Temperature changes 2 External forcings 2.1 Greenhouse gases 2.2 Particulates and soot 2.3 Solar variation
3 Feedback 4 Climate models 5 Attributed and expected effects 5.1 Natural systems 5.2 Ecological systems 5.3 Species migration
7 Views on global warming 7.1 Global warming controversy 7.2 Politics 7.3 Public opinion 7.4 Other views
Temperature changes
Main article: Temperature record
Two millennia of mean surface temperatures according to different reconstructions from climate proxies, each smoothed on a decadal scale, with the instrumental temperature record overlaid in black.
Evidence for warming of the climate system includes observed increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level.[15][16][17]
The Earth's average surface temperature, expressed as a linear trend, rose by 0.74 ± 0.18 °C over the period 1906–2005. The rate of warming over the last half of that period was almost double that for the period as a whole (0.13 ± 0.03 °C per decade, versus 0.07 ± 0.02 °C per decade). The urban heat island effect is estimated to account for about 0.002 °C of warming per decade since 1900.[18] Temperatures in the lower troposphere have increased between 0.13 and 0.22 °C (0.22 and 0.4 °F) per decade since 1979, according to satellite temperature measurements. Climate proxies show the temperature to have been relatively stable over the one or two thousand years before 1850, with regionally varying fluctuations such as the Medieval Warm Period and the Little Ice Age.[19]

Recent estimates by NASA's Goddard Institute for Space Studies (GISS) and the National Climatic Data Center show that 2005 and 2010 tied for the planet's warmest year since reliable, widespread instrumental measurements became available in the late 19th century, exceeding 1998 by a few hundredths of a degree.[20][21][22] Current estimates by the Climatic Research Unit (CRU) show 2005 as the second warmest year, behind 1998, with 2003 and 2010 tied for third warmest year; however, "the error estimate for individual years ... is at least ten times larger than the differences between these three years."[23] The World Meteorological Organization (WMO) statement on the status of the global climate in 2010 explains that "The 2010 nominal value of +0.53 °C ranks just ahead of those of 2005 (+0.52 °C) and 1998 (+0.51 °C), although the differences between the three years are not statistically significant..."[24] Temperatures in 1998 were unusually warm because the strongest El Niño in the past century occurred during that year.[25] Global temperature is subject to short-term fluctuations that overlay long-term trends and can temporarily mask them. The relative stability in temperature from 2002 to 2009 is consistent with such an episode.[26][27]

Temperature changes vary over the globe. Since 1979, land temperatures have increased about twice as fast as ocean temperatures (0.25 °C per decade against 0.13 °C per decade).[28] Ocean temperatures increase more slowly than land temperatures because of the larger effective heat capacity of the oceans and because the ocean loses more heat by evaporation.[29] The Northern Hemisphere warms faster than the Southern Hemisphere because it has more land and because it has extensive areas of seasonal snow and sea-ice cover subject to ice-albedo feedback. Although more greenhouse gases are emitted in the Northern than the Southern Hemisphere, this does not contribute to the difference in warming because the major greenhouse gases persist long enough to mix between hemispheres.[30] The thermal inertia of the oceans and slow responses of other indirect effects mean that climate can take centuries or longer to adjust to changes in forcing. Climate commitment studies indicate that even if greenhouse gases were stabilized at 2000 levels, a further warming of about 0.5 °C (0.9 °F) would still occur.[31]
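As a quick arithmetic cross-check of the quoted rates (a sketch using only the numbers above, nothing from outside the text):

    total_rise = 0.74                  # C over 1906-2005, from the text above
    whole_period = total_rise / 10.0   # C per decade over ~100 years
    second_half = 0.13                 # C per decade, quoted for the later half
    print(whole_period)                # ~0.074, matching the quoted 0.07 +/- 0.02
    print(second_half / whole_period)  # ~1.8, i.e. "almost double"
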
External forcings
Greenhouse effect schematic showing energy flows between space, the atmosphere, and Earth's surface. Energy exchanges are expressed in watts per square meter (W/m²).
This graph, known as the "Keeling Curve", shows the long-term increase of atmospheric carbon dioxide (CO2) concentrations from 1958 to 2008. Monthly CO2 measurements display seasonal oscillations in an upward trend; each year's maximum occurs during the Northern Hemisphere's late spring, and declines during its growing season as plants remove some atmospheric CO2.
External forcing refers to processes external to the climate system (though not necessarily external to Earth) that influence climate. Climate responds to several types of external forcing, such as radiative forcing due to changes in atmospheric composition (mainly greenhouse gas concentrations), changes in solar luminosity, volcanic eruptions, and variations in Earth's orbit around the Sun.[32] Attribution of recent climate change focuses on the first three types of forcing. Orbital cycles vary slowly over tens of thousands of years and at present are in an overall cooling trend which would be expected to lead towards an ice age, but the 20th century instrumental temperature record shows a sudden rise in global temperatures.[33]
Greenhouse gases
Main articles: Greenhouse gas, Greenhouse effect, Radiative forcing, and Carbon dioxide in Earth's atmosphere
The greenhouse effect is the process by which absorption and emission of infrared radiation by gases in the atmosphere warm a planet's lower atmosphere and surface. It was proposed by Joseph Fourier in 1824 and was first investigated quantitatively by Svante Arrhenius in 1896.[34] Naturally occurring amounts of greenhouse gases have a mean warming effect of about 33 °C (59 °F).[35][C] The major greenhouse gases are water vapor, which causes about 36–70 percent of the greenhouse effect; carbon dioxide (CO2), which causes 9–26 percent; methane (CH4), which causes 4–9 percent; and ozone (O3), which causes 3–7 percent.[36][37][38] Clouds also affect the radiation balance through cloud forcings similar to greenhouse gases.
Human activity since the Industrial Revolution has increased the amount of greenhouse gases in the atmosphere, leading to increased radiative forcing from CO2, methane, tropospheric ozone, CFCs and nitrous oxide. The concentrations of CO2 and methane have increased by 36% and 148% respectively since 1750.[39] These levels are much higher than at any time during the last 800,000 years, the period for which reliable data has been extracted from ice cores.[40][41][42][43] Less direct geological evidence indicates that CO2 values higher than this were last seen about 20 million years ago.[44] Fossil fuel burning has produced about three-quarters of the increase in CO2 from human activity over the past 20 years. The rest of this increase is caused mostly by changes in land-use, particularly deforestation.[45]
Over the last three decades of the 20th century, gross domestic product per capita and population growth were the main drivers of increases in greenhouse gas emissions.[46] CO2 emissions are continuing to rise due to the burning of fossil fuels and land-use change.[47][48]:71 Emissions can be attributed to different regions. The two figures opposite show annual greenhouse gas emissions for the year 2005, including land-use change. Attribution of emissions due to land-use change is a controversial issue.[49]:93[50]:289

Emissions scenarios, i.e. estimates of changes in future emission levels of greenhouse gases, depend upon uncertain economic, sociological, technological, and natural developments.[51] In most scenarios, emissions continue to rise over the century, while in a few, emissions are reduced.[52][53] Fossil fuel reserves are abundant, and will not limit carbon emissions in the 21st century.[54] Emission scenarios, combined with modelling of the carbon cycle, have been used to produce estimates of how atmospheric concentrations of greenhouse gases might change in the future. Using the six IPCC SRES "marker" scenarios, models suggest that by the year 2100, the atmospheric concentration of CO2 could range between 541 and 970 ppm.[55] This is an increase of 90–250% above the concentration in the year 1750.

The popular media and the public often confuse global warming with the ozone hole, i.e., the destruction of stratospheric ozone by chlorofluorocarbons.[56][57] Although there are a few areas of linkage, the relationship between the two is not strong. Reduced stratospheric ozone has had a slight cooling influence on surface temperatures, while increased tropospheric ozone has had a somewhat larger warming effect.[58]
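The 90–250% range can be reproduced from the projected concentrations, taking roughly 280 ppm for the 1750 level (an assumed round value; the article elsewhere quotes a pre-industrial level of about 270 ppm):

    PREINDUSTRIAL_PPM = 280.0  # assumed round value for the 1750 concentration
    for projected_ppm in (541.0, 970.0):
        pct_above = (projected_ppm / PREINDUSTRIAL_PPM - 1.0) * 100.0
        print(projected_ppm, round(pct_above))  # -> ~93% and ~246%, i.e. ~90-250%
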
Particulates and soot
Ship tracks over the Atlantic Ocean on the east coast of the United States. The climatic impacts from particulate forcing could have a large effect on climate through the indirect effect.
Global dimming, a gradual reduction in the amount of global direct irradiance at the Earth's surface, has partially counteracted global warming from 1960 to the present.[59] The main cause of this dimming is particulates produced by volcanoes and human-made pollutants, which exert a cooling effect by increasing the reflection of incoming sunlight. The effects of the products of fossil fuel combustion, CO2 and aerosols, have largely offset one another in recent decades, so that net warming has been due to the increase in non-CO2 greenhouse gases such as methane.[60] Radiative forcing due to particulates is temporally limited due to wet deposition, which gives them an atmospheric lifetime of about one week. Carbon dioxide has a lifetime of a century or more, and as such, changes in particulate concentrations will only delay climate changes due to carbon dioxide.[61]

In addition to their direct effect by scattering and absorbing solar radiation, particulates have indirect effects on the radiation budget.[62] Sulfates act as cloud condensation nuclei and thus lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets, a phenomenon known as the Twomey effect.[63] This effect also causes droplets to be of more uniform size, which reduces the growth of raindrops and makes the cloud more reflective to incoming sunlight, known as the Albrecht effect.[64] Indirect effects are most noticeable in marine stratiform clouds, and have very little radiative effect on convective clouds. Indirect effects of particulates represent the largest uncertainty in radiative forcing.[65]

Soot may cool or warm the surface, depending on whether it is airborne or deposited. Atmospheric soot directly absorbs solar radiation, which heats the atmosphere and cools the surface. In isolated areas with high soot production, such as rural India, as much as 50% of surface warming due to greenhouse gases may be masked by atmospheric brown clouds.[66] When deposited, especially on glaciers or on ice in arctic regions, the lower surface albedo can also directly heat the surface.[67] The influences of particulates, including black carbon, are most pronounced in the tropics and sub-tropics, particularly in Asia, while the effects of greenhouse gases are dominant in the extratropics and southern hemisphere.[68]
Solar variation
Main article: Solar variation
Variations in solar output have been the cause of past climate changes.[69] The effect of changes in solar forcing in recent decades is uncertain, but small, with some studies showing a slight cooling effect,[70] while other studies suggest a slight warming effect.[32][71][72][73]

Greenhouse gases and solar forcing affect temperatures in different ways. While both increased solar activity and increased greenhouse gases are expected to warm the troposphere, an increase in solar activity should warm the stratosphere while an increase in greenhouse gases should cool the stratosphere.[32] Radiosonde (weather balloon) data show the stratosphere has cooled over the period since observations began (1958), though there is greater uncertainty in the early radiosonde record. Satellite observations, which have been available since 1979, also show cooling.[74]

A related hypothesis, proposed by Henrik Svensmark, is that magnetic activity of the Sun deflects cosmic rays that may influence the generation of cloud condensation nuclei and thereby affect the climate.[75] Other research has found no relation between warming in recent decades and cosmic rays.[76][77] The influence of cosmic rays on cloud cover is about a factor of 100 lower than needed to explain the observed changes in clouds or to be a significant contributor to present-day climate change.[78]

Studies in 2011 have indicated that solar activity may be slowing, and that the next solar cycle could be delayed. To what extent is not yet clear; Solar Cycle 25 is due to start in 2020, but may be delayed to 2022 or even longer. It is even possible that the Sun could be heading towards another Maunder Minimum. While there is not yet a definitive link between solar sunspot activity and global temperatures, the scientists conducting the solar activity study believe that global greenhouse gas emissions would prevent any possible cold snap.[79]
Feedback
Main article: Climate change feedback
Feedback is a process in which changing one quantity changes a second quantity, and the change in the second quantity in turn changes the first. Positive feedback increases the change in the first quantity while negative feedback reduces it. Feedback is important in the study of global warming because it may amplify or diminish the effect of a particular process.
The main positive feedback in the climate system is the water vapor feedback. The main negative feedback is radiative cooling through the Stefan–Boltzmann law, which increases as the fourth power of temperature. Positive and negative feedbacks are not imposed as assumptions in the models, but are instead emergent properties that result from the interactions of basic dynamical and thermodynamic processes. Imperfect understanding of feedbacks is a major cause of uncertainty and concern about global warming.[citation needed]
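The strength of that radiative-cooling feedback can be estimated by differentiating the Stefan–Boltzmann flux; a minimal sketch (standard constants, with the 255 K emission temperature assumed from the greenhouse-effect section above):

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
    T_EMIT = 255.0    # effective emission temperature, K (assumed)

    # Outgoing flux F = SIGMA * T**4, so each kelvin of warming adds
    # dF/dT = 4 * SIGMA * T**3 of extra cooling to space.
    planck_response = 4 * SIGMA * T_EMIT ** 3
    print(planck_response)  # ~3.8 W/m^2 per kelvin of warming

This fourth-power response is what ultimately stabilizes the climate against a given forcing; the other feedbacks modify how much warming is needed before the balance is restored.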
A wide range of potential feedback processes exist, such as Arctic methane release and ice-albedo feedback. Consequently, potential tipping points may exist, which may have the potential to cause abrupt climate change.[80]

For example, the "emission scenarios" used by the IPCC in its 2007 report primarily examined greenhouse gas emissions from human sources. In 2011, a joint study by NSIDC (US) and NOAA (US) calculated the additional greenhouse gas emissions that would emanate from melted and decomposing permafrost, even if policymakers attempt to reduce human emissions from the currently unfolding A1FI scenario to the A1B scenario.[81] The team found that even at the much lower level of human emissions, permafrost thawing and decomposition would still result in 190 ± 64 Gt C of permafrost carbon being added to the atmosphere on top of the human sources. Importantly, the team made three extremely conservative assumptions: (1) that policymakers will embrace the A1B scenario instead of the currently unfolding A1FI scenario, (2) that all of the carbon would be released as carbon dioxide instead of methane, which is more likely and, over a 20-year lifetime, has 72 times the greenhouse warming power of CO2, and (3) that their model did not project additional temperature rise caused by the release of these additional gases.[81][82] These very conservative permafrost carbon dioxide emissions are equivalent to about half of all carbon released from fossil fuel burning since the dawn of the Industrial Age,[83] and are enough to raise atmospheric concentrations by an additional 87 ± 29 ppm, beyond human emissions. Once initiated, permafrost carbon forcing (PCF) is irreversible, is strong compared to other global sources and sinks of atmospheric CO2, and due to thermal inertia will continue for many years even if atmospheric warming stops.[81] A great deal of this permafrost carbon is actually being released as highly flammable methane instead of carbon dioxide.[84] The IPCC's 2007 temperature projections did not take any of the permafrost carbon emissions into account and therefore underestimate the degree of expected climate change.[81][82]

Other research published in 2011 found that increased emissions of methane could instigate significant feedbacks that amplify the warming attributable to the methane alone. The researchers found that a 2.5-fold increase in methane emissions would cause indirect effects that increase the warming 250% above that of the methane alone. For a 5.2-fold increase, the indirect effects would be 400% of the warming from the methane alone.[85]
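The 87 ± 29 ppm figure is roughly consistent with the quoted carbon mass under the common approximation that 1 ppm of atmospheric CO2 corresponds to about 2.13 Gt of carbon (the conversion factor is an assumption here, not taken from the cited study, which will have used its own carbon-cycle accounting):

    GT_CARBON_PER_PPM = 2.13  # assumed conversion: 1 ppm CO2 ~ 2.13 Gt carbon
    for gt_c in (190 - 64, 190, 190 + 64):
        print(gt_c, round(gt_c / GT_CARBON_PER_PPM))  # -> ~59, ~89, ~119 ppm

These back-of-envelope values bracket the study's 58–116 ppm range, so the two quoted numbers hang together.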
Climate models
Main article: Global climate model
Calculations of global warming prepared in or before 2001 from a range of climate models under the SRES A2 emissions scenario, which assumes no action is taken to reduce emissions and regionally divided economic development.
The geographic distribution of surface warming during the 21st century calculated by the HadCM3 climate model if a business-as-usual scenario is assumed for economic growth and greenhouse gas emissions. In this figure, the globally averaged warming corresponds to 3.0 °C (5.4 °F).
A climate model is a computerized representation of the five components of the climate system: atmosphere, hydrosphere, cryosphere, land surface, and biosphere.[86] Such models are based on physical principles including fluid dynamics, thermodynamics and radiative transfer. There can be components which represent air movement, temperature, clouds, and other atmospheric properties; ocean temperature, salt content, and circulation; ice cover on land and sea; the transfer of heat and moisture from soil and vegetation to the atmosphere; chemical and biological processes; and others.[87]

Although researchers attempt to include as many processes as possible, simplifications of the actual climate system are inevitable because of the constraints of available computer power and limitations in knowledge of the climate system. Results from models can also vary due to different greenhouse gas inputs and the model's climate sensitivity. For example, the uncertainty in the IPCC's 2007 projections is caused by (1) the use of multiple models with differing sensitivity to greenhouse gas concentrations, (2) the use of differing estimates of humanity's future greenhouse gas emissions, and (3) any additional emissions from climate feedbacks that were not included in the models the IPCC used to prepare its report, e.g., greenhouse gas releases from permafrost.[81]

The models do not assume the climate will warm due to increasing levels of greenhouse gases. Instead the models predict how greenhouse gases will interact with radiative transfer and other physical processes. One of the mathematical results of these complex equations is a prediction whether warming or cooling will occur.[88] Recent research has called special attention to the need to refine models with respect to the effect of clouds[89] and the carbon cycle.[90][91][92]
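At the opposite extreme of complexity from the models described above, a zero-dimensional energy-balance model captures the bare logic of predicting temperature from physics rather than assuming it. The sketch below is purely illustrative (all constants assumed; the emissivity is hand-tuned and stands in for the greenhouse effect):

    S = 1361.0         # solar constant, W/m^2 (assumed)
    ALBEDO = 0.30      # planetary albedo (assumed)
    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
    EMISSIVITY = 0.61  # effective IR emissivity; stand-in for the greenhouse effect

    # Balance absorbed sunlight against emitted infrared and solve for T:
    # (1 - ALBEDO) * S / 4 = EMISSIVITY * SIGMA * T**4
    t_surface = ((1 - ALBEDO) * S / (4 * EMISSIVITY * SIGMA)) ** 0.25
    print(t_surface)  # ~288 K, near the observed global mean surface temperature

Real models replace the hand-tuned emissivity with explicit radiative transfer through resolved atmospheric layers, which is why warming emerges from the equations rather than being assumed.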
Models are also used to help investigate the causes of recent climate change by comparing the observed changes to those that the models project from various natural and human-derived causes. Although these models do not unambiguously attribute the warming that occurred from approximately 1910 to 1945 to either natural variation or human effects, they do indicate that the warming since 1970 is dominated by man-made greenhouse gas emissions.[32] The physical realism of models is tested by examining their ability to simulate current or past climates.[93] Current climate models produce a good match to observations of global temperature changes over the last century, but do not simulate all aspects of climate.[45] Not all effects of global warming are accurately predicted by the climate models used by the IPCC. Observed Arctic shrinkage has been faster than predicted.[94] Precipitation increased proportionally to atmospheric humidity, and hence significantly faster than current global climate models predict.[95][96]
Sparse records indicate that glaciers have been retreating since the early 1800s. In the 1950s measurements began that allow the monitoring of glacial mass balance, reported to the WGMS and the NSIDC.
Natural systems
Global warming has been detected in a number of systems. Some of these changes, e.g., based on the instrumental temperature record, have been described in the section on temperature changes. Rising sea levels and observed decreases in snow and ice extent are consistent with warming.[17] Most of the increase in global average temperature since the mid-20th century is, with high probability,[D] attributable to human-induced changes in greenhouse gas concentrations.[99] Even with current policies to reduce emissions, global emissions are still expected to continue to grow over the coming decades.[100] Over the course of the 21st century, increases in emissions at or above their current rate would very likely induce changes in the climate system larger than those observed in the 20th century.
In the IPCC Fourth Assessment Report, across a range of future emission scenarios, model-based estimates of sea level rise for the end of the 21st century (the year 2090–2099, relative to 1980–1999) range from 0.18 to 0.59 m. These estimates, however, were not given a likelihood due to a lack of scientific understanding, nor was an upper bound given for sea level rise. Over the course of centuries to millennia, the melting of ice sheets could result in sea level rise of 4–6 m or more.[101] Changes in regional climate are expected to include greater warming over land, with most warming at high northern latitudes, and least warming over the Southern Ocean and parts of the North Atlantic Ocean.[100] Snow cover area and sea ice extent are expected to decrease, with the Arctic expected to be largely ice-free in September by 2037.[102] The frequency of hot extremes, heat waves, and heavy precipitation will very likely increase.
Ecological systems
In terrestrial ecosystems, the earlier timing of spring events, and poleward and upward shifts in plant and animal ranges, have been linked with high confidence to recent warming.[17] Future climate change is expected to particularly affect certain ecosystems, including tundra, mangroves, and coral reefs.[100] It is expected that most ecosystems will be affected by higher atmospheric CO2 levels, combined with higher global temperatures.[103] Overall, it is expected that climate change will result in the extinction of many species and reduced diversity of ecosystems.[104]
Species migration
In 2010, a gray whale was found in the Mediterranean Sea, even though the species had not been seen in the North Atlantic Ocean since the 18th century. The whale is thought to have migrated from the Pacific Ocean via the Arctic. Climate Change & European Marine Ecosystem Research (CLAMER) has also reported that the Neodenticula seminae alga has been found in the North Atlantic, where it had gone extinct nearly 800,000 years ago. The alga has drifted from the Pacific Ocean through the Arctic, following the reduction in polar ice.[105] In the Siberian sub-arctic, species migration is contributing to another warming albedo feedback, as needle-shedding larch trees are being replaced with dark-foliage evergreen conifers which can absorb some of the solar radiation that previously reflected off the snowpack beneath the forest canopy.[106][107]
Social systems
Vulnerability of human societies to climate change lies mainly in the effects of extreme weather events rather than gradual climate change.[108] Impacts of climate change so far include adverse effects on small islands,[109] adverse effects on indigenous populations in high-latitude areas,[110] and small but discernible effects on human health.[111] Over the 21st century, climate change is likely to adversely affect hundreds of millions of people through increased coastal flooding, reductions in water supplies, increased malnutrition and increased health impacts.[112]
Future warming of around 3 °C (by 2100, relative to 1990–2000) could result in increased crop yields in mid- and high-latitude areas, but in low-latitude areas, yields could decline, increasing the risk of malnutrition.[109] A similar regional pattern of net benefits and costs could occur for economic (market-sector) effects.[111] Warming above 3 °C could result in crop yields falling in temperate regions, leading to a reduction in global food production.[113] Most economic studies suggest losses of world gross domestic product (GDP) for this magnitude of warming.[114][115] Some areas of the world would start to surpass the wet-bulb temperature limit of human survivability with global warming of about 6.7 °C (12 °F), while a warming of 11.7 °C (21 °F) would put half of the world's population in an uninhabitable environment.[116][117] In practice, the survivable limit of global warming in these areas is probably lower, and some areas may experience lethal wet-bulb temperatures even earlier, because this study conservatively projected the survival limit for persons who are out of the sun, in gale-force winds, doused with water, wearing no clothing, and not working.[117]
Adaptation
Main article: Adaptation to global warming
Other policy responses include adaptation to climate change. Adaptation may be planned, e.g., by local or national government, or spontaneous, i.e., undertaken privately without government intervention.[123] The ability to adapt is closely linked to social and economic development.[119] Even societies with high capacities to adapt are still vulnerable to climate change. Planned adaptation is already occurring on a limited basis. The barriers, limits, and costs of future adaptation are not fully understood.
Geoengineering
Another policy response is deliberate modification of the climate system, known as geoengineering; it is sometimes grouped together with mitigation.[124] Geoengineering encompasses a range of techniques to remove CO2 from the atmosphere or to reflect incoming sunlight. Although some proposed geoengineering techniques are well understood, the most promising remain under development, and reliable cost estimates for them have not yet been published.[125] As most geoengineering techniques would affect the entire globe, deployment would likely require global public acceptance and an adequate global legal and regulatory framework, as well as significant further scientific research.[126]
Politics
Most countries are Parties to the United Nations Framework Convention on Climate Change (UNFCCC).[132] The ultimate objective of the Convention is to prevent "dangerous" human interference with the climate system.[133] As stated in the Convention, this requires that GHG concentrations be stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can proceed in a sustainable fashion. The Framework Convention was agreed in 1992, but since then, global emissions have risen.[127][134]
During negotiations, the G77 (a lobbying group in the United Nations representing 133 developing nations)[135]:4 pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions.[136] This was justified on the basis that the developed world's emissions had contributed most to the stock of GHGs in the atmosphere; per-capita emissions (i.e., emissions per head of population) were still relatively low in developing countries; and the emissions of developing countries would grow to meet their development needs.[50]:290 This mandate was sustained in the Kyoto Protocol to the Framework Convention,[50]:290 which entered into legal effect in 2005.[137] In ratifying the Kyoto Protocol, most developed countries accepted legally binding commitments to limit their emissions. These first-round commitments expire in 2012.[137] US President George W. Bush rejected the treaty on the basis that "it exempts 80% of the world, including major population centers such as China and India, from compliance, and would cause serious harm to the US economy."[135]:5
At the 15th UNFCCC Conference of the Parties, held in 2009 at Copenhagen, several UNFCCC Parties produced the Copenhagen Accord.[138] Parties associated with the Accord (140 countries, as of November 2010)[139]:9 aim to limit the future increase in global mean temperature to below 2 °C.[140] A preliminary assessment published in November 2010 by the United Nations Environment Programme (UNEP) suggests a possible "emissions gap" between the voluntary pledges made in the Accord and the emissions cuts necessary to have a "likely" (greater than 66% probability) chance of meeting the 2 °C objective.[139]:10-14 The UNEP assessment takes the 2 °C objective as being measured against the pre-industrial global mean temperature level. To have a likely chance of meeting the 2 °C objective, the assessed studies generally indicated the need for global emissions to peak before 2020, with substantial declines in emissions thereafter.
The 16th Conference of the Parties (COP16) was held at Cancún in 2010. It produced an agreement, not a binding treaty, that the Parties should take urgent action to reduce greenhouse gas emissions to meet a goal of limiting global warming to 2 °C above pre-industrial temperatures. It also recognized the need to consider strengthening the goal to a global average rise of 1.5 °C.[141]
Public opinion
In 2007-2008, Gallup polled 127 countries. Over a third of the world's population was unaware of global warming, with people in developing countries less aware than those in developed countries, and those in Africa the least aware. Of those aware, Latin America led in the belief that temperature changes are a result of human activities, while Africa, parts of Asia and the Middle East, and a few countries of the former Soviet Union led in the opposite belief.[142] In the Western world, opinions over the concept and the appropriate responses are divided. Nick Pidgeon of Cardiff University said that "results show the different stages of engagement about global warming on each side of the Atlantic", adding, "The debate in Europe is about what action needs to be taken, while many in the U.S. still debate whether climate change is happening."[143][144]
A 2010 poll by the Office for National Statistics found that 75% of UK respondents were at least "fairly convinced" that the world's climate is changing, compared to 87% in a similar survey in 2006.[145] A January 2011 ICM poll in the UK found 83% of respondents viewed climate change as a current or imminent threat, while 14% said it was no threat. Opinion was unchanged from an August 2009 poll asking the same question, though there had been a slight polarisation of opposing views.[146]
A survey in October 2009 by the Pew Research Center for the People & the Press showed decreasing public perception in the United States that global warming was a serious problem. All political persuasions showed reduced concern, with the lowest concern among Republicans, only 35% of whom considered there to be solid evidence of global warming.[147] The cause of this marked difference in public opinion between the United States and the global public is uncertain, but the hypothesis has been advanced that clearer communication by scientists, both directly and through the media, would help to adequately inform the American public of the scientific consensus and the basis for it.[148] The U.S. public appears to be unaware of the extent of scientific consensus regarding the issue, with 59% believing that scientists disagree "significantly" on global warming.[149]
By 2010, with 111 countries surveyed, Gallup determined that there was a substantial decrease in the number of Americans and Europeans who viewed global warming as a serious threat. In the United States, a little over half the population (53%) viewed it as a serious concern for either themselves or their families, ten percentage points below the 2008 poll (63%). Latin America had the biggest rise in concern, with 73% saying global warming was a serious threat to their families.[150] That global poll also found that people are more likely to attribute global warming to human activities than to natural causes, except in the USA, where nearly half (47%) of the population attributed global warming to natural causes.[151] On the other hand, in May 2011 a joint poll by Yale and George Mason Universities found that nearly half the people in the USA (47%) attribute global warming to human activities, compared to 36% blaming it on natural causes. Only 5% of the 35% who were "disengaged", "doubtful", or "dismissive" of global warming were aware that 97% of publishing US climate scientists agree global warming is happening and is primarily caused by humans.[152] Researchers at the University of Michigan have found that the public's beliefs about the causes of global warming depend on the wording used in the polls.[153] In the United States, according to the Public Policy Institute of California's (PPIC) eleventh annual survey on environmental policy issues, 75% said they believe global warming is a very serious or somewhat serious threat to the economy and quality of life in California.[154]
Other views
Most scientists accept that humans are contributing to observed climate change.[47][155] National science academies have called on world leaders for policies to cut global emissions.[156] However, some scientists and non-scientists question aspects of climate-change science.[157][158][155]
Organizations such as the libertarian Competitive Enterprise Institute, conservative commentators, and some companies such as ExxonMobil have challenged IPCC climate change scenarios, funded scientists who disagree with the scientific consensus, and provided their own projections of the economic cost of stricter controls.[159][160][161][162] In the finance industry, Deutsche Bank has set up an institutional climate change investment division (DBCCA),[163] which has commissioned and published research[164] on the issues and debate surrounding global warming.[165] Environmental organizations and public figures have emphasized changes in the current climate and the risks they entail, while promoting adaptation to changes in infrastructural needs and emissions reductions.[166] Some fossil fuel companies have scaled back their efforts in recent years,[167] or called for policies to reduce global warming.[168]
Etymology
The term global warming was probably first used in its modern sense on 8 August 1975, in a paper by Wally Broecker in the journal Science titled "Are we on the brink of a pronounced global warming?".[169][170][171] Broecker's choice of words was new and represented a significant recognition that the climate was warming; previously, scientists had used the phrase "inadvertent climate modification", because while it was recognized that humans could change the climate, no one was sure which direction it was going.[172] The National Academy of Sciences first used global warming in a 1979 paper called the Charney Report, which said: "if carbon dioxide continues to increase, [we find] no reason to doubt that climate changes will result and no reason to believe that these changes will be negligible."[173] The report distinguished between surface temperature changes, which it referred to as global warming, and other changes caused by increased CO2, which it called climate change.[172] The term global warming came into wider use after 1988, when NASA climate scientist James Hansen used it in testimony to Congress.[172] He said: "global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and the observed warming."[174] His testimony was widely reported, and afterward global warming was commonly used by the press and in public discourse.[172]