Human Influences on Weather and Climate
Human activity affects weather, climate, and the environment. Some human activity is harmless, but much of it degrades the environment. While the environment can absorb some abuse without long-term effects, much harmful human activity exceeds the environment's ability to recover. The most significant way that humans damage the environment is by emitting harmful chemicals into the air and water. This activity has wide-ranging results, such as increasing the temperature of the planet, lowering the quality of the air we breathe, and killing forests and aquatic animals. (Global warming, the worldwide increase in atmospheric temperature, is discussed in detail in "Climate Change and Global Warming".) Other environmental consequences include air pollution and smog, acid rain, and ozone depletion. Each of these has multiple effects, and there is mounting evidence that their impact on the environment has increased in recent years. The search for solutions to these environmental problems has yielded some results, including promising research on alternative energy sources.
Air pollution is the presence of high concentrations of undesirable gases and particles in the air. Although some air pollution is caused by natural processes, such as volcanic eruptions and wildfires, much is the result of human activity and is emitted by smokestacks and car exhaust systems. According to the U.S. Centers for Disease Control, air pollution contributes to between 50,000 and 120,000 deaths in the United States (primarily due to asthma, heart disease, and bronchitis) each year. Estimated yearly costs of treating illnesses in the United States triggered by air pollution are between $40 and $50 billion. In addition to posing a hazard to the health of humans and all other living things, air pollution creates unpleasant odors and diminishes the planet's natural beauty.
While air quality has improved in the last three decades, half of all U.S. citizens live in counties where air pollution exceeds national health standards. Every large city in the world currently experiences some degree of air pollution. In a report released in 2004, the American Lung Association reported that particulate matter (fine particles suspended in the air) is an especially difficult problem. According to the report:
- 81 million Americans live in areas with unhealthy short-term levels of particulates
- 66 million Americans live in areas with unhealthy year-round levels of particle pollution
- 136 million Americans live in counties with unhealthy levels of ozone
- 159 million Americans live in counties with one of the three conditions: unhealthy levels of ozone, unhealthy short-term levels of particulates, or unhealthy year-round levels of particulates
- 46 million Americans live in counties where all three levels are unhealthy
In a report published in 2001, the World Health Organization estimated that 1.5 billion city dwellers face levels of outdoor air pollution above the maximum recommended levels. According to the report, about half a million deaths each year can be attributed to particulate matter and sulfur dioxide in outdoor air alone. While air pollution is usually considered a problem of developed countries, as a result of their high level of industrial activity and vehicle use, more than 70 percent
of deaths from outdoor air pollution occur in the developing world. In developing nations, populations tend to be larger and pollution standards often are less strict than in the more developed nations.
Air pollution in history
Air pollution by humans is as old as the discovery of how to make fire. Air pollution first emerged as a public concern around the twelfth century, when legislation was passed in England restricting coal burning. By the mid-1600s, coal burning had noticeably worsened the quality of the air, particularly in London. By the mid-1800s, London's air was so thick with soot and smoke it was described as "pea soup."
London's air pollution was not only unsightly, it was deadly. In two incidents, one in 1873 and the other in 1911, the "pea-soup fog," sometimes called smog (combining the words "smoke" and "fog"), claimed nearly two thousand lives. It was not until a five-day bout of smog killed nearly four thousand Londoners in 1952 that legislation was passed to curb the pollution. The Clean Air Act of 1956 was an act of the Parliament of the United Kingdom in response to the Great Smog of 1952. The act banned the burning of peat and soft coal, which are relatively cheap and easy to obtain, but produce excessive amounts of soot and smoke. Instead, households were encouraged to burn gas, oil, or hard coal (anthracite), or convert to electrical heat.
Air pollution is not limited to Britain. The problem exists worldwide, especially where industrialization is coupled with lax, unenforced, or ineffective environmental regulations.
The United States has had a problem with air pollution for a century or more. Soot from burning coal blanketed St. Louis, Missouri, and Pittsburgh, Pennsylvania, in the first half of the twentieth century. This type of pollution has given way in many large cities to photochemical smog, which is formed by the interaction of sunlight with unburned hydrocarbons from industrial processes, gas stations, petroleum processing, and automobile exhaust. The problem is not limited to urban areas; polluted air blankets many beautiful natural areas as well. Big Bend National Park in far west Texas is plagued each summer by haze that may come from as far away as the petroleum processing facilities along the Texas Gulf Coast.
An air pollutant is any harmful substance that exists in the atmosphere at concentrations great enough to endanger the health of
living organisms. An air pollutant may take the form of a gas, a liquid, or a solid (such as particulates). When a pollutant is emitted directly into the air, it is called a primary air pollutant. A primary air pollutant may undergo chemical reactions with water, sunlight, or other pollutants, and produce additional pollutants, called secondary air pollutants.
Some substances that exist naturally in the air in small concentrations, such as carbon monoxide and sulfur dioxide, are considered pollutants at higher concentrations. Other substances that do not occur naturally, such as benzene, can cause damage to organisms even at very low concentrations.
Pollutants make their way into the air either through natural processes or by human activities. Examples of natural processes that put pollutants in the air include forest fires, dispersed pollen, wind-blown soil, volcanic eruptions, and organic decay.
The primary human-related cause of air pollution is motor vehicle emissions. Other examples of human-created pollution include: the combustion of fuel for generating heat and electricity in "stationary" sources such as houses, power plants, and office buildings; industrial processes such as paper mills, oil refineries, chemical production plants, and ore smelting; the breakdown of organic waste at landfills; and crop dusting with pesticides or insecticides.
The amount of air pollution produced in the United States is difficult to calculate accurately. The best estimates are that in 2001, somewhere
between 150 million and 300 million tons (130 to 270 million metric tons) of air pollutants were produced, an average of around 0.5 to 1.0 tons (0.45 to 0.89 metric tons) per year for every man, woman, and child living in the United States. About 63 million tons (56 million metric tons) of pollutants were produced by motor vehicles alone. Due in part to federal and state regulations designed to reduce air pollution, total emissions of select air pollutants in the United States decreased by 25 percent between 1970 and 2001.
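The per-person figures quoted above follow from simple division. As a rough sketch of the arithmetic (the 2001 U.S. population of roughly 285 million used here is an assumed round number for illustration, not a figure from this text):

```python
# Rough check of the per-person pollution arithmetic described above.
# The population figure is an assumed approximation for illustration.
US_POPULATION_2001 = 285_000_000  # people (approximate)

total_low = 150_000_000   # short tons of air pollutants (low estimate)
total_high = 300_000_000  # short tons of air pollutants (high estimate)

per_person_low = total_low / US_POPULATION_2001
per_person_high = total_high / US_POPULATION_2001

# Falls in the quoted range of roughly 0.5 to 1.0 tons per person.
print(f"{per_person_low:.2f} to {per_person_high:.2f} tons per person")
```

Dividing either end of the estimated range by the population reproduces the quoted 0.5 to 1.0 tons per person.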
Categories of major air pollutants and their effects
There are several different categories of air pollutants. One major category is particulate matter, which consists of solid or liquid particles tiny enough to be suspended in the air. Air pollution created by particulate matter is the most visible type of air pollution. Some forms of particulate matter, such as dust, smoke, and pollen, are irritating to humans but not toxic. Other types, such as asbestos fibers, arsenic, sulfuric acids, and a number of pesticides, are toxic. Long-term exposure to toxic particulate matter causes a variety of recurring health problems.
Certain types of heavy metal particulate matter, namely iron, copper, nickel, and lead, cause respiratory illnesses. Lead particles
accumulate in body tissues and can damage the central nervous system. At high enough concentrations, lead is fatal. Levels of lead in the air have been greatly reduced in recent years because of laws requiring the use of lead-free gasoline.
One of the most widespread gaseous air pollutants is carbon monoxide (CO). Carbon monoxide is produced by the combustion of carbon-containing fuels. It is emitted by car exhaust, home heating systems, and industrial smokestacks. Carbon monoxide is colorless, odorless, and poisonous. Due to recently adopted air-quality standards, CO levels in the United States have been reduced by about 40 percent since the early 1970s. When it collects in enclosed places, CO can cause unconsciousness, and even death.
Sulfur dioxide (SO2) is a colorless, but not odorless, polluting gas. It is produced during the burning of coal and oil, primarily in power plants, petroleum refineries, ore smelting plants, and paper mills. The gas causes a host of respiratory problems in humans and has been shown to reduce the yield of certain crops, such as lettuce and spinach. After clean air standards were enacted, SO2 concentrations in the United States declined by 48 percent between 1980 and 2005.
Methane is another gaseous air pollutant. It is also a greenhouse gas, which means it traps heat in the atmosphere and therefore contributes to the greenhouse effect, which warms Earth. Methane belongs to a class of organic compounds called hydrocarbons whose molecules are formed from chains of one or more carbon atoms with hydrogen atoms attached to the chain. Hydrocarbons that evaporate into the air easily are in a class called volatile organic compounds (VOCs). There are thousands of VOCs, which may be in solid, liquid, or gaseous states at room temperature. Some, such as methane, are naturally occurring and pose little danger to human health at low concentrations other than as a greenhouse gas. Other VOCs, such as formaldehyde and chlorofluorocarbons, are dangerous. Some VOCs, such as benzene and benzopyrene, are carcinogens (cancer-causing substances). The primary sources of VOCs that pose a threat to human health are emissions by motor vehicles and industrial processes.
Nitrogen oxides are also major air pollutants. The oxides of nitrogen include nitrogen dioxide (NO2) and nitric oxide (NO). These gases can be produced naturally, by the action of bacteria, but they are also produced during fuel combustion, through the combination of nitrogen and oxygen. Due to fuel combustion (which occurs in motor vehicles and industrial processes), nitrogen oxide levels are up to one hundred times greater in cities than outside of urban areas. While nitrogen oxides are usually colorless, at high enough concentrations (such as those that exist over Los Angeles), nitrogen dioxide takes on a reddish-brown color.
High levels of nitrogen oxides may cause a higher incidence of some types of cancer in humans and may make the body more susceptible to heart and lung disease. Nitrogen oxides also undergo reactions with other chemicals in the air that result in increased levels of photochemical smog.
In the United States, photochemical smog is commonly referred to simply as "smog." It is different from the pea-soup smog of London, which is a combination of sulfurous smoke and fog. Photochemical smog is a layer of air pollution near Earth's surface. This is the type of smog familiar to residents of Los Angeles and other large cities. The main component of smog is ozone, an odorless, colorless gas composed of three atoms of oxygen. Near-surface ozone is formed when nitrogen oxides and hydrocarbons (chemicals emitted by car
exhaust systems, coal-burning power plants, chemical plants, oil refineries, aircraft, and other sources) react with strong sunlight.
Surface ozone is characterized by the Environmental Protection Agency (EPA), the U.S. government agency that deals with air pollution, as "the most widespread and persistent urban pollution problem." Surface ozone differs from "good" ozone in Earth's upper atmosphere. Upper atmosphere ozone shields us from the Sun's harmful ultraviolet rays.
Photochemical smog is difficult to see at ground level. When looking down at it from above, however, it appears as a brown haze. Photochemical smog is irritating to the eyes and throat.
Smog in the United States, while down from its peak in 1988, is a continuing problem. In the 1980s, the federal government imposed a series of laws mandating cleaner exhaust from cars, smokestacks, and other sources of pollution. Those measures are credited for keeping the smog problem in check. At the same time that emission systems have become cleaner, however, there has also been a rise in the consumption of fossil fuels, like coal, oil, and natural gas. The increasing number of motorists, using greater amounts of gasoline, as well as the burning of increasing quantities of coal by utility companies, at least partially offsets any gains made in air quality.
The persistence of pollution by near-surface ozone (a primary component of photochemical smog) is particularly puzzling because emissions of the two ozone precursors, oxides of nitrogen and volatile organic compounds, have been declining in the United States and are tightly regulated by federal and state governments. Levels of other air pollutants regulated by the EPA, such as particulates, lead, sulfur dioxide, nitrogen dioxide, and carbon monoxide, have fallen by half or more since 1970.
In 2005, the EPA completed phasing in new standards for ozone, the so-called "smog threshold." The old standard allowed 120 parts per billion (ppb) of ozone averaged over a one-hour period. The new standard sets a limit of 80 ppb, averaged over a longer eight-hour period. (The old standard remains in effect for a region until that region has met it for three years in a row.) The U.S. government imposes penalties, such as withholding highway funds, on states that frequently surpass the threshold. Days on which the threshold is surpassed are called "ozone days."
Ozone concentrations of 80 ppb or higher are considered unhealthy to children, people with respiratory problems (such as asthma or emphysema), and adults who exercise or work vigorously outdoors. The EPA considers those categories of people "sensitive groups." Days on which ozone levels reach 80 ppb are considered "unhealthy air days." Ozone concentrations of 105 ppb or higher pose a health risk to the general population.
When the ozone concentration reaches 105 ppb, the government issues health advisories, cautioning people to remain indoors as much as possible and to avoid exertion outdoors. Such days are known as "shut-in days." The year 1988 logged the greatest number of ozone days for many eastern cities since smog record-keeping began in 1974. New York City, for example, had forty-three ozone days in 1988. Since 1988, the number of ozone days has generally declined. This is probably due to a reduction of smokestack emissions and different weather conditions in smog-prone areas.
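The day categories described above can be summarized as a small classifier. This is an illustrative sketch using only the 80 and 105 ppb thresholds given in the text; the function name is made up for this example, not an EPA convention:

```python
def ozone_day_category(eight_hour_avg_ppb: float) -> str:
    """Classify a day by its eight-hour average ozone concentration,
    using the 80 and 105 ppb thresholds described in the text."""
    if eight_hour_avg_ppb >= 105:
        # Health advisory for the general population: a "shut-in day."
        return "shut-in day"
    if eight_hour_avg_ppb >= 80:
        # Unhealthy for sensitive groups: an "unhealthy air day."
        return "unhealthy air day"
    return "within standard"

print(ozone_day_category(70))   # within standard
print(ozone_day_category(90))   # unhealthy air day
print(ozone_day_category(110))  # shut-in day
```

The ordering of the checks matters: the highest threshold is tested first so that a 110 ppb day is reported as a shut-in day rather than merely an unhealthy air day.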
The Air Quality Index
To simplify air quality reporting, the EPA has developed the Air Quality Index (AQI) for specific locations. The index is based on concentrations of each of four major air pollutants regulated by the Clean Air Act, passed by Congress in 1970 and updated several times since then: ground-level ozone, particle pollution (also known as particulate matter), carbon monoxide, and sulfur dioxide. Each is averaged over
some period of time, such as eight hours for ozone. According to the EPA, the use of a standardized measure makes it easier for people to decide when to take precautions. The AQI is a "yardstick" used to measure health risks. The AQI is set so that a value of 100 corresponds to the national air quality standard for that pollutant. An AQI below 50 represents good air quality. When the AQI is higher than 100, people in sensitive groups are encouraged to limit their time outdoors. An AQI of 151 or higher is considered unhealthy for the general population, in which case everyone is advised to limit their outdoor activity.
Nitrogen dioxide is no longer reported separately because the concentrations of nitrogen dioxide have remained very low for several years. However, nitrogen dioxide remains an important contributor to the formation of ozone, which is reported.
Under the Clean Air Act, metropolitan areas with populations over 350,000 are required to publicize the AQI when pollution levels are high; AQI reports are typically included in weather pages of newspapers. Some state and local air quality agencies declare "ozone action days" when the AQI reaches unhealthy levels. On ozone action days, residents are encouraged to limit automobile use, fill gas tanks only after dusk, conserve electricity, and reduce the use of air conditioners.
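The AQI boundaries described above can be sketched as a short lookup. This is a simplification that models only the category boundaries mentioned in the text; the EPA's official index defines additional levels and exact breakpoints:

```python
def aqi_advisory(aqi: int) -> str:
    """Map an AQI value to the advisories described in the text.
    Only the boundaries mentioned above are modeled; the EPA's
    official index has more categories and exact breakpoints."""
    if aqi >= 151:
        return "unhealthy for everyone; limit outdoor activity"
    if aqi > 100:
        return "sensitive groups should limit time outdoors"
    if aqi < 50:
        return "good air quality"
    return "moderate"

print(aqi_advisory(35))   # good air quality
print(aqi_advisory(120))  # sensitive groups should limit time outdoors
print(aqi_advisory(160))  # unhealthy for everyone; limit outdoor activity
```

Because 100 is pegged to the national standard for each pollutant, an AQI above 100 always means that at least one pollutant exceeded its standard over its averaging period.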
Record levels of smog set in 2002
Despite advances in reducing air pollution, record smog was recorded in 2002. According to a report released by the Public Interest Research Group (PIRG), a private research organization, smog monitors in forty-one states and the District of Columbia recorded unhealthy levels of air pollution on some 8,800 separate occasions in 2002. This was a 90 percent increase over 2001. The report found that every region of the country exceeded the national health standard for ozone more often in 2002 than in 2001, with the largest increases in midwestern, southeastern, and central states. California, Texas, and Tennessee led the nation in 2002 with the most smog days, which PIRG defines as days on which at least one ozone monitor in the state exceeds the national health standard. In addition, sixteen monitors at eleven national parks, including the Great Smoky Mountains and Yosemite, recorded levels of ozone in excess of the national health standard a total of 418 times during 2002.
Los Angeles: The bad news and the good news
Due to its large population and high number of sunny, warm days, Los Angeles, California, has the worst smog problem in the United States. Between 1980 and 2006 the population of the city of Los Angeles grew 38 percent, to over four million. During the 1980s, the number of miles being logged by motorists increased 75 percent; between 1990 and 2000, however, the increase was only 13 percent. Los Angeles leads the nation in the number of motor vehicles, with an average of 1.8 registered automobiles per licensed driver. But the main problem in Los Angeles is not the number of cars, miles driven, or number of drivers. The problem is widely considered to be congestion. Los Angeles drivers do not travel an excessive amount, but due to congestion, the number of hours spent idling in traffic is much greater. As a consequence, the skyline of downtown Los Angeles is blanketed in the summer by a nearly permanent reddish-brown haze.
There were sixty-two days in 1998 in which smog levels were above the federal threshold of 120 ppb in Los Angeles. Conditions improved slightly in 1999, and over the summer Los Angeles passed on the notorious distinction of having the worst summertime smog day to Texas City, Texas. The respite for Los Angeles was only temporary, however. Since 1999, Los Angeles and Southern California have regained the dubious status as the smoggiest place in the United States. In 2004, the American Lung Association listed the ten worst places to live. The Los Angeles area topped the list. Four of the worst five counties were in Southern California. The Texas Gulf Coast industrial complex was bumped back down to fifth place.
- Los Angeles-Riverside-Orange County, CA
- Fresno, CA
- Bakersfield, CA
- Visalia-Porterville, CA
- Houston-Baytown-Huntsville, TX
- Merced, CA
- Sacramento-Arden-Arcade-Truckee, CA-NV
- Hanford-Corcoran, CA
- Knoxville-Sevierville-La Follette, TN
- Dallas-Fort Worth, TX
In 1967, the U.S. Congress gave California the right to require tighter auto-emissions standards than the rest of the country. Los Angeles's cars and trucks, and its industry, are now among the cleanest in the nation; yet its pollution remains among the worst. Why is the smog in Los Angeles so bad?
Ozone, the main constituent of smog, is created when ultraviolet (UV) rays from the Sun stimulate chemical reactions between nitrogen oxides and volatile organic compounds (VOCs). These photochemical reactions increase as the concentrations of nitrogen oxides, VOCs, UV, and the air temperature all increase. The climate and topography (shape of the land surface) in Los Angeles combine to make it a near-perfect candidate for smog. For half the year, Los Angeles has hot and dry weather. Mountains to the north, east, and south trap the air from generally light ocean breezes. The nitrogen oxides and VOCs from traffic and industry accumulate, and smog forms under the action of the famous Southern California sun. It rarely rains in Southern California in the summer, so smog is not washed out of the air.
California continues to take steps to battle smog. The EPA recently granted California's request for a "preemption" of federal air standards for small nonroad engines such as lawn mowers. In addition, in 2005 California adopted a more stringent standard for ozone of 70 ppb averaged over an eight-hour period (the federal standard is 80 ppb averaged over an eight-hour period). From 2007 on, small engines sold in California will have to meet much stricter emissions requirements.
The effects of air pollution are most intense when an inversion exists in the atmosphere. An inversion occurs when a layer of warm air exists above a colder layer of surface air. The warm air acts like a lid, preventing the surface air from rising. An inversion may last for several days.
Normally, the concentration of pollutants in urban areas is moderated by the upward motion of air, as well as the horizontal motion of winds. When an inversion forms, however, it traps air pollutants close to the ground. After several days of an inversion, the concentration of smog may increase to the point that cars must drive with their lights on, even during the day, and people with asthma suffer a marked increase in attacks.
Smog and human health
Photochemical smog takes the greatest toll on children, people with asthma or other respiratory disorders, and people who do strenuous work or exercise outdoors. About one-third of all Americans belong to these groups, considered "sensitive groups" by the EPA. People in the sensitive groups are urged to limit their time outdoors and not exert themselves when ozone levels are high.
Ozone irritates the respiratory system, causing symptoms such as coughing, sneezing, sore throat, and difficulty taking deep breaths. Some people respond to smog with chest pains, burning eyes, headaches, and dizziness. In the presence of high levels of ozone, people with asthma are more prone to asthma attacks. One reason for this is that the allergic response to elements that trigger asthma attacks, such as dust mites, pollens, fungus, pets, and cockroaches, is heightened when ozone levels are high.
Ozone also has the potential to damage the lining of the lung. When cells in the lung die after a high ozone day, the lung repairs itself by manufacturing new cells. Repeated assault on the lungs by high levels of ozone, however, can cause permanent lung damage. Ozone also aggravates emphysema, bronchitis, and other lung diseases, and sometimes weakens the body's ability to fight off bacterial infection in the lungs. If children are repeatedly exposed to dangerous levels of ozone, scientists fear that they may experience reduced lung function as adults. A 1998 study in Los Angeles found that when ozone levels are high, there is a marked increase in the numbers of people hospitalized with lung and heart ailments.
In 1970, the U.S. Congress enacted the first Clean Air Act: a law regulating the air pollutants emitted by cars, factories, and other sources. The act required the Environmental Protection Agency (EPA) to set air quality standards, to enforce those standards in every state, and to update the standards as necessary to "protect public health with an adequate margin of safety."
In 1977, when it became apparent that most states were failing to meet the clean-air standards set in 1970, the Clean Air Act was amended with new target dates for compliance. The amended law also exempted older, coal-burning power plants from many of the act's requirements. EPA officials reasoned that the plants would soon be retired, and therefore that expensive smokestack renovations would not be worth the money. However, many of the older power plants, especially those in the Midwest, were still running twenty years later.
In 1997, EPA administrator Carol Browner announced a tough new set of clean-air regulations that would be implemented in 2008. The allowable level for ozone was lowered. The 1997 rules also called for a sharp reduction of harmful emissions (specifically nitrogen oxides, the main cause of smog) coming from twenty-two states east of the Mississippi River, beginning in 2003. The new rules took aim at power plants in the Midwest and East, especially those exempted by the 1977 rule.
Lawyers for utility companies sought relief from the new air-quality rules in the District of Columbia Circuit Court of Appeals. On May 14, 1999, the court ruled in favor of the utility companies and set aside the EPA's stricter emissions restrictions. The court also voided the new ozone limit of 80 ppb.
In response, EPA officials worked hard to restore the 1997 amendments, petitioning the courts to rehear parts of the case. After further legal arguments, in March 2000, the D.C. Circuit Court issued a ruling that supported most portions of the EPA's rules on limits of nitrogen oxides. However, the court required the EPA to reexamine several matters before moving ahead. By 2004, the EPA had finalized all the steps it had to take in response to the court's rulings, including separating the rules on nitrogen oxides into two phases.
In 2005, the EPA established other clean-air rules—the Clean Air Interstate Rule (CAIR), the Clean Air Mercury Rule (CAMR), and the Clean Air Visibility Rule (CAVR). The CAIR limits emissions of sulfur dioxide and nitrogen oxides in the eastern United States while the CAVR limits sulfur dioxide and nitrogen oxides emissions in the western United States. The CAMR limits nationwide mercury emissions.
However, legal battles and legislative debates continue. Of particular concern to many is the EPA's handling of New Source Review (NSR). On December 31, 2002, exactly thirty-two years after President Richard Nixon signed the Clean Air Act into law, the George W. Bush administration announced provisions that many considered significant rollbacks to NSR pollution control provisions. According to critics, the new rules would allow virtually all pollution increases from old, high-polluting sources to go unregulated and communities would not have any way to know when a nearby power plant is increasing the amount of pollutants pumped into the air. The new regulations went into effect in March 2003.
The Clean Air Act remains a work in progress. Further legal and legislative actions are possible as different parties gain control of the political process and environmental laws and regulations continue to evolve.
A secondary effect of air pollution is the acidification of rain. Acid rain (or, more accurately, acid precipitation) is rain, sleet, snow, fog, or hail that is made more acidic by pollutants in the air (rain is naturally slightly acidic). This occurs as a result of acid deposition, which is the deposit of acid particles by either wet (rain or snow) or dry (dust or gas) means. The primary pollutants responsible for acid deposition, sulfur dioxide and nitrogen oxides, are both by-products of the burning of fossil fuels. These pollutants are emitted by car exhaust systems, coal-burning power plants, chemical plants, oil refineries, aircraft, and other sources. They react with moisture in the air to produce sulfuric acid and nitric acid.
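The formation of these acids can be written as simplified textbook reactions (these equations are a standard-chemistry sketch, not taken from this text; the real atmospheric pathways involve additional intermediate steps):

```latex
\[
2\,\mathrm{SO_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{SO_3},
\qquad
\mathrm{SO_3} + \mathrm{H_2O} \longrightarrow \mathrm{H_2SO_4}
\]
\[
3\,\mathrm{NO_2} + \mathrm{H_2O} \longrightarrow 2\,\mathrm{HNO_3} + \mathrm{NO}
\]
```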
Acid rain raises the acidity of lakes and rivers in sensitive areas, making them inhospitable to many species of animals. It also kills trees and has been shown to harm human health. Sensitive areas include portions of the northeastern United States, eastern Canada, and northern Europe, where the bedrock is primarily granite. In areas where the bedrock is limestone, acid rain is not much of a problem because the acid combines with the limestone to produce harmless carbon dioxide.
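The buffering effect of limestone mentioned above is a neutralization reaction. As a simplified sketch, with sulfuric acid standing in for the acid component of the rain (a standard-chemistry illustration, not from this text):

```latex
\[
\mathrm{CaCO_3} + \mathrm{H_2SO_4} \longrightarrow \mathrm{CaSO_4} + \mathrm{H_2O} + \mathrm{CO_2}
\]
```

The calcium carbonate in the bedrock consumes the acid, leaving a salt, water, and carbon dioxide; granite bedrock offers no comparable buffer, which is why granite regions are so sensitive.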
Acid rain corrodes the surfaces of rocks, dissolving minerals, such as aluminum, that are harmful to living organisms. Aluminum is one of the most harmful substances dissolved by acid rain. When it washes into lakes and rivers, it hampers the ability of fish to absorb oxygen through their gills. Dissolved aluminum has caused the deaths of entire fish populations in hundreds of highly acidic lakes and rivers in North America (especially in the northeastern United States and Canada) and in northern Europe.
Aluminum that is washed into the soil prevents the roots of trees from absorbing essential nutrients, ultimately killing the trees. Acid rain also makes plants more susceptible to frost, insects, and disease. In some areas where acid rain is a serious problem, entire forests have been wiped out. In Europe, so many trees have been stunted or killed that a new word, Waldsterben (forest death), has become part of the vocabulary.
Acid rain has also been shown to harm human health. In children, exposure to acid rain (and other forms of acid in the air or water such as acid fog, acid mist, acid snow, and acid dust) aggravates asthma. Even in healthy people, acid air pollutants can cause lung damage.
Acid rain damages property, too. It gradually dissolves marble—the stone from which many statues are made. Around the world, outdoor art classics are losing fingers, toes, and noses to acid rain. In Washington, D.C., in the mid-1990s, for instance, a marble sculpture of the Shakespearean character Puck lost an entire hand.
The acid rain express
The parts of the world most affected by acid rain at present are the northeastern United States, southeastern Canada, central Europe, and Scandinavia. These locations, however, are not necessarily home to the greatest producers of acid rain-forming pollution.
Airborne pollutants can travel for great distances before returning to Earth. For this reason, acid rain can affect ecosystems in even remote parts of the world, far from industrial centers. It is believed that Sweden's acid rain problem, for instance, originates in England's smokestacks.
In the United States, the primary generators of sulfur dioxide and nitrogen oxides are large electrical power plants in the Midwest. The coal used in those plants, mined from midwestern and Appalachian coalfields, is particularly high in sulfur. The pollution from the midwestern power plants rises high in the air and is carried eastward and northeastward by the wind. When the pollution descends, it may mix with precipitation to form acid rain or acid snow. In dry regions the pollution falls to the ground as acid gas or acid dust.
The acidification of lakes, rivers, and forests is a serious problem throughout the eastern portion of the United States. Some of the areas most affected by acid rain are Maine, Vermont, New York, the upper peninsula of Michigan, Virginia, West Virginia, Maryland, Tennessee, and North Carolina.
Lakes remain acidic even as sulfur emissions drop
Since the 1970s, when acid precipitation was identified as the cause of dying trees and the term "acid rain" was popularized, nations in North America and northern Europe have required factories to reduce their sulfur dioxide emissions. A study released in October 1999 showed that levels of sulfur compounds (the primary components of acid rain) in lakes and rivers at two hundred sites on both continents have decreased. The study's bad news, however, was that the acidity of many of the bodies of water tested has not declined and that bodies of water damaged by acid rain show no signs of recovery.
The study, undertaken by an international team of twenty-three scientists, found reduced acidity (on the order of 25 percent) in some lakes and rivers in Vermont, Quebec, and northern Europe. The remaining portions of North America included in the study, stretching from Maine to the Midwest, had unchanged levels of acidity in bodies of water.
The results of that study illustrate the difficulty of de-acidifying lakes and rivers. Researchers concluded that tackling the acid rain problem would likely require further cuts in sulfur dioxide emissions. "We've been creating acid rain for a long time," stated Dr. Gary Lovett of the Institute of Ecosystem Studies in The New York Times on October 7, 1999. "It may take a long time to recover from its effects."
Decline of bases adds to problem
One reason that acidity levels remain steady in lakes and rivers even as sulfur compounds decrease is that levels of calcium and magnesium are also on the decline. Magnesium and calcium are bases, that is, substances that react with acids to form salts. They neutralize acid rain, much as antacid medication neutralizes stomach acid. Calcium is also used by trees to build cell walls and magnesium is a component of chlorophyll, the substance that gives green plants the ability to photosynthesize (produce food from carbon dioxide, water, and sunlight).
The loss of calcium and magnesium has been attributed to acid rain. Acid rain removes those elements from the soil more quickly than they can be replaced by the slow weathering of rocks. Throughout the northeastern United States, Ontario, and Quebec, magnesium and calcium levels in the soil have declined by around 50 percent since the 1960s.
While ozone in the lower atmosphere is bad, ozone in the upper atmosphere is good. Ozone in the upper atmosphere shields Earth from the Sun's harmful ultraviolet rays. The destruction of ozone by certain chemicals over the last few decades is a matter of great concern. Ozone is a colorless gas composed of three atoms of oxygen bonded together. Although ozone is commonly described as odorless, it does have a subtle and hard-to-describe odor. The "fresh" odor sometimes detected right after a summer thunderstorm may be from lingering ozone in the air.
Unlike surface ozone, which is a major air pollutant, upper-atmospheric ozone is not formed by an interaction of pollution and sunlight. The ozone layer high above Earth is formed by a reaction between molecular oxygen (O2) and atomic oxygen (O), in the presence of sunlight. Ozone has three atoms of oxygen per molecule. Its chemical formula is O3.
Earth's protective ozone layer lies between 10 and 25 miles (16 and 40 kilometers) above ground. It is in the upper part of Earth's stratosphere, a region of the atmosphere that is about 6 to 40 miles (14 to 62 kilometers) above ground. The concentration of ozone in the layer is a few parts per million. While low, this is thousands of times more concentrated than ozone near the surface. The ozone layer is important to humans and other living organisms because of its ability to absorb harmful ultraviolet radiation.
In recent years, the presence of certain chemical pollutants in the upper atmosphere has caused a reduction in concentration of ozone molecules in the ozone layer. This loss of stratospheric ozone has severe repercussions for human health, the most serious being a rise in cases of skin cancer. The term "skin cancer" refers to one of three diseases of the skin that are caused primarily by exposure to the ultraviolet radiation in sunlight. It is estimated that every 1 percent reduction in the ozone layer results in a 2 to 5 percent rise in the incidence of skin cancer. Other human consequences of the loss of protective ozone may include an increase in sunburns and eye cataracts, as well as the suppression of the immune system.
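The quoted relationship between ozone loss and skin cancer incidence is a simple proportional estimate. The sketch below applies the 2-to-5-fold range from the text; the function name and the linear scaling are illustrative assumptions, not from the original.

```python
def skin_cancer_increase_pct(ozone_loss_pct, low_factor=2.0, high_factor=5.0):
    """Estimated percent rise in skin cancer incidence for a given percent
    reduction in the ozone layer, applying the 2-to-5x range quoted in
    the text. (Linear scaling is an illustrative assumption.)"""
    return ozone_loss_pct * low_factor, ozone_loss_pct * high_factor

# For example, a 3 percent thinning (roughly the Northern Hemisphere
# depletion reported later in this section):
low, high = skin_cancer_increase_pct(3.0)  # 6 to 15 percent
```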
The most likely culprit behind ozone depletion is the class of human-made chemicals called chlorofluorocarbons (CFCs). CFCs can be liquids or gases, and appeared in all kinds of everyday products: as propellants in aerosol spray cans and foam-blowing canisters; in refrigerators and air conditioners; and in some cleaning solvents. When CFCs rise into the stratosphere, they form chlorine compounds that break down ozone molecules.
Levels of stratospheric ozone are kept in balance naturally. Ozone is produced in the stratosphere by the combination of molecular oxygen (O2) and atomic oxygen (O). Ozone molecules are broken by the absorption of ultraviolet rays and by collisions with other oxygen atoms. The dust and gases emitted by volcanic eruptions also break down ozone molecules. By introducing CFCs into the equation, however, the balance becomes tilted in favor of ozone destruction.
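The natural production and destruction balance described above is often summarized as the Chapman reactions. As a rough illustration (the list representation and helper function are my own, not from the text), this sketch checks that oxygen atoms balance in each step:

```python
# Schematic of the natural stratospheric ozone balance (the Chapman
# reactions). Each entry is (reactants, products); the check below
# simply verifies that oxygen atoms are conserved in every step.
reactions = [
    (["O2"], ["O", "O"]),         # ultraviolet light splits molecular oxygen
    (["O", "O2"], ["O3"]),        # atomic + molecular oxygen form ozone
    (["O3"], ["O2", "O"]),        # ultraviolet absorption splits ozone
    (["O3", "O"], ["O2", "O2"]),  # collision with an oxygen atom destroys ozone
]

def oxygen_atoms(species_list):
    # every species here is pure oxygen: O, O2, or O3
    return sum(int(s[1:]) if len(s) > 1 else 1 for s in species_list)

for reactants, products in reactions:
    assert oxygen_atoms(reactants) == oxygen_atoms(products)
```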
The consequences of ozone depletion are not limited to humans. A decrease in ozone has also been linked to a reduction of crop yields, health problems in animals, and a loss of ocean phytoplankton. Phytoplankton are microscopic ocean plants that are a crucial link in the food chain of marine animals.
The Antarctic ozone hole
Tests conducted above Antarctica in the late 1970s, at the end of the long, cold winters (in September and October), first revealed the problem of ozone depletion. In 1985, the concentration of ozone in the stratosphere above Antarctica decreased to the point where scientists began to refer to the area as the "ozone hole."
The ozone layer above Antarctica continued to thin at alarming rates throughout the 1980s, and by 1994 had been almost totally eliminated for a brief part of the year. At its worst, the ozone hole is double the size of the Antarctic continent. In many ways, the Antarctic ozone hole remains the most visible and striking example of how human-created pollution can damage the atmosphere.
Measurements of the ozone layer over Antarctica began in 1978. In September 2006, the Antarctic ozone hole was measured at a record 10.6 million square miles (27.3 million square kilometers). At that time, the Antarctic ozone hole was also the deepest ever recorded. Near-total ozone loss was recorded at 13 miles (21 kilometers) above ground. Prior to 1997, ozone destruction did not extend higher than 9 miles (14 kilometers) in altitude. Ozone holes have also appeared in recent winters over the North Pole, northern Europe, Australia, and New Zealand.
Why does ozone depletion happen over the poles, where virtually no one lives and CFC use is practically nonexistent? It happens because both ozone and CFCs are carried around the planet by upper-level winds. The ozone layer is normally thickest above the tropics and from there it is distributed to the poles. During the cold Antarctic winter, however, a dome of extremely cold air forms, which blocks the distribution of ozone to the South Pole. At the same time, ice clouds form in the stratosphere. CFCs blown in from other parts of the world become trapped in these ice clouds. It is this combination of events that sets the stage for the depletion of the ozone layer.
Other parts of the world
The stratosphere above other parts of the world has also experienced a loss of ozone. Since 1979, the ozone layer over all parts of the world except the tropics has shown a marked depletion. The World Meteorological Organization has issued a series of research reports on ozone depletion. Scientific Assessment of Ozone Depletion: 2002 confirmed that between 1979 and 1991, the ozone layer remained about 3 percent thinner over much of the Northern Hemisphere compared to pre-1980 levels. The Southern Hemisphere mid-latitudes showed a 6 percent drop. The same study showed a 4 to 5 percent thinning of the ozone layer over the United States. New ozone holes also appear to be forming over the North Pole, Australia, and New Zealand. Even greater levels of ozone loss (up to 10 percent) have been found over Canada and northern Europe.
Chlorofluorocarbons destroy the ozone
The main chemicals responsible for thinning the ozone layer are CFCs. CFCs are human-made hydrocarbons, such as freon, in which some or all of the hydrogen atoms have been replaced by chlorine and fluorine atoms. CFCs were formerly used in refrigerators and air conditioners; as propellants in aerosol spray cans (such as deodorants, spray paints, and hair sprays) and foam-blowing canisters; and in some cleaning solvents.
When released into the air, CFCs slowly rise through Earth's lower atmosphere and up to the stratosphere. There they are converted by the Sun's ultraviolet rays into chlorine compounds. The chlorine compounds react with stratospheric ozone molecules (O3), converting them into ordinary oxygen molecules (O2). The release of CFCs into the atmosphere depletes the beneficial ozone layer faster than ozone can be recharged by natural processes. These processes are fueled by the reaction between molecular oxygen and atomic oxygen, in the presence of sunlight.
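The chlorine chemistry summarized above is commonly described as a catalytic cycle: chlorine converts ozone into ordinary oxygen and is regenerated at the end of each cycle, which is why a single chlorine atom can destroy many ozone molecules. A minimal sketch (the two steps shown are one commonly cited cycle; the bookkeeping code is my own):

```python
from collections import Counter

# One commonly cited chlorine catalytic cycle:
#   step 1:  Cl  + O3 -> ClO + O2
#   step 2:  ClO + O  -> Cl  + O2
step1 = (Counter({"Cl": 1, "O3": 1}), Counter({"ClO": 1, "O2": 1}))
step2 = (Counter({"ClO": 1, "O": 1}), Counter({"Cl": 1, "O2": 1}))

reactants = step1[0] + step2[0]
products = step1[1] + step2[1]

# Cancel species that appear on both sides: Cl and ClO drop out,
# showing that chlorine acts as a catalyst and is not consumed.
net_reactants = reactants - products  # Counter({'O3': 1, 'O': 1})
net_products = products - reactants   # Counter({'O2': 2})
# Net reaction: O3 + O -> 2 O2, with the chlorine atom free to repeat the cycle.
```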
Other chemicals that contribute to the destruction of the ozone layer include halons (used in fire extinguishers), methyl bromide (used for fumigating crops), carbon tetrachloride (used in solvents and the manufacture of chemicals), methyl chloroform (used in auto repair and maintenance products), and hydro CFCs (HCFCs; similar uses as CFCs but considered slightly less damaging to ozone).
Attempts to curb ozone depletion
Since the 1970s, when the world became aware of the dangers of ozone depletion, there has been a flurry of national legislation and international treaties aimed at reducing the use of CFCs. In 1978, the United States government became one of the first nations to act, banning the use of CFCs in most aerosol cans.
The Montreal Protocol on Substances that Deplete the Ozone Layer, an international agreement drafted in 1987, called for the phasing-out of certain CFCs used in industrial processes by the year 2000. The protocol has been endorsed by ninety-three nations, including the major industrialized nations. In keeping with that accord, many countries have now greatly restricted the use of aerosol spray cans and other ozone-destroying chemicals. There is, however, a flourishing illegal trade in CFCs, especially in developing countries. In an effort to curb the trade, China Customs and the United Nations Environment Program launched Project Skyhole Patching in 2006. In the first six months of the program alone, nearly 72 tons (65 metric tons) of illegal ozone depleting substances were seized by authorities in China, India, and Thailand.
In the United States, the 1990 Amendments to the Clean Air Act set a timetable for the elimination of ozone-destroying chemicals. In 1993, those phase-out dates were accelerated. According to the regulations, halons were eliminated in 1994; and CFCs, carbon tetrachloride, and methyl chloroform were eliminated in 1996. The most destructive form of HCFC, a substitute for CFC that also harms ozone, was scheduled to end production in 2003, but the manufacture of other forms of HCFC will be permitted until 2030 to give industrialists time to develop less harmful substitutes. After 2030, all forms of HCFC will be prohibited.
In September 1997, on the tenth anniversary of the Montreal Protocol gathering, representatives from more than one hundred nations met to reexamine the problem of ozone depletion. They agreed at that meeting to phase out methyl bromide, a chemical used in insecticides. Industrialized nations pledged to eliminate production of methyl bromide by 2005 and developing nations by 2015. As of 2007, methyl bromide production had decreased significantly, but the EPA still made some "critical use exemptions" to the phaseout.
The international efforts to protect the ozone layer appear to be working. Since 1988, there has been a substantial decline in the atmospheric buildup of CFCs. Experts suggest that concentrations of CFCs reached their peak before the turn of the twenty-first century, after which the ozone layer could begin the slow process of repairing itself. CFC molecules, however, survive in the atmosphere for fifty to one hundred years. As long as CFCs are present, they will continue to damage the ozone layer.
Environmentally friendly sources of power
It is possible for even industrialized, automobile-dependent societies to meet basic needs and remain economically strong without harming the environment. What is needed to accomplish this are environmentally friendly sources of transportation, such as electric and hybrid cars, and cleaner sources of energy, such as solar power (electricity generated from the Sun) and wind power (electricity generated from wind).
Cars are responsible for one-third to one-half of all emissions that cause global warming, smog, and acid rain. Consequently, auto manufacturers face government regulations and public pressure to design and build cleaner cars. One result of this pressure has been the development of more efficient gasoline engines. Engines of recent-model cars put out just a small fraction of the pollutants that automobile engines emitted when the Clean Air Act was signed into law. They are also more powerful and more fuel-efficient. Auto manufacturers are also producing electric vehicles, hybrid gasoline-electric or diesel-electric vehicles, and vehicles powered by fuel cells (devices that generate electricity by combining hydrogen and oxygen).
Solar power and wind power are relatively clean, safe alternatives to burning fossil fuels. While solar and wind power are not completely harmless to the environment, they cause a fraction of the damage done by burning fossil fuels. There are numerous solar and wind power facilities operating throughout the world. In the United States in 1998, solar and wind power, together with hydroelectric power (power produced by moving water), accounted for approximately 8 percent of energy consumption. Fossil fuels and nuclear power made up the other 92 percent. The United States and many European nations have stated their intention to greatly increase the use of solar and wind power through the twenty-first century.
Electric cars are automobiles that run on electric motors instead of gasoline-powered engines. The power that drives the electric current is stored in batteries. When the power runs low, an electric car's batteries must be recharged.
While electric vehicles are commonly thought of as a new technology, they have actually been around for a long time: They were first produced in the late 1880s. Electric cars, trucks, and buses, as well as electric trolleys and trains (with electricity supplied by overhead wires), were in widespread use at the beginning of the twentieth century. Electric vehicles were preferred over gasoline-powered vehicles because gasoline-powered vehicles were difficult to start, noisy, and required more maintenance.
The balance shifted in favor of gasoline-powered vehicles in the 1910s, with the invention of the Kettering electrical self-starter. The starter eliminated the need for crank-starting gasoline-powered cars. By 1924, not a single electric vehicle was exhibited at the National Automobile Show.
While electric vehicles virtually disappeared in the United States, electric buses and trucks continued to be used in other parts of the world. For instance, in the latter part of the twentieth century there were thirty thousand electric vehicles in use in England and thirteen thousand in Japan.
The rise and fall of the modern all-electric vehicle
Motivated by shrinking petroleum reserves and the polluting effects of gasoline emissions, auto manufacturers in the 1960s once again began looking toward electric automobiles. The motivation to produce all-electric automobiles also came from the Zero Emission Vehicle mandate (ZEV), a law passed by the state of California in 1990. This was a radical challenge to the auto industry to produce automobiles with zero tail pipe emissions. In 2003, the California Air Resources Board reduced the regulations outlined in the ZEV mandate, but while it was in effect this legislation jump-started the automakers' development of cleaner vehicles.
In 1996, General Motors began marketing a nearly silent, electric compact car called the General Motors EV1. The EV1 could accelerate from 0 to 60 miles per hour (0 to 97 kph) in 9 seconds, which is comparable to gasoline-powered cars. It was extremely fuel efficient, and could go 140 miles (225 kilometers) between charges. Honda also produced an electric car, the EV Plus. It was the first electric vehicle to be powered by nickel metal-hydride (NiMH) batteries instead of the traditional lead-acid batteries. However, these battery-powered cars did not perform well in cold weather. Ford produced an all-electric version of its Ranger pickup truck as well. All of these vehicles were produced in very limited quantities and production has ceased.
Most other major car makers have also ceased production of their all-electric vehicles. The motivation to produce zero-emission vehicles was reduced after the revision to California's ZEV mandate. Nearly all of the vehicles have been repossessed by the manufacturers and recycled.
At present, due to the high costs and limited capabilities of batteries, electric cars are only produced and sold in small numbers. However, the development of new technologies may bring the cost and performance of electric cars in line with gasoline-powered cars in the not-too-distant future.
The future of all-electric vehicles
The primary obstacle to electric vehicle use has been the limitations of battery technology. Charging a battery can take several hours, and typical chargers can be expensive. Recent improvements to batteries and chargers, however, are making the process faster and easier. The widespread availability of NiMH batteries, which have a much better energy-to-weight ratio than lead-acid batteries, coupled with escalating oil prices in the early years of the twenty-first century, seems to have revitalized an interest in all electric vehicles. By late 2006, several companies had introduced new electrical vehicle prototypes and promised production would soon begin.
Hybrid technology cars caught on with both consumers and manufacturers during the first years of the twenty-first century. A hybrid car contains a relatively small petroleum-fueled engine combined with an electric motor and generator. Electrical energy is stored in battery packs. The gasoline or diesel engine automatically recharges the battery packs as the car is being driven or while stopped. Hybrids (and all-electric vehicles) also use a technology called regenerative braking. When the brakes are applied, the energy of motion is converted by the motor/generator back to electrical energy and stored in the battery pack.
Hybrid cars smoothly switch back and forth between gas and electrical systems. Many hybrid vehicles use less petroleum fuel than traditional vehicles. They may get up to 70 miles (113 kilometers) per gallon, making refueling necessary only every 700-870 miles (1,127-1,400 kilometers). In 2007, most automobile and light truck manufacturers had at least one hybrid vehicle in their production line.
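The refueling interval quoted above follows directly from fuel economy multiplied by tank size. A quick check (the tank sizes are inferred for illustration, not stated in the text):

```python
def range_miles(mpg, tank_gallons):
    """Driving range: fuel economy times usable tank size."""
    return mpg * tank_gallons

# The text's figures (70 mpg, refueling every 700-870 miles) imply
# a tank of roughly 10 to 12.5 gallons (hypothetical sizes):
assert range_miles(70, 10) == 700
assert range_miles(70, 12.5) == 875.0  # close to the 870-mile upper figure
```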
Fuel cell-powered cars
Fuel cells are a promising new source of power for vehicles. They run on liquid hydrogen or hydrogen-rich materials such as ethanol and methanol. Fuel cells work by combining hydrogen and oxygen (from the air) to produce electricity and water. The main drawback to fuel cells, at this point, is cost. In 2006, the price of constructing a fuel-cell vehicle was around $2 million. (The primary reason for the high cost is that two components of fuel cells, platinum and graphite, are very expensive.) Fuel cells are also very fragile and do not stand up well to the bumps of daily driving. Auto companies are researching ways to make a cheaper, more robust fuel cell. Vehicles whose fuel cells run on pure hydrogen are considered zero-emissions vehicles; the exhaust is pure water. Fuel cells that run on hydrocarbons or alcohols still produce some "tailpipe" emissions, including carbon dioxide.
Chicago, Illinois, and Vancouver, British Columbia, have prototype fuel-cell buses in their public transportation fleets. Other cities around the globe are following suit. In California, automakers, fuel companies, and government agencies are working in partnership to test fuel cell vehicle technology and are expected to produce more than sixty demonstration vehicles over the next few years. There are currently around five hundred fuel cell-powered cars in operation worldwide. They are all prototypes and are serving as test platforms for this emerging technology. A number of durability, performance, and cost issues must be overcome before fuel cell vehicles are ready for mass production.
Solar radiation is the most plentiful, permanent source of energy in the world. Energy from the Sun is nonpolluting. It can be used directly for heating and lighting, or harnessed and used to generate electricity.
The sunlight that strikes Earth provides far more power than the world's inhabitants can use. The challenge of using solar power, however, is in concentrating and storing the energy. Storage is necessary for times when the Sun is not shining, such as at night and on cloudy days. In the absence of storage capabilities, solar energy alone cannot meet all of a community's energy needs—it must be supplemented by other sources of energy.
Great strides have been made in the development of solar power technologies since the early 1970s. France, Japan, Israel, the United States, and other countries are actively seeking ways to use solar energy as a major source of power. A handful of large-scale solar power stations are operational around the world. In addition, small-scale solar power systems provide electricity to more than 250,000 households worldwide, including a growing number of isolated areas and developing countries.
One of the biggest obstacles to widespread use of solar power has been cost. At the start of the 1980s, the cost of electricity from a photovoltaic panel (device that converts sunlight to electricity) was about one hundred times more expensive than electricity from conventional power plants. By the end of the 1990s, the price of solar energy-generated electricity in especially sunny locations was almost the same as the price of electricity from conventional power plants. In other areas, however, solar-generated electricity was around two to five times the cost of conventional electrical power. There is also a high cost to install the systems needed to provide solar-electric energy in homes and businesses.
Despite the relatively high cost of solar-generated electrical power, demand continues to increase. The solar power industry worldwide grew at an average annual rate of 16 percent between 1990 and 1997 (in the United States the industry tripled in size during that time period). According to the Worldwatch Institute, a nonprofit organization that monitors the environment and economic development, solar power is the world's fastest growing energy source.
Passive solar collectors
There are two types of systems that collect and store the Sun's heat: passive solar collectors and active solar collectors. Passive collectors have no moving parts, while active solar collectors use pumps and motors. Passive systems are usually used for home heating, and active systems are generally used for producing hot water.
Passive solar collectors operate on the simple principle that when placed in the sunlight, an object will heat up. One passive home-heating system is called a Trombe Wall. It consists of a black concrete wall with air vents at the top and bottom, set on the south side of a building. A double-glazed pane of glass is placed just outside of the wall. Heat passes through the glass and becomes trapped between the glass and the wall.
Cool air from inside the room is drawn into the bottom air vents and enters the space between the wall and the glass. The air is heated, rises, and returns to the room through the top vent.
Active solar collectors
Active solar collectors use pumps and motors to heat water. A solar water heater (also called a solar thermal device) is a type of active solar heating system that is used to supplement a traditional home water heater. The solar water heater consists of a network of copper tubes filled with antifreeze, placed on the roof of a house. The tubes are covered with insulated black panels that absorb the heat of the Sun.
A pump circulates the antifreeze through the tubes. The antifreeze is heated as it passes beneath the rooftop panels. It then flows through a heat exchanger (also called a heat pump), an instrument that extracts the heat from the antifreeze and transfers it to water in a storage tank. The cooled antifreeze is then pumped back to the rooftop tubes.
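The heat-exchanger step described above can be sized with the standard relation Q = m x c x delta-T (mass times specific heat times temperature change). A hedged sketch; the tank size, temperatures, and the nominal 1-kilowatt collection rate are illustrative assumptions, not figures from the text:

```python
def heat_energy_joules(liters, delta_t_celsius):
    """Energy needed to raise a volume of water by delta_t degrees Celsius.
    Assumes 1 liter of water weighs 1 kg and a specific heat of
    4186 joules per kilogram per degree Celsius."""
    SPECIFIC_HEAT_WATER = 4186  # J/(kg * degree C)
    return liters * SPECIFIC_HEAT_WATER * delta_t_celsius

# Illustrative numbers: warming a 150-liter storage tank from 15 C to 50 C
q = heat_energy_joules(150, 35)   # about 22 million joules
hours_of_sun = q / (1000 * 3600)  # at a nominal 1 kW of collected heat: ~6 hours
```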
In some active systems a fluid is heated in order to produce steam, which is then used to spin a turbine (a machine with spinning blades) and generate electricity.
Sunlight can also be directly converted into electricity. This is accomplished with photovoltaic cells (also called solar cells). Photovoltaic cells contain semiconductor crystals, such as crystalline silicon, that conduct an electric current under certain conditions. When sunlight strikes the semiconductor, its molecular structure is altered: Electrons move about and an electric current is created.
The electric current runs through a wire on the back of the cell and either travels into a device for immediate use, into a battery where it is stored for short-term use, or into the local power grid.
Photovoltaic arrays (panels containing large numbers of photovoltaic cells) supplement traditional means of electricity production in some regions where sunlight is plentiful. Outside of San Luis Obispo, California, for example, an array of photovoltaic cells generates enough power to supply 2,300 homes.
Photovoltaic arrays cannot be relied upon as a community's sole source of electricity because they do not function when the Sun is not shining. There is as yet no battery or other system that can store enough energy to get through long periods of lack of sunshine.
Uses of photovoltaic cells
In some isolated locations (such as research stations), rural areas, and less-developed countries that are not serviced by power lines, photovoltaic cells provide the only source of electrical power. Some 70 percent of the solar cells produced in the United States are shipped to developing nations.
Photovoltaic cells are also used in the desert to power machines such as water pumps, air conditioners, and telephones. Some motor vehicles have been outfitted with photovoltaic cells, which provide a portion of the vehicle's power. There are also computers, lights, televisions, heaters, air conditioners, and video games that are powered, in part, by photovoltaic cells.
Some buildings receive all or some of their electricity from photovoltaic cells. One example is the Four Times Square building, a modern skyscraper in New York City. Photovoltaic panels line the exterior walls of the upper levels, satisfying a large portion of the building's energy needs.
Solar shingles, first offered for sale in 1996, are photovoltaic arrays that can be placed on rooftops. It takes 20 square feet (1.8 square meters) of shingles to power a 100-watt light bulb. In sunny locations, solar shingles covering a roof that is at least 400 square feet (37 square meters) are capable of producing enough energy for an average household.
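The shingle figures above imply a simple area-to-watts rule of thumb: 20 square feet per 100 watts. A quick sketch (the function name and constant are mine):

```python
SQFT_PER_100_WATTS = 20  # from the text: 20 sq ft of shingles powers a 100-watt bulb

def shingle_watts(area_sqft):
    """Peak output implied by the text's rule of thumb."""
    return area_sqft / SQFT_PER_100_WATTS * 100

# A 400-square-foot roof, the minimum the text cites for an average household:
assert shingle_watts(400) == 2000  # about 2 kilowatts
```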
Solar power plants
There are numerous solar power plants in sunny locations around the world, supplying electricity to surrounding communities. Two experimental, large-scale solar power plants, called Solar One and Solar Two, operated in Southern California (in the Mojave Desert, east of Los Angeles) between 1983 and 1999. Both plants were capable of producing 10 megawatts of electricity. (A watt is a unit of electric power; 1 megawatt equals 1,000,000 watts.)
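Converting a plant's megawatt rating into households served requires an assumption about average household demand. A rough sketch; the 1-kilowatt average draw is an illustrative assumption, not a figure from the text:

```python
WATTS_PER_MEGAWATT = 1_000_000  # from the text: 1 megawatt equals 1,000,000 watts

def homes_powered(plant_megawatts, avg_home_watts=1000):
    """Rough number of households a plant could supply, assuming
    (hypothetically) an average household draw of about 1 kilowatt."""
    return plant_megawatts * WATTS_PER_MEGAWATT // avg_home_watts

# A 10-megawatt plant like Solar One or Solar Two:
homes = homes_powered(10)  # on the order of 10,000 homes
```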
Solar One was in operation from 1983 to 1988. A series of mirrors (eighteen hundred in Solar One and nearly two thousand in Solar Two) tracked the movement of the Sun across the sky. The mirrors intensified the sunlight and directed it onto a "power tower," which was a tower covered with pipes. The fluid running through the pipes became heated; that heat was transferred to a tank of water and turned the water into steam. The steam drove a turbine and the turbine's rotating blades powered an electrical generator. (In a generator, a magnet is turned through a coil of wire and produces electricity.) The electricity was fed into the local utility grid and transported to homes, factories, and businesses.
In 1995, Solar One was converted into Solar Two. It operated from 1996 until early 1999. Solar Two operated in essentially the same manner as Solar One except that it stored solar energy in a tank of molten salt. That system enabled Solar Two to generate electricity continuously, even during nonsunshine hours. "We're proud of Solar Two's success," stated U.S. Energy Secretary Bill Richardson in a news release after Solar Two was discontinued, "as it marks a significant milestone in the development of large-scale solar energy projects. It takes us a step closer to making renewable energy a significant contributor to the global energy mix, while helping to make our environment cleaner."
Nevada Solar One, another solar power plant, shares a similar name with Solar One, but it is quite different in structure. It uses solar receivers (heating tubes filled with liquid) instead of a power tower. Nevada Solar One is being built in Boulder City, Nevada, by the U.S. Department of Energy, the National Renewable Energy Laboratory, and a private company, and when completed in 2007, will generate 64 megawatts of electricity. Solar Tres, located in Spain, uses technology developed for Solar One and Solar Two. However, Solar Tres is three times larger than Solar Two.
The use of wind as a source of energy goes back thousands of years. Since the Middle Ages, windmills have been used to pump water and perform other types of simple mechanical work. Even today, windmills are common fixtures on American farms. In the early half of the twentieth century there were around six million such windmills used for pumping water and generating electricity. However, windmills were quickly phased out as other forms of energy became available to farmers. By the end of the 1970s, there were only about 150,000 windmills still in use on farms across the United States.
In recent years, interest in wind energy has been rekindled. This is partially due to the polluting effect and growing scarcity of fossil fuels, combined with the apparent dangers of nuclear power. Wind energy, in Page 809 | Top of Articlecontrast, is nonpolluting and inexhaustible (it can never be used up). Interest in wind energy has also been driven by continuous improvements in wind turbine and windmill efficiency since the 1970s. Today, a single modern wind turbine, placed in a location where winds are a fairly constant 10 to 12 miles per hour (16 to 20 kph), can meet all the electricity needs of one home. A "wind farm," consisting of hundreds or thousands of windmills in an area with strong winds, can provide enough electricity for an entire community.
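The output of a wind turbine is conventionally estimated with the formula P = 1/2 x rho x A x v^3 x Cp, where rho is air density, A the rotor's swept area, v the wind speed, and Cp the power coefficient. A hedged sketch; the rotor size and Cp value are illustrative assumptions:

```python
import math

def wind_power_watts(rotor_diameter_m, wind_speed_ms, cp=0.35, air_density=1.225):
    """Estimated turbine output via P = 0.5 * rho * A * v**3 * Cp.
    The Betz limit caps Cp near 0.59; roughly 0.35 is typical in
    practice (assumed here). Air density is sea-level standard."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# The text's 12 mph is about 5.4 m/s; a hypothetical 10-meter rotor
# at that speed yields a few kilowatts, roughly one home's average demand:
p = wind_power_watts(10, 5.4)
```

Note the cubic dependence on wind speed: doubling the wind multiplies the power by eight, which is why siting matters so much.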
The main challenge to the widespread use of wind power is its cost. New technologies, however, are continually being developed that promise to make wind power as cost-efficient as power from fossil fuels. The cost of electricity from utility-scale wind systems has dropped by more than 80 percent over the last twenty years. The price in the United States is now lower than the cost of fuel-generated electric power in some areas. The downward trend in cost is expected to continue as larger multi-megawatt turbines are mass-produced.
Use of wind power is growing quickly. As of 2003, wind power was the fastest-growing form of electricity generation on a percentage basis in the United States. Globally, wind power generation more than quadrupled between 1999 and 2005. The world leaders in wind power production in 2007 were Germany, Spain, the United States, India, and Denmark.
Wind turbines and wind farms
Wind turbines have long blades, enabling them to extract large amounts of energy from the wind. The spinning blades turn a shaft that drives an electrical generator, which converts the mechanical energy into electricity. The electricity travels through wires down the tower to an electrical substation, after which it is fed into the local utility grid.
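The amount of power a turbine's blades can capture grows with the square of the rotor diameter and the cube of the wind speed, which is why both long blades and steady winds matter so much. As a rough illustration (not from this article), here is a minimal Python sketch of the standard wind-power formula; the air density and the 40 percent power coefficient are assumed typical values, not figures from the text:

```python
import math

def turbine_power_watts(rotor_diameter_m, wind_speed_ms,
                        air_density=1.225, power_coefficient=0.40):
    """Rough estimate of a turbine's electrical output in watts.

    Uses the standard formula P = 0.5 * rho * A * v^3, scaled by an
    assumed power coefficient (real rotors capture at most ~59 percent
    of the wind's energy, the Betz limit; 0.40 is a typical value).
    air_density of 1.225 kg/m^3 is the sea-level standard.
    """
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2  # A = pi * r^2
    wind_power = 0.5 * air_density * swept_area * wind_speed_ms ** 3
    return power_coefficient * wind_power

# A hypothetical 10-meter rotor in a steady 5 m/s (~11 mph) wind:
print(round(turbine_power_watts(10, 5)))  # about 2,400 watts
```

Because power rises with the cube of wind speed, a site averaging 12 mph yields far more energy over a year than one averaging 10 mph, which is why developers seek out consistently windy locations.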
Twenty-eight percent of the United States' wind power is generated in Texas and California. According to a U.S. Department of Energy study, the windy states of Texas, North Dakota, and Kansas could generate enough electricity with wind energy to furnish the entire nation's electricity needs.
Most of the largest wind farms in Texas are situated on high mesas in far west Texas. In this region, there is a strong prevailing wind from the west. The wind speeds up as it rises to cross these mesas, and the placement of the wind turbines along the ridge tops takes advantage of this phenomenon. Also, Texas has planned two offshore wind farms along the Gulf Coast.
One of California's largest wind farms (constructed in the early 1980s) sits at the edge of a mountain gap called Altamont Pass, about 30 miles (50 kilometers) east of San Francisco. Wind is naturally funneled through the gap at high speeds. Altamont Pass contains more than seven thousand 80-foot-tall (24-meter-tall) wind turbines. Each turbine produces between 40 and 750 kilowatts of power (1 kilowatt equals 1,000 watts); together, they generate enough electricity for 130,000 homes. There are also two large wind farms in Southern California, at Tehachapi and Palm Springs.
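A quick back-of-the-envelope check shows that the Altamont Pass figures are consistent with one another. The 1-kilowatt average household demand used below is an assumption for illustration, not a figure from the article:

```python
# Figures quoted for Altamont Pass above:
turbines = 7_000
kw_per_turbine_low, kw_per_turbine_high = 40, 750
homes_served = 130_000

# Total farm capacity in megawatts at the low and high per-turbine ratings:
farm_capacity_low_mw = turbines * kw_per_turbine_low / 1_000
farm_capacity_high_mw = turbines * kw_per_turbine_high / 1_000
print(farm_capacity_low_mw, farm_capacity_high_mw)  # 280.0 5250.0

# At an assumed average draw of ~1 kW per home, 130,000 homes need
# about 130 MW, which fits comfortably within even the low-end capacity:
demand_mw = homes_served * 1 / 1_000
print(demand_mw)  # 130.0
```

The margin between capacity and demand is expected: wind does not blow steadily, so a farm's average output is well below its total rated capacity.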
Bigger and better wind turbines
Wind power developers are designing larger, more powerful wind turbines, which are able to produce electricity more inexpensively than older models. The first generation of wind turbines, produced between 1978 and 1981, had blades that were 16 feet (5 meters) long, with rotor (rotating device consisting of blades on a shaft) diameters of 33 to 36 feet (10 to 11 meters). They were capable of producing 22 to 30 kilowatts of power. The size and capability of wind turbines have steadily increased since that time.
By the late 1990s, rotor diameters were up to 217 feet (66 meters) and turbines were generating 1.65 to 2 megawatts of power. It was once estimated that turbines would grow only to a practical limit of 5 megawatts and 490 feet (150 meters) in diameter, but that limit has already been passed: in 2005, the largest wind turbine in the world was a 6-megawatt giant installed in Germany, with a rotor diameter of 410 feet (126 meters). As turbines continue to get larger and more powerful, costs of wind technology are expected to continue falling.
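The growth in power with rotor size follows directly from geometry: at a fixed wind speed, a turbine's output scales with the swept area of its rotor, and therefore with the square of the rotor diameter. A short sketch (the scaling rule is a standard simplification, not a claim from the article) shows that the figures above fit this rule well:

```python
def scaled_power_mw(base_power_mw, base_diameter_m, new_diameter_m):
    """Scale turbine power by swept rotor area (diameter squared),
    assuming the same wind conditions and efficiency."""
    return base_power_mw * (new_diameter_m / base_diameter_m) ** 2

# Scale the late-1990s class (66 m rotor, 1.65 MW) up to the 126 m
# rotor of the 6-megawatt German machine mentioned above:
print(round(scaled_power_mw(1.65, 66, 126), 1))  # about 6.0
```

The close agreement (roughly 6 megawatts predicted, 6 megawatts built) is a useful sanity check on the quoted specifications.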
The future of wind power
The U.S. Department of Energy estimates that offshore wind farms alone could eventually supply all of the energy needs of the United States; for other parts of the world, comparable estimates run as high as 30 percent. In 1999, the American Wind Energy Association, a group of wind-energy producers, predicted that by the year 2005 global use of wind turbines would have increased ten-fold and wind energy would be producing 18.5 gigawatts (one gigawatt equals one billion watts) of electricity worldwide. That estimate proved to be very low: in 2005, world wind energy production was over 65 gigawatts and growing rapidly.
In 2007, wind farms throughout Europe were producing 46,221 megawatts of power, and many new projects were in the works. Germany alone has 16,000 wind turbines, mostly in the north of the country, including three of the biggest in the world, with plans for even more expansion. Canada is also investing in wind power. By early 2007, wind power (primarily in Quebec) supplied 1,451 megawatts of electricity.
Small-scale wind power also looks promising. Wind turbines have been used for household electricity generation for many decades. Some early systems were built by hobbyists working with airplane propellers and automobile generators. Recently, small commercial systems have become widely available and are in use throughout the world. Household generator units of more than 1 kilowatt are now functioning in several countries. By using a combination of wind power, photovoltaics, and battery storage, a remote village, small island, offshore platform, or Australian ranch station can be independent of grid-supplied power.
Saving the planet
With the knowledge of the most serious environmental threats presently faced by Earth and its inhabitants comes the responsibility to find solutions. In an effort to reverse the trend of environmental degradation, scientists and environmental advocates are recommending that we, as a society, do the following:
- Decrease our consumption of coal and oil
- Develop alternative forms of energy, such as solar power and wind power
- Increase the efficiency of automobiles, so they can travel more miles on each gallon of gasoline
- Stop deforestation and replant trees on cleared lands
We can each make a difference by choosing to consume less and pollute less. That means, for instance, reusing and recycling items rather than disposing of them; minimizing our use of toxic chemicals, from lawn and garden fertilizers to household cleaning agents; and switching our means of transportation, whenever possible, from cars to bicycles or buses.
For More Information
Botkin, Daniel, and Edward Keller. Environmental Science: Earth as a Living Planet. 4th ed. Hoboken, NJ: John Wiley & Sons, 2004.
Cotton, William R., and Roger A. Pielke Sr. Human Impacts on Weather and Climate. 2nd ed. New York: Cambridge University Press, 2007.
Flannery, Tim. The Weather Makers: How Man Is Changing the Climate and What It Means for Life on Earth. New York: Atlantic Monthly Press, 2006.
Gore, Al. An Inconvenient Truth. 2nd ed. New York: Rodale Books, 2006.
Wright, Richard T. Environmental Science: Toward a Sustainable Future. 10th ed. Upper Saddle River, NJ: Prentice Hall, 2007.
"How Might Global Climate Change Affect Life on Earth?" McDougal Littel: Exploring Earth Investigations. 〈http://www.classzone.com/books/earth_science/terc/content/investigations/esu501/esu501page01.cfm?chapter_no=investigation〉 (accessed March 25, 2007).
"NCAR/UCAR/UOP Home." National Center for Atmospheric Research. 〈http://www.ucar.edu/〉 (accessed March 25, 2007).