Encyclopedia of Emerging Industries. Ed. Lynn M. Pearce. 6th ed. Detroit, MI: Gale, 2011. p73-80.
Copyright: COPYRIGHT 2011 Gale, Cengage Learning

Artificial Intelligence

SIC CODE(S)

7371

7373

7379

INDUSTRY SNAPSHOT

In the very simplest of terms, the artificial intelligence (AI) industry seeks to create machines that are capable of learning and intelligent thinking. It includes the development of computer-based systems that can learn from past behaviors and apply that knowledge to solving future problems. AI draws from a variety of academic fields, including mathematics, computer science, linguistics, engineering, physiology, philosophy, and psychology, and predates the modern computer age. Although AI did not truly emerge as a stand-alone field of study until the late 1940s, logicians, philosophers, and mathematicians of the eighteenth and nineteenth centuries laid the foundation upon which modern AI rests.

The field of artificial intelligence gradually evolved during the last half of the twentieth century, when major research departments were established at prominent U.S. universities, beginning with the Massachusetts Institute of Technology (MIT). The U.S. government has been a dominant player in this market for many years, providing significant funding for military projects. However, private enterprise also is a major stakeholder. AI technology is used in such varied fields as robotics, information management, computer software, transportation, e-commerce, military defense, medicine, manufacturing, finance, security, emergency preparedness, and others.

After years of frequent failures and narrow successes, artificial intelligence was going mainstream, according to Gary Morgenthaler’s 2010 article, “AI’s Time Has Arrived.” Morgenthaler attributed this to a “confluence of trends,” including expanding broadband capability, cloud computing, improved AI algorithms, smartphones, and the steady expansion of raw processing power dictated by Moore’s Law.

ORGANIZATION AND STRUCTURE

The AI industry is powered by a blend of small and large companies, government agencies, and academic research centers. Major research organizations within the United States include the Brown University Department of Computer Science, Carnegie Mellon University’s School of Computer Science, the University of Massachusetts Experimental Knowledge Systems Laboratory, NASA’s Jet Propulsion Laboratory, MIT, the Stanford Research Institute’s Artificial Intelligence Research Center, and the University of Southern California’s Information Sciences Institute.

In addition, a large number of companies, both small and large, fuel research efforts and the development of new products and technologies. Software giants such as IBM Corp., Microsoft Corp., Oracle Corp., PeopleSoft Inc., SAS Institute Inc., and Siebel Systems Inc. are heavily involved in the development and enhancement of business intelligence, data mining, and customer relationship management software.

Large corporate enterprises often have their own research arms devoted to advancing AI technologies. For example, Microsoft operates its Decision Theory and Adaptive Systems Group, AT&T operates AT&T Labs-Research (formerly AT&T Bell Labs), and Xerox Corp. is home to the Palo Alto Research Center.


Associations. The artificial intelligence industry is supported by the efforts of the Association for the Advancement of Artificial Intelligence, or AAAI (formerly the American Association for Artificial Intelligence), a nonprofit scientific society based in Menlo Park, California. According to the AAAI, which was established in 1979, it is “devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.” Along these lines, the AAAI included students, researchers, companies, and libraries among its more than 6,000 members.

In addition to its annual National Conference on Artificial Intelligence, the AAAI hosts fall and spring symposia, workshops, and an annual Innovative Applications of Artificial Intelligence conference. It awards scholarships and grants, and publishes the quarterly AI Magazine, the annual Proceedings of the National Conference on Artificial Intelligence, and various books and reports.

BACKGROUND AND DEVELOPMENT

The history of artificial intelligence predates modern computers. In fact, its roots stretch back to very early instances of human thought. The first formalized deductive reasoning system—known as syllogistic logic—was developed in the fourth century B.C. by Aristotle. In subsequent centuries, advancements were made in the fields of mathematics and technology that contributed to AI. These included the development of mechanical devices such as clocks and the printing press. By 1642 the French scientist and philosopher Blaise Pascal had invented a mechanical digital calculating machine.

During the eighteenth century, attempts were made to create mechanical devices that mimicked living things. Among them was a mechanical automaton developed by the mechanician Jacques de Vaucanson that was capable of playing the flute. Later, Vaucanson created a life-sized mechanical duck that was constructed of gold-plated copper. In an excerpt from Living Dolls: A Magical History of the Quest for Mechanical Life that appeared in the February 16, 2002, issue of The Guardian, author Gaby Wood described the duck this way: “It could drink, muddle the water with its beak, quack, rise and settle back on its legs and, spectators were amazed to see, it swallowed food with a quick, realistic gulping action in its flexible neck. Vaucanson gave details of the duck’s insides. Not only was the grain, once swallowed, conducted via tubes to the animal’s stomach, but Vaucanson also had to install a ‘chemical laboratory’ to decompose it. It passed from there into the ‘bowels, then to the anus, where there is a sphincter which permits it to emerge.’”

Other early developments included a form of binary algebra developed by English mathematician George Boole that gave birth to the symbolic logic used in later computer technology. About the same time, English mathematician Charles Babbage designed the Analytical Engine, a programmable mechanical calculating machine, and Ada Byron (Lady Lovelace) described how it could be programmed.

British mathematician Alan Turing was a computing pioneer whose interests and work contributed to the AI movement. In 1936 he wrote an article that described the Turing Machine—a hypothetical general computer. In time, this became the model for general purpose computing devices, prompting the Association for Computing Machinery to bestow an annual award in his honor. During the late 1930s Turing defined algorithms—instruction sets used during problem solving—and envisioned how they might be applied to machines. In addition, Turing worked as a cryptographer at Bletchley Park during World War II, helping the Allied forces decipher German communications with electromechanical code-breaking machines. In 1950 he proposed the now famous Turing Test, arguing that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent.”

Thanks to the efforts of other early computing pioneers, including John von Neumann, the advent of electronic computing in the early 1940s allowed the modern AI field to begin in earnest. However, the term “artificial intelligence” was not actually coined until 1956. That year, a Dartmouth College mathematics professor named John McCarthy hosted a conference that brought together researchers from different fields to talk about machine learning. By this time the concept was being discussed in such varied disciplines as mathematics, linguistics, physiology, engineering, psychology, and philosophy. Other key AI players, including Marvin Minsky, attended the summer conference at Dartmouth. Although researchers were able to meet and share information, the conference failed to produce any breakthrough discoveries.

A number of milestones were reached during the 1950s that set the stage for later developments, including an AI program called Logic Theorist. Created by the research team of Herbert A. Simon and Allen Newell, the program was capable of proving theorems. It served as the basis for another program the two men created called General Problem Solver, which in turn set the stage for the creation of so-called expert systems. Also known as rule-based systems, expert systems consist of one or more computer programs that focus on knowledge of a specific discipline or field (also known as a domain). The system then functions as an expert within the domain. Another noteworthy development was the creation of the List Processing (LISP) programming language, which became widely used in AI applications.
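
To make the idea concrete, the following is a minimal sketch of a rule-based system in Python; the domain, facts, and rule names are invented for illustration and do not reproduce any historical expert system.

    # Minimal forward-chaining rule engine: known facts are matched against
    # if-then rules until no new conclusions can be derived.
    facts = {"engine_cranks", "no_fuel_smell"}          # hypothetical observations
    rules = [
        ({"engine_cranks", "no_fuel_smell"}, "fuel_not_reaching_engine"),
        ({"fuel_not_reaching_engine"}, "recommend_checking_fuel_pump"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly apply rules, adding each conclusion as a new fact."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))  # includes "recommend_checking_fuel_pump"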

The AI field benefited from government funding during the 1960s, including a $2.2 million grant to MIT from the Department of Defense’s Advanced Research Projects Agency. A number of new AI programs were developed in the 1960s and 1970s, including the very first expert systems. DENDRAL was created to interpret spectrographic data for identifying the structure of organic chemical compounds. MYCIN, another early expert system, was developed at Stanford University and introduced in 1974. It was applied to the domain of medical diagnosis, as was the INTERNIST program developed at the University of Pittsburgh in 1979.

By the 1980s, the AI industry was still very much in development. However, it had established links to the corporate sector and continued to evolve along with technology in general. Of special interest to the business market were expert systems, which were in place at the likes of Boeing and General Motors. One estimate placed the value of AI software and hardware sales at $425 million in 1986. By 1993 the U.S. Department of Commerce reported that the AI market was valued at $900 million. At that time some 70 to 80 percent of Fortune 500 companies were applying AI technology in various ways.

According to the AAAI, important AI advancements during the 1990s occurred in the areas of case-based reasoning, data mining, games, intelligent tutoring, machine learning, multi-agent planning, natural language understanding, scheduling, translation, uncertain reasoning, virtual reality, and vision. In addition to private sector adoption, AI continued to evolve within the defense market during the 1990s. Applications included missile systems used during Operation Desert Storm. Some of the more dramatic AI milestones during the late 1990s included chess champion Garry Kasparov’s 1997 loss to the Deep Blue chess program, as well as the creation of an interactive, drum-playing humanoid robot named COG by the MIT Artificial Intelligence Lab’s Rodney Brooks in 1998. Heading into the new millennium, AI technology was being adopted at a rapid pace in computer software, medicine, defense, security, manufacturing, and other areas.

In the early 2000s terrorism and fraudulent business behavior such as insider trading were among the most pressing problems worldwide. AI technologies were being used to address both these and other issues. Following the terrorist attacks against the United States of September 11, 2001, teams of graduate students brought robots to Ground Zero and joined in the search and rescue efforts there. This was the first time robots had been used in a real situation such as this, according to Dr. Robin Murphy, a computer science professor at the University of South Florida who was involved in the search and rescue efforts. The robots proved useful, leading to the discovery of ten sets of remains.

AI technology also was being used to combat bioterrorism. Realtime Outbreak and Disease Surveillance (RODS), a system developed by Carnegie Mellon University and the University of Pittsburgh, was capable of analyzing statewide data from immediate care facilities and hospital emergency rooms for patterns indicative of bio-terrorism. In the event that a pattern was found, RODS was configured to notify the appropriate public health officials via pager. This system was used in Salt Lake City for the 2002 Olympics. For that project, 80 percent of the health systems in the Salt Lake region were connected to share data. The system eventually covered 80 percent of the state of Utah.

AI also was being used to monitor the investment world for fraud. For example, in December 2001 the National Association of Securities Dealers began using an AI application called Securities Observation, News Analysis and Regulation (SONAR) to monitor the NASDAQ, as well as the over-the-counter and futures markets, for questionable situations. SONAR won an Innovative Applications of Artificial Intelligence Award from the AAAI. In a news release, the AAAI explained that each day, SONAR monitors some 1,000 quarterly and annual corporate SEC filings, between 8,500 and 18,000 news wire stories, and some 25,000 securities price-volume models. This analysis results in 50 to 60 daily alerts, a few of which are ultimately referred to federal authorities. SONAR used AI technologies such as knowledge-based data representation, data mining, intelligent software agents, rule-based inference, and natural language processing.

By 2004 companies like Sony marketed intelligent consumer robot toys like the four-legged AIBO, which could learn tricks, communicate with human owners via the Internet, and recognize people with AI technology such as face and voice recognition. Burlington, Massachusetts-based iRobot sold a robotic vacuum cleaner called Roomba for $200.

Some other applications included business intelligence and customer relationship management, along with defense and domestic security, education, and finance. Technical concentrations that were growing the fastest included belief networks, neural networks, and expert systems.

Discussions have continued regarding the day when machines would become conscious. Although some disagreed, several leading researchers argued that it was only a matter of time and that human beings were themselves only complex machines. Although science had yet to develop a machine capable of passing the Turing Test, AI technology was being incorporated into systems and tools that, it was asserted, made the world a better, more enjoyable, and safer place. These applications included everything from hearing aids and digital cameras to systems that analyzed enormous volumes of data to find patterns that human beings might not otherwise see.

Heading into the mid-2000s, leading AI researchers such as Ray Kurzweil and Jeff Hawkins continued to cast their respective short- and long-term visions of the future. Kurzweil published The Singularity Is Near: When Humans Transcend Biology in 2005, and Hawkins published On Intelligence in 2004. In the January 2006 issue of Strategic Finance, Hawkins indicated that intelligent machines were perhaps only ten years away, remarking: “It took fifty years to go from room-size computers to ones that fit in your pocket. But because we are starting from an advanced technological position, the same transition for intelligent machines should go much faster.”

New developments in artificial intelligence continued on a daily basis toward the end of the decade. For example, in June 2008, Roadrunner, a supercomputer built by IBM and housed at Los Alamos National Laboratory, became the world’s first computer to achieve sustained operating speeds of one petaflop (a petaflop is a million billion, or a quadrillion, calculations per second). In other words, Roadrunner could process a million billion calculations per second. One of the new supercomputer’s potential applications was to perform calculations to certify the reliability of the U.S. nuclear weapons stockpile, with no need for underground nuclear tests. It also was slated to be used for other complex science and engineering applications. In the week after Roadrunner achieved its petaflop speed, researchers tested the code known as PetaVision, which models the human vision system, and found that, for the first time, a computer was able to match human performance on certain visual tasks such as distinguishing a friend from a stranger in a crowd of people or faultlessly detecting an oncoming car on a highway. Said Terry Wallace of the Los Alamos National Laboratory, “Just a week after formal introduction of the machine to the world, we are already doing computational tasks that existed only in the realm of imagination a year ago.”

An area in which AI was growing rapidly was robotics. Robots were being used to perform a vast array of tasks, particularly those related to military and industrial operations. In the manufacturing industry, robots perform such duties as assembling, packing, loading, and transferring items. As of 2008 approximately 178,000 robots were working in U.S. factories, making the United States second only to Japan in robot applications. According to the Robotic Industries Association, robotics companies based in North America saw a 25 percent increase in orders in 2007, one of the largest increases in years. About 65 percent of those orders went to automakers and suppliers. Worldwide, the population of robots was more than 1 million, with more growth expected.

Experts predict that robots will become more common in the life of the everyday consumer as well, not just in military and industrial settings. ABI Research forecast that by 2015, the personal robot market will reach $15 billion, and many people will be willing to pay as much for a multitasking humanoid robot as they would for a new car. These updated personal robots will not only be able to perform household chores but will entertain users and help with personal care. Nissan has even developed a robot that will ride along in the car with a driver and monitor his or her mood, then offer encouragement or suggestions.

In addition to robots being used in more applications and settings, experts predict that they will become more human-like. For example, a robot from MIT, Nexi, can display a range of facial expressions, in addition to being able to see and hear. A researcher at the Delft (Netherlands) University of Technology has developed a robot, Flame, which walks like a human. Although walking robots have been around since the 1970s, Flame uses a more fluid, human-like stride, rather than the rigid, careful movement of earlier walking robots. Another new robot, named Zeno and produced by Hanson Robotics in Dallas, Texas, can lie down, rise to a standing position, gesture with its arms, smile, make eye contact, and open and close its eyes and mouth. Using computer software, Zeno’s creators claim he can “think” and grow smarter with time, as more data is input into the system.

PIONEERS

John McCarthy Many consider John McCarthy to be the father of modern AI research. McCarthy was born on September 4, 1927, in Boston, to a working-class Irish immigrant father and a Lithuanian Jewish mother. Both were politically active and were involved with the Communist party during the 1930s. McCarthy subsequently developed an interest in political activism, although he rejected Marxism.

After skipping three grades in public school, McCarthy graduated from the California Institute of Technology in 1948. A doctoral degree in mathematics followed from Princeton in 1951, where he also accepted his first teaching position. In 1953, McCarthy moved to Stanford to work as an acting assistant professor of mathematics. Another move came in 1955, when he accepted a professorship at Dartmouth.

The following year, McCarthy made a significant mark on the field of artificial intelligence by coining its very name. He did this at a summer conference, which he hosted to explore the concept of machine learning with researchers from other institutions and disciplines. Another milestone was reached in 1958, when McCarthy joined MIT as an associate professor and established the first research lab devoted to AI.

At MIT, McCarthy developed the List Processing Language (LISP), which became the standard language used by the AI community to develop applications. His work at MIT involved trying to give computers common sense. After moving back to Stanford University in 1962, McCarthy established an AI research lab there and continued to work in the AI concentrations of common sense and mathematical logic.

In honor of his accomplishments, the Association for Computing Machinery presented McCarthy with the Alan Mathison Turing Award in 1971. He also received the Kyoto Prize in 1988, the National Medal of Science in 1990, and the Benjamin Franklin Medal in Computer and Cognitive Science in 2003. In addition to assuming the Charles M. Pigott Chair at the Stanford University School of Engineering in 1987, McCarthy accepted a professorship in the university’s Computer Science Department and was named director of the Stanford Artificial Intelligence Laboratory. In addition to his academic work, McCarthy also is a former president of the AAAI.

Marvin Minsky Another founding father of the AI movement, Marvin Lee Minsky was born in New York on August 9, 1927, to eye surgeon Dr. Henry Minsky and Fannie Reiser, an activist in Zionist causes. After attending the Bronx High School of Science and then graduating from the Phillips Academy in Andover, Massachusetts, Minsky spent one year in the Navy. In 1946 he enrolled at Harvard with the intent of earning a degree in physics. Instead, he pursued an eclectic mix of courses in subjects such as genetics, mathematics, and psychology. He became intrigued with understanding how the mind works, and was exposed to the theories of behavioral psychologist B.F. Skinner. Minsky did not accept Skinner’s theories and, drawing on his grasp of mathematics, developed a model of a stochastic (that is, probabilistic) neural network in the brain. Minsky then switched his major and finished at Harvard in 1950 with an undergraduate mathematics degree.

Next, Minsky attended Princeton, where he and Dean Edmonds built an electronic learning machine called SNARC. Using a reward system, SNARC learned how to successfully travel through a maze. Upon earning a doctorate in mathematics in 1954, Minsky worked briefly as a research associate at Tufts University and then accepted a three-year junior fellowship at Harvard, where he was able to further explore theories regarding intelligence. In 1958 Minsky joined MIT, where he worked on the staff of the Lincoln Laboratory. The following year he made a significant impact on AI when, along with John McCarthy, he established the Artificial Intelligence Project. Minsky worked as an assistant professor from 1958 to 1961, and then served as an associate professor until 1963, when he became a professor of mathematics.

Several important developments in Minsky’s career occurred in 1964. That year, he became a professor of electrical engineering, and the project he started in 1959 with McCarthy evolved into MIT’s Artificial Intelligence Laboratory. Minsky served as the lab’s director from 1964 to 1973, and it became a place where researchers were allowed to be adventurous. It was there that some of the first automatic robots were developed, along with new computational theories. In recognition of his efforts, Minsky received the Alan Mathison Turing Award from the Association for Computing Machinery in 1969.

In 1974 Minsky was named the Donner professor of science in MIT’s Department of Electrical Engineering and Computer Science. While he also continued to serve as a professor in the Artificial Intelligence Laboratory, Minsky began conducting his own AI research when the lab’s work went in theoretical directions that differed from his own. Indeed, he became critical of some of the lab’s research as being too narrow in focus.

Minsky has shared many of his thoughts about AI with the public by penning articles in magazines like Omni and Discover. In addition, his 1986 book, The Society of Mind, provides a mechanized and detailed theory about how the mind functions and how it may be duplicated one day. Minsky moved to MIT’s Media Laboratory in 1989 and was named the Toshiba Professor of Media Arts and Sciences. In 1992 he co-authored a science fiction novel with Harry Harrison titled The Turing Option, which was based on his theory. Beyond his academic work, Minsky founded several companies, including General Turtle Inc., Logo Computer Systems Inc., and Thinking Machines Corp.

CURRENT CONDITIONS

According to a 2009 report published by Global Industry Analysts Inc., the global artificial intelligence market was expected to exceed $36 billion by 2015. While the Asia-Pacific region was expected to offer the highest growth potential, the United States remained the largest market for AI. Market activity was aimed at improving existing applications to enhance capabilities in such domains as finance, transportation guidance systems, and medical technology.

By 2010 artificial intelligence had grown at such an accelerated pace that the AI field had developed its own disciplines (machine learning, computer vision, speech recognition, natural language understanding, etc.), which worked both independently and in concert with one another. According to author Gary Morgenthaler, by 2013 to 2015, virtual personal assistants (VPAs) with improved AI will know one’s personal social graph and habitual patterns, even to the point of making suggestions (e.g., “Do you want me to invite your accountant to this meeting?”). People will be able to manage business and social calendars by having their VPAs handle more complicated tasks, such as finding the most convenient time for a four-person meeting in the upcoming week or reserving a 2:00 tee time.

Owners of newly capable Roombas will be able to text ahead to tell their vacuum cleaners to vacuum specific rooms, knowing that the machines will recognize and avoid hazards like steps and pets. Smartphones will allow people to locate the restaurants closest to their hotels that offer specific menu items (e.g., ethnic favorites); the smartphone will then translate languages to allow placing an order in the language requested (e.g., Chinese, Spanish, or Hungarian). Moreover, most new smartphones will be voice-enabled rather than relying on thumb-typing. Customers will be able to search, text, email, collaborate, purchase, and schedule simply by talking to their smartphones. Even search engines on PCs will be enabled to answer very specific queries directly, instead of providing pages of hyperlinks.

Most of these developments are the result of cloud computing. Companies such as True Knowledge scour databases such as Wikipedia and Freebase and create deep pools of this easily accessed data, thus facilitating data mining and crowdsourcing. These data pools are intelligently sorted into categorical common-sense facts about everyday life, taking into account language usage rather than literal meaning (e.g., “I want to drop off my car.”). Machine-learning algorithms automatically recognize such complex patterns of words and make intelligent decisions based on facts drawn from the data pools. The result is rapid feedback and clearly enhanced performance. Speech recognition companies such as Nuance and Vlingo draw on the data pools in the cloud to compare and analyze millions of speech utterances and then feed the results back into their systems. As of 2010 there was commercially available dictation software that could capture virtually all words in the English language with nearly 100 percent accuracy.
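
As a rough, hypothetical illustration of the word-pattern learning described above, the sketch below counts which words appear under which intent labels in a handful of invented utterances, then uses those counts to guess the intent of a new phrase; real systems train on vastly larger data pools.

    # Toy bag-of-words intent classifier; training phrases and labels are invented.
    from collections import Counter, defaultdict

    training = [
        ("i want to drop off my car", "schedule_service"),
        ("book my car in for a service", "schedule_service"),
        ("find a table for two tonight", "restaurant_booking"),
        ("reserve dinner for four people", "restaurant_booking"),
    ]

    # Count how often each word appears under each intent label.
    word_counts = defaultdict(Counter)
    for phrase, label in training:
        word_counts[label].update(phrase.split())

    def classify(phrase):
        """Score each label by how often it has seen the phrase's words."""
        words = phrase.split()
        scores = {label: sum(counts[w] for w in words)
                  for label, counts in word_counts.items()}
        return max(scores, key=scores.get)

    print(classify("can i drop off the car tomorrow"))  # schedule_service
    print(classify("reserve a table for tonight"))      # restaurant_booking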

In music and the performing arts, North Carolina-based Zenph Sound Innovations models the performances of great musicians from old, scratchy records and creates new recordings as though the original musicians were alive and performing. Zenph has designed special robotic pianos that take high-resolution MIDI files created by software that simulates the style of the old classical and jazz performers, literally depressing the piano keys using between 12 and 24 high-resolution MIDI attributes. In live settings at Carnegie Hall, Steinway Hall, and the Live From Lincoln Center shows, the robotic pianos have amazed crowds with their note-for-note renditions of historic performances, to critical acclaim. Zenph hoped to develop clear versions of other muddy or distorted old recordings and convert them to software that would allow musicians to jam with virtual versions of famous musicians.

In 2010 engineers at the University of California, Berkeley, announced the development of a pressure-sensitive electronic material, made from semiconductor nanowires, that functions like human skin. It is the first such material made from inorganic single-crystalline semiconductors. Scientists first grew the germanium/silicon nanowires on a cylindrical drum, which was then rolled onto a sticky substrate, imprinting the nanowires onto the material. The nanowires were then printed onto an 18-by-19 pixel matrix, with each pixel containing a transistor. The transistors were then integrated with a pressure-sensitive rubber coating to provide the sensing function. Intended to address one of the key challenges in robotics (adapting the amount of force needed to grip, hold, or manipulate objects of differing degrees of hardness, softness, or temperature), the “e-skin” may eventually be used to restore the sense of touch to persons with prosthetic limbs.

INDUSTRY LEADERS

Some emerging newcomers in 2010 included San Jose, California-based Siri, which licensed DARPA-funded SRI technology and developed a virtual personal assistant operated by voice commands and queries, handling tasks such as restaurant reservations and movie tickets from the Web. Computer giant Apple acquired Siri in 2010. Google later announced its Voice Actions application for Android, seen by industry analysts as a response to Apple’s Siri acquisition.

Honda still maintained its lead in the most technically advanced human-like robot, ASIMO, which continued to wow audiences with its varied capabilities and human-like responses, but the technology gap between competitors was closing. Enter Watson, IBM’s supercomputer, dubbed the world’s most advanced question-answering machine. In 2010 the company announced that the producers of Jeopardy! had agreed to pit Watson against some of the game’s best former players.


Kurzweil Technologies Inc. Based in Wellesley Hills, Massachusetts, Kurzweil Technologies Inc. (KTI) is a private research and development enterprise headed by visionary, inventor, and entrepreneur Ray Kurzweil, author of such best-selling works as The Age of Intelligent Machines, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and The Singularity Is Near: When Humans Transcend Biology. KTI encompasses a number of Kurzweil’s various enterprises, including KurzweilAI.net, Ray & Terry’s Longevity Products, FatKat, Kurzweil Music Systems, Kurzweil Educational Systems, Kurzweil CyberArt Technologies, Kurzweil Computer Products, and Kurzweil AI. According to KTI, it is involved in “developing and marketing technologies in pattern recognition, artificial intelligence, evolutionary algorithms, signal processing, simulation of natural processes, and related areas.”

In terms of breakthrough commercial developments, Ray Kurzweil has been recognized for achieving a number of notable firsts. Among them are the first commercial large-vocabulary speech recognition system, the first omni-font optical character recognition system, the first print-to-speech reading machine for blind individuals, the first text-to-speech synthesizer, and the first CCD flat-bed scanner. For his efforts, Kurzweil received the National Medal of Technology from President Clinton in 1999. His many other honors include the Association for Computing Machinery’s Grace Murray Hopper Award, as well as Inventor of the Year from MIT.

Southwest Research Institute Based in San Antonio, Texas, the Southwest Research Institute (SwRI) is a leading independent, nonprofit organization devoted to applied research and development. SwRI employs approximately 3,000 workers in 13 states and China. The institute’s 1,200-acre campus includes labs, offices, and other facilities that collectively span almost 2 million square feet.

SwRI has concentrated on several key research areas, including automotive products and emissions research; chemistry and chemical engineering; mechanical and materials engineering; training, simulation, and performance improvement; automation and data systems; engine and vehicle research; space science and engineering; nuclear waste regulatory analyses; applied physics; signal exploitation and geolocation; and aerospace electronics and information technology. In this latter category, the company has conducted research in AI and knowledge systems.

RESEARCH AND TECHNOLOGY

The buzzword for the early 2010s was cloud computing. As of 2010, advances in data mining and processing speeds had increased the processing power of a single server by a factor of 1 million compared to capabilities in the 1990s, and cloud computing was projected to multiply AI-related processing power by a further factor of 1 billion by 2020. Open source programs such as Hadoop (developed at Yahoo!) permitted AI systems to run data and algorithms simultaneously across multiple servers. This capability will also facilitate merging the various parallel AI disciplines (speech recognition, dialog management, machine learning) and reassembling them into an integrated whole.
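
The parallelism Hadoop provides follows a map-and-reduce pattern. As a stand-in for a real cluster, the sketch below mimics that pattern on a single machine using Python's multiprocessing module; the documents and the word-count task are illustrative only.

    # Map/reduce-style word count spread across local worker processes,
    # mimicking how Hadoop spreads data and computation over many servers.
    from multiprocessing import Pool
    from collections import Counter
    from functools import reduce

    documents = [
        "speech recognition in the cloud",
        "machine learning in the cloud",
        "dialog management and machine learning",
    ]

    def map_count(doc):
        """Map step: count the words in one document."""
        return Counter(doc.split())

    def reduce_counts(a, b):
        """Reduce step: merge two partial word counts."""
        return a + b

    if __name__ == "__main__":
        with Pool(processes=3) as pool:
            partial_counts = pool.map(map_count, documents)
        total = reduce(reduce_counts, partial_counts, Counter())
        print(total.most_common(3))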

Researchers in the AI field have dubbed these colonies of computers that gather cloud data a new phenomenon: swarm intelligence. One of the earliest applications of this was the joint effort of SRI International, DARPA, and several leading university research subcontractors to reassemble AI subdisciplines into an integrated whole. The resulting development will be applied to assisting soldiers in the field through a mobile, VPA-like battlefield assistant.

In addition to Apple’s Siri efforts, both Google and Microsoft made intelligence-at-the-interface a key focus for research and development plans in 2010.

In the field of medicine, scientists have been making strides with robo-infants, which differ from other AI models in that they simulate the reasoning processes of infants rather than mathematical problem solving, learning about their bodies and environments much the way infants do. Two ongoing projects involving robo-infants were Xpero (2010) and iCub (2006). A starfish-like robot was able to rebuild a model of its own body after several of its mechanical limbs were removed, quickly redirecting its remaining limbs so it could walk away. This successful experiment is expected to feed data mining efforts to find a cure for conditions such as Parkinson’s disease.

The U.S. Department of Defense funded a research team at the University of Texas at Dallas to improve existing facial recognition software for national security applications. As of 2010, algorithms varied greatly between software developers, and most had not faced real-world challenges. The university’s Face Perception and Research Laboratories was working on combining millions of faces captured in databases and examining them under different conditions (e.g., illumination, angle, changes in facial and head hair, eyewear), combining the results to refine the algorithms.

At the University of Arizona’s Artificial Intelligence Lab, Dr. Hsinchun Chen and his team were designing computer software to mimic a financial analyst. As of 2010, the program performed text mining, scanning stock prices and financial news in order to buy stocks that it predicted would gain more than 1 percent in the subsequent 20 minutes. (The system sold the stocks after 20 minutes.) When first tested (using 2005 stock price data) it achieved a return of 8.5 percent.
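
A back-of-the-envelope sketch of that buy-and-sell rule appears below; the price series, predicted gains, and threshold handling are hypothetical placeholders rather than the Arizona system's actual logic.

    # Hypothetical simulation of the rule described above: buy when the model
    # predicts a gain of more than 1 percent over the next 20 minutes, then
    # sell exactly 20 minutes later and tally the realized return.
    def simulate(prices, predicted_gains, threshold=0.01, horizon=20):
        total_return = 0.0
        trades = 0
        for t, predicted in enumerate(predicted_gains):
            if predicted > threshold and t + horizon < len(prices):
                realized = (prices[t + horizon] - prices[t]) / prices[t]
                total_return += realized
                trades += 1
        return total_return, trades

    # Invented data: a gently rising per-minute price series and one signal.
    prices = [100 + 0.05 * t for t in range(60)]
    predicted_gains = [0.02 if t == 5 else 0.0 for t in range(60)]
    print(simulate(prices, predicted_gains))  # (cumulative return, trade count)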

Beyond a host of research projects with very practical applications, the climate of technological advancement seemed to indicate a future that was both exciting and frightening. Several of the world’s leading academic minds, including physicist Stephen Hawking, predicted that machine intelligence would surpass human intelligence within only a few decades. As the processing speed and storage capacity of computers increased exponentially, and as projects to create biological microelectromechanical systems (bioMEMs) and nanobots—machines of microscopic size that can be inserted into the human body—were developed, other experts took a view that tested the very limits of human imagination.

On KurzweilAI.net, entrepreneur, author, and visionary Ray Kurzweil painted a picture of a posthuman world, where technology will allow humans to either eliminate many of their biological organs or replace them with artificial ones. In Kurzweil’s future, the distinction between man and machine becomes difficult to discern. In his February 17, 2003, article, “Human Body Version 2.0,” Kurzweil claims that by 2030—following the nanotechnology revolution and reverse engineering of the human brain—biological brains will merge with nonbiological intelligence, which will allow individuals to share conscious experiences, fully immerse themselves in virtual environments, and expand the limits of memory and thought.

Although AI pioneer Marvin Minsky once placed Kurzweil among the leading modern-day futurists, no one can completely predict the future of AI or the further evolution of mankind. However, it seems fairly certain that a number of very interesting developments can be expected during the first several decades of the twenty-first century.

AMERICA AND THE WORLD

Although the United States has historically led the world in AI research, the industry is a global one. For example, the Artificial Intelligence Applications Institute (AIAI) at the University of Edinburgh is a leading AI player in the United Kingdom. Through the university’s School of Informatics, AIAI offers both undergraduate and postgraduate degrees in artificial intelligence, and it works to transfer a number of AI technologies to the commercial, industrial, and government sectors in four key areas: adaptive systems; bioinformatics; knowledge systems and knowledge modeling; and planning and activity management.

Since 1969, the International Joint Conference on Artificial Intelligence (IJCAI) has been the main conference for AI professionals throughout the world. It is hosted every other year in cooperation with one or more national AI societies from the hosting country. The IJCAI presents distinguished AI professionals with a number of awards and honors. These include the Award for Research Excellence, the IJCAI Computers and Thought Award for outstanding young AI scientists, the Donald E. Walker Distinguished Service Award, and the Distinguished Paper Awards.

BIBLIOGRAPHY

“About AAAI,” 20 June 2008. Available from http://www.aaai.org.

“AI Today and Tomorrow.” Strategic Finance, January 2006.

“A Robot that Can Smile or Frown: MIT Debuts Nexi.” Industry Week, June 2008.

“Engineers Make Artificial Skin Out of Nanowires.” Science Daily, 13 September 2010. Available from http://www.sciencedaily.com/releases/2010/09/100912151550.htm.

Gaudin, Sharon. “Personal Robot Market Expected to Balloon to $15B by 2015.” Computer World, 31 December 2007.

Global Industry Analysts Inc. “Global Artificial Intelligence Market to Exceed $36 Billion by 2015, According to New Report by Global Industry Analysts, Inc.” 23 June 2009. Available from http://www.strategyr.com/Artificial_Intelligence_AI_Market_Report.asp.

Lomas, Natasha. “Artificial Intelligence: 55 Years of Research Later—and Where Is AI Now?” Silicon, 8 February 2010.

Maloney, Lawrence. “A Tale of Two Robots.” Design News, 2 June 2008.

MIT Computer Science and Artificial Intelligence Laboratory. “CSAIL in the News.” January–February 2008. Available from http://www.csail.mit.edu/events/news/inthenews.html.

Moltenbrey, Karen. “Brain Power: An Inventor Uses Massive’s Software to Make his Creation Think.” Computer Graphics World, May 2008.

Morgenthaler, Gary. “AI’s Time Has Arrived.” Business Week, 21 September 2010.

“Roadrunner Supercomputer Puts Research at a New Scale.” PhysOrg.com, 12 June 2008. Available from http://www.physorg.com.

Spice, Byron. “Over the Holidays 50 Years Ago, Two Scientists Hatched Artificial Intelligence.” Pittsburgh Post-Gazette, 2 January 2006.

Teresko, John. “The Future Is Now for the Robot Revolution: The Next Wave of Robots Will Be Remarkably Human in Appearance and Function.” Industry Week, June 2008.

“The New AI: Turn Robots Into Infant Scientists.” Discover Magazine, 25 August 2010.

Thompson, Clive. “Smarter Than You Think—Who Is Watson?” New York Times, 16 June 2010.

“TU Delft Robot Flame Walks Like a Human.” Space Daily, 3 June 2008.

Valentino-DeVries, Jennifer. “Using Artificial Intelligence to Digest News, Trade Stocks.” Wall Street Journal, 21 June 2010.

Von Buskirk, Eliot. “Virtual Musicians, Real Performances.” Wired, 2 March 2010.
