Chinmayi V., Navya K.
Computer Science Engineering, Bhoj Reddy Engineering College for Women.
Email ID: [email protected]
Email ID: chinmayi.meera@gmail.com
ARTIFICIAL INTELLIGENCE

Abstract: Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks define the field as "the study and design of intelligent agents". The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits that have received the most attention include deduction, reasoning, problem solving, learning, and motion and manipulation. Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
Introduction:

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks define the field as "the study and design of intelligent agents". The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated statues were seen in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain. The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English.
By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Problems

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, most of these algorithms can require enormous computational resources; most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research. Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "subsymbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies. Among the most difficult problems in knowledge representation are:
1. Default reasoning and the qualification problem
2. The breadth of commonsense knowledge
3. The subsymbolic form of some commonsense knowledge
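As a concrete (and entirely illustrative) sketch of this kind of representation, the fragment below stores facts as subject/relation/object triples and walks an "is a" hierarchy. The facts and function names are invented for the example; notably, default reasoning (handling exceptions to rules such as "birds can fly") is exactly what a naive store like this cannot express.

```python
# Minimal sketch: a knowledge base of (subject, relation, object) triples.
# All facts and names are illustrative, not from the paper.

FACTS = {
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "can", "fly"),
}

def objects_of(subject, relation):
    """Return everything related to `subject` by `relation`."""
    return {o for (s, r, o) in FACTS if s == subject and r == relation}

def is_a_chain(thing):
    """Follow 'is_a' links upward: a crude walk up an ontology."""
    chain = [thing]
    while True:
        parents = objects_of(chain[-1], "is_a")
        if not parents:
            return chain
        chain.append(sorted(parents)[0])  # pick one parent for simplicity

print(is_a_chain("Tweety"))  # ['Tweety', 'canary', 'bird']
```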
Planning

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices. Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
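To make the idea concrete, here is a minimal, hypothetical sketch of utility-guided choice: the agent applies a model of each action to its representation of the world state and picks the action whose predicted successor state scores highest. The state, actions and utility function are all invented for illustration.

```python
# Sketch: the agent simulates each action on a model of the world state
# and chooses the action whose predicted outcome has the highest utility.
# State, actions and utilities are invented for this example.

state = {"position": 0, "battery": 100}

def move(s):
    return {"position": s["position"] + 1, "battery": s["battery"] - 10}

def wait(s):
    return {"position": s["position"], "battery": s["battery"] - 1}

def utility(s):
    # Prefer progress, lightly penalize battery drain.
    return 5 * s["position"] + 0.1 * s["battery"]

actions = {"move": move, "wait": wait}
best = max(actions, key=lambda name: utility(actions[name](state)))
print(best)  # 'move': predicted utility 14.0 beats 'wait' at 9.9
```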
Natural language processing

Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Learning

Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
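The two supervised settings can be illustrated in a few lines; the data below are invented. The regression half fits a line by ordinary least squares, and the classification half labels a new point by its single nearest example (a minimal special case of nearest-neighbor classification).

```python
# Toy supervised learning, with invented data.
# Regression: fit y = a*x + b by ordinary least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 7.1]  # roughly y = 2x + 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"fit: y = {a:.2f}x + {b:.2f}")

# Classification: label a new point by its nearest labeled example (1-NN).
examples = [((1.0, 1.0), "red"), ((5.0, 5.0), "blue")]

def classify(point):
    dist = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return min(examples, key=lambda e: dist(e[0], point))[1]

print(classify((1.5, 2.0)))  # 'red'
```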
Motion and manipulation

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative). A related area of computational research is Artificial Intuition and Artificial Imagination.
Social intelligence

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

General intelligence

Most researchers hope that their work will eventually be incorporated into a machine with artificial general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like an artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
Approaches

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?

Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 80s.

Logic based

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.

"Anti-logic" or "scruffy"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems:
1. Bottom-up, embodied, situated, behavior-based or nouvelle AI
2. Computational Intelligence

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."
Tools

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization
Many problems in AI can be solved in theory by intelligently searching through many possible solutions: reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization. A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.
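A minimal sketch of the hill-climbing idea, on an invented one-dimensional landscape: start from a random guess and keep taking the small step that improves the score, stopping when no neighbor is better (which may be only a local top).

```python
import random

# Blind hill climbing on an invented 1-D landscape: start from a random
# guess and keep taking small steps that improve the score.
def score(x):
    return -(x - 3.0) ** 2  # single peak at x = 3, for illustration

x = random.uniform(-10, 10)
step = 0.1
while True:
    neighbors = [x - step, x + step]
    best = max(neighbors, key=score)
    if score(best) <= score(x):
        break  # no refinement improves the guess: we are at a (local) top
    x = best

print(f"climbed to x = {x:.1f}")  # close to 3.0
```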
Logic

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning. Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
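As a small illustration, the snippet below treats truth values as numbers in [0, 1] using one common choice of fuzzy connectives (min for AND, max for OR, complement for NOT), and checks the subjective-logic constraint that a binomial opinion's belief, disbelief and uncertainty sum to 1. All membership values are invented.

```python
# Fuzzy truth values lie in [0, 1]. One standard choice of connectives
# (the Zadeh operators): AND = min, OR = max, NOT = 1 - x.
# The membership values below are invented for illustration.

tall = 0.7    # "the person is tall" is 0.7 true
heavy = 0.4   # "the person is heavy" is 0.4 true

f_and = min(tall, heavy)   # 0.4
f_or = max(tall, heavy)    # 0.7
f_not = 1.0 - tall         # 0.3

print(f_and, f_or, f_not)

# Subjective logic instead keeps uncertainty explicit: a binomial
# opinion (belief, disbelief, uncertainty) must sum to 1.
belief, disbelief, uncertainty = 0.6, 0.1, 0.3
assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9
```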
Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics. A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory.
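The utility idea can be sketched in a few lines: under uncertainty, a rational agent picks the action with the highest expected utility. The probabilities and payoffs below are invented for illustration.

```python
# Sketch of decision theory: choose the action with the highest expected
# utility. Probabilities and utilities are invented for this example.

# Each action maps to a list of possible outcomes as (probability, utility).
actions = {
    "take_umbrella": [(0.3, 8), (0.7, 6)],    # rain / no rain
    "leave_umbrella": [(0.3, 0), (0.7, 10)],  # rain / no rain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))  # 6.6 vs 7.0

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("choose:", best)  # 'leave_umbrella' narrowly wins
```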
Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network, kernel methods such as the support vector machine, the k-nearest neighbor algorithm, the Gaussian mixture model, the naive Bayes classifier, and the decision tree. The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.
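A hedged sketch of one of the classifiers named above, the k-nearest neighbor algorithm: a new observation is labeled by majority vote among the k closest patterns in the data set. The observations and labels are invented.

```python
from collections import Counter

# k-nearest-neighbor classification on an invented labeled data set.
data_set = [
    ((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"), ((0.9, 1.1), "spam"),
    ((4.0, 4.2), "ham"),  ((3.8, 4.0), "ham"),  ((4.1, 3.9), "ham"),
]

def knn(observation, k=3):
    """Label a new observation by majority vote among its k closest patterns."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(data_set, key=lambda item: dist(item[0], observation))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn((1.1, 1.0)))  # 'spam'
```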
Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.

Neural networks

The study of artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm. The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982. Neural networks can be applied to the problems of intelligent control (for robotics) and learning, using such techniques as Hebbian learning and competitive learning.
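As an illustration of the simplest of these networks, here is a minimal perceptron trained on the OR function (an invented toy task): whenever a prediction is wrong, the weights are nudged toward the correct answer. This is the classic perceptron learning rule, not backpropagation.

```python
# Minimal perceptron, trained on the OR function (invented toy task).
# Weights are adjusted whenever the prediction is wrong.

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):  # a few passes suffice for a linearly separable task
    for x, target in samples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in samples])  # [0, 1, 1, 1]
```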
Languages

AI researchers have developed several specialized languages for AI research, including Lisp and Prolog.
Evaluating progress

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail. Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.
The broad classes of outcome for an AI test are:
• Optimal: it is not possible to perform better
• Strong super-human: performs better than all humans
• Super-human: performs better than most humans
• Sub-human: performs worse than most humans
Applications

Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.

CONCLUSION:

In order to maintain their competitiveness, companies feel compelled to adopt productivity-increasing measures. Yet they cannot relinquish the flexibility their production cycles need in order to improve their response, and thus their positioning in the market. To achieve this, companies must combine these two seemingly opposed principles. Thanks to new technological advances, this combination is already a working reality in some companies. It is made possible today by the implementation of computer integrated manufacturing (CIM) and artificial intelligence (AI) techniques, fundamentally by means of expert systems (ES) and robotics. Depending on how these (AI/CIM) techniques contribute to automation, their immediate effects are an increase in productivity and cost reductions. The system's flexibility also allows for easier adaptation and, as a result, an increased ability to generate value; in other words, competitiveness is improved. The authors have analyzed three studies to identify the possible benefits or advantages, as well as the inconveniences, that this type of technique may bring to companies, specifically in the production field. Although the scope of the studies and their approach differ from one to the other, their joint contribution is of unquestionable value for understanding a little better the importance of ES within the production system.

References

• Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/
• Kurzweil, Ray (2005), The Singularity Is Near: When Humans Transcend Biology, New York: Viking, ISBN 978-0670033843