
The philosophy of artificial intelligence tries to answer the following questions:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same way that a human being can? Can it feel how things are?

These three questions reflect the divergent interests of AI researchers, linguists, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on which definitions of "intelligence", "consciousness" and "machine" are being discussed.

Important propositions in AI philosophy include:

  • "benign conventions" Turing: If the machine behaves as intelligently as humans, then it is as intelligent as humans.
  • Dartmouth's Proposal: "Every aspect of learning or other intelligence features can be properly described so that a machine can be created to simulate it."
  • The hypothesis of the physical symbol system of Newell and Simon: "The system of physical symbols has the necessary and sufficient means for common acts of intelligence."
  • Strong Hypothesis AI Searle: "Properly programmed computers with the right input and output will have thoughts in the same sense as humans have thoughts."
  • The Mechanism of Hobbes: "For" reason "... is none other than" reckoning, "which adds and subtracts, the consequences of commonly agreed names to 'mark' and 'signify' from our mind... "


Can a machine show general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. It concerns only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.

The basic position of most AI researchers is summarized in this statement, which appeared in the proposal for the 1956 Dartmouth workshop:

  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.

Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for thinking yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

The first step to answering the question is to clearly define "intelligence".

Intelligence

Turing test

Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that: if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call the machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". The Turing test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
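To make the chat-room setup concrete, here is a minimal sketch in Python (my own illustration, not part of the article; the judge and the canned replies are hypothetical stand-ins). A judge exchanges messages with two unseen participants and must guess which one is the human; the program "passes" if, over many trials, judges do no better than chance.

```python
import random

# Minimal, illustrative sketch of the chat-room version of the Turing test.
def human_reply(question):
    return "I'd have to think about that for a moment."

def program_reply(question):
    return "I'd have to think about that for a moment."  # tries to sound human

def run_trial(questions, judge_guess):
    # Hide which participant is which behind randomly assigned labels A and B.
    participants = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:
        participants = {"A": program_reply, "B": human_reply}
    transcript = {label: [f(q) for q in questions] for label, f in participants.items()}
    guess = judge_guess(transcript)            # the judge names the label it believes is human
    return participants[guess] is human_reply  # True if the judge identified the human

naive_judge = lambda transcript: random.choice(list(transcript))
print(run_trial(["Do you ever get bored?"], naive_judge))
```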

One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

Intelligent agent definitions

Recent AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.

  • If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for human traits that we may not want to consider intelligent, such as the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make a commonsense distinction between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.
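As a concrete illustration (a minimal sketch of my own, not drawn from the article; the percepts, actions and rewards are hypothetical stand-ins), the agent definition can be phrased as a simple decision rule: perceive, consult past experience, and pick the action with the highest expected value of the performance measure.

```python
# Minimal, illustrative sketch of the "intelligent agent" definition: an agent
# chooses the action whose expected performance-measure value, estimated from
# past experience, is highest.
def expected_value(action, experience):
    # Average performance-measure value observed for this action in the past.
    outcomes = [reward for (_, a, reward) in experience if a == action]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def agent_step(percept, actions, experience):
    return max(actions, key=lambda a: expected_value(a, experience))

# Example: past experience says heating scored better than cooling when cold.
experience = [("cold", "heat", 1.0), ("cold", "cool", -1.0)]
print(agent_step("cold", ["heat", "cool"], experience))  # -> "heat"
```

Even this toy agent illustrates the last point above: a thermostat-like rule satisfies the definition in a rudimentary way.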

Arguments that a machine can display general intelligence

The brain can be simulated

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then... we... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10^11 neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
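For a rough sense of scale (my own back-of-the-envelope arithmetic, not a figure quoted by the sources), the slowdown factor of that 2005 simulation works out to

$$\frac{50\ \text{days}}{1\ \text{s}} = \frac{50 \times 86{,}400\ \text{s}}{1\ \text{s}} \approx 4.3 \times 10^{6},$$

i.e. the 27-processor cluster ran roughly four million times slower than real time.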

Few disagree that a brain simulation is possible in theory, even critics of AI such as Hubert Dreyfus and John Searle. However, Searle points out that, in principle, anything can be simulated by a computer; thus, pushing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely copying the functioning of the brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Human thinking is the processing of symbols

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" is the essence of human and machine intelligence. They wrote:

  • A physical symbol system has the necessary and sufficient means of general intelligent action.

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by the philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • The mind can be viewed as a device operating on bits of information according to formal rules.

A distinction is usually made between the kind of high-level symbols that directly correspond to objects in the world, such as <dog> and <tail>, and the more complex "symbols" that are present in a machine like a neural network. Early research into AI, called "good old fashioned artificial intelligence" (GOFAI) by John Haugeland, focused on these kinds of high-level symbols.
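As a concrete illustration of what GOFAI-style high-level symbol manipulation looks like (a minimal sketch of my own, not taken from the article; the facts and the rule are invented for the example), symbols such as <dog> and <tail> are explicit tokens, and "thinking" is the application of formal rules to them:

```python
# Minimal, illustrative sketch of high-level symbol manipulation: facts are
# explicit symbolic triples, and inference is the application of a formal rule.
facts = {("dog", "has-part", "tail"), ("rex", "is-a", "dog")}

def infer(facts):
    # Rule: anything that is-a X inherits X's parts.
    new = set(facts)
    for (a, rel1, b) in facts:
        if rel1 == "is-a":
            for (c, rel2, d) in facts:
                if c == b and rel2 == "has-part":
                    new.add((a, "has-part", d))
    return new

print(infer(facts))  # ("rex", "has-part", "tail") is derived purely symbolically
```

The "symbols" inside a neural network, by contrast, are distributed patterns of numeric weights with no such one-to-one correspondence to objects in the world.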

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high-level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with his incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) cannot prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the power of the human mind is not reducible to a mechanism. The philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is provably impossible for a Turing machine (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a machine.

However, the modern consensus in the scientific and mathematical communities is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an unlimited amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be intelligent.

Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider:

  • Lucas cannot assert the truth of this statement.

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.

Having concluded that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable process involving the collapse of quantum mechanical states gives humans a special advantage over existing computers. Existing quantum computers only reduce the complexity of Turing-computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose's and Lucas's arguments, existing quantum computers are not sufficient, so Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.

Dreyfus: the primacy of unconscious skills

Hubert Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and argued that these unconscious skills will never be captured in formal rules.

Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he classified it as the "argument from the informality of behaviour". Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural networks, evolutionary algorithms and so on, are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions that approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high-level symbol manipulation, or "GOFAI", towards new models intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier writes that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."


Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":

  • A physical symbol system can have a mind and mental states.

Searle differentiates this position from what he calls "weak AI":

  • A physical symbol system can act intelligently.

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we have a computer program that acts exactly like a human mind, there will still be a difficult philosophical question that needs to be answered.

Neither of Searle's two positions is of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

Some researchers believe that consciousness is an essential element of intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" is very close to "intelligence". (See artificial consciousness.)

Before we can answer this question, we must be clear what we mean by "mind", "mental state" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new-age thinkers, for example, use the word "consciousness" to describe something akin to Bergner's "vital lan": an energetic, invisible fluid that permeates life and especially the mind. Sci-fi writers use the word to describe some important properties that make us human: the "conscious" machine or alien will be presented as a fully human character, with intelligence, desire, will, insight, pride, (Sci-fi writers also use the words "patience," "wisdom," "self-awareness" or "ghost" - as in the manga and anime series - to describe this essential human property). For others, the word "mind" or "consciousness" is used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something, or understand something. "It's not hard to give a commonsense definition of consciousness," observes the philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this familiar experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem". A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): if two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates "general intelligent action". Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person in a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or that has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly are not aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains". He argues that there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds".

Related arguments: Leibniz's mill, Davis's telephone exchange, Block's Chinese nation, and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply: This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies: Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply: To truly understand, some believe the Chinese room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
  • Brain simulator reply: What if the program simulates the sequence of nerve firings at the synapses of the actual brain of an actual Chinese speaker? The man in the room would then be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply: Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines. A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.


Is thinking a kind of computation?

The computational theory of mind, or "computationalism", claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with the philosophers Hilary Putnam and Jerry Fodor.

This question bears on our earlier questions: if the human brain is a kind of computer, then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning.

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have a mind, mental states, and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs.

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).


Other related questions

Alan Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression". All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function within an organism, then emotions can be seen as the mechanism by which intelligent agents use to maximize the utility of their actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be very emotional about being a good person". Fear is a source of urgency. Empathy is an important component of good human computer interaction. He said the robot "will try to please you in a seemingly unconditional way as it will get the vibrations of this positive reinforcement.You can interpret this as a kind of love." Daniel Crevier writes, "The Moravec point is that emotion is merely a tool for channeling behavior in ways that benefit a species's survival."

However, emotions can also be defined in terms of their subjective quality, of what it feels like to have an emotion. The question of whether a machine actually feels an emotion, or whether it merely acts as if it is feeling an emotion, is the philosophical question "can a machine be conscious?" in another form.

Can a machine be self-aware?

"Self-awareness", as mentioned above, is sometimes used by sci-fi writers as a name for the essential human property that makes a person fully human. Turing explores all other human traits and reduces the question to "can the machine be the subject of his own mind?" Could it be thinking of herself ? Viewed in this way, it is clear that a program can be written that can report their own internal state, such as a debugger. Though practically self-awareness often presupposes more ability; a machine that can interpret meaning in some way to not only its own state but in general questions without a solid answer: the contextual nature of its present existence; how to compare it with past countries or future plans, limitations and values ​​of work products, how they perceive their performance as valued or compared to others.

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.)

In 2009, scientists at Aberystwyth University in Wales and the University of Cambridge in the U.K. designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit the data entered, such as finding the laws of motion from a pendulum's movement.
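To illustrate what "extrapolating formulas to fit the data" means in the simplest possible terms (a toy sketch of my own, not Eureqa's actual algorithm; the candidate formulas and data are invented), one can score a handful of candidate expressions for a pendulum's period against measurements and keep the best fit:

```python
import math

# Toy sketch of formula discovery: score candidate formulas for a pendulum's
# period T(L) against data and keep whichever fits best.
g = 9.81
data = [(L, 2 * math.pi * math.sqrt(L / g)) for L in (0.25, 0.5, 1.0, 2.0)]  # (length, period)

candidates = {
    "T = 2*pi*sqrt(L/g)": lambda L: 2 * math.pi * math.sqrt(L / g),
    "T = L":              lambda L: L,
    "T = 2*pi*L/g":       lambda L: 2 * math.pi * L / g,
}

def squared_error(formula):
    return sum((formula(L) - T) ** 2 for L, T in data)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print("best-fitting formula:", best)
```

Systems like Eureqa search a vastly larger space of candidate expressions, but the underlying idea of ranking formulas by how well they fit the data is the same.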

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Singularity Institute). (The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios in which intelligent machines pose a threat to mankind.)

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there are other potential hazards and pitfalls.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.



Bibliography & amp; Conference

The main bibliography on the subject, with several sub-sections, is on PhilPapers.


Source of the article: Wikipedia
