Wednesday, January 20, 2010
Child Intelligence on a Machine?
At Ai, we're raising a child machine from infancy to adulthood, bringing Turing's vision to fruition and creating entirely new approaches to machine learning. Our research takes a strongly behaviorist approach: we work from the principle that language is a skill rather than simply the output of brain functions, and that it can therefore be learned. The research was initially led by Jason Hutchens, a world-renowned chatbot developer and winner of the Loebner Prize in Artificial Intelligence, and Dr. Anat Treister-Goren, an award-winning neurolinguist.
The pages in this section provide some insight into Ai's original research plan, carried out in 2000-2001. Since the beginning of 2002, Ai has adopted a new research strategy, refraining from publishing an articulated theory. Instead, Ai's technology is constantly on public display, offering interested users a chance to experience the technology first hand by conversing with Ai's Virtual Personalities.
Thursday, January 7, 2010
Artificial intelligence
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
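To make the textbook definition concrete, here is a minimal sketch in Python of a reflex agent that perceives its environment and chooses the action most likely to achieve its goal. The thermostat scenario and all names here are invented for illustration; they come from no particular AI system:

    class ThermostatAgent:
        # A trivial reflex agent: it perceives the room temperature and
        # picks the action most likely to keep it near the target.
        def __init__(self, target=21.0):
            self.target = target

        def act(self, temperature):
            if temperature < self.target - 1:
                return "heat_on"
            if temperature > self.target + 1:
                return "heat_off"
            return "do_nothing"

    agent = ThermostatAgent()
    print(agent.act(18.5))  # -> heat_on
    print(agent.act(23.0))  # -> heat_off

Real agents differ from this toy only in scale: the percepts become camera images or sensor streams, and the if-statements become learned decision procedures.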
"The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo Sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and limits of scientific hubries, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks[8] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still a long-term goal of some research.
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated statues were worshipped in Egypt and Greece.
Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence. Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.
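To give a feel for what "shuffling symbols as simple as '0' and '1'" means in practice, here is a toy Turing-machine simulator in Python. It is only an illustrative sketch (the transition table and names are invented, and this particular machine merely flips bits); a true universal machine would carry another machine's program encoded on its own tape:

    # Transition table: (state, symbol) -> (symbol to write, head move, next state).
    # This machine flips every bit of its input and halts at the first blank.
    RULES = {
        ("scan", "0"): ("1", 1, "scan"),
        ("scan", "1"): ("0", 1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }

    def run(tape_str, blank="_", max_steps=10_000):
        tape = dict(enumerate(tape_str))  # sparse tape: cell index -> symbol
        head, state = 0, "scan"
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            write, move, state = RULES[(state, symbol)]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).strip(blank)

    print(run("1011"))  # -> 0100

The point of Turing's result is that nothing beyond this machinery, a finite rule table, a movable head and a tape of symbols, is needed to carry out any computation at all.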
This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain. The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.
AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off undirected, exploratory research in AI.
The next few years, when funding for projects was hard to find, would later be called an "AI winter". In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[36] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the incredible power of today's computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and, above all, a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
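For a sense of how the expert systems of the 1980s worked at their core, here is a minimal forward-chaining rule engine in Python. It is a deliberately simplified sketch: production systems of that era held hundreds or thousands of rules plus mechanisms for handling uncertainty, and the facts and rule names below are invented for illustration:

    # Each rule: if every premise is a known fact, assert the conclusion.
    RULES = [
        ({"has_fever", "has_cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:  # keep applying rules until nothing new fires
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
    # -> the input facts plus flu_suspected and refer_to_doctor

Encoding an expert's knowledge as explicit if-then rules like these is what made the systems commercially valuable, and the cost of hand-maintaining ever-larger rule bases is part of what brought the boom to an end.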