Artificial Intelligence

Artificial intelligence enables computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind.

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.

What is artificial intelligence?

In computer science, the term artificial intelligence (AI) refers to any human-like intelligence exhibited by a computer, robot, or other machine. In popular usage, artificial intelligence refers to the ability of a computer or machine to mimic the capabilities of the human mind—learning from examples and experience, recognizing objects, understanding and responding to language, making decisions, solving problems—and combining these and other capabilities to perform functions a human might perform, such as greeting a hotel guest or driving a car.

After decades of being relegated to science fiction, today, AI is part of our everyday lives. The surge in AI development is made possible by the sudden availability of large amounts of data and the corresponding development and wide availability of computer systems that can process all that data faster and more accurately than humans can. AI is completing our words as we type them, providing driving directions when we ask, vacuuming our floors, and recommending what we should buy or binge-watch next. And it’s driving applications—such as medical image analysis—that help skilled professionals do important work faster and with greater success.

As common as artificial intelligence is today, understanding AI and AI terminology can be difficult because many of the terms are used interchangeably, and while some of them really are interchangeable, others aren’t. What’s the difference between artificial intelligence and machine learning? Between machine learning and deep learning? Between speech recognition and natural language processing? Between weak AI and strong AI? This article will help you sort through these and other terms and understand the basics of how AI works.


Artificial intelligence applications

As noted earlier, artificial intelligence is everywhere today, but some of it has been around for longer than you think. Here are just a few of the most common examples:

Speech recognition: Also called speech to text (STT), speech recognition is AI technology that recognizes spoken words and converts them to digitized text. Speech recognition is the capability that drives computer dictation software, TV voice remotes, voice-enabled text messaging and GPS, and voice-driven phone answering menus.

Natural language processing (NLP): NLP enables a software application, computer, or machine to understand, interpret, and generate human text. NLP is the AI behind digital assistants (such as Siri and Alexa), chatbots, and other text-based virtual assistance. Some NLP uses sentiment analysis to detect the mood, attitude, or other subjective qualities in language; a short sentiment-analysis sketch appears after this list.

Image recognition (computer vision or machine vision): AI technology that can identify and classify objects, people, writing, and even actions within still or moving images. Typically driven by deep neural networks, image recognition is used for fingerprint ID systems, mobile check deposit apps, video and medical image analysis, self-driving cars, and much more.

Real-time recommendations: Retail and entertainment web sites use neural networks to recommend additional purchases or media likely to appeal to a customer based on the customer’s past activity, the past activity of other customers, and myriad other factors, including time of day and the weather. Research has found that online recommendations can increase sales anywhere from 5% to 30%.

Virus and spam prevention: Once driven by rule-based expert systems, today’s virus and spam detection software employs deep neural networks that can learn to detect new types of viruses and spam as quickly as cybercriminals can dream them up.

Ride-share services: Uber, Lyft, and other ride-share services use artificial intelligence to match up passengers with drivers to minimize wait times and detours, provide reliable ETAs, and even eliminate the need for surge pricing during high-traffic periods.

Household robots: iRobot’s Roomba vacuum uses artificial intelligence to determine the size of a room, identify and avoid obstacles, and learn the most efficient route for vacuuming a floor. Similar technology drives robotic lawn mowers and pool cleaners.

Autopilot technology: Autopilot has been flying commercial and military aircraft for decades. Today, autopilot uses a combination of sensors, GPS technology, image recognition, collision avoidance technology, robotics, and natural language processing to guide an aircraft safely through the skies and update the human pilots as needed. Depending on whom you ask, today’s commercial pilots spend as little as three and a half minutes manually piloting a flight.
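
To make the sentiment analysis mentioned above concrete, here is a minimal Python sketch using the open-source NLTK library's VADER analyzer. Note that VADER is a lexicon- and rule-based tool rather than a deep neural network, so this illustrates the task itself, not any particular product's implementation; the example sentence is invented.

    import nltk
    nltk.download("vader_lexicon")  # one-time download of the VADER sentiment lexicon
    from nltk.sentiment import SentimentIntensityAnalyzer

    # Score an (invented) example sentence for positive/negative sentiment.
    sia = SentimentIntensityAnalyzer()
    scores = sia.polarity_scores("The room was spotless and the staff were wonderful!")
    print(scores)  # dict with 'neg', 'neu', 'pos', and an overall 'compound' score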


HOW DOES ARTIFICIAL INTELLIGENCE WORK?

Can machines think? — Alan Turing, 1950

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"

Turing's 1950 paper "Computing Machinery and Intelligence," and the Turing Test it proposed, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.

The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions" (Russell and Norvig viii).
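
To make the percept-action idea concrete, here is a minimal Python sketch of a simple reflex agent in this sense; the thermostat scenario, readings, and thresholds are all invented for illustration.

    # A toy "simple reflex agent": it receives a percept from the environment
    # and returns an action. The thermostat rule and the readings below are
    # invented for illustration.

    def thermostat_agent(percept):
        """Map the current percept (room temperature in Celsius) to an action."""
        if percept < 19.0:
            return "turn heat on"
        elif percept > 23.0:
            return "turn heat off"
        return "do nothing"

    # The percept -> action loop over a toy sequence of environment readings.
    for temperature in [17.5, 20.0, 24.2]:
        print(temperature, "->", thermostat_agent(temperature))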

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

  • Thinking humanly
  • Thinking rationally
  • Acting humanly
  • Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).

Patrick Winston, the Ford Professor of Artificial Intelligence and Computer Science at MIT, defines AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:

"AI is a computer system able to perform tasks that ordinarily require human intelligence... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules."

HOW IS AI USED?

Artificial intelligence generally falls under two broad categories:


  • Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.

  • Artificial General Intelligence (AGI): Sometimes referred to as "Strong AI," AGI is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.

ARTIFICIAL INTELLIGENCE EXAMPLES

  • Smart assistants (like Siri and Alexa)
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading
  • Spam filters on email
  • Social media monitoring tools for dangerous content or false news
  • Song or TV show recommendations from Spotify and Netflix

Narrow Artificial Intelligence

Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had "significant societal benefits and have contributed to the economic vitality of the nation," according to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration.

A few examples of Narrow AI include:

  • Google search
  • Image recognition software
  • Siri, Alexa and other personal assistants
  • Self-driving cars
  • IBM's Watson

    Machine Learning & Deep Learning

    Much of Narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting:

    "Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques."

    Simply put, machine learning feeds a computer data and uses statistical techniques to help it "learn" how to get progressively better at a task, without having been specifically programmed for that task, eliminating the need for millions of lines of written code. Machine learning consists of both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).
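
As a concrete illustration of the labeled/unlabeled distinction, here is a minimal Python sketch using the scikit-learn library; the tiny data sets are made up for illustration.

    # Supervised vs. unsupervised learning with scikit-learn.
    # The tiny data sets below are invented for illustration.
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = [[1.0, 1.2], [0.9, 1.1], [3.8, 4.0], [4.1, 3.9]]

    # Supervised learning: every example comes with a label (0 or 1),
    # and the model learns to predict labels for new examples.
    y = [0, 0, 1, 1]
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[4.0, 4.2]]))  # -> [1]

    # Unsupervised learning: no labels at all; the algorithm discovers
    # structure (here, two clusters) in the data on its own.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)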

Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go "deep" in its learning, making connections and weighting input for the best results.
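
For a sense of what those hidden layers look like in code, here is a minimal NumPy sketch of a forward pass through a small network; the layer sizes are arbitrary and the weights are random rather than learned, so it shows only the shape of the computation.

    # A tiny feedforward neural network, forward pass only. In a real system
    # the weights would be learned from data (e.g., by backpropagation);
    # here they are random, so this only illustrates the layered structure.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)  # a common nonlinearity between layers

    x = rng.normal(size=4)        # input features
    W1 = rng.normal(size=(8, 4))  # input -> hidden layer 1
    W2 = rng.normal(size=(8, 8))  # hidden layer 1 -> hidden layer 2
    W3 = rng.normal(size=(1, 8))  # hidden layer 2 -> output

    h1 = relu(W1 @ x)             # each layer re-weights its input...
    h2 = relu(W2 @ h1)            # ...and passes it "deeper"
    output = W3 @ h2
    print(output)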

Artificial General Intelligence

The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI has been fraught with difficulty.

The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but time hasn't eased the difficulty of essentially creating a machine with a full set of cognitive abilities.

AGI has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it's not something we need to worry about anytime soon.

HISTORY OF AI

Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.



1943

Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.

1949

In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.

1950

Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent. Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer. Claude Shannon publishes the paper "Programming a Computer for Playing Chess." Isaac Asimov publishes the "Three Laws of Robotics."

1952

Arthur Samuel develops a self-learning program to play checkers.

1954

The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today. Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

1958

John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.

1959

Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving. Herbert Gelernter develops the Geometry Theorem Prover program. Arthur Samuel coins the term machine learning while at IBM. John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1963

John McCarthy starts the AI Lab at Stanford.

1966

The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

The first successful expert systems, DENDRAL (a program for identifying the molecular structure of organic compounds) and MYCIN (designed to diagnose blood infections), are developed at Stanford.

1972

The logic programming language PROLOG is created.

1973

The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974-1980

Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."

1980

Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."

1982

Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.

1985

Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

1987-1993

As computing technology improves, cheaper alternatives emerge, and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor. Japan terminates the FGCS project in 1992, citing failure to meet the ambitious goals outlined a decade earlier. DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.

1991

U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

1997

IBM's Deep Blue beats world chess champion Garry Kasparov.

2005

STANLEY, a self-driving car, wins the DARPA Grand Challenge. The U.S. military begins investing in autonomous robots like Boston Dynamics' "BigDog" and iRobot's "PackBot."

2008

Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2011

IBM's Watson trounces the competition on Jeopardy!.

2012

Andrew Ng, founder of the Google Brain Deep Learning project, uses deep learning algorithms to train a neural network on 10 million YouTube videos. The neural network learns to recognize a cat without ever being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014

Google's self-driving car becomes the first to pass a U.S. state driving test.

2016

Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.