Artificial Intelligence 101: How does AI Work



Do you want to incorporate Artificial Intelligence within your next project? Or do you harbor evil ideas of developing an AI-based intelligent robot that can take over the world?

Maybe you are trying to make sense of how Apple’s Siri and Amazon Alexa manage to respond correctly to most of your queries?

More or less, you are probably expecting answers to the following questions from an Artificial Intelligence 101 article:

  • What is Artificial Intelligence?
  • Is Artificial Intelligence Dangerous?
  • How does AI Work?
  • How will artificial intelligence change the future?
  • Will Artificial Intelligence take over?

Believe me, you are not ALONE in all of this!

But since you are here, you need to start by answering a stupid-simple question:

“How did you end up here on this blog post?”

You entered some keywords related to artificial intelligence into the search bar and pressed enter. The search engine presented you with a list of blog posts, and you clicked on this one to end up here. Simple, right?

If you observe closely, you will realize that all the search results for the entered keywords were related to artificial intelligence, and there was not a single post about ‘Bob, The Carpenter’.

This means that your search engine is intelligent enough to understand your intentions, and has presented you with only the relevant search results.

In other words, your life has been made easy. Otherwise, you would have had to sift through the trillions of data bytes stored in the search engine’s index.

Can you guess who’s in action here?

Yes! You guessed it right! Our very own ARTIFICIAL INTELLIGENCE.

Artificial Intelligence, as put forward by its founder John McCarthy, is defined as “the task which, if done by humans, would require intelligence to accomplish.”

You may be wondering why animal intelligence is not considered under the umbrella of artificial intelligence. The answer lies within the behavior of animals.

During the stone age, humans used to hunt for food every time they felt hungry. Meanwhile, Lions also used to do the same. Neither humans nor animals felt the need to store their food.

Fast forward to today: humans have designed sophisticated technologies to preserve food for long periods of time. Consequently, humans don’t need to go looking for food every time they feel hungry. They can just open their refrigerators, and VOILA!!!

By contrast, lions continue to hunt for food using the old-fashioned technique. They have not used their intelligence to make their lives easier. As a result, they also starve whenever they don’t find any prey.

What is Artificial Intelligence

Artificial intelligence, unlike the natural intelligence of humans or animals, is the intelligence demonstrated by machines.

In simple terms, AI is the endeavor to replicate or simulate human intelligence into the machines to make them able to perform tasks that typically require human intelligence.

Such machines can mimic the cognitive functions possessed by humans such as learning, reasoning, knowledge representation, planning, perception, natural language processing, and the ability to move and manipulate objects.

With the cognitive capabilities, machines can learn from experience, adjust to new inputs, and perform human-like tasks.

In this modern era, digital computers have made it possible to solve complex mathematical problems in a limited amount of time.

As a result, many user-oriented applications have emerged. The most prominent ones are Apple’s Siri, Amazon Alexa, Google’s search algorithms, and IBM’s Watson.

Because of Hollywood movies like Terminator, 2001: A Space Odyssey, Ex Machina, Chappie, and Transcendence, you may picture artificial intelligence as an intelligent robot powerful enough to take over the world.

Believe me, there is nothing like that in reality. We are going there, but we are not there yet.

Until now, artificial intelligence has reached the level of intelligence which can be ascribed to the simplest of human behaviors.

Simply put, artificial intelligence stands at a point where it cannot even beat the crude intelligence possessed by a one-year-old.

“Artificial Intelligence is the system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”

Andreas Kaplan and Michael Haenlein

Artificial Intelligence History

Pandora’s box for AI was first opened by the famous mathematician Alan Turing in 1950 with a very simple question:

“Can Machines Think?”

Alan Turing

Later, he rephrased his question as, “Is it possible for a machine to show intelligent behavior?” Instinctively, he suggested that a machine should be able to make inferences from mathematical deductions.

For example, if a machine is provided with the mathematical deductions as X=Y and Y=Z, it should be able to infer that X=Z.

This question sparked an interest in intelligent machines among the research community. The term ‘Artificial Intelligence’ was first coined by John McCarthy in 1956 to distinguish this field from cybernetics.

The co-founders of Artificial Intelligence were very optimistic in their predictions about the development of intelligent machines in the near future.

“Machines will be capable, within twenty years, of doing any work a man can do”

Herbert Simon

“Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”

Marvin Minsky

Although their predictions did not hold up in the following years, intensive research was already being conducted in the field of Artificial Intelligence using the Symbolic and Connectionist approaches.

The symbolic approach dealt with abstract mathematical rules, whereas the connectionist approach tried to mimic human-brain-like intelligence within machines.

The progress in AI showed some real promise in the late 1990s and early 2000s, when it was used in conjunction with other fields, i.e., statistics, medical diagnosis, biomechanics, stock evaluation, and fraud and spam detection.

In 1997, IBM’s chess-playing computer Deep Blue was able to beat Garry Kasparov, who was the World Chess Champion at that time.

This marked an important moment in the history of artificial intelligence. More importantly, it provided the researchers with authentic proof that a machine can surpass human intelligence, albeit, in limited ways.

Deep Blue beats Garry Kasparov in Chess (Image Credits: Wired)

In 2011, IBM’s Watson, a question-answering computer system, beat the champions Brad Rutter and Ken Jennings in a Jeopardy Quiz show by a significant margin, securing a first-place prize of $1 million.

IBM’s Watson wins Jeopardy Quiz (Image Credits: IBM)

In March 2016, DeepMind’s AlphaGo beat the Go champion Lee Sedol 4-1 in a five-game match. During the Future of Go Summit in 2017, AlphaGo won a three-game match of Go against Ke Jie, who had held the world’s top Go ranking for two consecutive years.

DeepMind’s AlphaGo beats Lee Sedol in Go Match (Image Credits: BecomingHuman.ai)

With all these remarkable achievements, you can sense how far AI has come to beat humans at their games. Yet, some experts claim that AI is still in its nascent stages of development.

“A computer beating a grandmaster at chess is about as interesting as a bulldozer winning an Olympic weightlifting competition”

Noam Chomsky, a linguist at the Massachusetts Institute of Technology (MIT)

Such claims are true to an extent, given that supercomputers can store as well as process information at lightning-fast speeds compared to even the most intelligent humans.

With parallel processing and distributed computing, supercomputers can easily evaluate the next 400 to 500 moves within a matter of a few microseconds.

On the other hand, one has every right to say that humans have the ability to think outside the box, which gives them an edge over supercomputers.

History of Artificial Intelligence (Image Credits: Sohail Zahid)

The discussion of whether the AI is intelligent enough, or in its early development stages, or somewhere in-between is a long and never-ending debate. So, I will leave this with you guys.

Whatever your stance may be, you need to learn how artificial intelligence is incorporated seamlessly within machines.

So, let’s move on…

How does AI Work

The basic goal of AI is to induce human-like intelligence within machines. Thus, you first need to understand how natural intelligence works, and how humans employ this intelligence in their daily routine.

If you observe closely, humans mostly perform three operations which earn them the premium tag of ‘intelligent’:

  1. INSPECT the environment closely and GATHER the surrounding’s DATA using Five Basic Senses
  2. INTERPRET the data and then LEARN from it in the context of problem-in-hand
  3. EMPLOY their learning to achieve specific GOALS and TASKS by adapting to the environmental conditions

When you try to incorporate all these rules within machines, you will be grateful to NATURE for the complexity associated with the anatomy of the human body.

Moreover, you will realize why it is practically impossible to fuse all the sophisticated human traits within a machine’s body.

To make it simple for you guys, I am going to consider two simple scenarios in the life of a human:

Scenario 1: To do Grocery Shopping

Scenario 2: To make fresh Grape-Fruit Juice

“Grape-Fruit Juice is beneficial for Weight Loss. You might want to give it a try”

Awais Naeem

Now, I am going to explain TEN necessary actions that a human performs subconsciously to accomplish a specific task, and how a digital machine can somehow mimic the same actions.

To make it more interactive, I will explain each action for both of the stated scenarios. This will help you reconcile the necessity of these actions in the context of an intelligent machine.

Here we go then. The Number One…

1. Reasoning

This aspect of human nature is defined as “the action of thinking about something in a logical and sensible manner.”

In simple words, reasoning means drawing inferences from mathematical deductions. For example, by simply measuring the height of a human body, we can easily infer whether the human is a grown-up adult or a toddler.

Now, let’s define the reasoning for both scenarios:

Scenario 1: If you want to do Grocery shopping, you need to go to a SUPERMARKET

Scenario 2: If you want to make juice, you need a JUICER/BLENDER machine

For machines, reasoning means drawing either a Deductive or an Inductive inference.

Deductive Inference is mostly used in mathematics and formal logic where the complex theorems are built from indisputable axioms and rules. Thus, we can rely on deductive inference for our machines.

For example, if X=Y and Y=Z, then the machine can correctly infer that X=Z.

Another example could be: “John is either sleeping or playing games or studying”.  If John is neither sleeping nor playing games, then a machine could easily infer that “John is studying”
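This style of inference by elimination can be sketched in a few lines of Python. It is a minimal, illustrative sketch (the `deduce` function is invented for this example, not part of any library):

```python
# Deductive inference by elimination: given a rule listing all possible
# options, rule out the ones contradicted by the observations and infer
# whatever remains.

def deduce(options, ruled_out):
    """Return the single remaining option, or a list if several remain."""
    remaining = [o for o in options if o not in ruled_out]
    return remaining[0] if len(remaining) == 1 else remaining

# "John is either sleeping, playing games, or studying"
activities = ["sleeping", "playing games", "studying"]
# John is neither sleeping nor playing games, so:
conclusion = deduce(activities, ["sleeping", "playing games"])  # "studying"
```

The machine never learns anything here; it simply applies the disjunction it was given, which is exactly what makes deductive inference so reliable.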

Inductive Reasoning is common in science, where an inference is made by looking at some data. In such reasoning, there is a strong possibility that anomalous data in the future may force the inference to be revised accordingly.

For example, at the start of mankind, you could have inferred that “ALL humans are born with five fingers.” However, if one was later born with six fingers, you would have revised your inference to “MOST humans are born with five fingers.”

Another example: “In 2020, ALL humans who had a fever were infected with coronavirus.” However, if someone with a fever is diagnosed with malaria, the inference needs to be revised to “In 2020, MOST humans who had a fever were infected with coronavirus.”

Most machines have been able to draw inferences, but precise reasoning means drawing conclusions relevant to a particular task.

Let’s say that a machine is fed with two matching rules: “If a person has a fever, he should take medicine” and “If a person has a fever, it is dangerous to hike the mountain”.

Now, if a person with a fever is trying to hike the mountain, the machine should warn him of the danger. Otherwise, it should simply suggest that the person take the medicine.

2. Problem Solving

Problem-solving is the act of defining a problem, identifying/prioritizing alternatives for a solution, and finally implementing the solution.

Simply put, it is a systematic search through a range of possible actions to accomplish a predefined goal or a task.

Reasoning and problem solving go hand in hand, as solutions to a specific problem are explored using the Deduction-Inference system. Many deductions are explored, and then the inference most suitable for the problem at hand is made.

Scenario 1: If you need to go to a supermarket for grocery shopping, you must decide between various supermarkets. You can base your decision on different factors such as traveling distance, customer experience, fuel consumption, hygienic environment, etc.

Scenario 2: If you need a juicer machine to make fresh Grape-Fruit juice, you need to decide between the different juicer machines. You need to select the juicer suitable for making Grape-Fruit juice, and not the ones for carrot/orange juice

In the context of machines, two methodologies are frequently used for problem-solving. One is the Special-Purpose Method, and the other is the General-Purpose Method.

A Special Purpose Method is tailor-made to solve a particular problem. It explores only the very specific features of the environment in which the problem is embedded.

For example, if a machine is installed in a factory to fetch and place like-colored products in different baskets, then the robot can be instructed to do two operations:

  • Red – Left: If the product has a RED color, put this in the LEFT basket
  • Blue – Right: If the product has a BLUE color, put this in the RIGHT basket

This sort of differentiation is specific to only two colors and two baskets. Thus, it cannot be generalized for the operation of the same pick-and-place robot in a different manufacturing facility.
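The two rules above can be written out as a tiny lookup table. This is a hedged sketch (the function and basket names are invented for illustration), and its brittleness is exactly the point: any color outside the table has no rule.

```python
# Special-purpose method: rules hard-coded for exactly two colors and
# two baskets. The robot works only in this one microworld.
RULES = {"RED": "LEFT", "BLUE": "RIGHT"}

def basket_for(color):
    # Any color outside the rule table has no defined behavior.
    return RULES.get(color.upper(), "NO RULE")
```

Handing this robot a green product in a different factory would immediately expose the limits of the special-purpose method.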

On the other hand, a General-Purpose Method applies to a wide variety of problems. The same set of problem-solving rules can be applied to the machines operating in different situations.

One general-purpose method in AI is means-ends analysis, which incrementally reduces the difference between the current state and the desired state. The robot can perform several actions, i.e., PICKUP, PUTDOWN, MOVE LEFT, MOVE RIGHT, MOVE UP, MOVE DOWN, etc.
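Means-ends analysis can be sketched as a greedy loop that, at every step, picks whichever action most reduces the gap between the current state and the goal. The grid world and action set below are invented for illustration:

```python
# A greedy sketch of means-ends analysis on an obstacle-free grid.
ACTIONS = {
    "MOVE RIGHT": (1, 0),
    "MOVE LEFT": (-1, 0),
    "MOVE UP": (0, 1),
    "MOVE DOWN": (0, -1),
}

def means_ends(start, goal):
    """At each step, pick the action that most reduces the gap to the goal."""
    state, plan = start, []
    while state != goal:
        name, (dx, dy) = min(
            ACTIONS.items(),
            key=lambda a: abs(goal[0] - (state[0] + a[1][0]))
                        + abs(goal[1] - (state[1] + a[1][1])),
        )
        state = (state[0] + dx, state[1] + dy)
        plan.append(name)
    return plan
```

Because there are no obstacles here, the greedy reduction always terminates; the same rule set works for any start and goal, which is what makes the method general-purpose.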

Whichever problem-solving method is employed within a machine, you must ensure that the machine can make logical inferences in a step-by-step fashion and also solve puzzles along the way.

3. Knowledge Representation

This aspect of human intelligence explores the strategies to represent the knowledge which is helpful while solving a problem.

The human brain mostly stores knowledge in the form of abstract mathematical concepts, mathematical deductions, images, textual characters, and voice notes, etc.

Scenario 1: To do grocery shopping from a supermarket, you need to make a TEXTUAL list of items on paper. Alternatively, you can store the NAMES/IMAGES of all the items in your mind, however, you run a risk of forgetting some items in this case

Scenario 2: To make Grape-Fruit juice in a juicer machine, you need to make a TEXTUAL list of all the ingredients (sugar, salt, grapefruit, water), and their requisite amounts (in grams)

For the machines, information about the world should be represented in a form that a computer system can understand and utilize to accomplish complex goals and tasks.

First things first: you must store the information in digital form, as computers can only comprehend binary language.

Moreover, the binary information must be compiled in the form of abstract mathematical laws or data structures. In this manner, a computer can reconcile important and unimportant observations in the context of a problem.

Machines can either be trained at a particular task by gathering expert knowledge about a narrow domain, or be made adaptable by representing the commonsense knowledge of an average person in an extensive database.

However, you cannot represent each category of human knowledge simply and straightforwardly. There are always some twists and turns:

  • Any common rule of knowledge comes with unstated Assumptions and Default Reasoning. For example, if someone mentions the word “animal”, they may be imagining herbivorous animals in their mind. However, not all animals are like that. Moreover, they may think that all herbivores eat the leaves of a tree, yet such default reasoning does not hold for the herbivores that eat grass
  • The breadth of common-sense knowledge which an average person knows is very vast. Consequently, an extensive database is demanded by systems built upon common-sense knowledge
  • The subconscious knowledge of a human brain cannot be represented in a mathematical form. Such knowledge is often classified as ‘gut feeling’ or ‘sixth sense’ in human terminology. For example, you can tell whether a fruit is rotten just by the looks of it. An art critic can glance at a statue and realize it is fake. You can examine the body language of a sports team and infer that the team is going to win.

4. Planning

The act of setting a goal, and then choosing a sequence of actions to achieve that goal, comes under the hood of Planning.

The actions must be chosen such that their likelihood of success can be measured in the context of the foreseeable future.

Scenario 1: Out of all the routes to reach a supermarket, you need to plan for a route to minimize fuel consumption. Once you reach the supermarket, you need a plan to traverse through different portions of the supermarket to minimize the physical exertion and time spent

Scenario 2: You need to make a plan to gather all the ingredients as well as the juicer machine from different parts of the kitchen. Additionally, you need to plan the order of adding ingredients in the juicer machine

Intelligent agents must be able to set goals and achieve them.

To achieve their most ambitious goals, intelligent agents must be able to visualize the future.

They should be able to make choices that will maximize their utility in terms of power requirement, energy, speed, and agility, etc.

To traverse from Point A to Point B, an agent must be able to calculate the cost attached with different routes and the associated risks.

The agent must be equipped to select an optimal route by employing a trade-off between different factors associated with each route. For example, an agent may prefer safety over speed, while another agent may go for the shortest route to minimize fuel consumption.
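This trade-off can be sketched as a weighted cost function. The routes, factors, and weights below are all invented for illustration; a real planner would estimate them from maps and sensor data:

```python
# Score each candidate route as a weighted sum of its factors; the
# weights encode the agent's priorities.
def route_cost(route, weights):
    return sum(weights[k] * route[k] for k in weights)

routes = {
    "highway":  {"distance": 12.0, "time": 10.0, "risk": 0.3},
    "backroad": {"distance": 8.0,  "time": 18.0, "risk": 0.1},
}

# A safety-first agent weights risk heavily...
safety_first = {"distance": 1.0, "time": 1.0, "risk": 50.0}
best = min(routes, key=lambda r: route_cost(routes[r], safety_first))
```

A fuel-saving agent would simply shift the weight from `risk` onto `distance`, and the same code might then prefer the other route; the planner's character lives entirely in the weights.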

5. Learning

Learning focuses on acquiring data from the environment and then translating the data into actions using a sequence of mathematical rules called algorithms.

Algorithms provide an agent with step-by-step information on how to perform actions to accomplish a specific task or a goal.

An algorithm for a Pick-And-Place robot is a continuous loop of FOUR Instructions:

  1. MOVE to POSITION 1
  2. FETCH the Item
  3. MOVE to POSITION 2
  4. RELEASE the Item
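The four-instruction loop above can be sketched directly as code. This is only an illustrative stand-in: a real robot would drive actuators at each step instead of returning strings.

```python
def pick_and_place(cycles):
    """Run the continuous four-step loop for a given number of cycles."""
    log = []
    for _ in range(cycles):
        log.append("MOVE to POSITION 1")
        log.append("FETCH the item")
        log.append("MOVE to POSITION 2")
        log.append("RELEASE the item")
    return log
```

The algorithm is nothing more than the ordered repetition of those four instructions, which is why such robots are so reliable and so inflexible at the same time.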

Now let’s explore the learning algorithm for both of the humanoid scenarios:

Scenario 1: Spot different groceries on the shelves of the supermarket. Read the labels of the groceries. Try to reconcile the spotted labels with the ones in your grocery list one-by-one. If the labels match, put the grocery item in your carry bag. Else, try to look on another shelf within the supermarket

Scenario 2: Find the Ingredients for Grape-Fruit Juice from your kitchen. Measure the correct amount (in grams) for one of the ingredients. Put the ingredient in the juicer machine. Repeat steps 1-3 for all the ingredients. Once all the ingredients are in the juicer machine, power on the juicer machine. Let the juicer mix the ingredients for 1-2 min. Turn off the power. You have prepared fresh grapefruit juice.

To instill learning within a machine, two approaches are mostly used. The first one is the Trial and Error approach, whereas the second one is called Generalization.

In the Trial and Error approach, a computer randomly searches for the solution to a problem. If one solution does not lead the robot to the goal location, the computer tries again and again until a satisfactory solution is found.

The computer then stores this solution and implements this if a robot finds itself trapped in a similar situation.

For example, an autonomous vehicle will try to park itself until the parking sensors detect the correct parking position. In a chess game, a computer might try moves at random until it finds the mating position.
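Trial and error can be sketched as random search with a goal test, where the first satisfactory solution is remembered for reuse. The parking slots and goal test below are invented for illustration:

```python
import random

def trial_and_error(candidates, is_goal, seed=0):
    """Try random candidates until one passes the goal test; remember it."""
    rng = random.Random(seed)
    memory = None
    while memory is None:
        attempt = rng.choice(candidates)
        if is_goal(attempt):
            memory = attempt          # store the solution for reuse
    return memory

# e.g. keep trying slots until the sensors report one as free
slots = ["slot-1", "slot-2", "slot-3"]
found = trial_and_error(slots, lambda s: s == "slot-3")
```

The stored solution is what lets the computer skip the search the next time it lands in a similar situation.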

In the generalized form of learning, a computer must be able to learn something in a specific situation and then apply the same learning within a contextually-related, albeit different situation.

For example, if a computer learns the label “DOG” from an image dataset of 10 different dog breeds, it should be able to identify a dog of an unseen breed when shown one.

Similarly, if a machine is taught to recognize the word “PLAY” in 10 English accents, then it should be able to identify if “PLAY” is pronounced in a different accent.

6. Natural Language Processing

The ultimate objective of Natural Language Processing (or NLP) is to read, decipher, and understand the human language in its true form.

NLP allows a human to make sense of the surroundings in a manner that is conducive to accomplishing a specific goal. Moreover, human intentions are mostly conveyed through the use of natural language.

Here’s how the NLP is employed in both of our example scenarios:

Scenario 1: To know the location of a grocery item, you need to read the big digital screens at each block of the supermarket displaying the item categories along with the corresponding shelf number. Once you locate the shelf, you need to read the labels of the products placed there. If the label of any product matches your desired grocery item, you will put that in a carry bag.

Scenario 2: To make the grapefruit juice, you have either memorized the recipe or noted it down in your cookbook. For the latter case, you need to read and implement the recipe instructions in a step-by-step manner.

As far as humans are concerned, their subconscious mind takes care of NLP for them. For example, you consume and understand a lot of interesting information while scrolling through your social media news-feed.

If you are on a road trip, you tend to read marketing bill-boards. If you are swimming, you are continuously reading the water-level sign boards to help you swim in a safe area.

Wherever you go, NLP helps you collect information about your surroundings and is regarded as a necessary skill for your survival in hostile environments.

To develop a collaborative environment, machines need to read and understand the human language. With the help of sensors, a machine can take input from a human to understand what a user is asking.

The most frequent use cases include information retrieval, question answering, text scraping, and language translation.

A highly desirable aspect of machines is to incorporate the breadth of human language in their algorithms.

Moreover, it is highly encouraged for machines to incorporate sentiment analysis during language processing.

7. Perception

In perception, the environment is scanned and various objects are recognized by comparing the feature set of each object with the knowledge base stored in the human mind.

Humans use vision and hearing to recognize different objects placed spatially in their surroundings.

Perception is the single-most-important aspect to succeed in both scenarios:

Scenario 1: To be able to read the digital screens within the supermarket as well as the labels of different items placed on the shelves, you must visualize them using your eyes. This will help you identify the items you need with further help from NLP

Scenario 2: While collecting all the ingredients from your kitchen and pouring them within the juicer machine, you need some visuals for identification. Moreover, you need to recognize the “RUN” and “STOP” buttons, which are displayed as either text or a logo on the juicer machine.

In terms of machines, perception deals with vision and speech recognition. Computer vision deals with analyzing and processing the visual input, whereas, speech recognition is responsible for identifying the individual words spoken by a human in a native language.

Machines have been made able to perceive and interpret the information about the environment using different sensors:

  • Camera: Visualization
  • Microphones: Listening/Hearing
  • Tactile Sensors: Sense of Touch
  • Wireless Signals: To Control a Machine remotely
  • Lidar: 3D Representation of Environment
  • Sonar: Object Detection

For machines, perceptual analysis is complicated by the fact that an object may appear at different angles and orientations, and under different illumination intensities and lighting conditions.

During object recognition, the contrast of the object with respect to its environment plays a vital role. For example, it is easy to detect an apple placed among mangoes; however, the task becomes extremely difficult when a large number of strawberries surround the apple.

Common applications for machine perception include Autonomous vehicles, Object Recognition, Facial Recognition, Speech Recognition, etc.

8. Motion and Manipulation

Motion is the study of a physical movement between different locations while avoiding obstacles along the way.

Manipulation deals with the dexterity and movement of a mechanical structure while staying at a fixed physical location.

If you consider humans, LEGS play a vital role in the movement, whereas ARMS and HANDS manipulate different objects within the environment.

For both of our example scenarios, motion and manipulation are an absolute necessity:

Scenario 1: To collect different grocery items on your list, you need to move between the different sections of the supermarket. For grabbing an item from the shelf and putting it in your carry bag, you need the dexterity of your hands.

Scenario 2: To grab the ingredients from different cabinets of your kitchen, you need to move towards the cabinet. Then, you need to reach for the ingredient by extending your ARM in the correct direction, and finally grabbing it with the help of your HAND.

The physical dexterity of a machine is one of the two most complicated human aspects to replicate, the other being perception.

It is frequently stated that the motion and manipulation capabilities of most robots do not match even those of a one-year-old. AI can train a computer to play a game with great accuracy, as long as the game does not involve mobility and perception.

A mobile robot can map and traverse across the static environment effectively, however, dynamic environments pose greater challenges.

For industrial applications, AI can be used to train robotic arms and industrial robots for effective manipulation thereby compensating for the joint friction and gear slippage.

9. Social Intelligence

Social Intelligence deals with affective computing, in which a system takes actions by recognizing and processing human emotions, sentiments, and affects.

It is also a medium through which humans elicit feelings of empathy for fellow-beings, and is an important ingredient to form a peaceful community.

Social intelligence can also be integrated within our example scenarios, albeit in some special cases:

Scenario 1: Let’s say you wanted to buy bread from a specific brand. Upon reaching the location of that bread, you got to know that there was only one packet of bread left and another customer was demanding the same brand. Rather than disputing over the bread, you let the other person take the bread thinking that you will buy it from a nearby supermarket.

Scenario 2: Once you have prepared the fresh juice, you ask your roommate to give it a try. However, what you get is an AWKWARD face indicating a problem in the juice composition. You quickly realize that you have forgotten to add sugar to the juice. So, you quickly add the sugar and blend the juice again.

In the study of social intelligence, a robot either tries to mimic the sentiments of a human or recognize their emotions.

Social intelligence can be regarded as the sub-field of perception, as the robot needs to read the facial expressions or decipher the fluctuations in the human voice.

Using data about human emotions, an agent is better able to make decisions about human intentions and actions.

Such agents have frequent applications in Old Age Homes and Psychological centers.

10. General Intelligence

This is a venture to induce human-like intelligence within a machine. This sort of intelligence tends to make robots independent.

In such intelligence, human cognition skills are to be employed within the machine so that it can make informed decisions on its own without the intervention of humans or an explicit input defining the machines’ behavior.

I am now going to confess: “You cannot complete either of the example scenarios without the use of General Intelligence.”

At each moment in your life, you are using your sub-conscious mind which constitutes general intelligence.

If the machines are to solve the problems as well as people do, then general intelligence is the only trait which they need to have. PERIOD.

Today’s modern machines cannot even employ the basic intelligence of a one-year-old, let alone the general intelligence of a grown-up adult.

However, we are continuously moving in that direction. And to your surprise, incorporating general intelligence within robots is the single most important goal of Artificial Intelligence.

At the same time, a robot with general intelligence or superintelligence would have the ability to TAKE OVER THE WORLD. Don’t feel scared! That time has not come yet!

Artificial Intelligence Algorithms

The basic goal of artificial intelligence is to induce human intelligence within otherwise dumb machines.

This feat is achieved using a sequence of instructions, called algorithms, to carry out a specific task.

Most of the algorithms fall under two categories:

  1. Symbolic Approach
  2. Connectionist Approach

The symbolic approach disregards the biological structure and working of the human brain, and instead replicates human cognition within machines using abstract symbols and rules.

The connectionist approach imitates the human brain by creating layers of neurons in an Artificial Neural Network.

In object recognition, the connectionist approach will train an artificial neural network by feeding it the images of the object under different angles and orientations. By contrast, the symbolic approach will try to compare each object with the geometrical description of the real object.

Expert Systems employ the symbolic approach, whereas neural networks are implemented using the connectionist approach.

1. Expert Systems

Expert systems incorporate specific knowledge about a narrow field that, otherwise, only an expert would know. Such systems are capable of outperforming the experts at a specific task.

Expert systems are self-contained in a microworld of their own, and they will almost certainly fail in any other environment. Examples of such systems are auto-cruise in a ship and autopilot in a plane.

In such a system, many experts are interviewed about the knowledge in some narrow field. This knowledge is then represented as a large number of actionable rules.

Computers are then programmed with these rules, which allows them to make informed and critical decisions based on inputs from the environment.

In this manner, a machine can replicate the intelligence of an expert while performing a task within some narrow field.

There are two basic components of an expert system:

  • Knowledge Base: The explicit IF-THEN rules, distilled from interviews with experts in the field, are stored within the knowledge base. Such rules are also called Production Rules.
  • Inference Engine: The inference engine draws deductions from the production rules stored in the knowledge base, considering the environmental inputs.

Let’s suppose that the knowledge base contains the rule, “John eats either an apple or a guava”. If the environmental input states that “John is eating a fruit, but not a guava”, the inference engine should infer that “John is eating an apple”.
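The deduction above can be sketched in a few lines of code. This is a minimal, illustrative sketch of an inference engine applying a single disjunctive rule; the function names and data layout are my own assumptions, not part of any real expert-system framework.

```python
# Toy inference over a rule of the form "exactly one of these options is true".
def infer(options, known_false):
    """Given the options of a disjunctive rule and the options known to be
    false from environmental input, deduce the remaining option if unique."""
    remaining = [o for o in options if o not in known_false]
    return remaining[0] if len(remaining) == 1 else None

# Knowledge base rule: "John eats either an apple or a guava"
rule_options = ["apple", "guava"]

# Environmental input: "John is eating a fruit, but not a guava"
conclusion = infer(rule_options, {"guava"})
print(conclusion)  # apple
```

A real inference engine would chain many such rules together, firing each rule whose conditions are satisfied until no new facts can be deduced.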

Most expert systems store the rules in the knowledge base using fuzzy logic rather than simple binary logic (True or False).

In fuzzy logic, intermediate values are also considered rather than only the two extremes as in binary logic.

For example, a fuzzy variable between 0 and 1 inclusive might take five values: 0, 0.25, 0.5, 0.75, and 1. The temperature of heated water can then be represented as Cold, Mild Cold, Warm, Mild Hot, and Hot.

Since most of the experts use vague expressions while describing the rules, it is often necessary to use fuzzy logic to define the rules within the knowledge base.

Fuzzy Logic vs Binary Logic (Copyrights: Embedded Robotics)
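The contrast between the two logics can be sketched as follows. The temperature thresholds, the linear mapping, and the five-step quantization are illustrative assumptions chosen to match the Cold-to-Hot scale above, not a standard fuzzy-logic library.

```python
# Binary logic: water is either hot (True) or not hot (False).
def binary_hot(temp_c):
    return temp_c >= 60

# Fuzzy logic: a degree of "hotness" quantized to five values in [0, 1],
# assuming 20 degrees C maps to 0 (Cold) and 80 degrees C maps to 1 (Hot).
def fuzzy_hot(temp_c):
    degree = min(max((temp_c - 20) / 60, 0.0), 1.0)
    return round(degree * 4) / 4  # snap to 0, 0.25, 0.5, 0.75, 1

labels = {0.0: "Cold", 0.25: "Mild Cold", 0.5: "Warm",
          0.75: "Mild Hot", 1.0: "Hot"}

for t in (20, 35, 50, 65, 80):
    print(t, binary_hot(t), labels[fuzzy_hot(t)])
```

Note how binary logic flips abruptly at a single threshold, while the fuzzy version grades the answer through intermediate values, which is why it suits the vague expressions experts tend to use.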

Examples of expert systems include airline scheduling, genetic engineering, oil and mineral prospecting, chemical analysis, medical diagnosis, financial management, and cargo placement.

2. Neural Networks

This approach was developed to mimic the human brain at the neural level, and most importantly, how humans learn and remember.

A human brain works through an intermingled network of billions of neurons that pass signals to each other. This is the basic concept behind the emergence of neural networks and a central idea of connectionism.

In a neural network, neurons are arranged into interconnected layers: an input layer, some hidden layers, and an output layer.

In the training phase, neurons feed data into each other while the hidden layers extract the important attributes of the input data. Meanwhile, the weights attached to the connections are continuously adjusted to predict accurate output labels.

Once training is complete, new test data is fed into the neural network. If the network achieves a classification accuracy of around 90-95% on this test data, it has been trained well.

However, if the accuracy falls below par, the neural network requires re-training, perhaps with additional hidden layers and more samples in the training data set.

Neural Network Illustration with Input, Hidden, and Output Layers (Copyrights: Embedded Robotics)

Let’s evaluate the working of a neural network using a simple example of 4 neurons: 3 in the input layer and 1 in the output layer.

I am excluding the hidden layers in this example just for the sake of simplicity. However, the same concept applies to any number of hidden layers.

Each input neuron has a weight assigned to it, whereas the output neuron has a firing threshold. Each neuron will either fire (1) or not fire (0) depending on its firing threshold.

The output neuron will fire if the cumulative sum of the weights of all the firing input neurons exceeds its firing threshold. In general, only the weights of the firing neurons connected to the output neuron are considered.

In this example, all the input neurons are attached to the output neuron and therefore, all the firing neurons are considered.

If the desired output is 1 but the output neuron does not fire, the weights of the firing input neurons are progressively increased. Conversely, if the desired output is 0 but the result is 1, the weights of the firing neurons are decreased.

This process is repeated multiple times for each input until the weights are adjusted well enough to correctly predict the output.
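The training loop described above can be sketched as a tiny perceptron-style trainer: one output neuron, three inputs, a firing threshold, and weights nudged up or down after each wrong prediction. The threshold, learning rate, and training samples below are illustrative assumptions for the sake of the example.

```python
# Output neuron fires (1) if the summed weights of the firing
# input neurons exceed its firing threshold.
def fires(inputs, weights, threshold):
    total = sum(w for x, w in zip(inputs, weights) if x == 1)
    return 1 if total > threshold else 0

def train(samples, threshold=1.0, lr=0.25, epochs=20):
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for inputs, desired in samples:
            out = fires(inputs, weights, threshold)
            # Raise the weights of firing inputs if we needed a 1 but got a 0;
            # lower them if we produced a 1 but wanted a 0.
            for i, x in enumerate(inputs):
                if x == 1:
                    weights[i] += lr * (desired - out)
    return weights

# Desired behaviour: fire only when all three input neurons fire.
samples = [((1, 1, 1), 1), ((1, 1, 0), 0), ((1, 0, 1), 0), ((0, 1, 1), 0)]
weights = train(samples)
print(fires((1, 1, 1), weights, 1.0))  # 1
print(fires((1, 0, 1), weights, 1.0))  # 0
```

After a few passes the weights settle at values where only the all-firing pattern clears the threshold, which is exactly the repeated-adjustment process the animations below illustrate.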

Here is a simple animation illustrating the training regime of a neural network with three input neurons and one output neuron. The desired output is ‘1’ and the network is trained accordingly.

https://www.embedded-robotics.com/wp-content/uploads/2020/11/Neural-Network-Animation-1.mp4
Neural Network Training for Desired Output ‘1’ (Copyrights: Embedded Robotics)

Another animation illustrates the training for desired output ‘0’ while using the same structural framework for the neural network:

https://www.embedded-robotics.com/wp-content/uploads/2020/11/Neural-Network-Animation.mp4
Neural Network Training with Desired Output ‘0’ (Copyrights: Embedded Robotics)

Through a large number of repetitions, a pattern of connection weights is forged that enables the network to respond correctly to different inputs.

Another salient feature of neural networks is that the training process is automatic, requiring no human intervention.

Moreover, the same learning procedure applies to different tasks. You just need to specify the network configuration, inputs, and outputs, and then let the system go through the training process on its own.

Artificial Intelligence Types

Artificial intelligence is such a vast field that it is commonly divided along two dimensions, each of which differentiates types of AI within its domain:

  • Functionality based Types of Artificial Intelligence
  • Capability based Types of Artificial Intelligence

Functionality based Types of Artificial Intelligence

This differentiation is based on the functionality offered by an AI-based intelligent system.

As far as functionality is concerned, AI is divided into four types:

  1. Reactive Machine
  2. Limited Memory
  3. Theory of Mind
  4. Self-Awareness

1. Reactive Machine

These sorts of AI systems have no memory. This implies that such systems can only react based on current environmental conditions, hence the name, reactive machines.

When such systems react to any situation, they are unable to use the past data of similar interactions. Consequently, they cannot make better-informed decisions.

Such agents tend to repeat old mistakes when deployed in a similar environment.

The best-known example of this type of AI is the Deep Blue computer that beat Garry Kasparov at chess. Deep Blue could identify the chess pieces and make predictions. However, it could not make informed decisions by drawing on the past.

2. Limited Memory

These systems go one step further than reactive machines. They have memory available, albeit limited, which helps them analyze past experiences to make better decisions.

These machines store multiple occurrences of the most frequent problems, the decision taken for problem-solving, and the outcome of the decision.

Such saved data helps them shape their decisions in the future. This also helps optimize the problem-solving algorithms for the machine.

For example, a pick-and-place robot may be a little clumsy the first time it performs the desired action. The next time, the robot will be a little more precise in its operation. Over time, the robot keeps adjusting the timing of its PICK and PLACE operations until they are optimized for precise operation.

Some of the decision-making functions in self-driving cars are built this way. They can provide useful information to users about traffic jams, suggest the best possible routes, and minimize the risk of accidents.

3. Theory of Mind

This is a psychology term that deals with human emotions and sentiments. In terms of AI, the theory of mind relates to social intelligence being incorporated within the machines.

These machines can understand human emotions and sentiments with the help of facial and speech recognition. Such machines can also decipher non-verbal cues and other intuitive elements during conversational speech.

Since human emotions reveal a great deal about a person’s character, such machines can understand human intentions and predict their behavior.

This will allow the machines to make informed decisions and align their actions with human intentions to create a collaborative environment.

Such robots find applications in old age homes, and with children suffering from trauma, compulsive disorders, and psychotic disorders.

4. Self-Awareness

Have you ever seen the super-intelligent robots in movies such as Big Hero 6, Chappie, and Ex Machina?

These robots are self-aware and possess human-level consciousness. They have their desires, goals, and objectives.

Such machines do not need any guidance from humans. They can make well-informed decisions without any prior learning or reasoning algorithms being built into them.

They are intelligent enough to devise their own algorithms, which helps them optimize their plan of action.

Inducing self-awareness within robots is a long-term goal of AI. However, such technology remains a far-fetched idea that is constantly being researched all around the world.

Capability based Types of Artificial Intelligence

Not every system employing AI has the same problem-solving capabilities as the others. This difference arises mainly from variance in the capabilities of the underlying AI algorithms.

Artificial Intelligence is mainly divided into three types regarding its capabilities:

  1. Narrow AI or Weak AI
  2. Artificial General Intelligence (AGI)
  3. Super Intelligence

Let’s explore each type in more depth.

1. Narrow AI

Narrow AI is also sometimes called “Weak AI” or “Artificial Narrow Intelligence”. Such systems are designed and trained only to perform a specific task in a narrow domain.

If you ask such a system to perform an unfamiliar task, it will fail miserably, because of its limited capacity to reason and infer outside its area of expertise.

In simple words, this AI can only take task-dependent decisions. For example, a system may be best at facial recognition, but fail to recognize other objects even in a static environment.

Most of the AI-based systems today make use of narrow AI. This is as far as humans have so far succeeded in replicating human intelligence within machines.

Whether it is the speech and language recognition in Siri, the visual recognition system of a self-driving car, or the advanced recommendation systems of search engines, you will almost always find narrow AI at work.

Unlike humans, these systems lack the capability for independent reasoning. They can only be taught to perform a specific task. This is the primary reason behind the tag of Narrow AI.

2. Artificial General Intelligence (AGI)

In AGI, algorithms are designed to mimic the human cognition capabilities within the machine.

When presented with an unfamiliar task, the system will perform an action based on some previously stored data for vaguely similar tasks.

Artificial general intelligence is also called Strong AI. The main goal of such AI is to build a machine, whose intellectual capability will complement that of humans.

A machine infused with AGI can learn, reason, and improvise as per the situation demands. Most of the decisions taken by such machines are task-independent.

In simple terms, such machines can perform tasks for which they are not even explicitly programmed.

AGI is a flexible form of intelligence that can perform a wide variety of tasks: collecting garbage, cooking delicious meals, lifting weights, making valid arguments, and reasoning about a large number of topics. There is nothing that AGI cannot do.

However, the quest for such AI is fraught with difficulties. It is thought that AI cannot even replicate the general intelligence of an ant in the foreseeable future, let alone a complete human being.

By one prediction, there is a 50% chance that AGI will be achieved between 2040 and 2050, and 90% probability of getting there by 2075.

“Narrow AI may outperform humans in its specific task such as chess or solving equations, but AGI is intended to challenge humans in almost every living aspect”

Narrow AI vs Artificial General Intelligence

3. Super Intelligence

Bostrom has defined super-intelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”.

Now, this is the type of AI that could wreak havoc on earth and surpass humans in every aspect.

Feeling Scared?

Well, don’t. I was just kidding. If a system is super-intelligent as compared to humans, it can be used in a lot of better ways rather than just destroying humanity.

Super AI could be used to make highly rational decisions, weighing a large number of contextually-related factors thanks to its immense processing power.

Moreover, it could be used to optimize a human’s plan of action during daily activities. For example, it could maintain an exact calorie count for each individual, or recommend the best possible route given constraints such as fuel consumption and time limitations.

With super-intelligence, possibilities are endless. However, incorporating such intelligence within the machines also seems impossible in the current time frame.

According to one rough prediction, it will take almost 30-40 years to achieve super AI within machines once the long-sought barrier of AGI is crossed.

Artificial Intelligence Examples

In the modern world, most businesses employ Artificial Intelligence in one way or the other. However, in most cases, you will mainly see Narrow AI at work.

Here are some of the AI-based machines which you may encounter during everyday life:

  • Smart Virtual Assistants i.e., Apple’s Siri, Amazon Alexa
  • Industrial Robots
  • YouTube’s Video Recommendation System
  • Chatbots for Marketing and Customer service
  • Robo-Advisors for stock trading
  • Social media monitoring tools for explicit content
  • Song or TV show recommendations from Spotify and Netflix
  • Optimized HealthCare Treatment Recommendations
  • Google Search Engine
  • Facial Recognition Systems
  • Spam filters on email
  • RFID based Door Locking system
  • Self-driving Vehicles
  • IBM’s Watson 
  • Disease Mapping and Prediction tools
  • Interpreting Video Feeds from Drones
  • Plagiarism Checkers
  • Smart Email Categorization
  • Optical Character Recognition (OCR) Systems
  • Ridesharing apps like Uber, Careem, Lyft, etc.

That’s all from my side. Which prospective applications of Artificial Intelligence can you think of? Let me know in the comments…
