Artificial Intelligence

Intelligent Agents

What are Intelligent Agents?

Within the context of artificial intelligence, intelligent agents are entities that perceive and interact with the environment they reside in. These interactions are autonomous (no direct control from a person) and aim, through learning and acquiring more information, to improve efficiency and effectiveness and thereby achieve specific objectives.

PEAS

PEAS is a model for describing artificial intelligence agents, and many AI agents follow this structure. PEAS stands for performance, environment, actuators, and sensors.

  • Performance – the measure of how successful the AI agent’s actions (or behaviour) are
  • Environment – the surroundings of the AI agent and the interactions between the two
  • Actuators – the control parts (and their functions) through which the AI agent acts
  • Sensors – the parts through which the AI agent perceives its surroundings

The following are two simplified examples of PEAS descriptions of artificial intelligence agents, followed by a short code sketch of the first one:

  1. Automated warehouse robot
    – performance: percentage of items correctly moved from point A to B
    – environment: the surrounding room, items (packages), and other relevant objects
    – actuators: mechanical arms and their functions for picking up items
    – sensors: a camera to identify the environment, plus distance and angle sensors to measure distance and rotation
  2. Autonomous car
    – performance: measures of safety and comfort
    – environment: the surrounding roads, pedestrians, other vehicles, and other objects
    – actuators: steering, turn signals, and brakes
    – sensors: GPS measuring location, a camera analysing the surroundings, and an accelerometer measuring changes in speed
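
To make the PEAS structure concrete, below is a minimal Python sketch of the warehouse robot description from example 1. The class and field values are purely illustrative, not part of any real agent framework.

```python
from dataclasses import dataclass

@dataclass
class PEASDescription:
    """Illustrative container for a PEAS description of an agent."""
    performance: str
    environment: list
    actuators: list
    sensors: list

# PEAS description of the automated warehouse robot (example 1)
warehouse_robot = PEASDescription(
    performance="percentage of items correctly moved from point A to B",
    environment=["warehouse room", "packages", "other relevant objects"],
    actuators=["mechanical arms for picking up items"],
    sensors=["camera", "distance sensor", "angle sensor"],
)

print(warehouse_robot)
```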

Types of AI Agents

Intelligent agents can be split into five categories based on their capabilities and functions.

  • Reflex agents
  • Model-based agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents

Reflex Agents

Reflex agents do not base decisions on any historical data, only on the perceived present. They operate on pre-set rules and act solely on what they currently perceive, so a fully observable environment is a must. An action is taken only when an event triggers it, i.e. when something changes.

– example: a robot is placed in a warehouse and tasked with picking up a package only once. The robot picks up the item, moves on, sees another package, and picks it up again, as it holds no information about the previous collection.
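
As a minimal sketch of this behaviour, the hypothetical Python agent below maps each percept directly to an action through pre-set rules and keeps no memory, so it reacts to a package every time it sees one. The percept strings and actions are illustrative only.

```python
def reflex_agent(percept):
    """Simple reflex agent: pre-set condition-action rules, no memory."""
    if percept == "package_in_front":
        return "pick_up"
    if percept == "obstacle_in_front":
        return "turn"
    return "move_forward"

# Each percept is handled in isolation, so the agent picks up
# a package again even if it has already handled an identical one.
for percept in ["package_in_front", "clear", "package_in_front"]:
    print(percept, "->", reflex_agent(percept))
```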

Model-based Agents

Model-based agents operate similarly to reflex agents, but with the advantage of a better comprehension of the environment. These agents take historical data into consideration as well as the present situation, and as such can work effectively in a partially observable environment.

– example: the same collecting-robot scenario applies to the model-based agent, with the difference that the robot does not pick up the second package, as it remembers the first collection.
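
A minimal sketch of the same scenario, assuming hypothetical percepts that carry a package_id: the agent’s internal model is simply the set of packages it has already collected, which is enough to avoid picking up the same package twice.

```python
class ModelBasedAgent:
    """Keeps an internal model (collected packages) built from past percepts."""

    def __init__(self):
        self.collected = set()  # internal state: what has already been picked up

    def act(self, percept):
        package_id = percept.get("package_id")
        if package_id is not None and package_id not in self.collected:
            self.collected.add(package_id)
            return "pick_up"
        return "move_forward"

agent = ModelBasedAgent()
print(agent.act({"package_id": "A"}))  # pick_up
print(agent.act({"package_id": "A"}))  # move_forward (already collected)
```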

Goal-based Agents

Goal-based agents operate through search and planning. Their actions are objective-oriented, with the AI agent able to choose amongst multiple options. This in turn enables the agent to be more flexible and adaptive to the environment.

– example: continuing the above case, the robot is located outside the warehouse and must reach its destination before collecting. There are multiple routes, and the robot must choose one.
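
A minimal sketch of the planning idea, assuming a hypothetical map of locations represented as a small graph: instead of reacting to the current percept, the agent searches for a sequence of moves that reaches the warehouse.

```python
from collections import deque

def plan_route(graph, start, goal):
    """Breadth-first search: return one sequence of locations reaching the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route found

# Hypothetical map of locations around the warehouse
roads = {
    "gate": ["yard", "car_park"],
    "yard": ["loading_dock"],
    "car_park": ["loading_dock"],
    "loading_dock": ["warehouse"],
}
print(plan_route(roads, "gate", "warehouse"))
```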

Utility-based Agents

Utility-based agents work similarly to goal-based agents, with the additional benefit of a utility measure. The utility measure helps the AI agent refine its decisions by rating the possible outcomes and choosing the best one towards the desired objective.

– example: the same scenario as for the goal-based agent applies here. The difference is that the robot evaluates all routes and picks the most efficient way to the warehouse.
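
A minimal sketch of the utility measure, assuming hypothetical candidate routes with per-segment distances: the agent scores every option and commits to the one with the highest utility (here, simply the shortest total distance).

```python
def route_utility(route):
    """Illustrative utility: shorter total distance means higher utility."""
    return -sum(route["distances"])

# Hypothetical candidate routes to the warehouse (segment lengths in metres)
routes = [
    {"name": "via yard", "distances": [40, 25, 10]},
    {"name": "via car park", "distances": [30, 20, 10]},
]

best = max(routes, key=route_utility)
print(best["name"])  # "via car park" – the shortest route overall
```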

Learning Agents

Learning agents add an extra capability, learning, which helps the artificial intelligence agent improve its knowledge of the environment from past experiences. The agent operates on a feedback loop, allowing it to gradually adapt and become a more efficient decision maker.

– example: in the above case, if the item is very fragile and breaks when collected, the robot can gradually learn how fast to move and how much force to use so that the package does not get damaged.
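
A minimal sketch of that feedback loop, assuming a hypothetical environment signal that reports whether the package survived: the agent lowers its grip force after each failure and keeps the force that finally works.

```python
def handle_package(force):
    """Hypothetical feedback from the environment: grips of 5.0 or more
    break the package (a threshold the agent does not know in advance)."""
    return "broken" if force >= 5.0 else "intact"

force = 10.0  # initial, overly strong grip
for attempt in range(1, 8):
    outcome = handle_package(force)
    print(f"attempt {attempt}: force={force:.1f} -> {outcome}")
    if outcome == "broken":
        force *= 0.8  # negative feedback: grip more gently next time
    else:
        break  # positive feedback: keep the learned grip force
```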



by AICorr Team
