Academic year: 2021


These slides have been prepared for teaching purposes. They contain original material owned by the Università degli Studi di Bari and/or figures owned by other authors, companies and organizations, whose references are reported. All or part of the material may be photocopied for personal or teaching use, but may not be distributed for commercial use. Any other use requires specific authorization from the Università degli Studi di Bari and from the other authors involved.

Intelligent Agents – Types of agents

Laurea MAGISTRALE in COMPUTER SCIENCE

Course: ARTIFICIAL INTELLIGENCE

Stefano Ferilli

Definitions

Agent: Who acts

Software Agents: Automated systems that carry out useful tasks

Characterizing Features

Autonomous: action is driven by “internal” directives

Reactive: perceive aspects of the environment and react appropriately

Proactive: may take initiative and make goal-driven actions

Social: communicate with other agents

Agents

How should agents act?

They should interact with the environment

Sensors

Effectors

Kinds of agents

A traditional distinction

Simple-Reflex agents

Agents that take the world into account

Goal-driven agents

Utility-driven agents

Our focus

NOT the interaction with the environment

Making suitable sensors and effectors is the job of engineering

YES the reasoning process that the agent must carry out

Agent Schemes

First schema : Simple-Reflex (S-R) Agents

Machines that have no internal states and react to stimuli coming from the environment

Interpret input – find rule

Stimulus-Reaction

Next schema

The agent takes into account the world and the acquired experience

Memory and Internal states

Simple-Reflex Agent

[Figure: the AGENT perceives "how the world is now" through Sensors, applies Condition-Action Rules to decide "which action to take now", and acts on the Environment through Effectors]


Simple Reflex Agents

Operates on the basis of simple condition-action rules

function ARS(perceptions) return action
  static: rules // a set of condition-action rules
  state ← interpret_input(perceptions)
  rule ← find_rule(state, rules)
  action ← rule_action(rule)
  return action
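A minimal Python sketch of the ARS schema above. The rule format (condition predicate, action) and the thermostat-style example rules are illustrative assumptions, not part of the slides.

# Minimal sketch of the simple-reflex (S-R) schema above
def interpret_input(perceptions):
    # Here the "state" is just the raw percept; a real agent would abstract it
    return perceptions

def find_rule(state, rules):
    # Return the first rule whose condition matches the current state
    for condition, action in rules:
        if condition(state):
            return (condition, action)
    return None

def rule_action(rule):
    return rule[1] if rule else None

def ARS(perceptions, rules):
    state = interpret_input(perceptions)
    rule = find_rule(state, rules)
    return rule_action(rule)

# Hypothetical thermostat-like rules
rules = [
    (lambda s: s["temperature"] < 18, "turn_heating_on"),
    (lambda s: s["temperature"] > 22, "turn_heating_off"),
    (lambda s: True, "do_nothing"),
]
print(ARS({"temperature": 16}, rules))  # -> turn_heating_on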

Simple-Reflex Agent with Internal State

[Figure: same structure as the simple-reflex agent, with an internal State that is updated using knowledge of "what actions do" and "how the world evolves" and that feeds the Condition-Action Rules used to choose the action]

Simple-Reflex Agent with Internal State

Operates on the basis of

simple condition-action rules

an internal state that represents a simple past experience of the agent

function ARCS(perceptions) return action
  static: rules // a set of condition-action rules
          state // a description of the current state of the world
  state ← update_state(state, perceptions)
  rule ← find_rule(state, rules)
  action ← rule_action(rule)
  state ← update_state(state, action)
  return action
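A minimal Python sketch of the ARCS schema above: the same condition-action rules, plus an internal state recording a simple past experience. The dictionary-based state and the smoke-alarm rules are illustrative assumptions.

class ARCS:
    def __init__(self, rules):
        self.rules = rules
        self.state = {}                          # description of the current world state

    def update_state(self, perceptions=None, action=None):
        if perceptions is not None:
            self.state.update(perceptions)       # fold in what was just perceived
        if action is not None:
            self.state["last_action"] = action   # remember what was just done

    def __call__(self, perceptions):
        self.update_state(perceptions=perceptions)
        for condition, action in self.rules:
            if condition(self.state):
                self.update_state(action=action)
                return action
        return None

# Hypothetical rule set: do not repeat the same alarm twice in a row
rules = [
    (lambda s: s.get("smoke") and s.get("last_action") != "ring_alarm", "ring_alarm"),
    (lambda s: True, "wait"),
]
agent = ARCS(rules)
print(agent({"smoke": True}))   # -> ring_alarm
print(agent({"smoke": True}))   # -> wait (the internal state remembers it already rang)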

Agents with explicit goals

When the agent has explicit information about its goal, the sequence of actions (based on the stimuli that "produce" the consequent actions) is defined as a function of the goal to be reached

Search & Planning

Agent with explicit goals

[Figure: in addition to the State, "what actions do" and "how the world evolves", the agent uses its Goal and a prediction of "how the world becomes if I carry out action A" to choose the action]

Agent with explicit goals

Variant of the schema

The agent knows how to operate when

there are many goals, possibly conflicting with each other, or

none of the goals can be attained with certainty

The agent must be able to assess the

utility/convenience of making each choice


Complete utility-driven agent

[Figure: like the goal-based agent, but the choice is driven by a Utility function that evaluates "how happy will I be in such a state" for each predicted outcome "how it becomes if I carry out action A"]
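A minimal Python sketch of the utility-driven choice in the figure above: predict the state each action would produce ("how it becomes if I carry out action A") and pick the action whose predicted state has the highest utility ("how happy will I be in such a state"). The vacuum-style transition model and utility function are toy assumptions.

def choose_action(state, actions, predict, utility):
    # Pick the action whose predicted resulting state maximizes utility
    return max(actions, key=lambda a: utility(predict(state, a)))

def predict(state, action):
    new_state = dict(state)
    if action == "clean":
        new_state["dirt"] = max(0, state["dirt"] - 1)
    elif action == "move":
        new_state["position"] = state["position"] + 1
    return new_state

def utility(state):
    return -state["dirt"]          # the less dirt left, the happier the agent

state = {"position": 0, "dirt": 2}
print(choose_action(state, ["move", "clean"], predict, utility))  # -> clean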


Examples of Agents

Agent Type | Perceptions | Actions | Goals | Environment
Interactive English tutor | Typed words | Print exercises, suggestions, corrections | Maximize student scores in tests | Set of students
Refinery controller | Temperature and pressure readings | Open and close valves; adjust temperature | Maximize purity, product, safety | Refinery
Robot that collects parts | Pixels with variable intensity | Collect parts and sort them into containers | Place parts in the proper containers | Conveyor belt with parts
Satellite image analysis system | Pixels with variable intensity and color | Print a categorization of the scene | Correct categorization | Images from orbiting satellite
Medical diagnosis system | Symptoms, responses, patient's answers | Questions, tests, treatments | Healthy patient, minimize costs | Patient, hospital

Rational Agent

Does the right thing,

the action that will bring most success by maximizing the expected performance measure

Acting rationally = acting so as to reach one's goals, given one's own knowledge, based on the incoming perceptions and on the actions it can carry out

Performance measure = evaluates – how and when – the expected success, considering what has been perceived

New incoming perceptions do not affect its representation/model of the world

Describing/Building an agent

Correspondence:

Sequence of perceptions → Actions

An agent may be described through

A complete listing of actions it may carry out in response to any possible sequence of perceptions

Project/program of an ideal rational agent

The definition of the correspondence (correspondence function) without providing an exhaustive listing (see the sketch below)
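A small Python sketch contrasting the two descriptions above: an explicit table of correspondences (percept sequence → action) versus a compact agent program that defines the same correspondence function. The vacuum-world percepts and actions are illustrative assumptions.

# Explicit listing of correspondences for a toy vacuum world
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("B", "dirty"),): "suck",
    (("B", "clean"),): "move_left",
}

def table_driven_agent(percept_sequence):
    # Exhaustive listing: only works for sequences explicitly present in the table
    return table.get(tuple(percept_sequence))

def agent_program(percept_sequence):
    # Same correspondence, defined as a function instead of a listing
    location, status = percept_sequence[-1]
    if status == "dirty":
        return "suck"
    return "move_right" if location == "A" else "move_left"

history = [("A", "dirty")]
print(table_driven_agent(history), agent_program(history))  # suck suck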

Describing/Building a real and autonomous rational agent

Computational limitations prevent reaching perfect rationality

Autonomous agent: has learning abilities

Allow it to update its initial knowledge

In real rational agents there are many ways for modeling

Autonomy and learning capacity

Evaluation of performance

Aim: building agents with best performance

As for any system

Problem solving agents and Planning agents

Problem solving agent

Decides the sequence of actions before acting

Accessible environment

Knowledge-based agent

Chooses actions based on an explicit representation of states and of actions’ effect

Complex and inaccessible environment

Planning agent

Plans but using explicit knowledge about actions and their effect

A special case of a KB agent


Problem-solving Agent

To solve a problem, an AI system or

Knowledge-based system generally considers a large number of possibilities and dynamically builds a solution

Problem Solving

“Problem” is a concept that cannot be defined, only exemplified. (Nilsson, 1982)

Some examples

Board puzzles -> usually NP

“Traveling Salesman”

Puzzles such as Rubik’s Cube

SAT, Theorem Proving

Games (Checkers, Chess, ...)

VLSI

A General “Problem Solver”

How to build programs that, through

symbolic computation

specialized knowledge about a domain of interest are able to solve problems “automatically”?

I.e., without an algorithm defined and translated into a pre-defined sequence of operations to be executed

Initial focus of AI

Before focusing on methods and techniques to represent, manipulate, process knowledge

Problem Solving

Methods to attain (often indeterminate or uncertain) goals

Search Methods

Methods to generate all possible solutions and test them until an appropriate one is found

Problem Solving

Available are

1. A (possibly partial) description of a current situation and of a desired situation

Situations represented using schemes (or languages) rich enough to allow describing entities, events, cases or objects (situations) and differences between pairs of situations

2. A list of operators that can be applied to situations in order to transform them into new situations

Operators available in a language referred to the solution process

Any sequence of operators in the process language is itself an operator

Problem Solving

Problem solution

3. A (composite) operator in the process language that transforms the object describing the initial situation into the object describing the desired situation


Turing Machine

Theoretical computational model

Sequence of applicable operators known a priori

Imperative solution methods corresponding to algorithms that are general but valid for specific problems

Universal Turing Machine

Using the imitation algorithm, can execute any algorithm for a specific problem (specialized Turing Machine) it takes as input along with the data on which it is to be applied

Problems for which the algorithm is lacking

Are to be solved tentatively

Must exploit a general search mechanism

The solution is to be searched for in the space of possible "problem states"

To reach a solution, the mechanism must allow choosing, at any moment, among the applicable operators, those that can be appropriately applied to a situation/state to transform it into one that is closer to the final expected situation (goal state)

A possible computational model:

Pattern Directed Inference Systems

Pattern Directed Inference Systems (PDIS)

Programs that directly and dynamically respond to a range of (unanticipated) data or events

rather than working on expected data, in known format, using a pre-defined and rigid strategy (control structure)

So-called “Pattern-Directed organization”

Patterns underlying data help to choose the code to be applied

Pattern matching operator is crucial

Pattern Directed Inference Systems

3 components

A collection of modules (PDM)

Substructures that can be activated by patterns in the data

Global Data Base, or Working memory

One or more data structures that can be examined and modified by the PDMs

An interpreter

Controls the selection and activation of the PDM modules
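A minimal Python sketch of the three PDIS components above: pattern-directed modules (PDMs), a working memory of facts, and an interpreter that selects and activates whichever module matches the current data. The tuple-based fact representation and the two toy modules are illustrative assumptions.

working_memory = {("temperature", "high")}

def cooling_module(wm):
    # PDM activated by the pattern ("temperature", "high")
    if ("temperature", "high") in wm and ("fan", "on") not in wm:
        return {("fan", "on")}
    return set()

def alarm_module(wm):
    # PDM activated once the fan is on but the temperature is still high
    if ("fan", "on") in wm and ("temperature", "high") in wm and ("alarm", "raised") not in wm:
        return {("alarm", "raised")}
    return set()

def interpreter(wm, modules, max_cycles=10):
    # Control: fire the first module that adds new facts, until quiescence
    for _ in range(max_cycles):
        for module in modules:
            new_facts = module(wm) - wm
            if new_facts:
                wm |= new_facts
                break
        else:
            break
    return wm

print(interpreter(working_memory, [cooling_module, alarm_module]))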

Problem Solver

Organized as 3 specialized modules with specific objectives

Describe operator

Match applicable conditions

Choose operator

Problem Solving

Problem setup

Define the "problem environment"

Identify all possible configurations of the elements in the domain (state space)

Distinguish "legal" or "admissible" states

Define the initial state

Define the goal states

Corresponding to the desired situations of the problem

Define a set of operators (rules)

Each explicitly expressing the conditions that are to be satisfied in order to apply it

Generate solution

Search process, in the space of states, for the operators that allow reaching the goal states


Problem State Space

Problem space =

A set of problem states (possible configurations)

Symbolic structures representing single problem configurations in sufficient detail to allow devising a solution procedure

+

A set of operators that can change the states

Functions that take a state and map/transform it into another

Not all operators are applicable to every state

Operator preconditions (applicability conditions): the conditions that must be true for an operator to be applicable to a state

Problem State Space

Examples

8-Puzzle:

States: the different permutations of the tiles

Operators: move a numbered tile up, down, left, right

Chess:

States: the different dispositions of pieces on the chessboard

Operators: valid moves for each piece according to the game rules

8-puzzle

Problem

Given a frame of numbered square tiles in random order with one tile missing, place the tiles in order by making sliding moves that use the empty space

E.g.:

Possible moves: tile 4 down; tile 1 right, tile 8 up

9! = 362,880 configurations

[Figure: an example initial configuration and goal configuration of the 8-puzzle]
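A small Python sketch of the 8-puzzle state space described above: states are permutations of the tiles (0 stands for the blank) and operators slide a tile into the blank. Encoding the board as a flat 9-tuple is an implementation choice, not part of the slides.

def successors(state):
    # Return the (move, new_state) pairs reachable with one sliding move
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = []
    for drow, dcol, name in [(-1, 0, "down"), (1, 0, "up"), (0, -1, "right"), (0, 1, "left")]:
        r, c = row + drow, col + dcol
        if 0 <= r < 3 and 0 <= c < 3:          # operator precondition: stay on the board
            swap = r * 3 + c
            new_state = list(state)
            new_state[blank], new_state[swap] = new_state[swap], new_state[blank]
            moves.append(("tile %d %s" % (state[swap], name), tuple(new_state)))
    return moves

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)
for move, s in successors(start):
    print(move, s)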

How Many States?

15-puzzle: ~10^13 states

24-puzzle: ~10^24 states

Rubik's Cube: ~10^19 states

[Figure: 15-puzzle and 24-puzzle boards]

How Many States?

Knight’s Tour ...

“The first real program I tried to write was called the Knight’s Tour. You jump a knight piece around the chessboard, only in valid moves for a knight, in a pattern so that it hits every one of the sixty-four squares on the board exactly once. [...] I wrote my program [...] to try all the moves until you can’t move again. And if it didn’t hit all the squares by the time it got stuck, the program would back up and change a move and try again from there. It would keep backtracking as far as it needed and then kept going. That computer could calculate instructions a million times a second, so I figured it would be a cinch and would solve this problem quickly.

... Knight’s Tour ...

“[...] The computer doesn’t spit out anything. The lights on the computer flickered, and then the lights just stayed the same. Nothing was happening. My engineer friend let it run a while longer and then said, “Well, probably it’s in a loop”. [...] Anyway, the next week I went back and I wrote my program so that I could flip a switch in order to get printouts of whatever chess arrangement it was working on. I remember pulling the printouts out and studying them that very day and realizing something. The program was in fact working the way it was supposed to. I hadn’t done anything wrong. It just wasn’t going to come up with a solution for 10^25 years. That’s a lot longer than the universe has even been around.”


... Knight’s Tour

“That made me realize that a million times a second didn’t solve everything. Raw speed isn’t always the solution. Many understandable problems need an insightful, well-thought-out approach to succeed.”

Steve Wozniak

Problem-space Graph

A mathematical abstraction often used to represent the space of a problem

Directed or undirected graph

States = Nodes

Operators = Arcs

State space formally represented as a 4-tuple

<N,A,S,G>

N : set of states, represented as nodes in a graph

A : set of arcs connecting nodes, representing steps of a problem-solving process

S : a non-empty subset of N that includes the initial state

G : non-empty subset of N that includes the goal states of the problem

Solution = path in the graph, from an initial node in S to a node in G
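A minimal Python sketch of the <N,A,S,G> formulation above, with breadth-first search used to find a solution path from an initial node in S to a node in G. The tiny graph is an illustrative assumption.

from collections import deque

def solve(N, A, S, G):
    # N: nodes, A: successor relation (node -> list of nodes), S: initial states, G: goal states
    frontier = deque([[s] for s in S])
    visited = set(S)
    while frontier:
        path = frontier.popleft()
        if path[-1] in G:
            return path                              # solution = path in the graph
        for succ in A.get(path[-1], []):
            if succ in N and succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

N = {"s0", "s1", "s2", "goal"}
A = {"s0": ["s1", "s2"], "s1": ["goal"], "s2": []}
print(solve(N, A, {"s0"}, {"goal"}))                 # ['s0', 's1', 'goal']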

Problem Solving

4 general steps

Goal definition

Which are the successful states of the world

Problem definition

What actions and states are to be considered depending on a specific objective

Search

Determining the possible sequence of actions that lead to known/legal states

(possibly) Choosing the best sequence

Execution

Carrying out actions

Problem Solving

Example: Route finding


Problem Solving

Example: Planning a vacation

We are in Romania

We are currently in Arad

Tomorrow there is a flight from Bucharest

Goal

Being in Bucharest

Problem formulation

States: different towns

Actions: driving from one town to another

Solution

A sequence of towns: Arad, Sibiu, Fagaras, Bucharest

Problem Solving

Example: Route finding

[Figure: road map of Romania annotated with Start, Goal, States, Actions, and the Solution path]

Choosing the space of states

The real world is, in general, complex

State space : an abstraction of the real world useful for problem solving

States = set of real states

Admissible, legal, etc.

Actions = complex combinations of real actions

E.g., the journey Arad → Zerind represents a complex set of possible roads, paths, journeys

Abstraction is valid if the path connecting two states reflects what one can do in the real world

Solution(s) = the set of possible real paths that are solutions in the real world

Each abstract solution should be simpler than in the real world

General model of a solver agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
    goal ← FORMULATE-GOAL(state)
    problem ← FORMULATE-PROBLEM(state, goal)
    seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
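A rough Python rendering of the schema above, reusing breadth-first search as the SEARCH step. The Romania road fragment and the simplified UPDATE-STATE / FORMULATE-GOAL / FORMULATE-PROBLEM steps are assumptions made only for the example.

from collections import deque

ROADS = {                        # fragment of the Romania map used in the example
    "Arad": ["Sibiu", "Zerind", "Timisoara"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
}

def search(problem):
    start, goal = problem
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path[1:]                          # the towns still to be reached
        for town in ROADS.get(path[-1], []):
            if town not in visited:
                visited.add(town)
                frontier.append(path + [town])
    return []

class SimpleProblemSolvingAgent:
    def __init__(self):
        self.seq, self.state, self.goal = [], None, None

    def __call__(self, percept):
        self.state = percept                           # UPDATE-STATE
        if not self.seq:
            self.goal = "Bucharest"                    # FORMULATE-GOAL
            problem = (self.state, self.goal)          # FORMULATE-PROBLEM
            self.seq = search(problem)                 # SEARCH
        return self.seq.pop(0) if self.seq else None   # FIRST / REST

agent = SimpleProblemSolvingAgent()
print(agent("Arad"))    # -> Sibiu
print(agent("Sibiu"))   # -> Fagaras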

Hypotheses so far

Environment

Static

Discretizable

Observable

Actions

Deterministic


Planning vs Problem Solving

PS Agents

Can generate successors of a state

Implicit goal representation, test to check goal attainment (goal test)

Planning problem

Obtaining, through a heuristic search process, a sequence of actions that lead from the initial state to the goal state

Planning Agents

Has an explicit representation of the goal, of the actions and of their effect

Can decompose the goal into independent sub-goals

Has freedom in building the plan

Can/must be more efficient

Planning agents

Considerations

A goal must be given

Knowing the current status of the environment is not sufficient to decide what to do

Searching a solution to the problem requires planning the sequence of actions

Planning agents

When the state of the world is accessible, an agent may use its perceptions of the environment to build a “complete and correct” model of the current state of the world

Given a goal, it can exploit a planning algorithm to generate an action plan that it will then put into action step by step

Ideal planner

Logic, or Knowledge-based agent

Starts with a general knowledge of the world and of its actions

Uses logical reasoning to

Maintain a description of the world that is consistent with new incoming perceptions

Infer a sequence of actions that will lead to attain its goals

Knowledge-based agent

Agents whose knowledge is expressed explicitly and declaratively (not hard-wired)

To improve their rational capabilities, artificial agents must be endowed with more complex representations of the world, which cannot be described simply

The world is typically complex: need for a partial and incomplete representation of an abstraction of the world useful for the agent’s goals

Partially observable environments → need for more expressive knowledge representation languages and inferential capabilities

Most problems in AI are “knowledge intensive”

Knowledge-based Systems is almost a synonym of AI

Knowledge

Definitions

Awareness and understanding of facts, information, truth

The fact or condition of being aware of something

Knowledge is experience

Self-consciousness of owning information items that are valuable if connected among them and of little utility if taken singularly


Are facts and knowledge the same?

Through knowledge, we can understand the world around us and make inferences

E.g., we know that the sun is warm and the sky is blue.

These facts are knowledge about the world.

But we also know that

if the sun is high, then there is visibility

an automatic dispenser dispenses products if we put in coins

We make these inferences based on the availability of facts

Data, Information, Knowledge

Data

Numbers and words related to properties of reality that may be synthesized and processed

Information

Data in context, interpreted from an objective perspective

Knowledge

Dynamic set of concrete experiences, values, information and intuitions that allow one to evaluate and incorporate new experiences and information

Information made subjective

[Experience]

Knowledge in practice

Knowledge = information available for action

Declarative: knowing that

E.g., knowing Roman history

Knowing Wikipedia = being aware of the website

Procedural: knowing how

E.g., “I can swim”

Knowing Wikipedia = being able to write a page using the Wikipedia language

Focus

Traditional Computer Science : on knowing how

Procedural knowledge “hidden” in the algorithm

AI : also on knowing that

Programming with declarative knowledge

3 Kinds of declarative knowledge

Terminological

About the lexicon of a language

E.g., “mother” means “woman with at least one child”

Nomological

About regularities, general laws that rule the world

E.g., mothers are always older than their children, usually mothers love their children, etc.

Factual

About particular facts

E.g., Steve is a child of Anna

...Kinds of knowledge

What about knowledge about individuals?

Specific, concrete or abstract, living or non-living, animate or inanimate objects

E.g., “I know Barbara”, “I know Beethoven’s Symphony No. 9”

Only seemingly different case

Again, factual knowledge

More precisely an (often very wide) set of factual knowledge related to a specific individual

Barbara, Beethoven’s Symphony No. 9

How to Represent Knowledge?

Need to establish a set of conventions about how to describe a situation, some objects, some events, a reality

Using a “computable” knowledge representation means adopting these conventions to conceptualize an abstraction of the world, having available tools to create it, modify it, reason with it, etc.

But what does it mean to “reason”?


A Perspective on Human-Level Reasoning

“It appears that there are two kinds of reasoning that people do. I will characterize them roughly as rule-based and associative. When we’re doing the former, we’re aware of it, and it has steps that we can describe. We know we’re doing something, and if it’s complicated enough, we know that we’re doing work and that we might make mistakes.

When we do associative reasoning, however, it’s largely below the level of consciousness, and it appears effortless.

...

A Perspective on Human-Level Reasoning

It’s probably not an accident, therefore, that the structure of computer programs and the characteristics of logical formalisms tend to resemble the rule-based reasoning that we’re aware of, and we’ve been relatively successful at getting computers to do this kind of reasoning.

On the associative side, however, it is much harder even to understand what people do, much less figure out how to get computers to do something equivalent to it. When it comes to dealing with large amounts of knowledge, when a person has more knowledge, the person generally thinks better and is more effective at understanding the environment and functioning in it.”

W. A. Woods “Meaning and Links” 2007

Knowledge-based agent

Has knowledge about:

The objects in the domain

The events that are to happen

How to accomplish a specific task

Knowledge must be represented explicitly

Knowledge representation = a combination of data structures and interpretive procedures that, if used appropriately, make the system pursue a reasonable behavior, aware of the world in which it acts

Note: not just the definition of suitable data structures to represent information, but also the development of procedures that can be applied to them to make inferences

Knowledge-based agent

Architectural Principles

Any knowledge-based system must be able to express 2 kinds of knowledge in a separate and modular way

About the application domain (what)

About how to use knowledge about the application domain to solve problems

Problems

Representation

Expressing knowledge about the problem

Strategy

What control strategy to use

Knowledge Base

Inference Engine

Knowledge-based agent

Must own a Knowledge Base (KB)

Often described in a formal language

Adopts a declarative approach

I.e., everything it needs to know is explicit

Includes an inference engine to ask itself (Ask) what to do, or for answers that can be deduced from what is stored in the Knowledge Base

2 possible perspectives on such agents

formal (knowledge level) : for what they know, independently of their implementation

operational (implementation level) : for how data structures are implemented in the KB and for the algorithms devised to process such data structures

A simple knowledge-based agent

A knowledge-based agent

maintains a knowledge base (KB): a set of propositions expressed in a representation language

Interacts with the KB through a functional Tell-Ask interface

Tell: to add new facts to the KB

Ask: to query the KB… and maybe Retract, to remove facts

Answers must logically follow from the KB (i.e., be logical consequences of the KB)
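A minimal Python sketch of the Tell/Ask (and Retract) interface above. The KB here is a set of propositional facts plus simple if-then rules, and Ask answers by forward chaining; a real knowledge-based agent would use a richer representation language and inference engine.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()
        self.rules = []                                 # (premises, conclusion) pairs

    def tell(self, fact=None, rule=None):
        if fact is not None:
            self.facts.add(fact)
        if rule is not None:
            self.rules.append(rule)

    def retract(self, fact):
        self.facts.discard(fact)

    def ask(self, query):
        # Forward-chain until no new facts: the answer must follow logically from the KB
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KnowledgeBase()
kb.tell(rule=(["smoke"], "fire"))
kb.tell(fact="smoke")
print(kb.ask("fire"))    # True: logical consequence of the KB
kb.retract("smoke")
print(kb.ask("fire"))    # False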


A simple knowledge-based agent

The agent must be capable of

Representing states, actions, etc.

Incorporate new perceptions of the world

Update its internal representation of the world

Reason about the world

Deduce the most appropriate actions

Knowledge-based & Intelligent agents

Is a knowledge-based agent an Intelligent Agent?

“The ability of an agent, natural or artificial, of exhibiting an intelligent behavior can be described in terms of knowledge owned by a subject”

A. Newell & H. Simon: “Human Problem Solving”, Prentice-Hall, 1972

Knowledge-based Systems Intelligent Systems (Agents)

A. Newell & H. Simon: “Human Problem Solving”, Prentice-Hall, 1972

[Figure (Newell & Simon’s model): the agent recognizes the input into an Internal Representation, chooses a solution method from a Methods Repository drawing on General Knowledge, applies the method, may change the representation, and affects the world]

A “human” intelligent agent is immersed in an environment in which it must carry out a task

The task must be initially acquired, recognized and encoded in an initial internal representation

Need for an ability to recognize

First, the task and

Then, the method to use for solving the problem

selected from a “Methods repository”

The methods repository draws from a repository of “general knowledge about the world”

Agent’s activity cycle

Recognize the task to be carried out

Select the method from a methods repository

If the internal representation is satisfactory, apply the method

Application of the method translates into an action that affects the environment

During several activity cycles, both methods and representations may be changed

Representation:

The set of data structures that describe the problem and that, once processed, will allow solving the problem, and

The set of ways to interpret them

The Knowledge Level

“The knowledge level provides the means to ‘rationalise’ the behaviour of a system from the standpoint of an external observer. This observer treats the system as a ‘black box’ but maintains that it acts ‘as if’ it possesses certain knowledge about the world and uses this knowledge in a perfectly rational way toward reaching its goals. The behaviour of the agent is explained and predicted in terms of the reasons that the agent is assumed to have to take certain actions in order to reach ascribed goals.”

[A. Newell]



The Knowledge Level

“In more detail a knowledge level description is based on the following model of the behaviour of an agent:

The intelligent agent possesses knowledge

Some of this knowledge constitutes the goals of the agent

The agent has the ability to perform a set of actions

The agent chooses actions according to the principle of rationality

The agent will select an action to perform next which according to its knowledge leads to the achievement of one of its goals.”

[A. Newell]

The Knowledge Level [A. Newell]

[Figure: the Observer, with its own knowledge K’ and symbol system S’, models the Agent, which has goals G, knowledge K and a symbol system S and acts on the environment through its Actions]

The artificial agent (observer) simulates the behavior of an intelligent agent (human)

The Knowledge Level

The observer

Considers the agent as a knowledge system, and knows its goals (G) and knowledge (K)

Knows that it owns a set of actions; by direct observation it also knows the environment

Knows that the agent determines which actions to take based on a symbol system

May itself be considered as a knowledge system

It has knowledge (about the agent, the environment, etc.)

It has available a symbol system which selects the actions that the agent would take

Through its processing mechanism, produces predictions about the agent’s behavior and can simulate it

The Knowledge Level

The observer is a

Knowledge-based System

that can emulate the behavior of an intelligent agent, which it is able to predict and understand, without having an operational model of the mechanisms, methods and processing enacted by the agent

The artificial system uses only

the knowledge that the agent has about its external environment

the knowledge about the goals

the system of symbols

What are the minimum features for an artificial system to be capable of taking actions in an intelligent way?

Primary need: being able to handle symbols

Read

Interpret

Process


Physical Symbol System

A system that produces a collection of symbol structures that evolves in time

Operates in a wider world than that of the symbolic expressions themselves

Physical Symbol System Hypothesis

A physical symbol system has the necessary and sufficient means for generalized intelligent action

[Newell & Simon]

Computers can be programmed to simulate any physical symbol system

Learning Programs/Agents

A program A is said to learn from experience E with respect to a given set of tasks T and with a performance measure P

if its performance on tasks T, measured according to P, improves with experience E

Any specific program that learns must identify and define:

The class of tasks

The measure of performance to be improved

The source of experience
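A tiny Python sketch instantiating the T/P/E definition above on an assumed toy task: T is estimating the bias of a coin, P is the absolute estimation error, and E is a growing stream of observed flips. The point is only that performance P tends to improve as experience E accumulates.

import random

random.seed(0)
TRUE_BIAS = 0.7

heads = flips = 0
for n in (10, 100, 1000):
    while flips < n:
        heads += random.random() < TRUE_BIAS      # one more unit of experience E
        flips += 1
    estimate = heads / flips
    error = abs(estimate - TRUE_BIAS)             # performance measure P
    print("after %4d flips: estimate=%.2f, error=%.2f" % (flips, estimate, error))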

Learning Agent

Architecture

Learning Agent

Components

Learning Element

Learns and improves behavior

Performance Element

The agent itself knows what to do and can evaluate and improve what it does

Problem Generator

Suggests alternative actions to explore and carry out

Critical evaluation element

Provides feedback about how the agent is behaving

Further Readings

A. Newell & H. Simon: “Human Problem Solving”, Prentice-Hall, 1972

N.J. Nilsson: “Problem Solving Methods in Artificial Intelligence”, McGraw-Hill, 1971
