AI Practical File
EXPERIMENT 1
AIM OF THE EXPERIMENT: To count the number of solutions of the N-queens problem on a given N × N board using backtracking.
THEORY:
In chess, the queen is considered the most powerful piece on the board, because she can move as far as she pleases horizontally, vertically, or diagonally.
In this experiment we are going to count all possible solutions of the N-queens problem for a given N × N board.
There are many approaches to the N-queens problem (one of them is the brute-force technique, which has exponential time complexity), but the best approach for this problem is backtracking.
Backtracking can be applied only for problems which admit the concept of a
“partial candidate solution” and a relatively quick test of whether it can
possibly be completed to a valid solution. It is useless, for example, for
locating a given value in an unordered table.
When it is applicable, however, backtracking is often much faster than brute
force enumeration of all complete candidates, since it can eliminate many
candidates with a single test.
ALGORITHM:
1. Place the queens column-wise, starting from the leftmost column.
2. If all queens are placed:
   i. Return true and increment the count variable.
3. Else:
   i. Try all the rows in the current column.
   ii. Check whether the queen can be placed in this cell safely; if yes, mark the current cell in the solution matrix as 1 and try to solve the rest of the problem recursively.
   iii. If placing the queen in the above step leads to a solution, return true.
   iv. If placing the queen in the above step does not lead to a solution, BACKTRACK: mark the current cell in the solution matrix as 0 and return false.
4. If all rows have been tried and nothing worked, return false. (A sketch of this procedure is given below.)
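A minimal C++ sketch of the solution-matrix procedure described above (the bitmask program in the CODE section is an optimised version of the same backtracking idea; all names here are illustrative):

#include <iostream>
#include <vector>

// Count N-queens solutions by placing one queen per column, left to right.
static int countSolutions(std::vector<std::vector<int>>& sol, int col, int n) {
    if (col == n) return 1;                          // all queens placed: one solution
    int count = 0;
    for (int row = 0; row < n; ++row) {
        bool safe = true;
        for (int c = 0; c < col && safe; ++c)        // check previously placed queens
            for (int r = 0; r < n && safe; ++r)
                if (sol[r][c] && (r == row || r - c == row - col || r + c == row + col))
                    safe = false;                    // same row or same diagonal
        if (safe) {
            sol[row][col] = 1;                       // place queen, recurse, then backtrack
            count += countSolutions(sol, col + 1, n);
            sol[row][col] = 0;
        }
    }
    return count;
}

int main() {
    int n = 8;
    std::vector<std::vector<int>> sol(n, std::vector<int>(n, 0));
    std::cout << "N=" << n << " -> " << countSolutions(sol, 0, n) << " solutions\n";
    return 0;
}

For N = 8 this prints 92, the well-known number of solutions.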
CODE:
#include <stdio.h>
#include <time.h>
int SIZE, MASK, COUNT;

/* Recursively place one queen per row; left, down and right are bitmasks of the
   columns/diagonals attacked in the current row. */
void Backtrack(int y, int left, int down, int right)
{
    int bitmap, bit;
    if (y == SIZE) {
        COUNT++;
    } else {
        bitmap = MASK & ~(left | down | right);   /* free positions in this row */
        while (bitmap) {
            bit = -bitmap & bitmap;               /* lowest free position */
            bitmap ^= bit;
            Backtrack(y+1, (left | bit)<<1, down | bit, (right | bit)>>1);
        }
    }
}
int main(void)
{
COUNT = 0; /* number of solutions found for the current N */
clock_t begin, end;
double time_spent = 0.0;
for(SIZE = 1;SIZE <= 20; SIZE++)
{
begin = clock();
MASK = (1 << SIZE) - 1;
Backtrack(0, 0, 0, 0);
end = clock();
time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
printf("N=%d -> %d\t Time taken=%lf seconds\n", SIZE, COUNT, time_spent);
COUNT = 0;
}
return 0;
}
RESULTS:
[Graph: number of solutions vs. N for N = 0 to 20 (y-axis up to 800,000,000)]
[Graph: time taken vs. N for N = 0 to 20 (y-axis up to 8,000)]
EXPERIMENT 2
AIM OF THE EXPERIMENT: To implement the graph traversal techniques breadth-first search and depth-first search.
THEORY:
1. Breadth-first search will not get trapped exploring a blind alley. This contrasts with depth-first search, which may follow a single, unfruitful path for a very long time, perhaps forever, before the path actually terminates in a state that has no successors. This is a particular problem in depth-first search if there are loops (i.e., a state has a successor that is also one of its ancestors), unless special care is taken to test for such a situation.
2. If there is a solution, then breadth-first search is guaranteed to find it. Furthermore, if there are multiple solutions, then a minimal solution (i.e., one that requires the minimum number of steps) will be found. This is guaranteed by the fact that longer paths are never explored until all shorter ones have already been examined. This contrasts with depth-first search, which may find a long path to a solution in one part of the tree when a shorter path exists in some other, unexplored part of the tree.
Depth-first search is an algorithm for traversing or searching tree or graph data structures: it starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking.
1. Depth-first search requires less memory, since only the nodes on the current path are stored. This contrasts with breadth-first search, where all of the tree generated so far must be stored.
2. By chance (or if care is taken in ordering the alternative successor states), depth-first search may find a solution without examining much of the search space at all. This contrasts with breadth-first search, in which all parts of the tree must be examined to level n before any nodes on level n + 1 can be examined. This is particularly significant if many acceptable solutions exist, since depth-first search can stop when one of them is found.
ALGORITHM:
Breadth-first search: insert the start node into a FIFO queue; repeatedly remove the front node, process it, and enqueue all of its unvisited neighbours, until the queue is empty (or the goal is found).
Depth-first search: start at the root, process a node, then recursively (or with an explicit stack) visit each unvisited neighbour, backtracking when a node has no unvisited neighbours. Both traversals are sketched below.
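A minimal, self-contained C++ sketch of both traversals on an adjacency list (independent of the tree class used in the CODE section; the graph used here is illustrative):

#include <iostream>
#include <queue>
#include <stack>
#include <vector>

// Breadth-first traversal from `start` over an adjacency list.
void bfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    q.push(start);
    visited[start] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        std::cout << u << ' ';                                  // process the node
        for (int v : adj[u])
            if (!visited[v]) { visited[v] = true; q.push(v); }
    }
}

// Depth-first traversal from `start`, using an explicit stack.
void dfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::stack<int> st;
    st.push(start);
    while (!st.empty()) {
        int u = st.top(); st.pop();
        if (visited[u]) continue;
        visited[u] = true;
        std::cout << u << ' ';                                  // process the node
        for (int v : adj[u])
            if (!visited[v]) st.push(v);
    }
}

int main() {
    // A small tree: node 0 has children 1, 2, 3; node 1 has children 4, 5.
    std::vector<std::vector<int>> adj = {{1, 2, 3}, {0, 4, 5}, {0}, {0}, {1}, {1}};
    std::cout << "BFS: "; bfs(adj, 0);
    std::cout << "\nDFS: "; dfs(adj, 0);
    std::cout << '\n';
    return 0;
}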
CODE:
#include <iostream>
#include <vector>
using namespace std;
struct node
{
int key;
vector<node*> next;
node* back;
};
class tree
{
node root;
vector<node> otherNodes;
public:
tree();
void initialize(node &currNode, int i);
void dfs();
node pop(vector<node> &vect);
void dfsUtil(node toPush);
void bfs();
void bfsUtil(node toEnqueue);
};
tree::tree()
{
root.back = NULL;
cout<<"Enter key of root node: ";
int key;
cin>>key;
root.key = key;
cout<<"Enter the number of nodes in tree ";
int num;
cin>>num;
if(num == 1)
root.next.push_back(NULL);
else
{
for(int i = 0; i < num; i++)
{
node tempNode;
otherNodes.push_back(tempNode);
}
for(int i = 0; i < num - 1; i++)
{
initialize(otherNodes[i], i+1);
}
}
}
else
currNode.next.push_back(&otherNodes[p-1]);
}
}
void tree::dfs()
{
dfsUtil(root);
}
void tree::bfs()
{
bfsUtil(root);
}
int main()
{
tree T;
cout<<"The depth first traversal of the tree is: ";
T.dfs();
cout<<endl<<"The breadth first traversal of the tree is: ";
T.bfs();
return 0;
}
RESULTS:
EXPERIMENT 3
AIM OF THE EXPERIMENT: To implement Tic-Tac-Toe using minimax
algorithm.
THEORY:
In minimax the two players are called the maximizer and the minimizer. The maximizer tries to get the highest score possible, while the minimizer tries to do the opposite and get the lowest score possible.
Every board has a value associated with it. In a given state, if the maximizer has the upper hand then the score of the board will tend to be some positive value; if the minimizer has the upper hand in that board state then it will tend to be some negative value. The values of the board are calculated by heuristics which are unique to each type of game.
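As a concrete example of such a heuristic for Tic-Tac-Toe, the evaluation sketched below returns +10 when the maximizer (X) has completed a line, -10 when the minimizer (O) has, and 0 otherwise; the board encoding (1 for X, -1 for O, 0 for empty) matches the JavaScript code later in this experiment, but the function itself is only an illustration:

#include <array>
#include <iostream>

using Board = std::array<std::array<int, 3>, 3>;   // 1 = X (maximizer), -1 = O, 0 = empty

// Return +10 if X has three in a line, -10 if O does, 0 otherwise.
int score(const Board& b) {
    const int lines[8][3][2] = {
        {{0,0},{0,1},{0,2}}, {{1,0},{1,1},{1,2}}, {{2,0},{2,1},{2,2}},   // rows
        {{0,0},{1,0},{2,0}}, {{0,1},{1,1},{2,1}}, {{0,2},{1,2},{2,2}},   // columns
        {{0,0},{1,1},{2,2}}, {{0,2},{1,1},{2,0}}                         // diagonals
    };
    for (const auto& line : lines) {
        int sum = b[line[0][0]][line[0][1]] + b[line[1][0]][line[1][1]] + b[line[2][0]][line[2][1]];
        if (sum == 3)  return 10;     // three X's
        if (sum == -3) return -10;    // three O's
    }
    return 0;
}

int main() {
    Board b = {{{1, 1, 1}, {0, -1, 0}, {-1, 0, 0}}};   // X has completed the top row
    std::cout << score(b) << '\n';                     // prints 10
    return 0;
}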
ALGORITHM:
# @player is the turn taking player
def score(game)
if game.win?(@player)
return 10
elsif game.win?(@opponent)
return -10
else
return 0
end
end
def minimax(game)
return score(game) if game.over?
scores = [] # an array of scores
moves = [] # an array of moves
CODE:
let boxArray = [];
for (let i = 1; i < 10; ++i) {
let str = "tictac";
str += i;
boxArray.push(document.getElementById(str));
}
let board = [
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
];
function makeBoard() {
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
if (boxArray[3 * i + j].innerHTML === " ") {
board[i][j] = 0;
}
else if (boxArray[3 * i + j].innerHTML === "X") {
board[i][j] = 1;
}
else if (boxArray[3 * i + j].innerHTML === "O") {
board[i][j] = -1;
}
}
}
}
let turnCount = 1;
function XO(event) {
if (turnCount % 2 === 1) {
event.srcElement.innerHTML = 'X';
++turnCount;
}
else {
event.srcElement.innerHTML = 'O';
++turnCount;
}
makeBoard();
if (hasOwon(board))
window.alert("YOU WON!!");
else if (isFull(board))
window.alert("DRAW!!");
makeBoard();
}
for (let i = 0; i < boxArray.length; ++i) {
boxArray[i].addEventListener('click', XO);
}
function hasXwon(board) {
let check = new Set();
for (let i = 0; i < 3; ++i) {
check.add(board[i][i]);
}
if (check.size === 1 && check.has(1)) {
return true;
}
check.clear();
for (let i = 0; i < 3; ++i) {
let j = 2 - i;
check.add(board[i][j]);
}
if (check.size === 1 && check.has(1)) {
return true;
}
check.clear();
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
check.add(board[i][j]);
}
if (check.size === 1 && check.has(1)) {
return true;
}
check.clear();
}
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
check.add(board[j][i]);
}
if (check.size === 1 && check.has(1)) {
return true;
}
check.clear();
}
return false;
}
function hasOwon(board) {
let check = new Set();
for (let i = 0; i < 3; ++i) {
check.add(board[i][i]);
}
if (check.size === 1 && check.has(-1)) {
return true;
}
check.clear();
for (let i = 0; i < 3; ++i) {
let j = 2 - i;
check.add(board[i][j]);
}
if (check.size === 1 && check.has(-1)) {
return true;
}
check.clear();
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
check.add(board[i][j]);
}
if (check.size === 1 && check.has(-1)) {
return true;
}
check.clear();
}
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
check.add(board[j][i]);
}
if (check.size === 1 && check.has(-1)) {
return true;
}
check.clear();
}
return false;
}
function isFull(board) {
let check = new Set();
for (let i = 0; i < 3; i++) {
for (let j = 0; j < 3; ++j) {
check.add(board[i][j]);
}
}
if (check.has(0))
return false;
else
return true;
}
function nextMoves(board, XorY) {
let moves = [];
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
if (board[i][j] !== 0)
continue;
else {
let copyBoard = board.map(inner => inner.slice());
if (XorY)
copyBoard[i][j] = 1;
else
copyBoard[i][j] = -1;
moves.push(copyBoard);
}
}
}
return moves;
}
let choiceBoard;
function minimax(board, depth, isXturn) {
if (hasXwon(board))
return 10 - depth;
else if (hasOwon(board))
return depth - 10;
else if (isFull(board))
return 0;
else {
++depth;
let moves = [];
let score = [];
let newMoves = [];
newMoves = nextMoves(board, isXturn);
for (let i = 0; i < newMoves.length; ++i) {
score.push(minimax(newMoves[i], depth, !isXturn));
moves.push(newMoves[i]);
}
if (isXturn) {
let index = score.indexOf(Math.max(...score));
choiceBoard = moves[index];
return score[index];
}
else {
let index = score.indexOf(Math.min(...score));
choiceBoard = moves[index];
return score[index];
}
}
}
function turnBoard(choiceBoard) {
++turnCount;
for (let i = 0; i < 3; ++i) {
for (let j = 0; j < 3; ++j) {
board[i][j] = choiceBoard[i][j];
if (board[i][j] === 1)
boxArray[3 * i + j].innerHTML = "X";
else if (board[i][j] === -1)
boxArray[3 * i + j].innerHTML = "0";
}
}
}
function AIplay() {
minimax(board, 0, true);
turnBoard(choiceBoard);
if (hasXwon(board))
window.alert("AI won!!");
else if (isFull(board))
window.alert("DRAW!!");
}
const AI = document.getElementById("AI");
AI.addEventListener('click', AIplay);
RESULTS:
EXPERIMENT 4
AIM OF THE EXPERIMENT: To implement a maze-searching program in Prolog.
THEORY:
Prolog is a logic programming language associated with artificial intelligence
and computational linguistics.
Prolog has its roots in first-order logic, a formal logic, and unlike many other
programming languages, Prolog is intended primarily as a declarative
programming language: the program logic is expressed in terms of relations,
represented as facts and rules. A computation is initiated by running a query
over these relations.
The language was first conceived by Alain Colmerauer and his group in Marseille, France, in the early 1970s, and the first Prolog system was developed in 1972 by Colmerauer with Philippe Roussel.
Prolog was one of the first logic programming languages, and remains the
most popular among such languages today, with several free and commercial
implementations available. The language has been used for theorem proving,
expert systems, term rewriting, type systems, and automated planning, as
well as its original intended field of use, natural language processing. Modern
Prolog environments support the creation of graphical user interfaces, as well as administrative and networked applications.
Prolog is well-suited for specific tasks that benefit from rule-based logical
queries such as searching databases, voice control systems, and filling
templates.
In Prolog, programming logic is expressed in terms of relations, and a
computation is initiated by running a query over these relations. Relations
and queries are constructed using Prolog’s single data type, the term.
Relations are defined by clauses. Given a query, the Prolog engine attempts
to find a resolution refutation of the negated query. If the negated query can
be refuted i.e., an instantiation for all free variables can be found that makes
the union of clauses and the singleton set consisting of the negated query
false, it follows that the original query, with the found instantiation applied, is
a logical consequence of the program. This makes Prolog (and other logic
programming languages) particularly useful for database, symbolic
mathematics, and language parsing applications. Because Prolog allows
impure predicates, checking the truth value of certain special predicates may
have some deliberate side effects, such as printing a value to screen. Because
of this, the programmer is permitted to use some amount of conventional
imperative programming when the logical paradigm is inconvenient. It has a
purely logical subset, called “pure Prolog”, as well as a number of extra
logical features.
Data Types
Prolog’s single data type is the term. Terms are either atoms, numbers,
variables or compound terms.
• An atom is a general-purpose name with no inherent meaning.
Examples of atoms include x, red, ‘Taco’, and ‘some atom’.
• Numbers can be floats or integers. ISO standard compatible Prolog
systems can check the Prolog flag “bounded”. Most of the major Prolog
systems support arbitrary length integer numbers.
• Variables are denoted by a string consisting of letters, numbers and
underscore characters, and beginning with an upper case letter or
underscore. Variables closely resemble variables in logic in that they are
placeholders for arbitrary terms.
• A compound term is composed of an atom called a “functor” and a
number of arguments, which are again terms.
Searching a Maze
It is a dark and stormy night. As you drive down a lonely country road, your
car breaks down, and you stop in front of a splendid palace. You go to the
door, find it open, and begin looking for a telephone. How do you search the
palace without getting lost? How do you know that you have searched every
room? Also, what is the shortest path to the telephone? It is for such
situations that maze-searching methods have been devised.
In many computer programs, such as those for searching mazes, it is
useful to keep lists of information, and search the list if some information is
needed at later time. For example, if we decide to search the palace for a
telephone, we might need to keep a list of room numbers visited so far, so we
don’t go round in circles visiting the same rooms over and over again. What
we do is to write down the room numbers visited on a piece of paper. Before
entering the room, we check to see if its number is on our piece of paper. If it
is, we ignore the room, since we must have been to it previously. If the room
number is not on the paper, we write down the number, and enter the room.
And so on until we find the telephone.
The steps required to solve the problem are:
1. Go to the door of any room.
2. If the room number is on our list, ignore the room and go to Step 1. If there are no rooms in sight, then "backtrack" through the room we went through previously, to see if there are any other rooms near it.
3. Otherwise, add the room number to our list.
4. Look in the room for a telephone.
5. If there is no telephone, go to Step 1. Otherwise we stop, and our list holds the path we took to reach the correct room.
CODE:
Knowledge Base:
door(a, b).
door(b, e).
door(b, c).
door(d, e).
door(c, d).
door(g, e).
door(g, h).
door(e, f).
% go(X, Y, Visited, Path): search from room X to room Y, keeping a list of the
% rooms already visited so that we never enter the same room twice.
go(X, X, T, T).
go(X, Y, T, NT) :-
    (door(X,Z) ; door(Z, X)),    % a door connects X and Z (in either direction)
    \+ member(Z,T),              % Z has not been visited yet
    go(Z, Y, [Z|T], NT).
hasphone(h).                     % the telephone is in room h
Function Call:
go(a,X,[],PATH),hasphone(X).
RESULTS:
EXPERIMENT 5
AIM OF THE EXPERIMENT: To implement a software project in
Lisp.
THEORY:
Lisp (historically LISP) is a family of computer programming languages with
a long history and a distinctive, fully parenthesized prefix notation.
Originally specified in 1958, Lisp is the second-oldest high-level
programming language in widespread use today. Only Fortran is older by one
year. Lisp has changed since its early days, and many dialects have existed
over its history. Today, the best-known general-purpose Lisp dialects are
Clojure, Common Lisp and Scheme.
Lisp was originally created as a practical mathematical notation for computer
programs, influenced by the notation of Alonzo Church’s lambda calculus. It
quickly became the favored programming language for artificial intelligence
(AI) research. As one of the earliest programming languages, Lisp pioneered
many ideas in computer science, including tree-data structures, automatic
storage management, dynamic typing, conditionals, higher-order functions,
recursion, the self-hosting compiler, and the read-eval-print-loop.
The name LISP derives from “List Processor”. Linked lists are one of Lisp’s
major data structures, and the Lisp source code is made of lists. Thus, Lisp
programs can manipulate source code as a data structure, giving rise to the
macro systems that allow programmers to create new syntax or new domain-
specific languages embedded in Lisp.
The interchangeability of code and data gives Lisp its instantly recognizable
syntax. All program code is written as s-expressions, or parenthesized lists. A
function call or syntactic form is written as a list with the function or
operator’s name first, and the arguments following: for instance, a function f
that takes three arguments would be called as (f arg1 arg2 arg3).
Since its inception, Lisp has been closely connected with the artificial intelligence research community, especially on PDP-10 systems. Lisp was used as the implementation of the programming language Micro Planner, which was used in the famous AI system SHRDLU.
The reliance on expressions gives the language great flexibility. Because Lisp
functions are written as lists, they can be processed exactly like data. This
allows easy writing of programs which manipulate other programs
(metaprogramming). Many Lisp dialects exploit this feature using macro
systems, which enables extension of the language almost without limit.
A Lisp list is written with its elements separated by whitespace, and
surrounded by parentheses. For example, (1 2 foo) is a list whose elements
are the three atoms 1, 2, and foo. These values are implicitly typed: they are
respectively two integers and a Lisp-specific data type called a "symbol", and
do not have to be declared as such.
The empty list () is also represented as the special atom nil. This is the only
entity in Lisp which is both an atom and a list.
Expressions are written as lists, using prefix notation. The first element in the list is the name of a function, the name of a macro, a lambda expression or the name of a "special operator" (see below). The remainder of the list are the arguments. For example, the function list returns its arguments as a list, so the expression (list 1 2 'foo) evaluates to (1 2 foo). Arithmetic uses the same prefix notation, so the expression
(+ 1 2 3 4)
evaluates to 10. The equivalent under infix notation would be "1 + 2 + 3 + 4".
The macro incf provides a shorthand for incrementing a variable:
(incf x)
is equivalent to (setq x (+ x 1)), returning the new value of x.
(if nil
(list 1 2 "foo")
(list 3 4 "bar"))
evaluates to (3 4 "bar"). Of course, this would be more useful if a non-trivial
expression had been substituted in place of nil.
Lisp also provides logical operators and, or and not. The and and or operators
do short circuit evaluation and will return their first nil and non-nil argument
respectively.
CODE:
(defvar *dir* nil)                      ; global list holding all records

(defun make-record (name number city)
  (list :name name :number number :city city))

(defun add-record (record)              ; add one record to the directory
  (push record *dir*))

(defun prompt-read (prompt)             ; read one field from the user
  (format *query-io* "~a: " prompt)
  (force-output *query-io*)
  (read-line *query-io*))
(defun dump-dir ()
(dolist (person *dir*)
(format t "~{~a:~10t~a~%~}~%" person)))
(defun prompt-for-person ()
(make-record
(prompt-read "Name")
(prompt-read "Number")
(prompt-read "City")))
(with-standard-io-syntax
(setf *dir* (read in)))))
(defun add-person ()
(loop (add-record (prompt-for-person))
(if (not (y-or-n-p "Another?[y/n]:"))(return))))
RESULT:
EXPERIMENT 6
AIM OF THE EXPERIMENT: To implement unification algorithm
in Prolog.
THEORY:
In logic and computer science, unification is an algorithmic process of
solving equations between symbolic expressions.
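For illustration, a small, self-contained C++ sketch of syntactic unification with an occurs check is given below; the Term representation and all names are assumptions made only for this example (the Prolog program in the CODE section, unify_mm, follows a more elaborate Martelli–Montanari-style approach):

#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A term is either a variable or a functor applied to argument terms.
struct Term {
    bool isVar;
    std::string name;                                 // variable name or functor symbol
    std::vector<std::shared_ptr<Term>> args;          // empty for variables and constants
};
using TermPtr = std::shared_ptr<Term>;
using Subst = std::map<std::string, TermPtr>;         // variable name -> bound term

TermPtr var(std::string n) { return std::make_shared<Term>(Term{true, n, {}}); }
TermPtr fn(std::string f, std::vector<TermPtr> a = {}) {
    return std::make_shared<Term>(Term{false, f, a});
}

// Follow variable bindings until an unbound variable or a functor is reached.
TermPtr walk(TermPtr t, const Subst& s) {
    while (t->isVar) {
        auto it = s.find(t->name);
        if (it == s.end()) break;
        t = it->second;
    }
    return t;
}

bool occurs(const std::string& v, TermPtr t, const Subst& s) {
    t = walk(t, s);
    if (t->isVar) return t->name == v;
    for (auto& a : t->args) if (occurs(v, a, s)) return true;
    return false;
}

// Try to unify t1 and t2, extending substitution s; returns false on failure.
bool unify(TermPtr t1, TermPtr t2, Subst& s) {
    t1 = walk(t1, s); t2 = walk(t2, s);
    if (t1->isVar && t2->isVar && t1->name == t2->name) return true;
    if (t1->isVar) {                                  // bind the variable (occurs check first)
        if (occurs(t1->name, t2, s)) return false;
        s[t1->name] = t2;
        return true;
    }
    if (t2->isVar) return unify(t2, t1, s);
    if (t1->name != t2->name || t1->args.size() != t2->args.size()) return false;
    for (size_t i = 0; i < t1->args.size(); ++i)
        if (!unify(t1->args[i], t2->args[i], s)) return false;
    return true;
}

int main() {
    // Unify f(X, g(b)) with f(a, g(Y)): the answer is X = a, Y = b.
    Subst s;
    bool ok = unify(fn("f", {var("X"), fn("g", {fn("b")})}),
                    fn("f", {fn("a"), fn("g", {var("Y")})}), s);
    std::cout << (ok ? "unifies" : "fails") << '\n';
    for (const auto& [v, t] : s) std::cout << v << " = " << t->name << '\n';
    return 0;
}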
ALGORITHM:
CODE:
unify_mm(L,T) :- prepara(L,Tree,Z,Counter),
unify_sys(Tree,[],T,Z,Counter).
unify_sys(_,T,T,[],0) :-
!.
unify_sys(_,_,_,[],_) :- write('Error: cycle'),
fail, !.
unify_sys(U,T,Ts,[{0,S=[]}|Z],Co) :- !, Co_n is Co-1,
unify_sys(U,[S=nil|T],Ts,Z,Co_n).
unify_sys(U,T,Ts,[{0,S=M}|Z],Co) :- cpf(M,C,F-[]),
compact(F,U,Uo,Z,Zo,0,El),
Co_n is Co - El -1,
unify_sys(Uo,[S=C|T],Ts,Zo,Co_n).
cpf(T,Fu,Nil-Nil) :- functor(T,Fu,0),!.
cpf([],[],[]-[]) :- !.
cpf([{[]=Mi}|T],[Ci|Ct],F) :- !, append_d_l(Fi,Ft,F), cpf(Mi,Ci,Fi),
cpf(T,Ct,Ft).
cpf([{[S|St]=Mi}|T],[S|Ct],F) :- !, append_d_l([{[S|St]=Mi}|Nil]-
Nil,Ft,F), cpf(T,Ct,Ft).
compact([],U,U,Z,Z,El,El) :- !.
compact([{[V|Se]=M}|T],Ui,Uo,Zi,Zo,El_t,El) :-
m_tree(Ui,V,{Cv,Sv=Mv},Mef,Ut),
Cv1 is Cv - 1,
compact_iter(Se,Ut,{Cv1,Sv=Mv},Mef,Ut1,M,El_t,El_1),
update_zerome(Mef,Zi,Zt),
compact(T,Ut1,Uo,Zt,Zo,El_1,El).
compact_iter([],U,{Ct,St=Mt},{Ct,St=Mf},U,M,El,El) :-
merge_mt(Mt,M,Mf).
compact_iter([H|T],Ui,{Ct,St=Mt},Mef,Uo,M,El_t,El) :-
mbchk(H,St), !,
Ct1 is Ct -1,
m_tree(Ui,H,_,Mef,Ut),
compact_iter(T,Ut,{Ct1,St=Mt},Mef,Uo,M,El_t,El).
compact_iter([H|T],Ui,{Ct,St=Mt},Mef,Uo,M,El_t,El) :-
m_tree(Ui,H,{Ch,Sh=Mh},Mef,Ut),
Ch1 is Ch -1,
merge_me({Ct,St=Mt},{Ch1,Sh=Mh},MeT),
El_tt is El_t + 1,
compact_iter(T,Ut,MeT,Mef,Uo,M,El_tt,El).
update_zerome({0,S=M},Zi,[{0,S=M}|Zi]) :- !.
update_zerome(_,Zi,Zi).
merge_mt([],M2,M2) :- !.
merge_mt(M1,[],M1) :- !.
merge_mt([{S1i=M1i}|T1],[{S2i=M2i}|T2],[{S3i=M3i}|T3]) :- !,
union_se(S1i,S2i,S3i),
merge_mt(M1i,M2i,M3i),
merge_mt(T1,T2,T3).
merge_mt(M1,M2,M3) :- M1=..[F|A1], M2=..[F|A2],
functor(M1,F,N), functor(M2,F,N),
functor(M3,F,N), M3=..[F|A3],
merge_mt(A1,A2,A3).
merge_me(M1,M2,M1) :- M1==M2,!.
merge_me({C1,S1=T1},{C2,S2=T2},M3) :- length(S1,N1),
length(S2,N2),
N1 < N2,
merge_me({C2,S2=T2},{C1,S1=T1},M3).
merge_me({C1,S1=T1},{C2,S2=T2},{C3,S3=T3}) :- C3 is C1 + C2,
append(S2,S1,S3), % not efficient
merge_mt(T1,T2,T3).
trasf([],[],X-X) :- !.
trasf([[S,M]|T],[{S=Rm}|Rt],V) :- !, append_d_l(Vm,Vt,V),
trasf(M,Rm,Vm), trasf(T,Rt,Vt).
functor_list([],nil,nil) :- !.
functor_list(L,F,N) :- functor_list_1(L,F,N).
functor_list_1([],_,_).
functor_list_1([T|L],F,N) :- functor(T,F,N), functor_list_1(L,F,N).
arg_list(_,[],[]) :- !.
arg_list(I,[H|T],[A|At]) :- arg(I,H,A), arg_list(I,T,At).
args(_,0,A,A,V,V) :- !.
args(L,I,At,A,Vt,V) :- arg_list(I,L,Ai), separa(Ai,S,M,Vi),
append_d_l(Vi,Vt,Vtt), I1 is I-1,
args(L,I1,[[S,M]|At],A,Vtt,V).
separa([],[],[],Nil-Nil).
separa([H|T],[H|St],M,V) :- var(H), !, append_d_l(Vt, [H|L] - L, V),
separa(T,St,M,Vt).
separa([H|T],S,[H|Mt],V) :- separa(T,S,Mt,V).
prepara(L,Tree,Z,N) :-
trasf_begin(L,Me,V-[]), msort(V,Vars),
build_sys(Vars,nil,0,X-X,Sy-[],0,N1), N is N1+1,
sys_tree(Sy,Me,Tree,Z).
build_sys([],nil,0,Sy,Sy,Num,Num) :- !.
build_sys([],P,C,St,Sy,Num_t,Num) :-
!, append_d_l(St,[{C,[P]=[]}|L]-L,Sy), Num is
Num_t+1.
build_sys([V|T],nil,0,Syt,Sy,Num_t,Num) :-
!, build_sys(T,V,1,Syt,Sy,Num_t,Num).
build_sys([V|T],P,C,Syt,Sy,Num_t,Num) :-
V==P,!, C1 is C+1,
build_sys(T,P,C1,Syt,Sy,Num_t,Num).
build_sys([V|T],P,C,Syt,Sy,Num_t,Num) :-
append_d_l(Syt,[{C,[P]=[]}|L]-L,Sytt), Num_t1 is
Num_t+1,
build_sys(T,V,1,Sytt,Sy,Num_t1,Num).
sys_tree(Sys,{[]=M},Tree,Z) :- !, crea_albero(Sys,Tree),
update_zerome({0,[New_var]=M},[],Z).
sys_tree(Sys,{S=M},Tree,Z) :- crea_albero(Sys,Tree_t),
counter_me(Tree_t,0,S,{C,S=M},Tree),
update_zerome({C,S=M},[],Z).
counter_me(Tree,C,[],{C,S=M},Tree) :- !.
counter_me(Tree1,Ct,[H|T],{C,S=M},Tree) :-
m_tree(Tree1,H,{C1,_=_},{C,S=M},Tree2),
Ctt is Ct+C1,
counter_me(Tree2,Ctt,T,{C,S=M},Tree).
crea_albero([],nil) :- !.
crea_albero(Sy,[V-{C,[V]=M},L,R]) :-
dividi(Sy,{C,[V]=M},L1,R1), crea_albero(L1,L),
crea_albero(R1,R).
dividi_2([H|T],1,H,L-[],L,T) :- !.
dividi_2([H|T],N,X,Lt,L,R) :- append_d_l(Lt,[H|Y]-Y,Ltt), N1 is N-1,
dividi_2(T,N1,X,Ltt,L,R).
m_tree(nil,_,{0,[]=[]},_,nil) :- !.
m_tree([N-Me,L,R],E,Me,Ms,[N-Ms,L,R]) :- E==N, !.
m_tree([N-M,L,R],E,Me,Ms,[N-M,L1,R]) :- E@<N, !,
m_tree(L,E,Me,Ms,L1).
m_tree([N-M,L,R],E,Me,Ms,[N-M,L,R1]) :- m_tree(R,E,Me,Ms,R1).
mbchk(_,[]) :- fail.
mbchk(E,[H|_]) :- E==H , ! .
mbchk(E,[_|T]) :- mbchk(E,T).
union_se([],L,L) :- !.
union_se([H|T], L, R) :-
mbchk(H, L), !,
union_se(T, L, R).
union_se([H|T], L, [H|R]) :-
union_se(T, L, R).
RESULTS:
EXPERIMENT 7
AIM OF THE EXPERIMENT: To implement best-first search.
THEORY:
Best-first search is a search algorithm which explores a graph by expanding
the most promising node chosen according to a specified rule.
In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the next node. So both BFS and DFS blindly explore paths without considering any cost function. The idea of Best First Search is to use an evaluation function to decide which adjacent node is most promising and then
explore it. Best First Search falls under the category of Heuristic Search or Informed Search.
Analysis:
• The worst-case time complexity of Best First Search is O(n * log n), where n is the number of nodes. In the worst case, we may have to visit all nodes before we reach the goal. Note that the priority queue is implemented using a min (or max) heap, and insert and remove operations take O(log n) time.
• Performance of the algorithm depends on how well the cost or evaluation function is designed.
ALGORITHM:
Best-First-Search(Graph g, Node start)
    1) Create an empty PriorityQueue
       PriorityQueue pq;
    2) Insert "start" in pq.
       pq.insert(start)
    3) Until PriorityQueue is empty
          u = PriorityQueue.DeleteMin
          If u is the goal
             Exit
          Else
             Foreach neighbor v of u
                If v "Unvisited"
                    Mark v "Visited"
                    pq.insert(v)
             Mark u "Examined"
End procedure
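A minimal C++ sketch of this procedure on a weighted adjacency list, using std::priority_queue as the min-heap (the graph and its evaluation values are illustrative assumptions):

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

using Edge = std::pair<int, int>;   // (evaluation value of the target node, target node)

// Greedy best-first search: always expand the most promising frontier node first.
void bestFirstSearch(const std::vector<std::vector<Edge>>& adj, int start, int goal) {
    std::vector<bool> visited(adj.size(), false);
    std::priority_queue<Edge, std::vector<Edge>, std::greater<Edge>> pq;   // min-heap
    pq.push({0, start});
    visited[start] = true;
    while (!pq.empty()) {
        int u = pq.top().second;
        pq.pop();
        std::cout << u << ' ';                       // "examine" u
        if (u == goal) return;                       // goal reached
        for (auto [w, v] : adj[u])
            if (!visited[v]) { visited[v] = true; pq.push({w, v}); }
    }
}

int main() {
    // 5 nodes; each edge stores the evaluation value of the node it leads to.
    std::vector<std::vector<Edge>> adj(5);
    adj[0] = {{3, 1}, {1, 2}};
    adj[2] = {{7, 3}, {2, 4}};
    bestFirstSearch(adj, 0, 3);                      // expansion order: 0 2 4 1 3
    std::cout << '\n';
    return 0;
}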
CODE:
(defstruct node key next)
(defun make-edge-node-pair (edge-wt next-edge)
(cons edge-wt next-edge))
(defvar node0 (make-node :key 'S :next nil))
(defvar node1 (make-node :key 'A :next nil))
(defvar node2 (make-node :key 'B :next nil))
(defvar node3 (make-node :key 'C :next nil))
(defvar node4 (make-node :key 'D :next nil))
(defvar node5 (make-node :key 'E :next nil))
(defvar node6 (make-node :key 'F :next nil))
(defvar node7 (make-node :key 'G :next nil))
(defvar node8 (make-node :key 'H :next nil))
(defvar node9 (make-node :key 'I :next nil))
(defvar node10 (make-node :key 'J :next nil))
(defvar node11 (make-node :key 'K :next nil))
(defvar node12 (make-node :key 'L :next nil))
(defvar node13 (make-node :key 'M :next nil))
(defun heuristic (n)
(random n))
(setf (node-next node0) (list (make-edge-node-pair (heuristic 20) node1)
                              (make-edge-node-pair (heuristic 20) node2)
                              (make-edge-node-pair (heuristic 20) node3)))
(setf (node-next node1) (list (make-edge-node-pair (heuristic 20) node4)
                              (make-edge-node-pair (heuristic 20) node5)))
(setf (node-next node2) (list (make-edge-node-pair (heuristic 20) node6)
                              (make-edge-node-pair (heuristic 20) node7)))
(setf (node-next node3) (list (make-edge-node-pair (heuristic 20) node8)))
(setf (node-next node8) (list (make-edge-node-pair (heuristic 20) node9)
                              (make-edge-node-pair (heuristic 20) node10)))
(setf (node-next node9) (list (make-edge-node-pair (heuristic 20) node11)
                              (make-edge-node-pair (heuristic 20) node12)
                              (make-edge-node-pair (heuristic 20) node13)))
(defun find-min (priority-q)
(let ((edge-wt (make-array 1 :adjustable t :fill-pointer 0)))
(loop for i in priority-q
RESULTS:
EXPERIMENT 8
AIM OF THE EXPERIMENT: To implement the Knight's tour problem.
THEORY:
A knight's tour is a sequence of moves of a knight on a chessboard such that
the knight visits every square only once. If the knight ends on a square that is
one knight's move from the beginning square (so that it could tour the board
again immediately, following the same path), the tour is closed; otherwise, it
is open.
The number of directed tours (open and closed) on an n × n board grows extremely quickly as n increases.
There are several ways to find a knight's tour on a given board with a
computer. Some of these methods are algorithms while others are heuristics.
Brute-force algorithms
A brute-force search for a knight's tour is impractical on all but the smallest
boards. For example, there are approximately 4×10^51 possible move
sequences on an 8 × 8 board, and it is well beyond the capacity of modern
computers (or networks of computers) to perform operations on such a large
set. However, the size of this number is not indicative of the difficulty of the
problem, which can be solved "by using human insight and ingenuity ...
without much difficulty."
Warnsdorff’s rule
Warnsdorff's rule is a heuristic for finding a single knight's tour. The knight is
moved so that it always proceeds to the square from which the knight will
have the fewest onward moves. When calculating the number of onward
moves for each candidate square, we do not count moves that revisit any
square already visited. It is possible to have two or more choices for which
the number of onward moves is equal; there are various methods for breaking
such ties, including one devised by Pohl and another by Squirrel and Cull.
This rule may also more generally be applied to any graph. In graph-theoretic
terms, each move is made to the adjacent vertex with the least degree.
Although the Hamiltonian path problem is NP-hard in general, on many
graphs that occur in practice this heuristic is able to successfully locate a
solution in linear time. The knight's tour is such a special case.
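A minimal C++ sketch of Warnsdorff's rule as described above; the move ordering and the first-found tie-breaking are illustrative choices, and because simple tie-breaking can occasionally get stuck, the sketch reports that case instead of backtracking:

#include <array>
#include <iostream>

constexpr int N = 8;
const int DX[8] = {1, 1, 2, 2, -1, -1, -2, -2};
const int DY[8] = {2, -2, 1, -1, 2, -2, 1, -1};
using Board = std::array<std::array<int, N>, N>;

bool freeSquare(const Board& b, int x, int y) {
    return x >= 0 && x < N && y >= 0 && y < N && b[x][y] == -1;
}

// Number of onward moves from (x, y) to still-unvisited squares (the square's "degree").
int degree(const Board& b, int x, int y) {
    int count = 0;
    for (int k = 0; k < 8; ++k)
        if (freeSquare(b, x + DX[k], y + DY[k])) ++count;
    return count;
}

int main() {
    Board board;
    for (auto& row : board) row.fill(-1);
    int x = 0, y = 0;
    board[x][y] = 0;                                  // step number of the starting square
    for (int step = 1; step < N * N; ++step) {
        int bestK = -1, bestDeg = 9;
        for (int k = 0; k < 8; ++k) {                 // choose the onward move of minimum degree
            int nx = x + DX[k], ny = y + DY[k];
            if (freeSquare(board, nx, ny) && degree(board, nx, ny) < bestDeg) {
                bestDeg = degree(board, nx, ny);
                bestK = k;
            }
        }
        if (bestK == -1) { std::cout << "stuck at step " << step << '\n'; return 1; }
        x += DX[bestK]; y += DY[bestK];
        board[x][y] = step;
    }
    for (const auto& row : board) {                   // print the completed tour
        for (int v : row) std::cout << v << '\t';
        std::cout << '\n';
    }
    return 0;
}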
ALGORITHM:
Naive Algorithm for Knight’s tour
The Naive Algorithm is to generate all tours one by one and check if the
generated tour satisfies the constraints.
CODE:
// C program for Knight Tour problem
#include<stdio.h>
#define N 8
return 1;
}
return 0;
}
EXPERIMENT 9
AIM OF THE EXPERIMENT: To implement alpha-beta pruning.
THEORY:
Alpha–beta pruning is a search algorithm that seeks to decrease the number
of nodes that are evaluated by the minimax algorithm in its search tree. It is
an adversarial search algorithm used commonly for machine playing of two-
player games (Tic-tac-toe, Chess, Go, etc.). It stops evaluating a move when
at least one possibility has been found that proves the move to be worse than
a previously examined move. Such moves need not be evaluated further.
When applied to a standard minimax tree, it returns the same move as
minimax would, but prunes away branches that cannot possibly influence the
final decision.
Allen Newell and Herbert A. Simon who used what John McCarthy calls an
"approximation" in 1958 wrote that alpha–beta "appears to have been
reinvented a number of times". Arthur Samuel had an early version for a
checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel
Edwards also invented alpha–beta independently in the United States.
McCarthy proposed similar ideas during the Dartmouth workshop in 1956
and suggested it to a group of his students including Alan Kotok at MIT in
1961. Alexander Brudno independently conceived the alpha–beta algorithm,
publishing his results in 1963. Donald Knuth and Ronald W. Moore refined
the algorithm in 1975. Judea Pearl proved its optimality for trees with
randomly assigned leaf values in terms of the expected running time in two
papers. The optimality of the randomized version of alpha-beta was shown by
Michael Saks and Avi Wigderson in 1986.
The algorithm maintains two values, alpha and beta, which represent the
minimum score that the maximizing player is assured of and the maximum
score that the minimizing player is assured of respectively. Initially, alpha is
negative infinity and beta is positive infinity, i.e. both players start with their
worst possible score. Whenever the maximum score that the minimizing
player (i.e. the "beta" player) is assured of becomes less than the minimum
score that the maximizing player (i.e., the "alpha" player) is assured of (i.e.
beta < alpha), the maximizing player need not consider further descendants
of this node, as they will never be reached in the actual play.
Alpha-beta pruning is a modified version of the minimax algorithm: it is an optimization technique for the minimax algorithm.
As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning (also known as the alpha-beta algorithm).
Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the leaves but entire sub-trees.
The two parameters can be defined as:
Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer. The initial value of beta is +∞.
Applied to a standard minimax tree, alpha-beta pruning returns the same move as the standard algorithm does, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. By pruning these nodes, it makes the algorithm fast.
ALGORITHM:
function alphabeta(node, depth, α, β, maximizingPlayer) is
if depth = 0 or node is a terminal node then
return the heuristic value of node
if maximizingPlayer then
value := −∞
for each child of node do
value := max(value, alphabeta(child, depth − 1, α, β, FALSE))
α := max(α, value)
if α ≥ β then
break (* β cut-off *)
return value
else
value := +∞
for each child of node do
value := min(value, alphabeta(child, depth − 1, α, β, TRUE))
β := min(β, value)
if α ≥ β then
break (* α cut-off *)
return value
(* Initial call *)
alphabeta(origin, depth, −∞, +∞, TRUE)
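A compilable C++ rendering of the pseudocode above on a small explicit game tree; the Node structure and the leaf values are assumptions used only for illustration:

#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

// A toy game tree: leaves carry heuristic values, internal nodes carry children.
struct Node {
    int value = 0;                   // used only at leaves
    std::vector<Node> children;      // empty => terminal node
};

// Direct rendering of the alphabeta pseudocode.
int alphabeta(const Node& node, int depth, int alpha, int beta, bool maximizingPlayer) {
    if (depth == 0 || node.children.empty()) return node.value;
    if (maximizingPlayer) {
        int value = std::numeric_limits<int>::min();
        for (const Node& child : node.children) {
            value = std::max(value, alphabeta(child, depth - 1, alpha, beta, false));
            alpha = std::max(alpha, value);
            if (alpha >= beta) break;                 // beta cut-off
        }
        return value;
    } else {
        int value = std::numeric_limits<int>::max();
        for (const Node& child : node.children) {
            value = std::min(value, alphabeta(child, depth - 1, alpha, beta, true));
            beta = std::min(beta, value);
            if (alpha >= beta) break;                 // alpha cut-off
        }
        return value;
    }
}

int main() {
    // Maximizer to move at the root; once the right-hand minimizer node sees the
    // leaf 2 (which drops beta below alpha = 3), its remaining leaf 9 is pruned.
    Node tree{0, { Node{0, {Node{3, {}}, Node{5, {}}}},
                   Node{0, {Node{2, {}}, Node{9, {}}}} }};
    int best = alphabeta(tree, 2,
                         std::numeric_limits<int>::min(),
                         std::numeric_limits<int>::max(), true);
    std::cout << best << '\n';                        // prints 3
    return 0;
}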
CODE:
#include<iostream>
using namespace std;
int index1;
char board[9] = {'*','*','*','*','*','*','*','*','*'}; // single array representing the board; '*' means an empty box
if((board[i]==board[i+1])&&(board[i+1]==board[i+2])&&(board[i]=='O'))
return 1;
}
for(int i=0;i<3;i++)
{
if((board[i]==board[i+3])&&(board[i+3]==board[i+6])&&(board[i]=='O'))
return 1;
}
if((board[0]==board[4])&&(board[4]==board[8])&&(board[0]=='O'))
{
return 1;
}
if((board[2]==board[4])&&(board[4]==board[6])&&(board[2]=='O'))
{
return 1;
}
return 0;
}
if((board[i]==board[i+1])&&(board[i+1]==board[i+2])&&(board[i]=='X'))
return 1;
}
for(int i=0;i<3;i++)
{
if((board[i]==board[i+3])&&(board[i+3]==board[i+6])&&(board[i]=='X'))
return 1;
}
if((board[0]==board[4])&&(board[4]==board[8])&&(board[0]=='X'))
{
return 1;
}
if((board[2]==board[4])&&(board[4]==board[6])&&(board[2]=='X'))
{
return 1;
}
return 0;
}
int max_val=-1000,min_val=1000;
int i,j,value = 1;
if(cpu_won() == 1)
{return 10;}
else if(user_won() == 1)
{return -10;}
else if(isFull()== 1)
{return 0;}
int score[9] = {1,1,1,1,1,1,1,1,1};//if score[i]=1 then it is empty
for(i=0;i<9;i++)
{
if(board[i] == '*')
{
if(min_val>max_val) // reverse of pruning condition.....
{
if(flag == true)
{
board[i] = 'X';
value = minimax(false);
}
else
{
board[i] = 'O';
value = minimax(true);
}
board[i] = '*';
score[i] = value;
}
}
}
if(flag == true)
{
max_val = -1000;
for(j=0;j<9;j++)
{
if(score[j] > max_val && score[j] != 1)
{
max_val = score[j];
index1 = j;
}
}
return max_val;
}
if(flag == false)
{
min_val = 1000;
for(j=0;j<9;j++)
{
if(score[j] < min_val && score[j] != 1)
{
min_val = score[j];
index1 = j;
}
}
return min_val;
}
}
while(true)
{
cout<<endl<<"CPU MOVE....";
minimax(true);
board[index1] = 'X';
draw_board();
if(cpu_won()==1)
{
cout<<endl<<"CPU WON.....";
break;
}
if(isFull()==1)
{
cout<<endl<<"Draw....";
break;
}
again: cout<<endl<<"Enter the move:";
cin>>move;
if(board[move-1]=='*')
{
board[move-1] = 'O';
draw_board();
}
else
{
cout<<endl<<"Invalid Move......Try different move";
goto again;
}
if(user_won()==1)
{
cout<<endl<<"You Won......";
break;
}
if(isFull() == 1)
{
cout<<endl<<"Draw....";
break;
}
}
EXPERIMENT 10
AIM OF THE EXPERIMENT: Brief reports on the Semantic Web, Swarm Intelligence, Genetic Algorithms, and Artificial Neural Networks.
THEORY:
1. Semantic Web
2. Swarm Intelligence
Boids
Boids is an artificial life program, developed by Craig Reynolds in 1986,
which simulates the flocking behaviour of birds. His paper on this topic was
published in 1987 in the proceedings of the ACM SIGGRAPH conference.[3]
The name "boid" corresponds to a shortened version of "bird-oid object",
which refers to a bird-like object.[4]
As with most artificial life simulations, Boids is an example of emergent
behavior; that is, the complexity of Boids arises from the interaction of
individual agents (the boids, in this case) adhering to a set of simple rules.
The rules applied in the simplest Boids world are as follows:
separation: steer to avoid crowding local flockmates
alignment: steer towards the average heading of local flockmates
cohesion: steer to move toward the average position (centre of mass) of local flockmates
One simulation step applying these three rules is sketched below.
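A minimal 2-D C++ sketch of one simulation step applying these three rules; the neighbourhood radius, the rule weights and all names are illustrative assumptions:

#include <cmath>
#include <iostream>
#include <vector>

struct Vec2 { float x = 0, y = 0; };
Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
Vec2 operator*(Vec2 a, float k) { return {a.x * k, a.y * k}; }
float dist(Vec2 a, Vec2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

struct Boid { Vec2 pos, vel; };

// One step: every boid steers by separation, alignment and cohesion over its neighbours.
void step(std::vector<Boid>& flock, float radius = 50.f, float dt = 1.f) {
    std::vector<Vec2> accel(flock.size());
    for (size_t i = 0; i < flock.size(); ++i) {
        Vec2 sep{}, avgVel{}, centre{};
        int n = 0;
        for (size_t j = 0; j < flock.size(); ++j) {
            if (i == j || dist(flock[i].pos, flock[j].pos) > radius) continue;
            sep    = sep + (flock[i].pos - flock[j].pos);      // separation: push away
            avgVel = avgVel + flock[j].vel;                    // alignment: collect headings
            centre = centre + flock[j].pos;                    // cohesion: collect positions
            ++n;
        }
        if (n == 0) continue;
        Vec2 align    = avgVel * (1.f / n) - flock[i].vel;     // steer toward average heading
        Vec2 cohesion = centre * (1.f / n) - flock[i].pos;     // steer toward local centre
        accel[i] = sep * 0.05f + align * 0.05f + cohesion * 0.01f;
    }
    for (size_t i = 0; i < flock.size(); ++i) {
        flock[i].vel = flock[i].vel + accel[i] * dt;
        flock[i].pos = flock[i].pos + flock[i].vel * dt;
    }
}

int main() {
    std::vector<Boid> flock = { {{0, 0}, {1, 0}}, {{10, 0}, {0, 1}}, {{0, 10}, {1, 1}} };
    for (int t = 0; t < 100; ++t) step(flock);
    for (const Boid& b : flock)
        std::cout << "pos (" << b.pos.x << ", " << b.pos.y << ")\n";
    return 0;
}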
Self-propelled particles
3. Genetic Algorithm
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms
are commonly used to generate high-quality solutions to optimization and
search problems by relying on bio-inspired operators such as mutation,
crossover and selection. John Holland introduced genetic algorithms in
1960 based on the concept of Darwin’s theory of evolution; afterwards,
his student David E. Goldberg extended GA in 1989.
In a genetic algorithm, a population of candidate solutions (called
individuals, creatures, or phenotypes) to an optimization problem is
evolved toward better solutions. Each candidate solution has a set of
properties (its chromosomes or genotype) which can be mutated and
altered; traditionally, solutions are represented in binary as strings of 0s
and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated
individuals, and is an iterative process, with the population in each iteration
called a generation. In each generation, the fitness of every individual in the
population is evaluated; the fitness is usually the value of the objective
function in the optimization problem being solved. The more fit individuals
are stochastically selected from the current population, and each individual's
genome is modified (recombined and possibly randomly mutated) to form a
new generation. The new generation of candidate solutions is then used in the
next iteration of the algorithm. Commonly, the algorithm terminates when
either a maximum number of generations has been produced, or a satisfactory
fitness level has been reached for the population.
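A minimal C++ sketch of this evolutionary loop for a toy problem (maximising the number of 1-bits in a fixed-length binary string); the population size, rates, operators and all names are illustrative assumptions:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

using Genome = std::vector<int>;                     // a binary string of 0s and 1s

int fitness(const Genome& g) {                       // objective: count of 1-bits ("one-max")
    return (int)std::count(g.begin(), g.end(), 1);
}

int main() {
    const int POP = 30, LEN = 20, GENERATIONS = 50;
    const double MUTATION_RATE = 0.01;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> bit(0, 1), cut(0, LEN - 1), pick(0, POP - 1);
    std::uniform_real_distribution<double> prob(0.0, 1.0);

    // Random initial population.
    std::vector<Genome> population(POP, Genome(LEN));
    for (auto& g : population) for (auto& b : g) b = bit(rng);

    for (int gen = 0; gen < GENERATIONS; ++gen) {
        std::vector<Genome> next;
        while ((int)next.size() < POP) {
            // Tournament selection: the fitter of two random individuals becomes a parent.
            auto select = [&]() -> const Genome& {
                const Genome& a = population[pick(rng)];
                const Genome& b = population[pick(rng)];
                return fitness(a) > fitness(b) ? a : b;
            };
            const Genome& p1 = select();
            const Genome& p2 = select();
            Genome child(LEN);
            int point = cut(rng);                    // single-point crossover
            for (int i = 0; i < LEN; ++i) child[i] = (i < point) ? p1[i] : p2[i];
            for (auto& b : child)                    // bit-flip mutation
                if (prob(rng) < MUTATION_RATE) b = 1 - b;
            next.push_back(child);
        }
        population = next;                           // the new generation replaces the old
    }

    const Genome& best = *std::max_element(population.begin(), population.end(),
        [](const Genome& a, const Genome& b) { return fitness(a) < fitness(b); });
    std::cout << "best fitness after evolution: " << fitness(best) << "/" << LEN << '\n';
    return 0;
}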
Warren McCulloch and Walter Pitts (1943) opened the subject by creating a
computational model for neural networks. In the late 1940s, D. O. Hebb
created a learning hypothesis based on the mechanism of neural plasticity
that became known as Hebbian learning. Farley and Wesley A. Clark (1954)
first used computational machines, then called "calculators", to simulate a
Hebbian network. Rosenblatt[6] (1958) created the perceptron. The first
functional networks with many layers were published by Ivakhnenko and
Lapa in 1965, as the Group Method of Data Handling. The basics of
continuous backpropagation were derived in the context of control theory by
Kelley in 1960 and by Bryson in 1961, using principles of dynamic
programming.
In 1970, Seppo Linnainmaa published the general method for automatic
differentiation (AD) of discrete connected networks of nested differentiable
functions. In 1973, Dreyfus used backpropagation to adapt parameters of
controllers in proportion to error gradients. Werbos's (1975) backpropagation
algorithm enabled practical training of multi-layer networks. In 1982, he
applied Linnainmaa's AD method to neural networks in the way that became
widely used. Thereafter research stagnated following Minsky and Papert
(1969), who discovered that basic perceptrons were incapable of processing
the exclusive-or circuit and that computers lacked sufficient power to process
useful neural networks. In 1992, max-pooling was introduced to help with
least shift invariance and tolerance to deformation to aid in 3D object
recognition. Schmidhuber adopted a multi-level hierarchy of networks (1992)
pre-trained one level at a time by unsupervised learning and fine-tuned by
backpropagation.
Geoffrey Hinton et al. (2006) proposed learning a high-level representation
using successive layers of binary or real-valued latent variables with a
restricted Boltzmann machine to model each layer. In 2012, Ng and Dean
created a network that learned to recognize higher-level concepts, such as
cats, only from watching unlabeled images. Unsupervised pre-training and
increased computing power from GPUs and distributed computing allowed
the use of larger networks, particularly in image and visual recognition
problems, which became known as "deep learning".
Ciresan and colleagues (2010) showed that despite the vanishing gradient
problem, GPUs make backpropagation feasible for many-layered feedforward neural networks.
EXPERIMENT 11
AIM OF THE EXPERIMENT: Brief report on latest AI
technologies
THEORY:
1. Natural language generation
2. Speech recognition
3. Virtual Agents
Some of the companies that provide virtual agents include Amazon, Apple,
Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft,
Microsoft and Satisfi.
4. Machine learning platforms
These days, computers can also easily learn, and they can be incredibly intelligent!
Machine learning (ML) is a subdiscipline of computer science and a branch
of AI. Its goal is to develop techniques that allow computers to learn.
By providing algorithms, APIs (application programming interface),
development and training tools, big data, applications and other machines,
ML platforms are gaining more and more traction every day.
They are currently mainly being used for prediction and classification.
Some of the companies selling ML platforms include Amazon, Fractal
Analytics, Google, H2O.ai, Microsoft, SAS, Skytree and Adext. The
last one is actually the first and only audience management tool in the
world that applies real AI and machine learning to digital advertising to
find the most profitable audience or demographic group for any ad.
5. AI-optimised hardware
EXPERIMENT 12
AIM OF THE EXPERIMENT: Brief report on Rule ML
THEORY:
RuleML is a global initiative, led by a non-profit organization RuleML Inc.,
that is devoted to advancing research and industry standards design activities
in the technical area of rules that are semantic and highly inter-operable. The
standards design takes the form primarily of a markup language, also known
as RuleML. The research activities include an annual research conference,
the RuleML Symposium, also known as RuleML for short. Founded in fall
2000 by Harold Boley, Benjamin Grosof, and Said Tabet, RuleML was
originally devoted purely to standards design, but then quickly branched out
into the related activities of coordinating research and organizing an annual
research conference starting in 2002. The M in RuleML is sometimes
interpreted as standing for Markup and Modeling. The markup language was
developed to express both forward (bottom-up) and backward (top-down)
rules in XML for deduction, rewriting, and further inferential-
transformational tasks. It is defined by the Rule Markup Initiative, an open
network of individuals and groups from both industry and academia that was
formed to develop a canonical Web language for rules using XML markup
and transformations from and to other rule standards/systems.
Markup standards and initiatives related to RuleML include:
Rule Interchange Format (RIF): The design and overall purpose of W3C's
Rule Interchange Format (RIF) industry standard is based primarily on the
RuleML industry standards design. Like RuleML, RIF embraces a
multiplicity of potentially useful rule dialects that nevertheless share common
characteristics.
RuleML Technical Committee from Oasis-Open: An industry standards effort
devoted to legal automation utilizing RuleML.