Preprint version. The final version is available on the Taylor & Francis site:
http://www.tandfonline.com.
RESEARCH ARTICLE
A Knowledge-Based System
for Prototypical Reasoning
In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base that combines a classical symbolic component (grounded on a formal ontology) with a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorization task in which common sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning “conceptual” capabilities of standard ontology-based systems.
1. Introduction
Representing and reasoning on common sense concepts is still an open issue in the field of knowledge representation (KR) and, more specifically, in that of formal ontologies. In Cognitive Science there is ample evidence in favor of prototypical concepts, and typicality-based conceptual reasoning has been widely studied. Conversely, in the field of computational models of cognition, most contemporary concept-oriented KR systems, including formal ontologies, allow neither the representation of concepts in prototypical terms nor forms of approximate, non-monotonic, conceptual reasoning, mainly for reasons of technical convenience. In this paper we focus on the problem of concept representation in the field of formal ontologies. Following the approach proposed in [13], we introduce a conceptual architecture that, embedded in a larger knowledge-based system, aims at extending the representational and reasoning capabilities available to traditional ontology-based frameworks.
The study of concept representation concerns different research areas, such as Artificial Intelligence, Cognitive Science, and Philosophy. In the field of Cognitive Science, the early work of Rosch [34], preceded by the philosophical analysis of Wittgenstein [40], showed that ordinary concepts do not obey the classical theory (stating that concepts can be defined in terms of sets of necessary and sufficient conditions). Rather, they exhibit prototypical traits: e.g., some members of a category are considered better instances than others; more central instances share certain typical features, such as the ability to fly for birds, that in general cannot be regarded as necessary or sufficient conditions. These results influenced pioneering KR research, where some efforts were invested in trying to take into account the suggestions coming from Cognitive Psychology: artificial systems, e.g., frames [29] and semantic networks [33], were designed to represent and to conduct reasoning on concepts in “non classical”, prototypical terms [3].
However, these systems lacked a clear formal semantics, and were later sacrificed in favor of a class of formalisms stemming from structured inheritance semantic networks: the first system in this line of research was KL-ONE [5]. These formalisms are known today as description logics (DLs) [31]. In this setting, the representation of prototypical information (and therefore the possibility of performing non-monotonic reasoning) is not allowed,1 since the formalisms in this class are primarily intended for deductive, logical inference. Nowadays, DLs are largely adopted in diverse application areas, in particular within the area of ontology representation. For example, the OWL and OWL 2 formalisms follow this tradition,2 which has been endorsed by the W3C for the development of the Semantic Web. However, from a historical perspective, the choice of preferring classical systems based on a well-defined, Tarskian-like, semantics left unsolved the problem of representing concepts in prototypical terms. Although in the field of logic-oriented KR various fuzzy and non-monotonic extensions of DL formalisms have been designed to deal with some aspects of “non-classical” concepts [38, 17, 2, 6], various theoretical and practical problems remain unsolved [10].
As a possible way out, we follow the proposal presented in [13], which relies on two main cornerstones: the dual process theory of reasoning and rationality [37, 9, 22], and the heterogeneous approach to concepts in Cognitive Science [26]. This paper has the following major elements of interest: i) we provide the hybrid conceptual architecture envisioned in [13] with a working implementation; ii) we show how the system implementing such an architecture is able to perform a simple form of non-monotonic categorization that is, conversely, unfeasible by using formal ontologies alone.
The paper is structured as follows: in Section 2 we illustrate the general architecture and the main features of the knowledge-based system. In Section 3 we provide the results of a twofold experimental assessment of the accuracy of the system in a categorization task. Finally, we conclude by presenting the related work (Section 4) and by outlining future work (Section 5).
2. The System
In the following, i) we first outline the design principles that drove the development
of the system; ii) we then provide an overview of the knowledge base architecture
and of its components and features, based on the conceptual spaces framework [15,
16] and on formal ontologies [18]; iii) we elaborate on the inference task, providing
the detailed control strategy.
Two cornerstones inspiring the current proposal are the dual process theory and the heterogeneous approach to concepts in Cognitive Science. The theoretical framework known as dual process theory postulates the co-existence of two different types of cognitive systems [37, 9, 22]. The systems of the first type (type 1) are phylogenetically older, unconscious, automatic, associative, parallel and fast. The systems of the second type (type 2) are more recent, conscious, sequential and slow, and characterized by explicit rule following. We assume that each system type can be composed of many sub-systems and processes. For the reasons presented in [11, 13], the conceptual representation of our system relies on two major sorts of components, based on:
• type 1 processes, to perform fast and approximate categorization by taking advantage of the prototypical information associated with concepts;
• type 2 processes, involved in complex inference tasks, which do not take into account the representation of prototypical knowledge.
The two sorts of system processes are assumed to interact, since type 1 processes are executed first and their results are then refined by type 2 processes.
The second theoretical framework inspiring our system is the heterogeneous approach to concepts in Cognitive Science, according to which concepts do not constitute a unitary element from a representational point of view [26]. Following this approach, we assume that each concept represented in an artificial system can be composed of several bodies of knowledge, each one carrying a specific type of information.1
A system has been implemented to explore the hypothesis of the hybrid conceptual architecture. To test it, we have considered a basic inference task: given an input description in natural language, the system should be able to find, even for typicality-based descriptions (that is, for most common sense descriptions), the corresponding concept category by combining ontological inference and typicality-based inference. We chose this task because it is a challenging one. In fact, classical queries for concept retrieval based on lists of necessary and sufficient conditions are commonly handled by standard ontology-based systems, and in general by logic-oriented systems. Conversely, answering typicality-based queries, i.e., queries based on prototypical traits, is almost never addressed by exploiting ontological inference.
1 In the present implementation we considered two possible types of informational components: the typical one (encoding prototypical knowledge) and the classical one (encoding information in terms of necessary and sufficient conditions). In particular, although in this case we mainly concentrate on representation and reasoning tenets coming from prototype theory, the typical component can be considered general enough to encode many other forms of representational and reasoning mechanisms related to a wider spectrum of typicality theories such as, for example, the Exemplar theory [30].
Figure 1. The hybrid representation of a concept X, composed (via hasComponent) of a typical representation of X (system 1: exemplar- and prototype-based categorization, non-monotonic reasoning) and a classical representation of X (system 2: ontology-based categorization, monotonic reasoning).
2 Therefore, the use of prototypical knowledge in cognitive tasks such as categorization is not a fault of the human mind, as the proneness of people to fallacies and reasoning errors might be (leaving aside the problem of establishing whether recurrent errors in reasoning could have a deeper “rationality” within the general framework of cognition). For the same reason it is also a desired characteristic in the field of intelligent artificial systems.
1 Currently, the OWL and OWL 2 profiles are not expressive enough to perform the reasoning processes provided by the overall system. However, both language profiles are usable in their DL-safe characterization to exploit taxonomical reasoning. Extending the expressivity of ontological formalisms and languages would be a long-term desideratum in order to enrich ontological reasoning with more complex inference. To be more expressive and still practically usable, a knowledge representation framework should provide an acceptable trade-off in terms of complexity; however, this is an open problem for fuzzy and non-monotonic extensions of standard Description Logics.
2.1.1. Conceptual spaces
The typicality-based component of the knowledge base (S1) relies on the conceptual spaces framework [15, 16], a representational framework lying at an intermediate level between the symbolic and the sub-symbolic approaches to knowledge representation and relying on geometrical structures, encoded as a set of quality dimensions. In some cases, such dimensions can be directly related to perceptual mechanisms; examples of this kind are temperature, weight, brightness, and pitch. In other cases, dimensions can be more abstract in nature. A geometrical
(topological or metrical) structure is associated to each quality dimension. The
chief idea is that knowledge representation can benefit from the geometrical struc-
ture of conceptual spaces: instances are represented as points in a space, and their
similarity can be calculated in terms of their distance according to some suitable
distance measure. In this setting, concepts correspond to regions, and regions with
different geometrical properties correspond to different kinds of concepts. Concep-
tual spaces are suitable to represent concepts in “typical” terms, since the regions
representing concepts can have soft boundaries. In many cases typicality effects
can be represented in a straightforward way: for example, in the case of concepts,
corresponding to convex regions of a conceptual space, prototypes have a natural
geometrical interpretation, in that they correspond to the geometrical centre of
the region itself. So, “when natural properties are defined as convex regions of a
conceptual space, prototype effects are indeed to be expected” [15, p. 9]. Given a
convex region, we can provide each point with a certain centrality degree, that can
be interpreted as a measure of its typicality. Moreover, single exemplars correspond
to single points in the space: this allows us to consider both the exemplar and the
prototypical accounts of typicality (further details can be found in [12, p. 9]).
The conceptual space defines a metric space that can be used to compute the proximity of the input entities to prototypes. To compute the distance between two points p1, p2 we apply a distance metric based on the combination of the Euclidean distance and the angular distance between the points. Namely, we use the Euclidean metric to compute the within-domain distance, while for dimensions belonging to different domains we use the Manhattan distance metric, as suggested in [15, 1]. Weights assigned to domain dimensions are affected by the context, too, so the resulting weighted Euclidean distance distE is computed as follows:
\[
\mathrm{dist}_E(p_1, p_2, k) = \sqrt{\sum_{i=1}^{n} w_i \,(p_{1,i} - p_{2,i})^2},
\]
where i varies over the n domain dimensions, k is the context, and w_i is the weight associated with the i-th dimension.
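As an illustration, the following minimal Python sketch (not part of the original implementation; function names, the data layout and the default weights are our own assumptions) computes the weighted within-domain Euclidean distance and combines the per-domain distances with a Manhattan sum, as described above.

```python
import math

def weighted_euclidean(p1, p2, weights):
    """Within-domain distance dist_E: weighted Euclidean over a domain's dimensions."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p1, p2)))

def cross_domain_distance(x1, x2, context):
    """Combine the per-domain Euclidean distances with a Manhattan (city-block) sum.

    x1, x2: dicts mapping a domain name to its coordinate tuple,
            e.g. {"dimension": (350, 350, 2050), "color": (20, 20, 60)}.
    context: dict mapping a domain name to the per-dimension weights w_i
             induced by the context k (uniform weights are used as a default).
    """
    total = 0.0
    for domain, coords in x1.items():
        w = context.get(domain, [1.0] * len(coords))
        total += weighted_euclidean(coords, x2[domain], w)
    return total

# The typicality of an instance can then be estimated, for example, as a decreasing
# function of its distance from the prototype point of the region, e.g.
# typicality = 1.0 / (1.0 + cross_domain_distance(instance, prototype, context))
```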
The representation format adopted in conceptual spaces (e.g., for the concept
whale) includes information such as:
02062744n,whale,dimension(x=350,y=350,z=2050),color(B=20,H=20,S=60),food=10
that is, the WordNet identifier, the lemma of a given concept, information about
its typical dimensions, such as color (as the position of the instance on the three-
dimensional axes of brightness, hue and saturation) and food.1 All concepts are
mapped onto WordNet synsets: WordNet is a lexical resource whose nodes –the
synsets– are sets of synonyms, connected through binary relations such as hy-
ponymy/hypernymy and meronymy [28].2 Each quality in a domain is associated
1 Typical traits are selected based on statistically relevant information regarding a given concept, as posited
by the Prototype Theory [34]. For example, the selection of the information regarding the typical color of
a rose (red) is given by the fact that roses are often red.
2 WordNet information is relevant in our system in that synset identifiers are used by both S1 and S2 as
a lexical ground to access both the conceptual representations.
October 1, 2014 lieto14hybrid˙˙final
with a range of possible values. To prevent larger ranges from affecting the distance too much, we have introduced a damping factor that reduces this effect; also, the relative strength of each domain can be parametrized.
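To make the format concrete, here is a minimal Python sketch that parses a record like the whale example above into a synset identifier, a lemma and a dictionary of domains with named quality dimensions. The field layout is reconstructed from the example given in the text and is only illustrative; the actual format of the resource may differ.

```python
import re

def parse_concept(line):
    """Parse a conceptual-space record such as
    '02062744n,whale,dimension(x=350,y=350,z=2050),color(B=20,H=20,S=60),food=10'
    into (synset_id, lemma, domains), where domains maps each domain name to a
    dict of named quality dimensions."""
    # split on commas, but keep parenthesized domain descriptions together
    fields = re.findall(r'[^,()]+\([^)]*\)|[^,]+', line.strip())
    synset_id, lemma = fields[0], fields[1]
    domains = {}
    for field in fields[2:]:
        m = re.match(r'(\w+)\((.*)\)', field)
        if m:  # multi-dimensional domain, e.g. color(B=20,H=20,S=60)
            name, body = m.groups()
            domains[name] = {k: float(v)
                             for k, v in (pair.split('=') for pair in body.split(','))}
        else:  # single-dimension domain, e.g. food=10
            k, v = field.split('=')
            domains[k] = {k: float(v)}
    return synset_id, lemma, domains
```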
We represent points as vectors (with as many dimensions as required by the considered domain), whose components correspond to the point coordinates, so that a natural metric to compute the similarity between them is cosine similarity. Cosine similarity is computed as the cosine of the angle between the considered vectors: two vectors with the same orientation have a cosine similarity of 1, while two orthogonal vectors have a cosine similarity of 0. The normalized version of cosine similarity (denoted ĉs), also accounting for the above weights w_i and the context k, is computed as
\[
\hat{cs}(p_1, p_2, k) = \frac{\sum_{i=1}^{n} w_i \, p_{1,i} \, p_{2,i}}{\sqrt{\sum_{i=1}^{n} w_i \, p_{1,i}^2} \; \sqrt{\sum_{i=1}^{n} w_i \, p_{2,i}^2}},
\]
while, at the global level, the whole context K contains the domain weights wj and the per-domain contexts kj, and |Dj| is the number of dimensions in each domain.
In this setting, the distance between any two concepts can be computed as a function of the distance between two regions in a given domain (Formula 1). Also, we can compute the distance between any two region prototypes, or the minimal distance between their individuals, or we can apply more sophisticated algorithms: in all cases, we have designed a metric space and procedures that allow us to characterize and compare the concepts represented therein.
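The following sketch (again illustrative, with names of our own choosing rather than the system's actual interface) implements the normalized, weighted cosine similarity defined above.

```python
import math

def weighted_cosine_similarity(p1, p2, weights):
    """Normalized, context-weighted cosine similarity: 1.0 for vectors with the
    same orientation, 0.0 for orthogonal vectors."""
    num = sum(w * a * b for w, a, b in zip(weights, p1, p2))
    den = (math.sqrt(sum(w * a * a for w, a in zip(weights, p1)))
           * math.sqrt(sum(w * b * b for w, b in zip(weights, p2))))
    return num / den if den else 0.0
```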
2.1.2. Ontology
On the other hand, the representation of the classical component S2 is implemented through a formal ontology. As already pointed out, standard ontological formalisms leave unsolved the problem of representing prototypical information. Furthermore, it is not possible to execute non-monotonic inference, since classical ontology-based reasoning mechanisms contemplate deductive processes only. It is known in the literature (e.g., by referring to the foundational approach in DOLCE) how to model the fact that “the rose is red”, that is:
• we refer to a given rose (rose#1 in Figure 2);
• it has a certain color, expressed via the inherence relation qtc: this enables us to specify qtc(rose#1);
• the particular color of rose#1 has a particular redness at a certain time t: this is expressed via the quale ql, as the relation ql(qtc(rose#1, t)).
However, in this setting we cannot represent even simple prototypical information, such as “A typical rose is red”. This is due to the fact that being red is neither a necessary nor a sufficient condition for being a rose, and therefore it is possible neither to represent and automatically identify a prototypical rose (let us assume #roseP) nor to describe (and to learn from new cases) the typical
Figure 2. Connecting Concepts to Qualities and Quality Regions in a Foundational Ontology (taken from [27]).
features of the class of prototypical roses. Such an aspect has, on the other hand, a natural interpretation by using the conceptual spaces framework.
S2 is always returned, with the rationale that it is safer,1 and potentially helpful in correcting the mistakes returned by the S1 process. If all the results in C are inconsistent with those computed by S2, a pair of classes is returned that includes c0 and the output of S2 invoked with the actual parameters d and Thing, the meta-class of all the classes in the ontological formalism (Algorithm 1, line 16).
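Since Algorithm 1 is not reproduced here, the following Python sketch only gives the gist of the S1-S2 control strategy as described in the text; all names (s1_rank, s2_classify, is_consistent) are placeholders of our own, not the actual interface of the system.

```python
def categorize(d, s1_rank, s2_classify, is_consistent):
    """Sketch of the S1-S2 control strategy.

    s1_rank(d)         -> list C of candidate classes, ordered by typicality (S1, conceptual spaces)
    is_consistent(c)   -> True if candidate c is consistent with the ontology (S2 check)
    s2_classify(d, c)  -> class returned by ontological inference for description d and candidate c
    """
    C = s1_rank(d)                        # fast, typicality-based candidates (type 1)
    c0 = C[0]                             # prototypically most relevant answer
    for ci in C:                          # scan candidates in order of typicality
        if is_consistent(ci):             # keep the first ontologically sound candidate
            return (c0, s2_classify(d, ci))
    # all candidates in C are inconsistent with S2: fall back to Thing (cf. Algorithm 1, line 16)
    return (c0, s2_classify(d, "Thing"))
```

Under this reading, the whale stimulus discussed below would yield the pair ⟨whale, whale-shark⟩: c0 = whale is kept as the prototypical answer, while whale-shark is the first candidate that passes the ontological check.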
3. Experimentation
\[
\underbrace{\overbrace{\text{The big carnivore with yellow and black stripes is the }\dots}^{\text{description } d}\;\overbrace{\text{tiger}}^{\text{target } T}}_{\text{stimulus } st}
\]
1 The output of S2 cannot be wrong from a purely logical perspective, in that it is the result of a deductive process. The control strategy tries to implement a trade-off between ontological inference and the output of S1, which is more informative but also less reliable from a formal point of view. However, in the near future we plan to explore different conciliation mechanisms to ground the overall control strategy.
1 The expected prototypical target category represents a gold standard, since it corresponds to the results provided in a psychological experiment in which 30 subjects were asked to provide the corresponding target concept for each description. The full list is available at the URL http://www.di.unito.it/~radicion/datasets/cs_2014/stimuli.txt.
Figure 3. The software pipeline takes the linguistic description as input, queries the hybrid knowledge base and returns the categorized concept.
Experiment 1
black and yellow stripes” denoting the concept of tiger, or “the fresh water fish that
goes upstream” denoting the concept of salmon, and so on.
We devised some metrics to assess both the accuracy of the system, by evaluating it against the expected target, and the agreement between S1 and S2. The following information was recorded (a minimal computation sketch is given after the list):
(1) how often S1 and S2 returned the same category as output. This figure is a measure of the agreement between the two outputs: it scores the cases where the S1 and S2 outputs coincide. In this case we do not consider whether the result is the expected category or not;
(2) the accuracy obtained by S1 alone and by S1-S2:
2a. the accuracy of S1. This figure is intended to measure how often the
top ranked category c0 returned by S1 is the same as that expected.
2b. the accuracy of S1-S2, that is the overall accuracy of the system also
considering, as additional result, the category returned by S2. This
figure is intended to measure how often the cc category is the appro-
priate one w.r.t. the expected result. We remark that cc has not been
necessarily computed by starting from c0 : in principle any ci ∈ C might
have been used (see also Algorithm 1, lines 3 and 15).
(3) how often Google and Bing, used in a question-answering mode, return pages corresponding to the appropriate concepts, given the same set of definitions and target concepts used to test the proposed system. To this end, we considered the first 10 results provided by each search engine.1
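The sketch below (illustrative only; field names are ours) shows how figures (1), (2a) and (2b) can be computed from per-stimulus records, under the reading that the S1-S2 accuracy counts a stimulus as correct when either element of the output pair matches the expected category.

```python
def evaluate(records):
    """records: list of dicts with keys 'expected' (gold target category),
    'c0' (top-ranked S1 category) and 'cc' (category returned by S2 in the output pair)."""
    n = len(records)
    agreement = sum(r['c0'] == r['cc'] for r in records) / n                         # figure (1)
    s1_accuracy = sum(r['c0'] == r['expected'] for r in records) / n                 # figure (2a)
    s1_s2_accuracy = sum(r['expected'] in (r['c0'], r['cc']) for r in records) / n   # figure (2b)
    return agreement, s1_accuracy, s1_s2_accuracy
```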
In the first experiment a formal ontology describing the animal kingdom has been developed. It has been devised to meet common sense intuitions, rather than to reflect the precise taxonomic knowledge of ethologists, so we denote it as a naïve animal ontology.2 In particular, the ontology contains the taxonomic distinctions that have an intuitive counterpart in the way human beings categorize the corresponding concepts. Classes are collapsed at a granularity level such that they can be naturally grouped together, also based on their accessibility [36]. For example, although the category pachyderm is no longer in use among ethologists, we created a pachyderm class that is a superclass of elephant, hippopotamus, and rhinoceros. The underlying rationale is that it is still in use among non-experts, due to the intuitive resemblances among its subclasses. The ontology is linked to DOLCE's Lite version;3 in particular, the tree containing our taxonomy is rooted in the agentive-physical-object class, while the body components are placed under biological-physical-object, and partitioned between the two disjoint classes head-part (e.g., for framing horns, antennas, fangs, etc.) and body-part (e.g., for paws, tails, etc.). The biological-object class includes different sorts of skins (such as fur, plumage, scales) and substances produced and eaten by animals (e.g., milk, wool, poison and fruits, leaves and seeds).
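For illustration only, a fragment of such a taxonomy could be sketched with owlready2 as follows; the IRI, the exact class names and the save call are our own assumptions, and the actual resource is the OWL file linked in the footnote above.

```python
from owlready2 import get_ontology, Thing, AllDisjoint

onto = get_ontology("http://example.org/naive_animal.owl")  # illustrative IRI

with onto:
    class Animal(Thing): pass              # in the real ontology, rooted in agentive-physical-object
    class Pachyderm(Animal): pass          # kept as a common sense superclass
    class Elephant(Pachyderm): pass
    class Hippopotamus(Pachyderm): pass
    class Rhinoceros(Pachyderm): pass

    class BodyComponent(Thing): pass       # under biological-physical-object in the real ontology
    class HeadPart(BodyComponent): pass    # horns, antennas, fangs, ...
    class BodyPart(BodyComponent): pass    # paws, tails, ...
    AllDisjoint([HeadPart, BodyPart])      # the two partitions are disjoint

onto.save(file="naive_animal_sketch.owl", format="rdfxml")
```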
The results obtained in the first experiment are presented in Table 1.
Discussion
The system was able to correctly categorize the vast majority of the input descriptions: in most cases (92.6%) S1 alone produces the correct output, with a considerable saving in terms of computation time and resources. Conversely, none of the
1 We also tried to extend our evaluation to the well-known semantic question-answering engine WolframAlpha (https://www.wolframalpha.com). However, it was not possible to test the descriptions, since it explicitly disregards typicality-based queries: the only stimulus correctly categorized is the one describing the target cat as “The domestic feline”.
2 The ontology is available at the URL http://www.di.unito.it/~radicion/datasets/cs_2014/Naive_animal_ontology.owl
3 http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl
Table 1. Results of the first experiment. The first column reports the kind of test; the second column reports the number of correctly categorized descriptions; the third column reports the same datum as a percentage.
concepts (except for one) described with typical features would have been classified through classical ontological inference. It is by virtue of the prior access to conceptual spaces that the whole system is able to categorize such descriptions. Let us consider, e.g., the description “The animal that eats bananas”. The ontology encodes the knowledge that monkeys are omnivores. However, since the information that monkeys usually eat bananas cannot be represented therein, the description would be consistent with all omnivores. The returned information would then be too broad w.r.t. the granularity of the expected answer.
Another interesting result was obtained for the input description “the big herbivore with antlers”. In this case, the correct answer is only the third element in the list C returned by S1; but thanks to the categorization performed by S2, it is nevertheless returned in the final output pair (see Algorithm 1, line 8).
Finally, the system proved able to categorize stimuli with typical, though ontologically incoherent, descriptions. As an example of such a case, let us consider the categorization results obtained with the following stimulus: “The big fish that eats plankton”. In this case the expected prototypical answer is whale. However, whales are actually mammals, not fish. In our hybrid system, the S1 component returns whale by resorting to prototypical knowledge. If further details were added to the input description, the answer would change accordingly: in this sense the categorization performed by S1 is non-monotonic. When C (the output of S1) is then checked against the ontology, as described in Algorithm 1, lines 7–13, and an inconsistency is detected,1 the consistency of the second result in C (whale-shark in this example) is tested against the ontology. Since this answer is an ontologically compliant categorization, this solution is returned by the S2 component. The final output of the categorization is then the pair ⟨whale, whale-shark⟩: the first element, prototypically relevant for the query, would not have been provided by querying a classical ontological representation. Moreover, if the ontology recorded the information that other fishes also eat plankton, the output of a classical ontological inference would have included them too, thereby resulting in too large a set of results w.r.t. the intended answer.
Experiment 2
1 This follows by observing that c0 = whale, cc = whale-shark; and whale ⊂ mammal, while whale-shark ⊂
fish; and mammal and fish are disjoint.
2 http://www.cyc.com/platform/opencyc.
WordNet, DBpedia, Wikicompany, etc.). Its coverage and depth were therefore its most attractive features (it contains about 230,000 concepts, 2,090,000 triples and 22,000 predicates). Differently from Experiment 1, we adopted OpenCyc to
use a knowledge base independent of our own representational commitments. This
was aimed at more effectively assessing the flexibility of the proposed system when
using general-purpose, well-known, existing resources, and not only domain-specific
ones.
A second dataset of 56 new “common-sense” linguistic descriptions was collected
with the same rationale considered for the first experiment.1
The obtained results are reported in Table 2.
Discussion
While the previous experiment explores the output of both the S1 and S2 components, the present one is aimed at assessing it with respect to existing state-of-the-art search technologies: the main outcome of this experiment is that the trends obtained in the preliminary experiment are confirmed in a broader and more demanding evaluation. Despite being less accurate than in the previous experiment, the hybrid knowledge-based S1-S2 system was able to categorize and retrieve most of the new typicality-based stimuli provided as input, and still showed a better performance w.r.t. the general-purpose search engines Google and Bing used in question-answering mode.
The major problems encountered in this experiment derive from the difficulty of mapping the linguistic structure of stimuli conveying very abstract meanings onto the representational framework of conceptual spaces. For example, it was impossible to map the information contained in the description “the place where kings, princes and princesses live in fairy tales” onto the features used to characterize the prototypical representation of the concept Castle. Similarly, the information extracted from the description “Giving something away for free to someone” could not be mapped onto the features associated with the concept Gift. On the other hand, the system shows good performance when dealing with less abstract descriptions based on perceptual features such as shape, color and size, and with some typical information such as function.
In this experiment, differently from the previous one (e.g., in the case of whale), S1 mostly provided an output coherent with the model in OpenCyc. This datum is of interest: although we postulate that the reasoning check performed by S2 is beneficial to ensure a refinement of the categorization process, in this experiment S2 did not yield any improvement over the output provided by S1, even when this output did not accord with the expected results. In fact, by analyzing the different answers in detail, we notice that at least one inconsistency should have been detected by S2. This is the case of the description “An intelligent grey fish”, associated to the target concept Dolphin. In this case, the S1 component returned the expected target, but S2 did not raise the inconsistency since OpenCyc
1 The full list of the second set of stimuli, containing the expected “prototypically correct” category is
available at the following URL: http://www.di.unito.it/~radicion/datasets/cs_2014/stimuli.txt.
4. Related work
The presented solution has some analogies with the approach that considers concepts as “semantic pointers” [8, 39], proposed in the field of the computational modeling of the brain. In such an approach, different informational components are supposed to be attached to a unifying concept identifier. The similarity with their approach is limited to the idea that concepts consist of different types of information. However, the mentioned authors specifically focus on the different modalities of the stimuli contributing to conceptual knowledge, and therefore they identify the different components of concepts according to the different information carriers used to provide the information. Their conceptual components are divided into sensory, motor, emotional and verbal stimuli, and for each type of carrier a mapping function to a brain area is supposed to be activated. On the other hand, our focus is on the type of conceptual information (e.g., classical vs. typical information): we do not consider the modality associated with the various sources of information (e.g., visual or verbal).1 Rather, we are concerned with the type of information combined in the hybrid conceptual architecture embedded in our S1-S2 computational system.
In the context of a different field of application, a solution similar to the one adopted here has been proposed in [7]. The main difference with respect to their proposal concerns the underlying assumption on which the integration between the symbolic and sub-symbolic systems is based. In our system the conceptual spaces and the classical component are integrated at the level of the representation of concepts, and such components are assumed to convey different, though complementary, conceptual information. On the other hand, the previous proposal is mainly used to interpret and ground raw data coming from sensors in a high-level symbolic system through the mediation of conceptual spaces.
In other respects, our system is also akin to those developed in the field of the computational approaches to the above mentioned dual process theories. A first example of such “dual-based systems” is the mReasoner model [23], developed with the aim of providing a computational architecture of reasoning based on the mental models theory proposed by Philip Johnson-Laird [20]. The mReasoner architecture is based on three components: a system 0, a system 1 and a system 2. The last two systems correspond to those hypothesized by the dual process approach. System 0 operates at the level of linguistic pre-processing: it parses the premises of an argument by using natural language processing techniques, and it then creates an initial intensional model of them. System 1 uses this intensional representation to build an extensional model, and uses heuristics to provide rapid reasoning conclusions; finally, system 2 carries out more demanding processes to search for alternative models, if the initial conclusion does not hold or if it is not satisfactory. A second system has been proposed by Larue et al. [24]. The authors adopt an extended version of the dual process approach, described in [37], based on the hypothesis that system 2 is divided into two further levels, respectively called “algorithmic” and “reflective”. The goal of Larue and colleagues is to build
1 In our view the classical vs. prototypical distinction is ‘a-modal’ per se: for example, both typical and classical conceptual information can be accessed and processed through different modalities (visual vs. auditory, etc.).
1 http://www.cogsci.rpi.edu/~rsun/clarion.html
Acknowledgment
This work has been partly supported by the Ateneo-San Paolo project number TO call03 2012 0046, The role of visual imagery in lexical processing (RVILP). The first author's work is also partially supported by the CNR F.A.C.I.L.E. project ICT.P08.003.001.
The authors kindly thank Leo Ghignone, for working on an earlier version of the system; Marcello Frixione, for discussions and advice on the theoretical aspects of this approach; the anonymous reviewers, whose valuable suggestions were helpful to improve the work; and Manuela Sanguinetti, for her comments on a previous version of the article. We also thank the attendees of the ConChaMo 4 Workshop,1 organized by the University of Helsinki, and the participants of the Spatial Colloquium Workshop organized by the Spatial Cognition Center of the University of Bremen2 for their comments and insights on initial versions of this work: in particular, we thank David Danks, Christian Freksa, Peter Gärdenfors, Ismo Koponen, and Paul Thagard.
We especially thank Leonardo Lesmo, beloved friend and colleague no longer with us, who strongly encouraged the present line of research.
References
[1] Benjamin Adams and Martin Raubal. A metric conceptual space algebra. In Kathleen Stewart Hornsby,
Christophe Claramunt, Michel Denis, and Gérard Ligozat, editors, COSIT, volume 5756 of Lecture
Notes in Computer Science, pages 51–68. Springer, 2009.
[2] Piero A Bonatti, Carsten Lutz, and Frank Wolter. Description logics with circumscription. In KR,
pages 400–410, 2006.
[3] Ronald J. Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kauf-
mann Pub, 1985.
[4] Ronald J. Brachman and Hector J. Levesque. Knowledge Representation and Reasoning. Elsevier,
2004.
[5] Ronald J. Brachman and James G. Schmolze. An overview of the KL-ONE knowledge representation
system. Cognitive Science, 9(2):171–202, April 1985.
[6] Silvia Calegari and Davide Ciucci. Fuzzy ontology, fuzzy description logics and fuzzy-owl. In Applica-
tions of Fuzzy Sets Theory, pages 118–126. Springer, 2007.
[7] A. Chella, M. Frixione, and S. Gaglio. A cognitive architecture for artificial vision. Artificial Intelli-
gence, 89(1–2):73–111, 1997.
[8] Chris Eliasmith, Terrence C Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang, and
Daniel Rasmussen. A large-scale model of the functioning brain. Science, 338(6111):1202–1205, 2012.
2 http://conceptnet5.media.mit.edu.
3 http://www.b2international.com/portal/snow-owl.
1 http://conceptualchange.it.helsinki.fi/workshops/conchamo4/.
2 http://www.sfbtr8.uni-bremen.de/en/home/.
[9] Jonathan St. B.T. Evans and Keith Frankish, editors. In two minds: Dual processes and beyond.
Oxford University Press, 2009.
[10] Marcello Frixione and Antonio Lieto. The computational representation of concepts in formal
ontologies-some general considerations. In KEOD, pages 396–403, 2010.
[11] Marcello Frixione and Antonio Lieto. Representing concepts in formal ontologies: Compositionality
vs. typicality effects. Logic and Logical Philosophy, 21(4):391–414, 2012.
[12] Marcello Frixione and Antonio Lieto. Representing Non Classical Concepts in Formal Ontologies:
Prototypes and Exemplars. In New Challenges in Distributed Information Filtering and Retrieval,
volume 439 of Studies in Computational Intelligence, pages 171–182, 2013.
[13] Marcello Frixione and Antonio Lieto. Towards an Extended Model of Conceptual Representations in
Formal Ontologies: A Typicality-Based Proposal. Journal of Universal Computer Science, 20(3):257–
276, March 2014.
[14] Marcello Frixione and Antonio Lieto. Formal Ontologies and Semantic Technologies: A Dual Process
Proposal for Concept Representation. Philosophia Scientiae, forthcoming.
[15] Peter Gärdenfors. Conceptual spaces: The geometry of thought. MIT press, 2000.
[16] Peter Gärdenfors. The Geometry of Meaning: Semantics Based on Conceptual Spaces. MIT Press,
2014.
[17] Laura Giordano, Valentina Gliozzi, Nicola Olivetti, and Gian Luca Pozzato. A non-monotonic descrip-
tion logic for reasoning about typicality. Artificial Intelligence, 195:165–202, 2013.
[18] Tom Gruber. Ontology. Encyclopedia of database systems, pages 1963–1965, 2009.
[19] Bernard J. Jansen, Danielle L. Booth, and Amanda Spink. Determining the informational, naviga-
tional, and transactional intent of web queries. Information Processing & Management, 44(3):1251–
1266, 2008.
[20] Philip N. Johnson-Laird. Mental models in cognitive science. Cognitive Science, 4(1):71–115, 1980.
[21] Daniel Jurafsky and James H. Martin. Speech and Language Processing: an Introduction to Natural
Language Processing, Computational Linguistics and Speech. Prentice Hall, 2000.
[22] Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.
[23] Sangeet Khemlani and PN Johnson-Laird. The processes of inference. Argument & Computation,
4(1):4–20, 2013.
[24] Othalia Larue, Pierre Poirier, and Roger Nkambou. A cognitive architecture based on cogni-
tive/neurological dual-system theories. In Brain Informatics, pages 288–299. Springer, Berlin, 2012.
[25] Leonardo Lesmo. The Rule-Based Parser of the NLP Group of the University of Torino. Intelligenza
Artificiale, 2(4):46–47, June 2007.
[26] Edouard Machery. Doing without concepts. Oxford University Press Oxford, 2009.
[27] Claudio Masolo, Stefano Borgo, Aldo Gangemi, Nicola Guarino, and Alessandro Oltramari. Wonder-
Web deliverable D18 ontology library (final). Technical report, IST Project 2001-33052 WonderWeb:
Ontology Infrastructure for the Semantic Web, 2003.
[28] George A Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41,
1995.
[29] Marvin Minsky. A framework for representing knowledge. In P. Winston, editor, The Psychology of
Computer Vision, pages 211–277. McGraw-Hill, New York, 1975.
[30] Gregory Leo Murphy. The big book of concepts. MIT press, Cambridge, Massachusetts, 2002.
[31] Daniele Nardi and Ronald J. Brachman. An introduction to description logics. In Description logic
handbook, pages 1–40, 2003.
[32] Giovanni Pilato, Agnese Augello, and Salvatore Gaglio. A modular system oriented to the design of
versatile knowledge bases for chatbots. ISRN Artificial Intelligence, 2012, 2012.
[33] Ross Quillian. Semantic Memory, pages 216–270. MIT Press, 1968.
[34] Eleanor Rosch. Cognitive representations of semantic categories. Journal of experimental psychology:
General, 104(3):192–233, 1975.
[35] Herbert A Simon and Allen Newell. Human problem solving: The state of the theory in 1970. American
Psychologist, 26(2):145, 1971.
[36] Eliot R. Smith and Nyla R. Branscombe. Category accessibility as implicit memory. Journal of Ex-
perimental Social Psychology, 24(6):490–504, 1988.
[37] Keith E Stanovich, Richard F West, et al. Individual differences in reasoning: Implications for the
rationality debate? Behavioral and brain sciences, 23(5):645–665, 2000.
[38] Umberto Straccia. Reasoning within fuzzy description logics. arXiv preprint arXiv:1106.0667, 2011.
[39] Paul Thagard and Scott Findlay. The cognitive science of science: Explanation, discovery, and con-
ceptual change. MIT Press, 2012.
[40] Ludwig Wittgenstein. Philosophische untersuchungen-Philosophical investigations. B. Blackwell, 1953.