
The Semantic Web: Web of Data
Origin of Internet
Current Web
 500 million users
 more than 3 billion pages
WWW: static; URI, HTML, HTTP
World Wide Web
(WWW)
 An information system on the Internet
which allows documents to be connected
to other documents by hypertext links,
enabling the user to search for information
by moving from one document to another.
 In 1989, Sir Tim Berners-Lee invented the
World Wide Web. Then, he gave it to the
world for free. Now, it’s up to all of us to
protect and enhance it.
Tim Berners-Lee: Father of the Web
CONTRIBUTIONS:
 HTML (HyperText Markup Language): the markup (formatting) language for the web.
 URI (Uniform Resource Identifier): a kind of “address” that is unique and used to identify each resource on the web. It is also commonly called a URL.
 HTTP (Hypertext Transfer Protocol): allows for the retrieval of linked resources from across the web.
 Tim also wrote the first web page editor/browser (“WorldWideWeb.app”) and the first web server (“httpd”). By the end of 1990, the first web page was served on the open internet, and in 1991, people outside of CERN were invited to join this new web community.
 Originator of the Semantic Web.

5
Web Versions
 Web 1.0 was the “read-only era” of static websites, where information flow was one-way and information was simply presented to users by producers.
 Web 2.0, a term coined in the early 2000s, still holds good today. The present Web 2.0 is the “read-write-publish” era of interactive websites, with Twitter, Facebook, etc. as prominent examples.
 Presently, the WWW (World Wide Web) has grown into the largest repository of information, leading to an Information Technology (IT) revolution.
 The Semantic Web is the extension of the present Web 2.0 to Web 3.0, which would enable machines to understand data and work on behalf of humans for more efficient search results. The web was originally designed to be processed by humans, not by machines.

6
Today’s Web
 Currently, most Web content suits human needs and is usable by humans only.
 Typical uses of the Web today include information seeking, publishing and using content, searching for people and products, shopping, reviewing catalogues, etc.
 Dynamic pages are generated from information in databases, but without the original information structure found in those databases.
The Syntactic Web

[Hendler & Miller 02]

8
i.e. the Syntactic Web is…

 A place where
 computers do the presentation (easy) and
 people do the linking and interpreting (hard).
 Why not get computers to do more of the hard work?

[Goble, 03]
9
What is the Problem?
 Consider a typical web page. Its markup consists of:
 rendering information (e.g., font size and colour)
 hyperlinks to related content
 The semantic content is accessible to humans but not (easily) to computers…

[Davies, 03]
10
…Limitations of Today’s Web
 The present information search on the web with the aid of search engines (Google, Yahoo, etc.) is mostly keyword-based and understands neither the information nor its structure.
 There is a lack of machine intelligence, web intelligence and intelligent machine-to-machine interaction.
Web: Very Vast and Decentralized

12
Simply because we are now connecting almost everything we can: smart TVs, microwaves, cameras, plugs, and more.
Web 2.0 and Web 3.0

14
TOWARDS SEMANTIC WEB: Web Limitations
 The Web doubles in size every six months.
 Average WWW searches examine only about 25% of potentially relevant sites and return a lot of unwanted information.
 Information on the web is not suitable for software agents.
 The Semantic Web is a vision: the idea of having data on the Web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration and reuse of data across various applications.
4
The Semantic Web: Web of Data
Extension of Web 2.0 to Web 3.0

Present Web 2.0 + Ontology + Intelligent/Semantic Services → Future Web 3.0: the Intelligent Web (Semantic Web), with semantics incorporated; a smarter, machine-understandable Web.
Transition: WWW to Semantic Web
Serious problems in information:
• finding
• extracting
• representing
• interpreting
• and maintaining

WWW: static; URI, HTML, HTTP  →  Semantic Web: RDF, RDF(S), OWL
From the World Wide Web to the Web of Data
Third Generation: The Web of Data
Data-centred processing
Semantic Web Definitions:
 Tim Berners-Lee, James Hendler, and Ora Lassila definition: The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in co-operation. [Berners-Lee et al., 2001]
 W3C and Tim Berners-Lee definition: The Semantic Web is a vision: the idea of having data on the web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration, and reuse of data across various applications.
 Tim Berners-Lee definition: The Semantic Web will bring structure to the meaningful content of web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users [Shadbolt et al., 2006].
Why Semantic Web
 To date, web search is typically based on keyword matching.
 The Semantic Web is proposed to support more involved questions, relationships and trust.
 Instead of word matching, the web will be able to show related items, revealing new relationships.
 For example: how does the weather affect the stock market? Crime? The birth rate?
Vision and Goal
 Tim Berners-Lee’s Vision of the Semantic Web [1999]:
“I have a dream for the Web [in which computers] become
capable of analyzing all the data on the Web – the content,
links, and transactions between people and computers. A
‘Semantic Web’, which should make this possible, has yet to
emerge, but when it does, the day-to-day mechanisms of
trade, bureaucracy and our daily lives will be handled by
machines talking to machines. The ‘intelligent agents’
people have touted for ages will finally materialize.”
 The goal of the Semantic Web is to be “a web talking to machines”, i.e. one in which machines can provide better help to people, who can then take advantage of the content of the web easily and effectively.

23
The Semantic Web: Web of
Data
Semantic Web enables connecting
new information with data and
knowledge stored in various places
and querying such linked data as one
distributed database
The Semantic Web of
Things:
Semantic Web: Resource Integration

[Diagram: Web resources / services / DBs / etc. are linked via semantic annotation to a shared ontology]

27
Semantic Web: which resources to annotate?
This is just a small part of the Semantic Web concern!
[Diagram: candidate resources for semantic annotation include Web resources / services / DBs, technological and business processes, external world resources, multimedia resources, Web users (profiles, preferences), smart Web agents, machines / devices / homes, Web access devices and communication networks, and applications / software components]
Architecture (Research Concerns)
Various layers of Sir Tim Berners-Lee’s architecture are research concerns worldwide.
 Sir Tim Berners-Lee proposed the layered architecture model of the Semantic Web as a basis for research. It is the foundation, comprising layer components such as Ontology (OWL), SPARQL, RDF and others.
 HTML describes documents and the links between them, whereas the Semantic Web goes a step further, using technologies such as RDF, OWL, URI and XML to provide descriptions that supplement or replace the content of Web documents for automated information discovery and gathering. The layered architecture for the Semantic Web proposed by Sir Tim Berners-Lee is shown below.
Layered architecture for the Semantic Web, proposed by Sir Tim Berners-Lee
Significant Concerns

31
Components
 URI: A Uniform Resource Identifier (URI) is a string of a standardized form that allows a resource to be uniquely identified.
 XML: The Extensible Markup Language (XML) layer, with XML namespaces and XML Schema definitions, is a general-purpose markup language for documents containing structured information.
 RDF: RDF is the core data representation format for the Semantic Web. RDF is a simple metadata representation framework, using URIs to identify web-based resources and a graph model for describing relationships between resources. It creates statements in the form of triples, i.e. subject-predicate-object.
 RDF SCHEMA: RDFS can be used to describe taxonomies of classes and properties and use them to create lightweight ontologies.
 ONTOLOGIES: Ontologies provide the building blocks for expressing semantics in a well-defined manner. An ontology is a formal conceptualization of a domain that is usable by a computer. Detailed ontologies can be created with the Web Ontology Language (OWL).
 SPARQL: For querying RDF data, RDFS and OWL ontologies with knowledge bases, the SPARQL Protocol and RDF Query Language (SPARQL) is available.
 RIF: The Rule Interchange Format is a standard for exchanging rules among Web rule systems.
 LOGIC LAYER: This layer builds on first-order predicate logic, so that information on the web can be interpreted accurately.
 PROOF: In this layer, the ultimate goal of the Semantic Web is to create much smarter content that can be understood by machines.
 TRUST: In this layer, the trustworthiness of information should be subjectively evaluated by each information consumer.
 DIGITAL SIGNATURE (Crypto): Helps to validate the integrity of metadata.
Semantic Web
 Three major components of the Semantic Web:
 RDF / XML
 Ontology (OWL)
 SPARQL
XML
• XML lets us create our own tags.
• These tags can be used by the script
programs in sophisticated ways to perform
various tasks, but the script writer has to
know why the page writer has used each
tag.
• In short, XML allows you to add arbitrary
structure to the documents but says
nothing about what the structure means.
• It has no built-in mechanism to tell the
meaning of a user’s new tags to other
users.
RDF
 A standard of the W3C
 Defines relationships between documents
 Consists of triples, or sentences:
 <subject, property, object>
 <“Krishna”, composed, “The Magic Flute”>
 RDFS extends RDF with a standard “ontology vocabulary”:
 Class, Property
 Type, subClassOf
 domain, range
RDF (cont.)
 Resource Description Framework (RDF) is the data model of the Semantic Web.
 This means that data in Semantic Web tools is denoted by RDF.
 RDF describes web resources in the form of subject-predicate-object (s-p-o) expressions.
 For example, the statement “The sky has the colour blue” is represented in RDF as a triple: a subject denoting “the sky”, a predicate denoting “has the colour”, and an object denoting “blue”.
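
A minimal sketch of this triple using the Jena RDF API in Java; the example.org URIs for the subject and predicate are invented purely for illustration:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class SkyTriple {
    public static void main(String[] args) {
        // Create an empty in-memory RDF model
        Model model = ModelFactory.createDefaultModel();

        // Hypothetical URIs for the subject and the predicate
        Resource sky = model.createResource("http://example.org/sky");
        Property hasColour = model.createProperty("http://example.org/hasColour");

        // s-p-o: <the sky> <has the colour> "blue"
        sky.addProperty(hasColour, "blue");

        // Serialize the one-triple model as N-Triples
        model.write(System.out, "N-TRIPLES");
    }
}

Running the sketch prints a single subject-predicate-object line, which is exactly the triple described above.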
RDF Tools
SPARQL
 SPARQL (SPARQL Protocol and RDF Query Language), a W3C recommendation, is a pattern-matching query language.
 SPARQL is used to retrieve and manipulate data stored in the Resource Description Framework (RDF).
 It provides a mechanism to express constraints and facts, and the entities matching those constraints are returned to the user.
 SPARQL 1.0 is the first version of SPARQL; SPARQL 1.1 adds further features.
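
As a small illustration, the triple from the earlier RDF sketch can be queried with Jena's ARQ engine; the graph pattern binds variables for anything carrying the hypothetical hasColour property:

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SkyQuery {
    public static void main(String[] args) {
        // Build the same one-triple model used in the RDF example
        Model model = ModelFactory.createDefaultModel();
        model.createResource("http://example.org/sky")
             .addProperty(model.createProperty("http://example.org/hasColour"), "blue");

        // Graph pattern: find every subject and the colour it has
        String queryString =
            "SELECT ?thing ?colour " +
            "WHERE { ?thing <http://example.org/hasColour> ?colour }";

        try (QueryExecution qe = QueryExecutionFactory.create(queryString, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("thing") + " has colour " + row.get("colour"));
            }
        }
    }
}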
SPARQL, RDF and Ontology

44
SPARQL, RDF and Ontology (contd.)
Key steps are as follows (a minimal sketch of steps 3-5 appears below the list):
1) Create the OWL Ontology using an Ontology editor such as Protégé or SWOOP.
2) Export the Ontology as RDF using Jena APIs (e.g. the Model API).
3) Import the Ontology into a triplestore.
4) Import data into the triplestore.
5) Query the data with SPARQL.

45
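A hedged sketch of steps 3-5 with Apache Jena's RDFConnection, assuming a local Fuseki server with a dataset named /ds; the ontology and data file names used here are placeholders:

import org.apache.jena.query.QueryFactory;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class TriplestoreSteps {
    public static void main(String[] args) {
        // Hypothetical local Fuseki endpoint
        try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/ds")) {
            conn.load("university-ontology.owl");   // step 3: import the Ontology
            conn.load("university-data.ttl");       // step 4: import the data
            // Step 5: query the data with SPARQL
            conn.querySelect(
                QueryFactory.create("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"),
                row -> System.out.println(row.get("s") + " " + row.get("p") + " " + row.get("o")));
        }
    }
}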
SQL and SPARQL Comparison

S.No | SQL | SPARQL
1 | SQL is based on tuple relational calculus. | SPARQL is based on triple relational calculus.
2 | SQL is designed to query relational data. | SPARQL is designed to query RDF data.
3 | SQL uses data types such as char, varchar, number, long, etc. | SPARQL uses terms such as subject, predicate, object, URI, literal, etc.
4 | In SQL, data is accessed from tables. | In SPARQL, data is accessed from RDF data files/stores.
5 | The relational data model stores data in structured (tabular) form. | RDF data is stored in unstructured (graph) form.
6 | Syntax: SELECT <column_list> FROM <table_list> WHERE <condition> | Syntax: SELECT <variable_list> WHERE { <graph_pattern> }
Some SPARQL Tools
 Jena Fuseki server (Jena toolkit)
 ARQ (Jena toolkit)
 Twinkle
 DBpedia (Virtuoso SPARQL endpoint)
Ontology
 According to Gruber’s definition, “an Ontology is an explicit specification of a conceptualization”, where explicit means that it cannot be implicitly assumed and should be processable by machines. The knowledge in Ontologies can be formalized using certain key components: classes or concepts, relations, instances, and formal axioms [Gruber, 1995].

[Diagram: Ontology key components - classes or concepts, relations, instances, formal axioms]

• In general, an ontology formally describes a domain of discourse.
• An ontology consists of a finite list of terms and the relationships between the terms.
• The terms denote important classes and objects of the domain.
• An Ontology enables a sharable conceptualization of a specific domain of interest in a machine-understandable format, and thus acts as a backbone for incorporating the required semantics.
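
As an illustration, a tiny Ontology with two classes, one relation and two instances can be built with Jena's ontology API; the namespace and all names below are invented for the example, not taken from any standard vocabulary:

import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class TinyOntology {
    static final String NS = "http://example.org/university#"; // hypothetical namespace

    public static void main(String[] args) {
        OntModel ont = ModelFactory.createOntologyModel();

        // Classes (concepts) and a relation between them
        OntClass person = ont.createClass(NS + "Person");
        OntClass course = ont.createClass(NS + "Course");
        ObjectProperty teaches = ont.createObjectProperty(NS + "teaches");
        teaches.addDomain(person);
        teaches.addRange(course);

        // Instances (individuals) and a statement relating them
        Individual alice = ont.createIndividual(NS + "Alice", person);
        Individual sw = ont.createIndividual(NS + "SemanticWeb101", course);
        alice.addProperty(teaches, sw);

        // Serialize the Ontology as RDF/XML
        ont.write(System.out, "RDF/XML");
    }
}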
Ontology (Cont.)
 In the context of the Web, ontologies
provide a shared understanding of a
domain.
 Such a shared understanding is necessary
to overcome the difference in terminology.
 Ontologies are useful for improving
accuracy of Web searches.
 Web searches can exploit generalization
/specialization information.
Why Ontology ?
 To define web resources more precisely and make them more amenable to machine processing
 To make domain assumptions explicit
 Easier to change domain assumptions
 Easier to understand and update legacy data
 To separate domain knowledge from operational knowledge
 Re-use domain and operational knowledge separately
 A community reference for applications
 To share a consistent understanding of what information means
Ontology Issues
Ontology Issue | Refers to
Creation | Designing and developing an Ontology from scratch, or appending to an existing Ontology
Merging | Merging different Ontologies of the same type, about the same subject, into a single one that unifies all of them
Integration | Building a new Ontology by reusing other available Ontologies
Deployment & Implementation | Use in real-life applications in different domains, such as medicine
Maintenance | Efficient management of existing Ontologies
Tools/Methodologies | Using different tools such as Protégé and SWOOP, and methodologies that support the Ontology-building process (e.g. On-To-Knowledge, MethOntology, Uschold and King’s method, the Cyc method), depending on the complexity and characteristics of the Ontology
Selection | Choosing an appropriate Ontology that satisfies user needs, considering Ontology popularity, richness of semantic data and topic coverage
Validation/Evaluation | Various ways of validating and evaluating an Ontology in terms of appropriateness, usability and efficiency, i.e. measuring the quality of an Ontology. Metrics may be used for Ontology evaluation. The choice of approach may depend on the purpose of evaluation, the application in which the Ontology is to be used, and on which aspect of the Ontology is being evaluated. Various Ontology evaluation tools are available, such as OntoAnalyser, OntoGenerator and ONE-T. Ontology versioning and evolution refer to the process of managing Ontology change and its effects by creating and maintaining different variants of the Ontology
Import/Export | Using one Ontology in, or from, another
Mapping/Matching/Alignment | Mapping entities of different Ontologies, matching them, and aligning them if required
Ontology Engineering | All of the activities involved in the design, development and maintenance of an Ontology, i.e. the different methodologies, tools, languages and other processes that constitute Ontology building [1004]; it involves requirement (domain) analysis, conceptualization and implementation
Ontology Modelling | Models for constructing Ontologies: verbal, logic-based, structural, hybrid, etc.
Ontology Comparison & Ranking | Different parameters for Ontology comparison and ranking
Query Execution | Executing SPARQL queries over OWL code for output
Debugging | Finding anomalies/inconsistencies in the definitions of the concepts

Ontology Construction Steps

Step | Ontology Construction Steps/Activity
1 | Define the domain and scope of the Ontology: What will the Ontology cover? How will it be used? Who will use it?
2 | Evaluate it: can it be extended or reused?
3 | Define a taxonomy to list all terms for overlapping concepts
4 | Define properties for the internal structure of concepts
5 | Define facets, where properties add cardinality, values and characteristics
6 | Define slot cardinality: the number of values a slot has
7 | Define instances of a class, which requires choosing a class, creating an individual instance of that class and filling in the slot values
8 | Check for anomalies such as incompatible domain and range definitions


Ontology Construction: Problems/Tasks
 Extending an existing Ontology: using some text.
 Learning relations between concepts for an existing Ontology.
 Ontology construction based on clustering: split each text document into sentences, parse the text and apply clustering.
 Ontology construction based on semantic graphs: parse the text documents, perform coreference resolution, anaphora resolution and extraction of subject-predicate-object triples, and construct semantic graphs.
 Ontology construction from a collection of news, based on named entities.
Ontology Editing and
Development Tools
Ontology Editing and
Development Tools (cont..)
Ontology design using Protégé
Protégé (protege.stanford.edu) is one of the most widely used Ontology development tools: a free, open-source platform that provides a growing user community with a suite of tools to construct domain models and knowledge-based applications. It is an Ontology editor which we can use to define classes and class hierarchies, slots and slot-value restrictions, and relationships between classes.
58
Jorge Cardoso Survey

Ontology Editor | Market Share/Usage | Inference
Protégé | 68.20% | Most widely used
SWOOP | 13.60% | Second most widely used

Ontology Language | Users | Inference
OWL | 75.90% | Most widely used
RDFS | 64.90% | Second most widely used

Domain | Ontology Development | Inference
Education | 31.00% | Most widely used
Software | 28.00% | Second most widely used

Social Networks
 Social Networks and Ontologies together have become key factors in enabling machines to understand data and work on behalf of humans for more efficient search results.
 A Social Network is a set of people (a social structure, organizations or other social entities) connected by a set of social relationships, such as friendship. Social Network Analysis focuses on patterns of relations among people, organizations, states, etc., which play a significant role in today’s scenario: the study of these social structures uses social network analysis to identify local and global patterns, locate influential entities and examine network dynamics.
 The Semantic Web and Social Network models support one another and together play a key role in network analysis and visualization for identity extraction and inference towards an intelligent web.
 Together they play a key role in a machine-understandable Web and in Social Network Analysis. To understand how information and concepts are spread on the Web, and how to establish their provenance and trustworthiness, both the Semantic Web and Social Network Analysis are crucial.
 VISONE (Visual Social Networks), available in Java for Windows, Linux and Mac OS (http://www.visone.info), is an important tool in Social Network Analysis and is used for the analysis and visualization of Social Networks.
Visone tool for Analysis and
Visualization

64
 Observation 1: Node 1 and Node 4 have the highest degree centrality (18.182)
 Observation 2: Node 1 has the highest betweenness centrality (33.333)
 Observation 3: Node 1 has the highest closeness centrality (13.811)
 Observation 4: Node 1 has the highest eigenvector centrality (18.964)
 Observation 5: Node 4 has the highest PageRank centrality (17.681)
 Observation 6: Nodes 1, 3 and 5 have the highest eccentricity centrality (12.821)
 Results: This analysis may be used in various applications to identify the most prominent node involved.
Type of Analysis Performed | Figure No. | Highest Value | Inference / Net Analysis
Degree Centrality | Figure 12 | 18.182 | Nodes 1, 4: important
Betweenness Centrality | Figure 13 | 33.333 | Node 1: important
Closeness Centrality | Figure 14 | 13.811 | Node 1: important
Eigenvector Centrality | Figure 15 | 18.964 | Node 1: important
PageRank Centrality | Figure 16 | 17.681 | Node 4: important
Eccentricity Centrality | Figure 17 | 12.821 | Nodes 1, 3, 5: important

 Net result: Node 1 is the most important, acting as a bridge in the network, with Node 4 as the next most important node.
Semantic Annotation
 Semantic Annotation (SA) is a technique for extracting information semantically. When used with semantic search patterns, it endows the web with intelligence by assigning to the entities in a text links to their semantic descriptions, and thus makes it possible to add semantics to unstructured and semi-structured documents on the web. It is a type of information extraction technique that ties semantic models and natural language together towards the dynamic creation of interrelationships between Ontologies and documents.
 Tools used for annotation include KIM, Piggy Bank and Kiwi.
Information Extraction and Retrieval
 In the present vast web of information chaos, there is a need to recognize and extract relevant fragments of information. Extracting information from websites is typically handled by specialized extraction rules, called wrappers, which extract information from unseen collections of pages. The information exists only in natural-language form.
 Information extraction and retrieval in an intelligent way is one of the key foundations of the Semantic Web. Information Extraction is a method for filtering information from large volumes of text.
 To extract useful information, it should be distilled into a more structured form. Extracting useful or meaningful information from scientific journals, legal decisions, hospital reports, etc. is important and should save time while providing accurate results. It aims to process natural-language text and to retrieve occurrences of a particular class of objects or events and occurrences of relationships among them.
Information Retrieval
 The need is to find documents (sometimes also referred to as text parsers) based on user requirements that attempt to analyze text and extract its semantic content; the data can be understood if represented as Ontologies, and therefore information extraction techniques may be used for automatic Ontology creation and population.
 This filtering and refinement of information from the huge quantity of content on the network is called “Information Extraction”, in which the goal is to automatically obtain exactly what the user requires from natural-language text.
 Information Retrieval refers to keyword-based search, i.e. the process of finding online documents that match the user’s requirements, from which the user extracts the data needed. For example: web search engines, library catalogs, store catalogs, cookbook indexes, etc.
 Information Extraction is a subfield of information retrieval and is the process of extracting structured data from unstructured sources; the filtering of meaningful information from unstructured sources is a difficult and challenging task.

75
Software Agents
Software Agents can
 collect web content from diverse sources,
 process that information and exchange the results with other programs (agents),
 exchange “proofs” written in the Semantic Web’s unifying language (e.g. to verify Cook’s place).
76
Web Usage Mining
 Web Usage Mining has been defined as the application of data mining techniques to discover usage patterns from Web data in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases: preprocessing, pattern discovery and pattern analysis. The structure of the information should be good enough to allow knowledge to be extracted from log files. Web Usage Mining may be applied to data such as that contained in log files. A log file contains information related to the user queries on a website. Web usage mining may be used to improve the website structure or to give recommendations to visitors; a minimal sketch appears below.

77
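A minimal preprocessing and pattern-discovery sketch in Java, assuming a simplified log format of one “user<TAB>url” entry per line (a real server log, e.g. the Apache combined format, would need fuller parsing):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class UsageMiningSketch {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> hits = new HashMap<>();

        // Preprocessing: read the (hypothetical) access log, one "user<TAB>url" entry per line
        for (String line : Files.readAllLines(Path.of("access.log"))) {
            String[] fields = line.split("\t");
            if (fields.length < 2) continue;          // skip malformed entries
            hits.merge(fields[1], 1, Integer::sum);   // count visits per URL
        }

        // Simple pattern discovery: report the ten most frequently visited pages
        hits.entrySet().stream()
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .limit(10)
            .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue() + " visits"));
    }
}

Pattern analysis, the third phase, would then interpret such counts (or richer patterns such as frequent page sequences) to improve site structure or drive recommendations.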
Web Usage mining

78
Jena Framework
 Jena was originally developed by researchers
in HP Labs, starting in Bristol, UK, in 2000.
 Jena is a Java framework for building Semantic
Web applications. (Semantic Web Toolkit)
 It provides extensive Java libraries that help developers write code for handling RDF, RDFS, RDFa, OWL and SPARQL.
 Jena includes a rule-based inference engine to
perform reasoning based on OWL and RDFS
ontologies, and a variety of storage strategies
to store RDF triples in memory or on disk.
Jena Framework
 The Jena Framework includes:
 an RDF API,
 reading and writing RDF in RDF/XML, N3 and N-Triples,
 an OWL API,
 in-memory and persistent storage,
 query tools (RDQL, a query language for RDF, and the Fuseki server for SPARQL).
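
A small end-to-end sketch of the pieces listed above (reading RDF/XML, RDFS inference and writing N-Triples), assuming an input file named data.rdf exists and using invented example.org URIs for the schema:

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDFS;

public class JenaTour {
    public static void main(String[] args) {
        // RDF API: read an RDF/XML file into an in-memory model
        Model data = ModelFactory.createDefaultModel();
        data.read("data.rdf");

        // A tiny RDFS schema: :Student is a subclass of :Person (hypothetical URIs)
        Model schema = ModelFactory.createDefaultModel();
        schema.add(schema.createResource("http://example.org/Student"),
                   RDFS.subClassOf,
                   schema.createResource("http://example.org/Person"));

        // Rule-based inference: entailments such as "every Student is also a Person" become queryable
        InfModel inf = ModelFactory.createRDFSModel(schema, data);

        // Write the inferred model out as N-Triples
        inf.write(System.out, "N-TRIPLES");
    }
}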
Semantic Web and AI?
 No human-level intelligence claims
 As with today’s WWW
 large, inconsistent, distributed
 Requirements
 scalable, robust, decentralised
 tolerant, mediated
 Semantic Web will make extensive use of current AI,
 any advancement in AI will lead to a better Semantic Web
 Current AI is already sufficient to go towards realizing the
semantic web vision
 As with WWW, Semantic Web will (need to) adapt
fast

81
Semantic Web & Knowledge
Management
 Organising knowledge in conceptual
spaces according to its meaning.
 Enabling automated tools to check
for inconsistencies and extracting
new knowledge.
 Replacing query-based search with
query answering.
 Defining who may view certain parts
of information

82
Research Aspects toward
Semantic Web
 Emerging technology: the W3C working groups are doing research in various domains for implementing the Semantic Web.
 Many researchers in other countries are working in the field of the Semantic Web, but in India the proportion of such researchers is still very small.
 There is a lot of scope for research in handling Semantic Web data (managing existing data, interoperability with other data formats, handling large data sets, and many more).
Research Aspects toward
Semantic Web (Cont.)
 Lots of research scope in data retrieval strategies along with optimization.
 How to combine AI concepts with Semantic Web data in searching.
 Integration of existing Ontologies of various domains.
 Semantic Web services and web usage mining.
 Semantic Web query processing and optimization.
 Semantic Web and Linked Open Data.
Semantic Web
Major Communities
 RDF
 SPARQL
 Ontology
Semantic Web Journals
 Journal of Web Semantics (ELSEVIER)
 Semantic Web Journal (SWJ), IOS Press
 Journal on Data Semantics
(SPRINGER)
 International Journal on Semantic
Web and Information Systems
(IJSWIS) –IGI Global
 International Journal of Metadata,
Semantics and Ontologies (INDERSCIENCE)
Semantic Web Journals
(Cont..)
 Open Journal of Semantic Web
(OJSW)
 International Journal of Web &
Semantic Technology (IJWesT)
Semantic Web Books
 John Hebeler, Matthew Fisher, Ryan Blace and Andrew Perez-Lopez, Wiley, 2009.
 Dean Allemang and James Hendler, Elsevier / Morgan Kaufmann.
 Bob DuCharme, O’Reilly, 2013.
 Toby Segaran, Colin Evans and Jamie Taylor, O’Reilly.
Additional Reading
 Johan Hjelm, “Creating the Semantic Web with RDF”, John Wiley, 2001
 Dieter Fensel, “Ontologies: A Silver Bullet for Knowledge Management and Electronic Commerce”, Springer Verlag, 2001
 John Davies, Dieter Fensel and Frank van Harmelen, “Towards the Semantic Web: Ontology-Driven Knowledge Management”, John Wiley, 2002
 Dieter Fensel, Wolfgang Wahlster, Henry Lieberman and James Hendler (Eds.), “Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential”, MIT Press, 2002
 Thomas B. Passin, “Explorer’s Guide to the Semantic Web”, ISBN 1932394206, June 2004
 Michael C. Daconta, Leo J. Obrst and Kevin T. Smith, “The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management”, John Wiley, 2003
 Jeff Pollock and Ralph Hodgson, “Adaptive Information: Improving Business Through Semantic Interoperability, Grid Computing, and Enterprise Integration”, Wiley Computer Publishing, September 2004
 M. Klein and B. Omelayenko (Eds.), “Knowledge Transformation for the Semantic Web”, Vol. 95, Frontiers in Artificial Intelligence and Applications, IOS Press, 2003
89
Major website (Suggested
Readings)
 W3C Semantic Web: https://www.w3.org/2001/sw/
 The Semantic Web Community Portal: http://www.semanticweb.org
 Apache Jena Framework: http://jena.sourceforge.net/
Some interesting links...
91
 http://semanticweb.com/
 http://patterns.dataincubator.org/book/
 http://www.w3.org/standards/semanticweb/
 http://spinrdf.org
 Wikipedia
 http://semanticweb.com/breaking-into-the-nosql-conversation_b27146
 http://gigaom.com/2012/03/11/is-big-data-new-or-have-we-forgotten-its-old-heroes/
 http://www.snee.com/bobdc.blog/2012/10/sparql-and-big-data-and-nosql.html
 http://dret.net/netdret/docs/soa-rest-www2009/rest
 http://www.mkbergman.com/
 http://www.cambridgesemantics.com/semantic-university

...and some books
92
 David Wood, Linked Data, Manning
 Bob DuCharme, Learning SPARQL, O’Reilly
 Toby Segaran, Programming the Semantic Web,
O’Reilly
 John Hebeler, Semantic Web Programming, Wiley
 David Siegel, Pull: The Power of the Semantic Web to Transform Your Business, Portfolio


Challenge and Issue for Research
 Making the Semantic Web a reality is still abstract and is a big challenge; researchers worldwide are working on its various issues. The foundation of the research concerns towards the realization of the Semantic Web lies in its layered architectures, as proposed by Sir Tim Berners-Lee and a few others, towards achieving an intelligent web.

93
