
CHAPTER THREE

Introduction to the Semantic Web

The Semantic Web is a vision of an extension of the existing World Wide Web that
provides software programs with machine-interpretable metadata about published information
and data. In other words, we add further data descriptors to existing content and data
on the Web. As a result, computers can make meaningful interpretations, similar to the
way humans process information to achieve their goals.

The ultimate ambition of the Semantic Web, as its founder Tim Berners-Lee sees it, is to enable
computers to better manipulate information on our behalf. He further explains that, in the context
of the Semantic Web, the word “semantic” indicates machine-processable, or what a machine is
able to do with the data, whereas “web” conveys the idea of a navigable space of interconnected
objects with mappings from URIs to resources.

The Evolving Vision of the Semantic Web

The original vision of the Semantic Web comes under the umbrella of three things:
the automation of information retrieval, the Internet of Things and personal assistants.

With time, however, the concept evolved into two important types of data, which, taken together,
implement its vision today. These are Linked Open Data and Semantic Metadata.

The Semantic Web vision is thus structured into three main parts, i.e.:

1) The automation of information retrieval
2) The Internet of Things
3) Personal assistants

Linked Open Data: The Path Through the Labyrinth of Data

For the Semantic Web to function, computers must have access to structured collections of
information and sets of inference rules that they can use to conduct automated reasoning.

CSC212 – 2024/2025 (Issues in Computer), Dept of Computer, F.Sc UB, by S.E. MUKETE

Linked Open Data (LOD) is structured data modeled as a graph and published in a way that
allows interlinking across servers. This was formalized by Tim Berners-Lee in 2006 as the Four
rules of linked data:

1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information, using the standards (RDF*,
SPARQL).
4. Include links to other URIs, so that they can discover more things.
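As a toy illustration of these four rules, the sketch below models a tiny "web of data" in plain Python. All URIs and facts here are invented for the example, and the dictionary lookup stands in for a real HTTP dereference of an RDF description:

```python
# Toy illustration of the four Linked Data rules using plain Python.
# Rules 1 & 2: things are named with HTTP URIs that can be looked up.
DATA = {
    "http://example.org/city/Varna": [
        # Rule 3: looking up a URI yields useful, structured statements,
        # shaped as (subject, predicate, object) triples as in RDF.
        ("http://example.org/city/Varna", "rdf:type", "schema:City"),
        ("http://example.org/city/Varna", "schema:containedInPlace",
         "http://example.org/country/Bulgaria"),  # Rule 4: link to another URI
    ],
    "http://example.org/country/Bulgaria": [
        ("http://example.org/country/Bulgaria", "rdf:type", "schema:Country"),
    ],
}

def dereference(uri):
    """Simulate an HTTP lookup: return the statements describing `uri`."""
    return DATA.get(uri, [])

def discover(start_uri):
    """Rule 4 in action: follow object links to discover more things."""
    found = set()
    frontier = [start_uri]
    while frontier:
        uri = frontier.pop()
        if uri in found:
            continue
        found.add(uri)
        for _subj, _pred, obj in dereference(uri):
            if obj.startswith("http://"):  # only object links are navigable
                frontier.append(obj)
    return found

print(sorted(discover("http://example.org/city/Varna")))
```

Starting from the description of one city, the traversal discovers the linked country as well, which is exactly how LOD lets machines crawl from one dataset into another.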

LOD enables both people and machines to access data across different servers and interpret its
semantics more easily. As a result, the Semantic Web evolves from a space of linked documents
into a space of linked information, which in turn empowers the creation of a richly
interconnected network of machine-processable meaning.

Linked Open Data includes:

 Factual data about specific entities and concepts (e.g., Varna, WW2 or the Global
warming theory);
 Ontologies – semantic schemata defining:
o Classes of objects (e.g., Person, Organization, Location and Document);
o Relationship types (e.g., a parent of or a manufacturer of);
o Attributes (e.g., the DoB of a person or the population of a geographic region).
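The three kinds of ontology elements listed above can be sketched as plain (subject, predicate, object) triples. The `rdf:`, `rdfs:` and `owl:` prefixes are real vocabulary names, but the `ex:` terms (`ex:Person`, `ex:parentOf`, `ex:Anna`, etc.) are invented for illustration:

```python
# Minimal sketch of ontology statements as (subject, predicate, object) triples.
ontology = [
    # Classes of objects
    ("ex:Person",    "rdf:type",    "owl:Class"),
    ("ex:Location",  "rdf:type",    "owl:Class"),
    # A relationship type (an "object property") between two classes
    ("ex:parentOf",  "rdf:type",    "owl:ObjectProperty"),
    ("ex:parentOf",  "rdfs:domain", "ex:Person"),
    ("ex:parentOf",  "rdfs:range",  "ex:Person"),
    # An attribute (a "datatype property") carrying a literal value
    ("ex:birthDate", "rdf:type",    "owl:DatatypeProperty"),
    ("ex:birthDate", "rdfs:domain", "ex:Person"),
]

# Instance data described by that schema
facts = [
    ("ex:Anna", "rdf:type",     "ex:Person"),
    ("ex:Anna", "ex:parentOf",  "ex:Boris"),
    ("ex:Anna", "ex:birthDate", "1975-04-02"),
]

def properties_of(cls, schema):
    """List the properties whose rdfs:domain is the class `cls`."""
    return sorted(s for (s, p, o) in schema if p == "rdfs:domain" and o == cls)

print(properties_of("ex:Person", ontology))  # ['ex:birthDate', 'ex:parentOf']
```

Because the schema itself is just more triples, a program can query it the same way it queries the facts, which is one reason the triple model scales so well.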

Today, there are thousands of datasets published as LOD across different sectors, such as
encyclopedic knowledge, geographic data, government data, scientific databases and articles,
entertainment, travel, etc. In the Life Sciences alone, more than 100 scientific databases are
published as LOD.

Because of their linking, these datasets form a giant web of data, or a knowledge graph, which
connects a vast number of descriptions of entities and concepts of general importance. For
example, there are several descriptions of the city of Varna (e.g., one derived from Wikipedia,
another from GeoNames, etc.).

Semantic Metadata: Tagging the Existing Web

Semantic metadata amounts to semantic tags that are added to regular Web pages in order to
better describe their meaning. For instance, the home page of the Bulgarian Institute for
Oceanography can be semantically annotated with references to several appropriate concepts and
entities, e.g., Varna, Academic Institution and Oceanography.

Such metadata makes it much easier to find Web pages based on semantic criteria. It helps
resolve potential ambiguity, ensuring that when we search for Paris (the capital of France), we
do not get pages about Paris Hilton.
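A minimal sketch of how such disambiguation could work, assuming each page carries a semantic tag with an entity URI and a Schema.org-style type (all URLs and identifiers below are invented):

```python
# Sketch: how semantic tags disambiguate a keyword search.
# Page URLs, entity IDs, and types are made up for the example.
pages = [
    {"url": "http://example.org/page1", "keyword": "Paris",
     "entity": "http://example.org/id/Paris_France", "type": "schema:City"},
    {"url": "http://example.org/page2", "keyword": "Paris",
     "entity": "http://example.org/id/Paris_Hilton", "type": "schema:Person"},
]

def semantic_search(keyword, wanted_type):
    """Return only pages whose semantic tag matches the wanted entity type."""
    return [p["url"] for p in pages
            if p["keyword"] == keyword and p["type"] == wanted_type]

print(semantic_search("Paris", "schema:City"))  # only the page about the city
```

A keyword-only engine would return both pages; filtering on the tagged type returns just the one about the French capital.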

If we want a well-determined relationship between the subject of a Web page and the
corresponding page or document, it is best to use one of the structured-data metadata schemes.
Currently, the most popular such scheme is Schema.org, which was established by Google,
Yahoo, Microsoft and Yandex. According to a study by the University of Mannheim, in 2015
about 30% of Web pages contained semantic metadata.

Understanding The Semantic Web

The Semantic Web provides a common framework that allows data to be shared and reused
across application, enterprise, and community boundaries. It is a collaborative effort led by W3C
with participation from a large number of researchers and industrial partners.

Fundamental for the adoption of the Semantic Web vision was the development of a set of
standards established by the international standards body – the World Wide Web Consortium
(W3C):

 Resource Description Framework (RDF) – a simple language for describing objects
and their relations in a graph;
 SPARQL Protocol and RDF Query Language (SPARQL) – a protocol and query
language for RDF data;
 Uniform Resource Identifier (URI) – a string of characters designed for unambiguous
identification of resources and extensibility via the URI scheme.
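To give a flavour of what a SPARQL engine does, the sketch below mimics the evaluation of a single triple pattern over an in-memory set of triples in plain Python. The data and `ex:` names are invented, and real SPARQL supports far richer patterns (joins, filters, optional matches):

```python
# Minimal sketch of SPARQL-style triple-pattern matching in plain Python.
triples = [
    ("ex:Varna", "ex:locatedIn",  "ex:Bulgaria"),
    ("ex:Sofia", "ex:locatedIn",  "ex:Bulgaria"),
    ("ex:Varna", "ex:population", "330000"),
]

def match(pattern, data):
    """Match an (s, p, o) pattern where None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in data
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?city WHERE { ?city ex:locatedIn ex:Bulgaria }
cities = [t[0] for t in match((None, "ex:locatedIn", "ex:Bulgaria"), triples)]
print(cities)  # ['ex:Varna', 'ex:Sofia']
```

The `None` slots play the role of SPARQL variables such as `?city`; a triplestore like GraphDB evaluates the same kind of pattern, only against billions of indexed triples.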

The availability of such standards fostered the development of an ecosystem of different tools
from various providers, e.g.: database engines, such as GraphDB, that deal with RDF data
(known as triplestores), ontology editors, tagging tools that use text analysis to automatically
generate semantic metadata, semantic search engines and much more.

Knowledge Graphs: The Next Reincarnation of the Semantic Web

Although knowledge graphs came later, they quickly became a powerful driver for the adoption
of Semantic Web standards and the semantic technologies that implement them. Knowledge
graphs bring the Semantic Web paradigm to enterprises by introducing semantic metadata that
drives data management and content management to new levels of efficiency, breaking silos and
letting them synergize with various forms of knowledge management.

Enterprise knowledge graphs use ontologies to make explicit various conceptual models
(schemas, taxonomies, vocabularies, etc.) used across different systems in the enterprise. In
enterprise data-management parlance, knowledge graphs represent a premium sort of semantic
reference data: a collection of interlinked descriptions of entities – objects, events or concepts.

In this way, knowledge graphs help organizations smarten up proprietary information by using
global knowledge as context for interpretation and source for enrichment.

Other Details on How the Semantic Web works

The Semantic Web enhances computer capabilities by enabling machines to understand and
interpret the meaning of data, not just its format, making it more usable for tasks like information
retrieval, knowledge management, and automated reasoning. This is achieved through the use of
structured data formats like RDF and ontologies, which define relationships and meanings
between data elements.

Enhanced Computer Capabilities:

 Machine-Readable Data: The Semantic Web focuses on making data accessible and
understandable by computers, not just humans.
 Data Integration and Interoperability: It facilitates the linking of data from different
sources and applications, allowing for a more holistic view of information.
 Knowledge Management and Inference: By defining relationships and meanings, the
Semantic Web allows computers to make inferences and draw conclusions from data.
 Improved Information Retrieval: Semantic search engines can use the meaning of data
to find more relevant information than traditional search engines based on keywords.
 Automated Tasks and Workflows: The Semantic Web can be used to automate tasks
that involve understanding and processing information, such as data validation or
knowledge discovery.
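The inference capability above can be illustrated with a toy example: forward-chaining a single hand-written rule – treating an invented `ex:locatedIn` property as transitive – derives a fact that was never stated explicitly:

```python
# Toy forward-chaining inference: treat ex:locatedIn as transitive.
# All names and facts are invented for the example.
facts = {
    ("ex:Varna",    "ex:locatedIn", "ex:Bulgaria"),
    ("ex:Bulgaria", "ex:locatedIn", "ex:Europe"),
}

def infer_transitive(facts, pred):
    """Repeatedly apply (a p b) and (b p c) => (a p c) until no new facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (b2, p2, c) in list(derived):
                if p1 == pred and p2 == pred and b == b2:
                    new = (a, pred, c)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

closure = infer_transitive(facts, "ex:locatedIn")
print(("ex:Varna", "ex:locatedIn", "ex:Europe") in closure)  # True
```

No triple ever said that Varna is in Europe; the rule derived it from the two stated facts, which is the essence of automated reasoning over RDF data.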

How it Works:

 Structured Data Formats: RDF (Resource Description Framework) and OWL (Web
Ontology Language) are used to represent data in a structured and machine-readable
format.
 Ontologies: These define the concepts, relationships, and rules within a specific domain,
allowing computers to understand the meaning of data.
 Query Languages: SPARQL is a query language used to retrieve and manipulate data
stored in RDF format.
 Semantic Web Services: These are web services whose properties and behavior are
defined in a machine-interpretable language.
Benefits:

 Improved efficiency and accuracy in data processing and analysis.
 Facilitates knowledge sharing and collaboration.
 Enables the development of more intelligent applications and systems.
 Supports the creation of new applications and services based on Semantic Web
technologies.

In essence, the Semantic Web transforms the web from a collection of documents into a vast
network of linked data, enabling computers to understand and interact with information in a
more meaningful way.

In Conclusion

The Semantic Web is the web of connections between different forms of data that allow a
machine to do things it was not able to do before.

Thanks to their capacity to boost the generation, integration and understanding of data, the
Semantic Web concepts were rapidly adopted in data and information management. Today,
multiple organizations use Linked Data as a mechanism to publish master data internally. The
Semantic Web standards are widely used in the development of knowledge graphs in different
domains: government (for instance Legislation.gov.uk), media (BBC was the pioneer), science
(both Elsevier and Springer Nature use GraphDB), financial services, etc.
