INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY & MANAGEMENT INFORMATION SYSTEM (IJITMIS)
ISSN 0976 – 6405 (Print), ISSN 0976 – 6413 (Online), Volume 4, Issue 3, September - December (2013), © IAEME
ABSTRACT
In this paper, we classify the different link patterns that have emerged during the evolution of the World
Wide Web and the methods used to support data modeling and navigation. We identify six
patterns based on our observation and analysis and examine the core technology supporting
them (especially the link mechanism) and how data is structured within them. First, we
review the document web paradigm known popularly as Web 1.0, which has primarily
focused on linking documents. We overview the common algorithms used to link people to
documents and service providers. Then we review Web 2.0, which has primarily focused on
linking Web Services (known as mashups) and people (known as the Social Web). Finally,
we review two more recent patterns: one for linking objects to create a global object web and
one for linking data to create a global data space. As part of our review, we identify some of
the challenges and opportunities presented by each pattern.
Keywords: Big data, classic web, global document space, graph databases, global object
space, global data space, Linked Data, NoSQL, semantic web, social web, relational
databases, Web 1.0, Web 2.0, Web 3.0.
1. INTRODUCTION
The World Wide Web (WWW) has enjoyed phenomenal growth and has received
wide global adoption without regard to factors such as the age, ethnicity and location of its
users. The web’s user base has grown continually since its inception and is expected to
comprise nearly 3 billion users by 2015 [1]. Many features have contributed to the Web’s
success, such as its ease of use, the near-ubiquity of its access, and the wealth of valuable
content it contains. The ability to link to content and to navigate from one web page to
another is a core functionality of the original architecture of the classic web. The web’s
evolution since its inception can be characterized according to the nature of what it has
linked. The web has grown from a document repository to an application platform allowing
users to conduct transactions such as buying books, paying bills, taking online classes, and
connecting to each other. As a result, the general function of the web has evolved from
merely linking documents to linking people to service providers, linking services to other
services, and linking people to people. Researchers have made many attempts to build a more
intelligent web, and in this paper we will focus on linking objects and linking data.
The web presents many opportunities and challenges to the research community,
including how to better interlink the classic web and obtain knowledge from the information
available on the web. Web mining techniques can be classified according to three categories:
web structure, web content and web usage mining [2]. These techniques are characterized
based on the data (i.e., structure and linkage) used for the mining process. Web structure
mining focuses on the structure of the links between the documents. Web content mining
extracts the content of the document as an input to the mining process. Web usage mining
uses the user’s interactions on the web as an input to the mining process. Web 2.0 has
contributed to the increase in the volume of data available on the web and has necessitated
research into structuring and linking data differently and more intelligently.
The remainder of this paper is organized into four parts. Section 2 reviews the data structure and
linkage in the classic web and identifies two link patterns: among documents, and between people
and documents (including service providers). Section 3 focuses on Web 2.0 and the
explosion of data leading to Big Data; it identifies two link patterns: linking services and
linking people. Section 4 reviews the Web of Objects architecture and how objects are
interlinked. Finally, Section 5 reviews Linked Data and its technology stack.
Fig. 1 depicts an internet user using a browser to request a static page on the internet
by specifying the protocol (HTTP) and the location (URL) of the resource. The request is
routed over the internet until it reaches the server hosting the resource. The server responds to
the user with the resource, and the browser uses the HTML representation of the resource to
render the resource information.
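As an illustrative sketch of this request/response cycle (not tied to any particular system discussed in the paper), the following Python snippet issues an HTTP GET for a page using only the standard library; the URL is a generic example.

from urllib.request import urlopen

# The client specifies the protocol (HTTP) and the location (URL) of the resource.
url = "http://example.org/index.html"
with urlopen(url) as response:                    # the request is routed to the hosting server
    html = response.read().decode("utf-8", errors="replace")
    print(response.status, "-", len(html), "bytes of HTML received")
# A browser would now use this HTML representation to render the page for the user.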
Most of the current applications on the internet follow a three-tier architecture to
provide a dynamic and interactive user experience (see Fig. 2):
Presentation Layer: This layer is responsible for presenting information to the user and
invoking the subsequent Business Logic Layer to perform requested tasks. Users interact
with this layer directly.
Business Logic Layer (BLL): This layer contains the business rules for the application and
invokes the subsequent Data Access Layer to obtain the requested data.
Data Access Layer (DAL): This layer contains classes responsible for connecting and
executing commands on the database based on the BLL request. It is beyond the scope of
this paper to present an overview of all the various web application architectures.
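For illustration, a minimal sketch of this layering is shown below; the class and method names (PresentationLayer, BusinessLogicLayer, DataAccessLayer, get_order_summary, and so on) are hypothetical stand-ins for a real application and database.

# Illustrative three-tier layering; names and data are hypothetical.
class DataAccessLayer:
    """Connects to the data store and executes commands on behalf of the BLL."""
    def __init__(self):
        self._orders = {"42": {"id": "42", "total": 99.0}}  # stand-in for a database

    def fetch_order(self, order_id):
        return self._orders.get(order_id)


class BusinessLogicLayer:
    """Applies business rules and delegates data access to the DAL."""
    def __init__(self, dal):
        self.dal = dal

    def get_order_summary(self, order_id):
        order = self.dal.fetch_order(order_id)
        if order is None:
            raise ValueError("unknown order")
        return f"Order {order['id']}: ${order['total']:.2f}"


class PresentationLayer:
    """Renders results for the user and invokes the BLL to perform tasks."""
    def __init__(self, bll):
        self.bll = bll

    def show_order(self, order_id):
        print(self.bll.get_order_summary(order_id))


PresentationLayer(BusinessLogicLayer(DataAccessLayer())).show_order("42")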
<!DOCTYPE html>
<html>
  <head>
    <title>Sample Page for Link Patterns in the World Wide Web</title>
  </head>
  <body>
    <!-- A hyperlink (illustrative URL): the core link mechanism of the classic web -->
    <a href="http://example.org/another-page.html">Link to another document</a>
  </body>
</html>
Figure 3. Sample HTML code
Hyperlink Induced Topic Search (HITS) was developed at the IBM Almaden
Research Center and uses the hyperlink structure of web pages to infer notions of authority.
When a page has a hyperlink to another page, it implicitly endorses it. HITS defines two
types of pages: hubs, which provide a collection of links to authorities, and authorities, which
provide the best source of information about a subject. The central concept of this algorithm
is that there is a mutually reinforcing relationship between authorities and hubs (i.e., a good
hub is a page that points to many good authorities and vice versa). However, the web does
not always conform to this model: a page may point to other pages for reasons such as
paid advertising and does not necessarily endorse them.
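The mutually reinforcing hub/authority computation can be sketched in a few lines; the link graph below is a made-up example, and the normalization here simply divides by the sum of the scores.

# Simplified HITS iteration on a toy directed link graph (illustrative data).
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

hubs = {p: 1.0 for p in graph}
auths = {p: 1.0 for p in graph}

for _ in range(20):
    # authority score: sum of the hub scores of pages linking to the page
    auths = {p: sum(hubs[q] for q in graph if p in graph[q]) for p in graph}
    # hub score: sum of the authority scores of pages the page links to
    hubs = {p: sum(auths[q] for q in graph[p]) for p in graph}
    # normalize (by the sum, for simplicity) so scores do not grow without bound
    a_norm = sum(auths.values()) or 1.0
    h_norm = sum(hubs.values()) or 1.0
    auths = {p: s / a_norm for p, s in auths.items()}
    hubs = {p: s / h_norm for p, s in hubs.items()}

print("authorities:", auths)
print("hubs:", hubs)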
PageRank was developed at Stanford University by Larry Page and Sergey Brin, the
co-creators of Google. PageRank is similar to HITS in its method for finding authoritative
pages. The key difference is that not all links (votes) are considered equal in status. Highly
linked pages have more importance than scarcely linked pages. Backlinks (incoming links)
from high-ranked pages count more than links from lower ones. To calculate the PageRank of
a page (or node), it is mandatory to calculate the PageRank of all pages (nodes) pointing to it
[12].
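A simplified power-iteration sketch of PageRank over the same kind of toy graph is shown below; the damping factor of 0.85 is the commonly cited default, and the graph data is illustrative.

# Simplified PageRank power iteration on a toy link graph (illustrative data).
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
d = 0.85                      # damping factor
n = len(graph)
rank = {p: 1.0 / n for p in graph}

for _ in range(50):
    new_rank = {}
    for p in graph:
        # backlinks from highly ranked pages contribute more to a page's score
        incoming = sum(rank[q] / len(graph[q]) for q in graph if p in graph[q])
        new_rank[p] = (1 - d) / n + d * incoming
    rank = new_rank

print(rank)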
There is a great interest among the research community in determining how to optimize
search engine algorithms and to improve search results by taking into account the content of a
page (such as its title, headings, and tags) as well as user behavior (such as clicks and co-
visited pages) [8–11]. The search engine must also be capable of identifying and blocking the
efforts of spammers to spuriously increase page ranking using methods such as doorway
pages (pages full of keywords related to the site), pages dedicated to creating links to a
specific target page, and cloaking (pages that deliver different content to web crawlers than
that seen by regular users).
2.4. Evaluation
The goal of the World Wide Web is to provide a decentralized and dynamic
environment for interlinked, heterogeneous documents. Linking documents is, of course, a
built-in feature of HTML. Search engines satisfy the need for linking people to documents.
Web crawlers follow hyperlinks to create search indexes and thereby centralize information
about the decentralized web using web mining algorithms.
Many challenges face search engines. Web documents are written for many different
locales in many different languages, dialects, and styles. The number of documents on the
web is always on the rise. Efforts by spammers to improve their visibility and ranking within
search results are continuously evolving. There is also no guarantee that a site will be indexed
in the crawling process by a certain time or at all (unless there are many other external links
pointing to the page). Crawlers may not be able to follow links that use JavaScript events such as
onclick. Crawlers will also fail to index documents that are accessible only through a search box
(dynamically generated content) or that require authentication. In addition, the web is
a constantly moving target for indexing: it is a dynamic environment in which content is
frequently added, updated, dynamically generated, and deleted.
3. WEB 2.0
Web 2.0 was not clearly defined until Tim O'Reilly articulated its concepts and
principles [13].
Web Services are software components that are accessible on the web and expose their
operations for others to use. Linking web services is the foundation of SOA (Service Oriented
Architecture). There are two models for linking web services, as illustrated in Fig. 10. In the
orchestration model, participating web services are controlled by a central web service that
manages the other services’ engagement. In the choreography model, by contrast, each web
service must know when to become active and with which services it needs to interact.
Figure 10. An illustration of two models for SOA: web service orchestration and
choreography
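The difference between the two models can be sketched as follows; the service names and calls are hypothetical stand-ins for real web service invocations (e.g., SOAP or REST calls).

# Hypothetical services; real implementations would call remote endpoints.
def check_inventory(order): return True
def charge_payment(order): return True
def ship_order(order):     return "shipped"

# Orchestration: one central coordinator controls the engagement of every service.
def place_order_orchestrated(order):
    if check_inventory(order) and charge_payment(order):
        return ship_order(order)
    return "rejected"

# Choreography: no central controller; each service reacts to the events it cares
# about and emits an event for the next participant.
handlers = {
    "order_placed": lambda order: ("inventory_ok", order) if check_inventory(order) else ("rejected", order),
    "inventory_ok": lambda order: ("payment_ok", order) if charge_payment(order) else ("rejected", order),
    "payment_ok":   lambda order: ("shipped", ship_order(order)),
}

def place_order_choreographed(order):
    event = ("order_placed", order)
    while event[0] in handlers:
        event = handlers[event[0]](event[1])
    return event[0]

print(place_order_orchestrated({"id": 1}), place_order_choreographed({"id": 1}))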
levels deep who bought the same product. Relational databases have been widely used and
have successfully accommodated business needs; unfortunately, they lack efficiency in
addressing strongly interlinked datasets.
In a graph database model, actual instances are depicted rather than a general
entity-relationship schema as in relational database models. The graph model consists of
nodes (i.e., entities), relationships (directed edges or connections that have a label and are not
generic), and properties (in the form of key-value pairs) on both nodes and relationships.
Adding properties to relationships provides great value by representing metadata (such as
the strength of the connection) between entities. Another advantage of the graph model is that
it is schema free: new nodes and relationships can be added without the migration or
downtime that relational databases require.
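A property graph of this kind can be sketched with plain data structures, as below; the nodes, relationships, and property values are illustrative, and a real deployment would use a graph database engine rather than in-memory dictionaries.

# Illustrative in-memory property graph: nodes, labeled directed relationships,
# and key-value properties on both nodes and relationships.
nodes = {
    "alice": {"type": "Person", "name": "Alice"},
    "bob":   {"type": "Person", "name": "Bob"},
    "book1": {"type": "Product", "title": "Graph Databases"},
}

relationships = [
    # (source, label, target, properties) — relationship properties carry metadata
    # such as the strength of the connection.
    ("alice", "KNOWS",  "bob",   {"since": 2010, "strength": 0.9}),
    ("alice", "BOUGHT", "book1", {"year": 2013}),
    ("bob",   "BOUGHT", "book1", {"year": 2013}),
]

# Schema-free growth: new nodes and relationships are added without migrations.
nodes["carol"] = {"type": "Person", "name": "Carol"}
relationships.append(("carol", "KNOWS", "alice", {"strength": 0.4}))

# Simple traversal: who bought the same product as Alice?
alices_products = {t for s, label, t, _ in relationships if s == "alice" and label == "BOUGHT"}
co_buyers = {s for s, label, t, _ in relationships
             if label == "BOUGHT" and t in alices_products and s != "alice"}
print(co_buyers)  # {'bob'}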
3.4. Evaluation
Although Web 1.0 has enjoyed many successes and made access to information nearly
seamless, it has provided only read access to users and has prevented them from contributing
to its knowledge base. Researchers have tried to compensate for the lack of human feedback by
developing algorithms that examine the links between pages, page content, and user behavior.
Web 2.0 embraced human intelligence and changed the role of online users from
passive consumers to contributors. It created an attractive, easy-to-use, and powerful platform
for collaboration. This platform is used not only for people to contribute to pages (with
reviews, tags, comments, and other content) but also to link services and people.
Developers are able to quickly build powerful applications by calling ready-made web
services instead of building them from scratch. Researchers have developed various
languages (such as WSFL, WSCI, and BPEL) for web service composition to build business
processes. Manual composition of web services is time consuming and not scalable. This has
led researchers to automate the composition process based on functionality and QoS
attributes (such as availability, reliability, security, and cost) for the service selection process
[20]–[24]. As a testament to the importance of this topic, a Google Scholar search for
“dynamic web service composition” yields more than 500,000 articles and research papers.
OWL-S is a service ontology that enables dynamic web service discovery,
composition, and monitoring. It has three parts: the service profile, the process model, and the
grounding [25]. A key observation is that the service profile (which includes QoS
information) is provided by the service publisher and consumers cannot contribute to it. This
is reminiscent of the read-only restriction that is characteristic of Web 1.0. Researchers have
tried to address this issue by proposing new frameworks that include the application of social
networks to web service composition [26]–[30].
Web 2.0 has also created a convenient and intelligent environment for people to
connect to each other. People are increasingly using social networks to connect with friends
and others from various backgrounds. Web applications are charged with managing
information in an optimized way and applying analytical and inferential algorithms in order
to make intelligent recommendations based on the network topology. Traditional relational
databases fall short due to their rigid constraints and their inability to scale horizontally.
In contrast, social networks are a natural fit for the graph model. The ability to model social
network data in a graph provides a great advantage because of the centuries of
mathematical and scientific study devoted to graphs. Breadth-first (one layer at a time) and
depth-first (one path at a time) search algorithms can be used for traversing the graph.
Dijkstra’s algorithm (using a breadth-first search) can be used to find the shortest path
between two nodes in the graph. There are also many graph theory techniques that can be
applied to the graph for analysis and inference. Triadic closure (if node A is connected to
node B and also connected to node C, there is a possibility that B has some relation to C) and
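As a small illustration, the sketch below runs a breadth-first shortest-path search over a made-up friendship graph; with unweighted edges this yields the same result that Dijkstra's algorithm would.

from collections import deque

# Illustrative undirected friendship graph (adjacency lists).
friends = {
    "Ann": ["Bob", "Cam"],
    "Bob": ["Ann", "Dee"],
    "Cam": ["Ann", "Dee"],
    "Dee": ["Bob", "Cam", "Eve"],
    "Eve": ["Dee"],
}

def shortest_path(start, goal):
    """Breadth-first search: explores one layer at a time, so the first time the
    goal is reached the path found is a shortest one (unweighted edges)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in friends[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("Ann", "Eve"))  # e.g., ['Ann', 'Bob', 'Dee', 'Eve']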
4.4. Evaluation
Functional Observer REST (FOREST) is a resource-oriented framework for
implementing domain and application logic by creating functional dependencies among linked
objects. An object is identified via a unique identifier, and its state is evaluated according to
its current state along with the states of the objects that it observes. However, a given
object cannot tell by whom it is observed. The interactions and the state dependencies are
based on the application logic and are not globally realized. That is, objects are not
semantically described but work together according to the application constraints. Overall, the
framework provides interoperability (objects can be serialized to XML, JSON, or XHTML),
scalability (objects can be distributed and linked), and evolvability (observed objects’ state can
be pushed and pulled).
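FOREST defines its own resource representations and protocols; the sketch below is only a generic, hypothetical illustration of the idea that an object's state is re-evaluated from its current state and the states of the objects it observes, and that it can be serialized to an interoperable format.

# Generic sketch (not FOREST's actual API): an object's state is a function of its
# current state and the states of the linked objects it observes.
import json

class LinkedObject:
    def __init__(self, uid, state):
        self.uid = uid            # unique identifier
        self.state = dict(state)
        self.observed = []        # objects this object observes (it cannot tell who observes it)

    def observe(self, other):
        self.observed.append(other)

    def reevaluate(self):
        """Recompute state from the current state plus the observed objects' states."""
        self.state["observed_total"] = sum(o.state.get("value", 0) for o in self.observed)

    def serialize(self):
        return json.dumps({"id": self.uid, "state": self.state})  # interoperable form

stock = LinkedObject("urn:item:42", {"value": 7})
cart = LinkedObject("urn:cart:1", {"value": 0})
cart.observe(stock)
cart.reevaluate()
print(cart.serialize())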
Many people use the terms Linked Data, Semantic Web, and Web 3.0
interchangeably. It is critical for our discussion to clarify the distinction among them. The
next version of the World Wide Web (Web 3.0) focuses on supplementing raw data on the
web with metadata and links to make it understandable by machines for automation,
discovery, and integration. Semantic Web employs a top-down technology stack (RDF,
OWL, SPARQL, and RIF) to support this goal. Linked Data is a bottom-up approach that
uses Semantic Web technologies to enact Web 3.0.
There are other approaches for enacting Web 3.0 without the use of Semantic Web
technologies; microformats and microdata are examples. Microformats provide standard
class definitions to represent commonly used objects on the web in HTML (objects such as
people, blog posts, products, and reviews). This allows web crawlers and APIs to easily
understand the content of the site. Microformats also use the rel attribute, which attaches a
relationship meaning to a hyperlink. For example, rel="home" in <a href="URL" rel="home">Home</a>
indicates that the hyperlink points to the site's homepage [40].
Similarly, schema.org provides schemas (i.e., itemtypes, which have properties analogous to
microformats classes and properties). The microdata format (attributes such as itemscope,
itemtype, and itemprop) is used to add this metadata to HTML content [42].
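As a small illustration of how such annotations can be consumed, the sketch below uses Python's standard html.parser to pull itemprop values out of a snippet marked up with schema.org microdata; the markup and property values are made up.

# Extract schema.org microdata properties from an HTML snippet (illustrative markup).
from html.parser import HTMLParser

html = """
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Tawfiq</span>
  <a itemprop="url" href="http://example.org/tawfiq">Homepage</a>
</div>
"""

class MicrodataExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_prop = None
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self.current_prop = attrs["itemprop"]
            if "href" in attrs:                      # link-valued properties
                self.properties[self.current_prop] = attrs["href"]
                self.current_prop = None

    def handle_data(self, data):
        if self.current_prop and data.strip():
            self.properties[self.current_prop] = data.strip()
            self.current_prop = None

parser = MicrodataExtractor()
parser.feed(html)
print(parser.properties)  # {'name': 'Tawfiq', 'url': 'http://example.org/tawfiq'}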
Linked Data is similar to Microdata and Microformats in its method for enacting Web
3.0. However, it uses a Semantic Web technology stack (i.e. OWL) for the vocabulary and
RDF for the data model. In addition, the vocabulary in Linked Data (the counterpart of classes in
microformats and itemtypes in microdata) is not controlled by a single organization. Finally, another
key difference is that the described data item in microformats and microdata does not have a
unique identifier as it does in Linked Data (URI). For these reasons, our focus in this section
will be on Linked Data.
predicate, and order id:12 as an object. In Fig. 14, the RDF model describes two persons
(Tawfiq and Mike) and establishes a connection (“knows”) between them using the FOAF
(Friend of a Friend) vocabulary.
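The same two-person example can be built programmatically; the sketch below uses the rdflib library (assumed to be available) with illustrative URIs and serializes the resulting graph as Turtle.

# Build the two-person FOAF example as an RDF graph (URIs are illustrative).
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF, RDF

g = Graph()
tawfiq = URIRef("http://example.org/people/tawfiq")
mike = URIRef("http://example.org/people/mike")

# Each statement is a (subject, predicate, object) triple.
g.add((tawfiq, RDF.type, FOAF.Person))
g.add((mike, RDF.type, FOAF.Person))
g.add((tawfiq, FOAF.name, Literal("Tawfiq")))
g.add((mike, FOAF.name, Literal("Mike")))
g.add((tawfiq, FOAF.knows, mike))      # the "knows" connection between the two persons

print(g.serialize(format="turtle"))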
5.4. Evaluation
The intent of Web 3.0 is to make data understandable by machines to improve reuse
and integration. Linked data is based on two major Semantic Web technologies: RDF as a
graph-based data model and OWL (Web Ontology Language) as a vocabulary for describing
classes and properties including the relationships among classes, equality, cardinality, and
enumerated classes.
Linked Data has received worldwide adoption across various domains. The Linking
Open Data Cloud (LOD Cloud) group catalogs datasets that are published on the Web as
Linked Data, and the cloud has become an immense repository for publishers. As of September 2011,
more than 290 datasets were available, comprising more than 30 billion triples published
across various domains [43]. Search engines such as Falcons and SWSE enable users to search
for Linked Data using keywords. In addition, Sindice, Swoogle, Watson, and CKAN provide
APIs for applications to look up Linked Data.
While access to Linked Data provides great opportunities for publishers and
consumers, both also face many challenges. Human interaction with Linked
Data is not as user friendly and intuitive as it is in Web 1.0: HTML presents data in a friendly
manner for users to view, whereas such formatting is missing from Linked Data because it is intended
for machines to understand. In addition, applications face many challenges in searching,
integrating, and presenting the data in a unified view. There are thousands of ontologies (third
party and user-generated) used to describe the data. Data fusion requires integrating and
aggregating data from different sources written in different languages, which in turn requires data
cleansing (including deduplication and removal of irrelevant or untrustworthy data) and mapping of the
schemas used to describe the data. Many researchers have tried to address the issues of data
quality and trustworthiness by using cross-checking, voting, and machine learning techniques
[44]. Inference techniques can also improve quality by discovering new relationships and
automatically analyzing the content of data to discover inconsistencies during the integration
process.
Link maintenance presents another challenge since RDF contains links to other data
sources that could be deleted at any time, leaving dangling links. Frameworks that
validate links on a regular basis or use syndication technology to communicate changes have been
proposed to address this issue. Another interesting research area is related to
automatically interlinking data based on its similarity.
Another challenge in Linked Data is related to the core technology that it uses. OWL
has significant expressivity limitations. OWL 2.0 was introduced to resolve some of the
shortcomings of OWL 1.1, such as expressing qualified cardinality restrictions, using keys to
uniquely identify data, and expressing partOf relations (asymmetric, reflexive, and disjoint properties). In
addition, OWL is written in RDF (triples) which makes it complex to express relations such
as “class X is the union of class Y and Z.” Essentially, OWL is a description language based
on first order logic and it is unable to describe integrity constraints or perform closed-world
querying [46]. The Rule Interchange Format (RIF) was introduced on top of OWL to include
rules using Logic Programming. However, a true rule specification in logic programming is
fundamentally incompatible with OWL [47].
6. CONCLUSION
The World Wide Web has been adopted by billions of users globally for its wealth of
information and its ease of use. Information on the web has been linked in many ways since
its inception to optimize automation, discovery, and reuse. The more relationships and links
that are applied to the data, the better the knowledge that can be derived from it. It is important to
recognize the different link patterns in the web to identify some of the opportunities and
challenges in each pattern and to be able to recommend new patterns to better serve online
users and service providers. It is evident that each iteration of the web’s evolution—from
linking documents to documents, to linking people to documents, to linking services to
services, to linking people to people, to linking objects to objects, and finally to linking data
to data—has increased the value of the network and has made it an increasingly rich and
valuable platform. These efforts and their results are inspiring researchers to work on the
remaining challenges, to recommend new patterns for the web, and to apply these patterns to
real-life physical objects, as we see in the Internet of Things (IoT) initiative. This paper also
serves as a foundation for our research into a new web pattern that optimizes the use of data
on the web and overcomes some of the shortcomings of current methods by focusing on better
ways to publish and link data.
Link Patterns

Linking Documents
• Web version: Web 1.0
• Link type: Explicit
• User access: Read only
• Link mechanism: Hyperlinks
• Impact: Global Document Space
• Shortcomings/Challenges: No typed links; no collaboration capabilities; not understandable by machines.

Linking People to Documents
• Web version: Web 1.0
• Link type: Implicit
• User access: Read only
• Link mechanism: Search engines
• Impact: Ease of access to information and service providers
• Shortcomings/Challenges: Crawlers are used to centralize and index documents instead of real-time or near-real-time lookup; search algorithms rely mainly on links, which may be used for other purposes; web usage and content mining are used to optimize results; web content mining must accommodate different languages and locales; spurious efforts to increase visibility (spam) must be blocked; users are not able to provide feedback on results.

Linking Services
• Web version: Web 2.0
• Link type: Implicit
• User access: Read/write (application level)
• Link mechanism: UDDI search and auto discovery
• Impact: Service Mashups
• Shortcomings/Challenges: Optimize dynamic service composition using semantic and social web techniques; Quality of Service verification.

Linking People
• Web version: Web 2.0
• Link type: Implicit
• User access: Read/write
• Link mechanism: Request/accept a connection
• Impact: Social Network
• Shortcomings/Challenges: Published data is not represented semantically; sentiment analysis, opinion mining, and community detection; optimize data management for intelligent recommendations based on the network topology.

Linking Objects
• Web version: Web 2.0
• Link type: Explicit
• User access: N/A (application level)
• Link mechanism: Object GUID
• Impact: Global Object Space
• Shortcomings/Challenges: No typed links between objects; object interactions and state dependencies are constrained by domain and application logic; no metadata is used to describe the object.

Linking Data
• Web version: Web 3.0
• Link type: Explicit
• User access: Read only & application level
• Link mechanism: URI
• Impact: Global Data Space
• Shortcomings/Challenges: Limited expressivity in OWL and RDF; link maintenance; data integration and aggregation; lack of support for integrity constraints and closed-world querying; automatically interlinking similar data.
REFERENCES
[1] R. Kalakota, “Big Data Infographic and Gartner 2012 Top 10 Strategic Tech Trends,”
http://practicalanalytics.wordpress.com/2011/11/11/big-data-infographic-and-gartner-
2012-top-10-strategic-tech-trends/, Nov. 2011. Web. Nov. 2013
[2] A. Hotho and G. Stumme, “Mining the World Wide Web,” Künstliche Intelligenz,
vol. 3, pp. 5–8, 2007.
[3] T. Berners-Lee, R. Fielding, and L. Masinter, “RFC 2396 - Uniform Resource
Identifiers (URI): Generic Syntax,” http://www.isi.edu/in-notes/rfc2396.txt, Aug.
1998. Web. Nov. 2013
[4] R. Fielding, “Hypertext Transfer Protocol – http/1.1. Request for Comments: 2616,”
http://www.w3.org/Protocols/rfc2616/rfc2616.html, 1999. Web. Nov. 2013
[5] D. Raggett, A. Le Hors, and I. Jacobs, “HTML 4.01 Specification - W3C
Recommendation,” http://www.w3.org/TR/html401, 1999. Web. Nov. 2013
[6] “HTML Examples,” W3Schools,
http://www.w3schools.com/html/html_examples.asp. Web. Nov. 2013
[7] S. Brin and L. Page. “The Anatomy of a Large-Scale Hypertextual Web Search
Engine,” Computer Networks and ISDN Syst., vol. 30, pp. 107–117, 1998.
[8] G-R. Xue, H-J Zeng, Z. Chen, Y. Yu, W-Y Ma, W. Xi, and W. Fan. “Optimizing Web
Search Using Web Click-Through Data.” Proc. 13th ACM Int'l Conf. Inform. and
Knowledge Manage., pp. 118–126, 2004.
[9] S. Ding, “Index Compression and Efficient Query Processing in Large Web Search
Engines,” PhD dissertation (advisor: T. Suel), Polytechnic Inst. of New York Univ.,
Mar. 2013.
[10] C.N. Pushpa, S. Girish, S.K. Nitin, J. Thriveni, K.R. Venugopal, and L.M. Patnaik,
“Computing Semantic Similarity Measure Between Words Using Web Search
Engine,” Computer Sci. and Inform. Technology, pp. 135–142, 2013.
[11] L.G. Giri, P.L. Srikanth, S.H. Manjula, K.R. Venugopal, and L.M. Patnaik,
“Mathematical Model of Semantic Look: An Efficient Context Driven Search
Engine,” Int'l J. Inform. Process., vol. 7, no. 2, pp. 20–31, 2013.
[12] L. Page. “The PageRank Citation Ranking: Bringing Order to the Web.” Tech. report,
Stanford Univ., Jan. 1998.
[13] T. O'Reilly. “What is Web 2.0: Design Patterns and Business Models for the Next
Generation of Software,” Int'l J. Digital Econ., no. 65, pp. 17–37, Mar. 2007.
[14] L.D. Paulson. “Building Rich Web Applications with Ajax,” Computer, vol. 38, no.
10, pp. 14–17, 2005.
[15] “XML Essentials,” W3C, http://www.w3.org/standards/xml/core. Web. Nov. 2013
[16] P.P-S. Chen. “The Entity-Relationship Model—Toward a Unified View of Data,”
ACM Trans. Database Syst., vol. 1, no. 1, pp. 9–36, 1976.
[17] E.F. Codd. “A Relational Model of Data for Large Shared Data Banks,” Commun.
ACM, vol. 26, no. 1, pp. 64–69, 1983.
[18] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S.
Sivasubramanian, P. Vosshall, and W. Vogels. “Dynamo: Amazon's Highly Available
Key-Value Store,” 21st ACM Symp. Operating Syst. Principles, pp. 205–220, 2007.
[19] F. Chang, J. Dean, S. Ghemawat, W.C. Hsieh, D.A. Wallach, M. Burrows, T.
Chandra, A. Fikes, and R.E. Gruber. “Bigtable: A Distributed Storage System for
Structured Data,” ACM Trans. Computer Syst., vol. 26, no. 2, 2008.
[39] T. Heath and C. Bizer. “Linked Data: Evolving the Web Into a Global Data Space,”
Synthesis Lectures on the Semantic Web: Theory And Technology, vol. 1, no. 1,
pp. 1–136, 2011.
[40] “What Are Microformats?,” Microformats Wiki RSS,
http://microformats.org/wiki/what-are-microformats. Web. Nov. 2013
[41] “Semantic Web,” W3C, http://www.w3.org/standards/semanticweb/. Web. Nov. 2013
[42] I. Hickson. “HTML Microdata,” W3C, http://dev.w3.org/html5/md-LC/, May 2011.
Web. Nov. 2013
[43] C. Bizer, and A. Jentzsch, “State of the LOD Cloud,”
http://lod-cloud.net/state/, Sept. 2011. Web. Nov. 2013
[44] J. Madhavan, S.R. Jeffery, S. Cohen, X.(L.) Dong, D. Ko, C. Yu, and A. Halevy,
“Web-Scale Data Integration: You Can Only Afford to Pay as You Go,” Proc. 3rd
Biennial Conf. on Innovative Data Systems Research, pp. 342–350, 2007.
[45] C. Bizer, T. Heath, and T. Berners-Lee. “Linked Data: Principles and State of the
Art,” Proc. 17th Int'l World Wide Web Conf., 2008.
[46] B. Motik, I. Horrocks, R. Rosati, and U. Sattler, “Can OWL and Logic Programming
Live Together Happily Ever After?,” Proc. 5th Int'l Semantic Web Conf., pp. 501–
514, 2006.
[47] M. Kifer, J. de Bruijn, H. Boley, and D. Fensel, “A Realistic Architecture for the
Semantic Web,” Proc. 1st Int'l Conf. on Rules and Rule Markup Languages for the
Semantic Web, pp. 17–29, 2005.
[48] Shaymaa Mohammed Jawad Kadhim and Dr. Shashank Joshi, “Agent Based Web
Service Communicating Different IS’s and Platforms”, International Journal of
Computer Engineering & Technology (IJCET), Volume 4, Issue 5, 2013, pp. 9 - 14,
ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[49] Jaydev Mishra and Sharmistha Ghosh, “Normalization in a Fuzzy Relational Database
Model”, International Journal of Computer Engineering & Technology (IJCET),
Volume 3, Issue 2, 2012, pp. 506 - 517, ISSN Print: 0976 – 6367, ISSN Online:
0976 – 6375.
[50] Sanjeev Kumar Jha, Pankaj Kumar and Dr. A.K.D.Dwivedi, “An Experimental
Analysis of MYSQL Database Server Reliability”, International Journal of Computer
Engineering & Technology (IJCET), Volume 3, Issue 2, 2012, pp. 354 - 371,
ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[51] Houda El Bouhissi, Mimoun Malki and Djamila Berramdane, “Applying Semantic
Web Services”, International Journal of Computer Engineering & Technology
(IJCET), Volume 4, Issue 2, 2013, pp. 108 - 113, ISSN Print: 0976 – 6367,
ISSN Online: 0976 – 6375.
[52] A. Suganthy, G.S.Sumithra, J.Hindusha, A.Gayathri and S.Girija, “Semantic Web
Services and its Challenges”, International Journal of Computer Engineering &
Technology (IJCET), Volume 1, Issue 2, 2010, pp. 26 - 37, ISSN Print: 0976 – 6367,
ISSN Online: 0976 – 6375.