A Comprehensive Survey of Graph Neural Networks For Knowledge Graphs
ABSTRACT The knowledge graph, a multi-relational graph that represents rich factual information among entities of diverse types, has gradually become one of the critical tools for knowledge management. However, existing knowledge graphs still suffer from problems such as incompleteness, which have become hot research topics in recent years, and numerous methods have been proposed based on various representation techniques. The Graph Neural Network (GNN), a framework that uses deep learning to process graph-structured data directly, has significantly advanced the state of the art over the past few years. This study first provides a broad and comprehensive overview of GNN-based technologies for solving four different KG tasks: link prediction, knowledge graph alignment, knowledge graph reasoning, and node classification. We then investigate the related artificial intelligence applications of knowledge graphs based on advanced GNN methods, such as recommender systems, question answering, and drug-drug interaction. This review aims to provide new insights for the further study of KGs and GNNs.
INDEX TERMS Deep learning, distributed embedding, graph neural network, knowledge graph, representation learning.
KRL can significantly improve computational efficiency. At the same time, it can ease the issue of data sparseness and achieve the goal of combining heterogeneous information. The Graph Neural Network (GNN) has emerged in recent years as a framework that uses deep learning to learn from graph-structured data directly. The essence of a GNN is to gather information from the neighborhood to the target node according to message-passing rules, so that entities with similar neighborhoods are close to each other in the embedding space, and it excels at capturing the global or local structural information of the graph [15]. On the other hand, because a GNN has a strong nonlinear fitting ability for graph-structured data, it achieves higher accuracy and better robustness on problems in different fields [16]. A great many variants of the GNN algorithm and framework have been proposed in the past few years. Furthermore, these GNN-based KGE models can combine the domain data in knowledge graphs with business scenarios and help upgrade domain businesses. Typical downstream applications include recommender systems [17], intelligent question answering [18], and drug-drug interactions [19].

FIGURE 1. The framework of the contents of this study.

Previous survey papers focus solely on general knowledge graph problems or on graph neural network technology. This study performs a systematic and broad survey of knowledge graph learning based on graph neural network methods. The framework of this article is shown in Fig. 1, and the contributions of this study are presented as follows:
–This is the first comprehensive survey paper of graph neural network models for knowledge graph problems to the best of our knowledge. This work covers GNN techniques that solve KG-related tasks. In addition, it offers discussions and comparisons of the introduced methods.
–We explored almost all the state-of-the-art graph learning methods based on graph neural networks for four typical tasks in the knowledge graph. We presented their critical technologies and characteristics from a variety of perspectives.
–Considering the vast application prospects of knowledge graphs, we have further conducted a thorough investigation of downstream tasks, covering recommender systems, question answering, drug-drug interaction, etc.
The remainder of this study is organized as follows. The definitions of knowledge graphs and graph neural networks are briefly introduced in Section II. Then, Section III presents the approaches proposed for learning knowledge graph representations using GNN technology. The application of such representations in downstream tasks, such as question-answering systems, is explored in Section IV. Finally, the conclusion of this investigation is given in Section V.

II. PRELIMINARY KNOWLEDGE
This section first introduces the basic concepts of knowledge graphs and graph neural networks.

A. THE CONCEPT OF KNOWLEDGE GRAPH
A knowledge graph is a semantic network that consists of diverse entities, concepts, and relationships in the real world. It is used to formally describe various things and their associations in the real world [20].
Knowledge graphs are generally represented as triples G = {E, R, F}. Here, E represents the entity set {e_1, e_2, ..., e_E}, and an entity e is the most basic element in the knowledge graph, referring to items that exist objectively and can be distinguished from each other. R represents the relation set {r_1, r_2, ..., r_R}, and a relation r is an edge in the knowledge graph, representing a specific connection between different entities. F represents the fact set {f_1, f_2, ..., f_F}, and each fact f ∈ F is defined as a triple (h, r, t), in which h denotes the head entity, r stands for the relationship, and t indicates the tail entity.

B. THE CONCEPT OF GRAPH NEURAL NETWORK
In a graph neural network, the states of all nodes are computed from the graph structure and the node features, and the outputs are read from the resulting states:
H = F(H, X), (1)
O = G(H, X_N), (2)
where H represents the states of all nodes; O represents the outputs of all nodes; X represents the edge features; X_N represents the features of all nodes; and F(·) and G(·) respectively represent the global transformation function and the global output function.
It can be seen that when the states of all nodes are updated from step t to step t + 1, the update can be expressed as
H^{t+1} = F(H^t, X). (3)
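To make the generic update in (1)-(3) concrete, the following minimal sketch iterates a simple neighborhood-averaging transition and then applies an output map. It is only an illustration of the message-passing scheme in plain NumPy, not the implementation of any particular model surveyed below, and all function and variable names are ours.

import numpy as np

def propagate(A, X_N, dim=8, steps=20, seed=0):
    """Generic GNN propagation: H^{t+1} = F(H^t, X), then O = G(H, X_N).

    A   : (n, n) adjacency matrix carrying the edge information X.
    X_N : (n, d_in) node feature matrix.
    F is modeled here as a normalized neighborhood average followed by a
    linear map and tanh; G concatenates the final state with the node
    features and applies another linear map. Both merely stand in for the
    learnable functions of Eqs. (1)-(2).
    """
    rng = np.random.default_rng(seed)
    n, d_in = X_N.shape
    W_f = rng.normal(scale=0.1, size=(dim + d_in, dim))  # parameters of F
    W_g = rng.normal(scale=0.1, size=(dim + d_in, dim))  # parameters of G
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H = np.zeros((n, dim))
    for _ in range(steps):                               # iterate Eq. (3)
        neigh = (A @ H) / deg                            # gather neighborhood states
        H = np.tanh(np.concatenate([neigh, X_N], axis=1) @ W_f)
    O = np.tanh(np.concatenate([H, X_N], axis=1) @ W_g)  # read out, Eq. (2)
    return H, O

# toy graph: four nodes on a path, three-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X_N = np.eye(4, 3)
H, O = propagate(A, X_N)
print(O.shape)  # (4, 8)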
III. GNN-BASED APPROACHES FOR KNOWLEDGE GRAPH TASKS

A. LINK PREDICTION

TABLE 1. Representative models for link prediction.

Link prediction seeks to forecast missing information (links or relations) between the elements of a knowledge graph. An overview of recent GNN-based link prediction models on knowledge graphs is presented in Table 1. In fact, KGs can be represented as multi-relational directed graphs whose nodes and edges represent entities and relations, respectively [22]. Therefore, most researchers are motivated to stress the essential effects of the various connection patterns between entities, which results in a more sufficient capture of the relationships. The models that belong to the Relation-Aware GNN category are shown in Fig. 2.
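As a concrete illustration of the relation-aware aggregation that this family of models builds on, the sketch below implements one simplified R-GCN-style layer [32] followed by a DistMult-style triple score. It is an untrained toy example with our own function names, not the exact architecture of any model listed in Table 1.

import numpy as np

def relation_aware_layer(H, triples, W_r, W_self):
    """One simplified relation-aware message-passing layer (R-GCN style [32]).

    H       : (n, d) current entity embeddings.
    triples : list of (head, relation, tail) index triples.
    W_r     : (num_rels, d, d) one transform per relation type.
    W_self  : (d, d) self-loop transform.
    Messages flow from head to tail and are normalized by the number of
    incoming edges of that relation, so entities with similar relational
    neighborhoods end up close in the embedding space.
    """
    n, _ = H.shape
    num_rels = W_r.shape[0]
    out = H @ W_self
    counts = np.zeros((n, num_rels))
    for _, r, t in triples:
        counts[t, r] += 1.0
    for h, r, t in triples:
        out[t] += (H[h] @ W_r[r]) / counts[t, r]
    return np.maximum(out, 0.0)                 # ReLU non-linearity

def distmult_score(H, R, h, r, t):
    """Score the plausibility of a candidate triple (h, r, t)."""
    return float(np.sum(H[h] * R[r] * H[t]))

rng = np.random.default_rng(0)
n, num_rels, d = 5, 2, 4
H = rng.normal(size=(n, d))                     # entity embeddings
R = rng.normal(size=(num_rels, d))              # relation embeddings for scoring
W_r = rng.normal(scale=0.3, size=(num_rels, d, d))
W_self = np.eye(d)
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]     # observed facts
H = relation_aware_layer(H, triples, W_r, W_self)
print(distmult_score(H, R, 0, 0, 4))            # plausibility of an unseen link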
The INDIGO framework [28] fully encodes the knowledge graph into a GNN in a transparent way, establishing a correspondence between triples in the KG and elements of the feature vectors in the innermost and outermost layers of the GNN. Hence, the predicted triples can be read out directly from the last layer of the GNN without the requirement for additional components.
The previously described models can only handle the out-of-knowledge-graph (OOKG) entity problem; they cannot handle the issue of unobserved relations, a new zero-shot scenario that has been put forward recently. The Convolutional Transition and Attention-Based Aggregation Graph Neural Network [24] was proposed as a novel method that can generate embedding vectors for OOKG relations. This approach introduces a convolutional transition function to transfer information for OOKG entities and relations in parallel. When computing the embeddings of relations, for example, the propagation model for relations in the neighborhood is used. To determine the weight of each information vector, the authors also proposed an attention-based aggregation network.
According to the literature review, there are other perspectives on the link prediction problem for knowledge graphs. The SEAL framework [30] develops a new γ-decaying heuristic theory and translates link prediction into a subgraph classification problem in a comprehensive manner. Initially, SEAL extracts an h-hop enclosing subgraph A for each target link and constructs its node information matrix X, including structural node labels, latent embeddings, and explicit attributes of nodes. Subsequently, SEAL sends (A, X) into a GNN to classify the existence of the link. By taking unique local structures such as cycles and stars into account, a novel graph attention network named LSA-GAT [26] derives a sophisticated representation covering both semantic and structural information. The Lightweight Framework for Context-Aware Knowledge Graph Embedding (LightCAKE) [37] focuses on graph context. The novel aspect of this technique is the construction of a context star network to model the entity/relation context. Following that, each entity/relation node in the newly framed context star graph combines information from its surrounding context nodes, using a scoring algorithm to determine the weights. The association rules enhanced knowledge graph attention network (AR-KGAT) [35] aggregates neighborhood information with both association-rules-based and graph-based attention weights. Since a knowledge graph is a directed labeled graph in which the labels have well-defined meanings, the Gravity-Inspired Model [27] was proposed as a new gravity-inspired decoder scheme for link prediction in directed graphs. Inspired by Newton's theory of universal gravitation, this framework learns node embeddings from directed graphs using graph AE and VAE frameworks. In addition, Newton's equations are applied in the resulting embedding space: the acceleration a_{i→j} = G m_j / r^2 of a node i towards a node j due to gravity is used to represent the likelihood that i is associated with j in the directed graph.

B. KNOWLEDGE GRAPH ALIGNMENT

TABLE 2. Representative models for knowledge graph alignment.

In the field of knowledge fusion, entity alignment, also known as entity matching or entity resolution, is a critical and foundational technology. Knowledge graphs that depict the same real-world entity can be identified by aligning their entities [39]. In this study, we consider two groups of knowledge graphs, static KGs and dynamic KGs. Table 2 illustrates the representative publications on GNN-based knowledge graph alignment.
We first introduce different embedding models for static KGs. Since the seed alignments are usually insufficient for high-quality entity embedding, most efforts fail to consider the structural heterogeneity between different KGs. Alignment-oriented knowledge graph (KG) embeddings can be learned using MuGNN [45], a multi-channel graph neural network model that robustly encodes two KGs using multiple channels. Each channel encodes the KGs with a different relation weighting strategy, using self-attention toward KG completion and cross-KG attention for trimming exclusive entities independently, and the channels are thoroughly integrated using pooling approaches. MuGNN can also consistently infer and transfer rule knowledge for the completion of the two KGs, and it may be able to resolve the structural differences between two KGs and better utilize the seed alignment data. While MuGNN addresses the structural incompleteness of KGs and aims at rule-based KG completion, counterpart entities unavoidably have non-isomorphic neighborhood structures. As a result, a new KG alignment network, called AliNet [42], was proposed to mitigate the differing neighborhood structures in an end-to-end manner. AliNet introduces distant neighbors to enlarge the overlap between neighborhood structures using an attention mechanism and constrains the two entities of an equivalent entity pair to have the same hidden state in each GAT layer. A relation loss is finally used to refine the entity representations. The entity-pair embedding approach (EPEA) [43] for KG alignment introduced the pairwise connectivity graph (PCG) of KGs, whose nodes are entity pairs and whose edges correspond to relation pairs. This approach encodes attribute features from entity pairs using a CNN and then enhances feature propagation among the neighbors of entity pairs through a GNN with edge-aware attention.
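Whichever encoder is used, GNN-based aligners ultimately compare the embeddings of the two KGs. The short sketch below is our own illustration of that readout step rather than any specific method from Table 2: after both graphs have been encoded, with seed alignments used only as training supervision, each entity of one KG is matched to its nearest counterpart in the other.

import numpy as np

def align_entities(H1, H2):
    """Alignment readout after GNN encoding of two KGs.

    H1 : (n1, d) entity embeddings of KG1.
    H2 : (n2, d) entity embeddings of KG2, trained (e.g., with a margin loss
         on seed alignments) so that counterpart entities are close.
    Returns, for every entity of KG1, the index of its most similar entity
    in KG2 under cosine similarity, together with the similarity matrix.
    """
    H1n = H1 / np.linalg.norm(H1, axis=1, keepdims=True)
    H2n = H2 / np.linalg.norm(H2, axis=1, keepdims=True)
    sim = H1n @ H2n.T                      # (n1, n2) cosine similarities
    return sim.argmax(axis=1), sim

rng = np.random.default_rng(1)
H1 = rng.normal(size=(4, 8))
H2 = np.vstack([H1[[2, 0, 3, 1]] + 0.05 * rng.normal(size=(4, 8)),
                rng.normal(size=(2, 8))])  # four true counterparts plus two extra entities
pred, _ = align_entities(H1, H2)
print(pred)                                # typically [1 3 0 2]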
Many attention-mechanism-based models for KG representation learning have recently obtained advanced performance on entity alignment tasks. For example, decentralized attention networks (DANs) [46] compute the attention score by relying purely on the target entity's neighbors, without requiring its own embedding. This characteristic enables DAN to induce embeddings for unseen entities. Since there are different entity types and relation types in a knowledge graph, it makes sense, considering their nature, to adopt a different alignment strategy for different entity types. Therefore, a multi-type entity alignment algorithm named CG-MuAlign [40] was created to collectively align entities of different types and make predictions on unseen entities using attention mechanisms and neighborhood information. Furthermore, it uses relation-aware neighborhood sampling to enhance the computational efficiency of the approach on large-scale data collections.
Many other applicable circumstances can be regarded as graph alignment problems. For example, map fusion (MF) aims to identify nodes from two road networks that match each other [48]. Still, current GNN approaches show poor performance on the MF task, since information from non-overlapping areas negatively affects the learned node representations. The Graph Alignment Network (GrAN) [41] aggregates information from neighbors by emphasizing nodes that have a good match in the counterpart graph, leading to an inductive bias that neighboring nodes that likely lie in the overlapping area are more useful for the target node representation. Cross-lingual entity alignment associates semantically similar entities in knowledge graphs with different languages. However, current approaches fail to model the meta semantics or complex relations such as n-to-n relations and multi-graphs. A new method, Meta Relation Aware Entity Alignment (MRAEA) [44], operates on cross-lingual KGs by leveraging meta relation-aware embedding and relation-aware self-attention. In addition, this work further adopted an effective iterative training strategy based on the asymmetric nature of alignments.
The prior discussion is based on static KGs, i.e., the knowledge is represented at a specific point in time [49]. However, because of initial incompleteness, almost every KG undergoes some evolution in practice. Moreover, in most cases this evolution is smooth, with small changes rather than drastic modifications of large subgraphs. Therefore, to efficiently tackle the issue of updating entity embeddings for the evolving graph topology, a family of algorithms (DINGAL) [47] based on graph convolutional networks was proposed. The key idea is to decouple the parameter matrix in the GCN from the underlying graph topology. This work is believed to be the first to study the dynamic knowledge graph alignment problem, and more research on dynamic knowledge graphs will likely follow in the future.

C. KNOWLEDGE GRAPH REASONING

TABLE 3. Representative models for knowledge graph reasoning.

The knowledge reasoning over knowledge graphs studied in this paper refers to using specific methods to infer new conclusions or identify wrong information based on existing data. For example, if a fact such as the triple (X, BirthPlace, Y) is given in DBpedia, the missing triple (X, Nationality, Y) can be obtained through reasoning [50]. To some extent, knowledge reasoning is similar to link prediction, although the relationships acquired by knowledge reasoning mostly require a multi-hop reasoning process over the knowledge graph. Table 3 illustrates the representative publications on GNN-based knowledge graph reasoning.
Graph neural networks learn feature representations automatically and provide structured explanations suitable for knowledge reasoning. The target relational attention-oriented reasoning (TRAR) [54] model proposes a novel embedding-based approach that aggregates information by designing node-level and relational subgraph-level attention mechanisms. In addition, the multiple target relational attention-oriented layers concentrate more on the relations that match the target relation, whereas each subgraph uses a hierarchical attention mechanism to obtain node-level information. The Dynamically Pruned Message Passing Networks (DPMPN) [52] also model the reasoning process for large-scale knowledge graphs by constructing local subgraphs dynamically. This approach developed a two-GNN framework that learns to reason while explaining the process in an understandable form. The cascaded attention mechanism makes the explanation efficient by selecting relevant nodes to construct subgraphs regardless of how large the underlying graph is. Even though the above-mentioned embedding methods have acquired promising findings on specific KG reasoning tasks, they fall short of modeling multi-hop relational paths in more complicated KG reasoning tasks. The Deep-IDA* [51] framework empowers embedding-based methods by combining them with path-based algorithms; it is the first to integrate traditional path-searching algorithms and deep neural networks for KG reasoning.
Furthermore, the knowledge graph reasoning ability can be widely applied in many downstream tasks, as illustrated in Fig. 4. Developing a reasoning method is one of the specific directions of knowledge graphs in the military field and a critical technology that can stimulate the intellectual development of military combat command. A knowledge reasoning method was accordingly proposed for military decisions by mixing rule learning, rule injection, and graph neural network learning [53].
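Since most of these reasoners ultimately score multi-hop relational paths between a query entity and a candidate answer, the sketch below shows the kind of path evidence involved. It is a plain breadth-first enumeration written only for illustration, not the pruning or attention machinery of DPMPN or Deep-IDA*.

from collections import deque

def relational_paths(triples, source, target, max_hops=3):
    """Enumerate relation paths from `source` to `target` up to `max_hops`.

    triples : iterable of (head, relation, tail) facts.
    Returns relation sequences such as ('BirthPlace', 'CityOf'), i.e. the
    multi-hop evidence a KG reasoner would score to infer a missing
    relation such as Nationality.
    """
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    paths, queue = [], deque([(source, [])])
    while queue:
        node, path = queue.popleft()
        if node == target and path:
            paths.append(tuple(path))
            continue
        if len(path) < max_hops:
            for r, nxt in adj.get(node, []):
                queue.append((nxt, path + [r]))
    return paths

facts = [("X", "BirthPlace", "CityY"), ("CityY", "CityOf", "CountryZ"),
         ("X", "WorksFor", "OrgW"), ("OrgW", "BasedIn", "CountryZ")]
print(relational_paths(facts, "X", "CountryZ"))
# [('BirthPlace', 'CityOf'), ('WorksFor', 'BasedIn')]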
D. NODE CLASSIFICATION
Node classification, in which an attribute of each node in a graph is predicted, is another of the most popular and commonly adopted tasks on graph data, for example, assigning a categorical class to each node (binary or multiclass classification) or forecasting a continuous number (regression). Table 4 illustrates the representative publications on GNN-based knowledge graph node classification.
To solve the problem of the limited receptive field caused by the lack of a ''graph pooling'' mechanism, a novel deep Hierarchical Graph Convolutional Network (H-GCN) [58] was presented for semi-supervised node classification. Coarsening layers and symmetric refining layers make up the H-GCN model. This approach can achieve a larger receptive field and good information propagation by clustering structurally related nodes into hyper-nodes. Still, labeled data is not always available, and the training-sample scarcity problem has aroused extensive research interest. The Heterogeneous Deep Graph Infomax (HDGI) [57] attempts to learn high-level representations containing graph-level structural information without any supervised labels by maximizing local-global mutual information. HDGI also combines the meta-path technique to represent the composite relations with distinct semantics in heterogeneous graph studies.
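As a minimal illustration of the task itself rather than a re-implementation of H-GCN or HDGI, the sketch below performs two rounds of symmetrically normalized feature propagation and then assigns each unlabeled node the class of the nearest labeled centroid, which mirrors the usual semi-supervised setup; all names are ours.

import numpy as np

def propagate_features(A, X, hops=2):
    """Symmetrically normalized propagation used by GCN-style models."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt
    for _ in range(hops):
        X = S @ X
    return X

def classify_nodes(A, X, labels):
    """Semi-supervised node classification; labels[i] == -1 means unlabeled.

    After propagation, each unlabeled node receives the class whose labeled
    centroid is closest in cosine similarity.
    """
    Z = propagate_features(A, X)
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    classes = sorted(set(labels) - {-1})
    centroids = np.stack([Z[np.array(labels) == c].mean(axis=0) for c in classes])
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return [labels[i] if labels[i] != -1 else classes[int(np.argmax(centroids @ Z[i]))]
            for i in range(len(labels))]

# two triangles joined by a single edge; one labeled node per triangle
A = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], dtype=float)
X = np.eye(6)
print(classify_nodes(A, X, labels=[0, -1, -1, -1, -1, 1]))  # -> [0, 0, 0, 1, 1, 1]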
IV. DOWNSTREAM APPLICATIONS

A. RECOMMENDER SYSTEM

FIGURE 5. Illustration of user-item interactions and the knowledge graph.

TABLE 5. Representative models of GNN-based knowledge-aware recommender systems.

Recommender systems (RS), as one of the most famous and significant applications of artificial intelligence (AI), have been widely adopted to assist consumers in making appropriate choices among the enormous number of products and services available. However, when the data has cold-start problems, depending merely on user-item interactions spoils recommendation performance. As a result, existing research suggests using knowledge graphs (KGs) as side information to investigate implicit or high-order connectivity relations between users or items, improving their representations and thus recommendation effectiveness, as shown in Fig. 5. The GNN technique addresses two critical downstream tasks: explicit feedback and implicit feedback. For example, ranking prediction models employ explicit feedback to deliver a customized ranked list of recommended products to the user. On the other hand, click-through rate (CTR) prediction leverages implicit feedback to forecast the likelihood of people clicking adverts or objects. As a result, the knowledge-aware recommendation task can be expressed as learning a function that predicts a user's preference for an item from the observed user-item interactions and the knowledge graph.
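A compact way to state this, following the notation commonly used in the KGNN-LS line of work [67] (the rendering here is ours), is
ŷ_uv = F(u, v | Θ, Y, G),
where Y denotes the observed user-item interaction matrix, G the knowledge graph, Θ the parameters of the GNN-based prediction function F, and ŷ_uv the predicted probability that user u will engage with item v.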
A number of knowledge-aware recommendation models focus on capturing the relations in heterogeneous graphs with different semantics. The Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) [67] calculates user-specific item embeddings by first employing a trainable function to identify meaningful knowledge graph relationships for a given user. For example, a given user may be more concerned with the ''director'' relationship between movies and people than with the ''lead actor'' relationship. Furthermore, the proposed label smoothness constraint and leave-one-out loss offer strong regularization for learning the edge weights in KGs. The Dual Knowledge Multimodal Network (DKMN) [61] is an extension of KGNN-LS. This work aims to exploit knowledge graphs together with other multimodal data, e.g., textual reviews and pictures.
Unlike previous studies, which have primarily focused on exploring novel neural networks, some researchers consider reducing the massive computational cost while maintaining the pattern of extracting features. The GraphSW [66] technique is based on a stage-wise training framework that only examines a subset of KG entities at each stage. In the succeeding stages, the network receives the learned embeddings from the previous stages, and the model can gradually learn the information from the KG. It has also been observed that the existing non-sampling strategy computes the gradient over the entire data set, resulting in high computational costs. A novel Jointly Non-Sampling learning model for Knowledge graph enhanced Recommendation (JNSKR) [62] first designed a new efficient non-sampling loss for knowledge graph embedding learning, significantly reducing the complexity. The surrounding entities of an item are then aggregated with attention mechanisms to help learn accurate user preferences over items.
There are also other methods for studying GNN-based recommender systems. The Feature interaction Graph Neural Networks (Fi-GNN) [65] identified a limitation in modeling sophisticated interactions with simple unstructured combinations and therefore considers the structure of multi-field features. Fi-GNN adopts a graph structure representing multi-field features called a feature graph. Intuitively, each node in the graph corresponds to a feature field, and different fields can interact through edges. As a result, modeling sophisticated interactions between feature fields is reduced to modeling node interactions on the feature graph. The novel Split-And-ReCombine strategy (SARC) [64] separates the user-item-entity interactions into three two-way interactions: the ''user-item'', ''user-entity'', and ''item-entity'' interactions. Each two-way interaction can be represented as a graph and thus modeled using graph neural networks and knowledge graph embeddings. In the second stage, SARC uses the representations of users and items learned in the first step to make a recommendation.

B. KNOWLEDGE BASE QUESTION ANSWERING

TABLE 6. Representative models for knowledge base question answering.

Knowledge base question answering (KBQA) aims to respond to a question using information from a knowledge base (KB). In recent years, academics have focused chiefly on GNN-based solutions for addressing the difficulties of answering complicated questions; the relevant models are shown in Table 6.
Subgraph reasoning is a common method for obtaining the answer. QA-GNN [70] is an end-to-end question answering model in which the QA context is connected, as an additional node, to the topic entities to form the subgraph of the KG. Pre-trained language models are then used to score the relevance of KG nodes to the QA context based on their importance. Finally, the joint graph representation is updated through graph-based message passing. Deciphering Entity Links from Free Text (DELFT) [71] produces a dense and high-coverage semantic subgraph by linking question entity nodes to candidate entity nodes using text sentences from Wikipedia. This innovative graph neural network performs better on entity-rich questions due to the extensive coverage of its free-text evidence.
Open-domain question answering is a difficult task in KBQA, which aims to answer a question in natural language based on large-scale unstructured documents. The Relational GNN for Open-Domain Question Answering [69] is an OpenQA architecture. This model jointly updates embeddings from a knowledge graph and a collection of linked texts to learn contextual knowledge graph embeddings, and the knowledge graphs are enriched with contextualized relations. A bi-directional attention mechanism and hierarchical representation learning are also used for the open-domain question answering task in this approach.
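To illustrate the subgraph-construction step these question answering models share, the sketch below (our simplification, not the QA-GNN or DELFT pipeline) grows a working subgraph around the entities mentioned in the question and attaches the question itself as an extra context node connected to those topic entities.

def build_qa_subgraph(kg_triples, question_entities, hops=2):
    """Build a question-specific working subgraph.

    kg_triples        : iterable of (head, relation, tail) facts.
    question_entities : entities detected in the question text.
    Returns the retrieved triples plus synthetic edges that link a special
    QA-context node to every topic entity, so a GNN can propagate question
    information into the KG neighborhood.
    """
    frontier, kept, seen = set(question_entities), [], set()
    for _ in range(hops):
        next_frontier = set()
        for h, r, t in kg_triples:
            if (h in frontier or t in frontier) and (h, r, t) not in seen:
                kept.append((h, r, t))
                seen.add((h, r, t))
                next_frontier.update({h, t})
        frontier = next_frontier
    context_edges = [("QA_CONTEXT", "mentions", e) for e in question_entities]
    return context_edges + kept

kg = [("Paris", "capital_of", "France"), ("France", "located_in", "Europe"),
      ("Berlin", "capital_of", "Germany")]
print(build_qa_subgraph(kg, {"Paris"}))
# [('QA_CONTEXT', 'mentions', 'Paris'),
#  ('Paris', 'capital_of', 'France'), ('France', 'located_in', 'Europe')]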
C. DRUG-DRUG INTERACTIONS

TABLE 7. Representative models for drug-drug interactions.

Over the years, the graph neural network has been an emerging tool for drug-drug interaction (DDI) prediction but has not yet been widely applied; the relevant models are shown in Table 7. The graph energy neural network (GENN) [72] is the first model proposed explicitly for drug link-type correlations. Motivated by the intuition that an ''energy'' can be derived over the graph, a new energy function defined by the graph neural networks is formulated and used to incorporate the dependency structures. However, this approach only considers the correlations between drugs, and a neglected deficiency is that the relations between drugs and other entities such as targets and genes are not considered. To address this limitation, a novel end-to-end framework, the Knowledge graph neural network (KGNN) [73], is introduced to explore the topological information of each entity in the knowledge graph, which is beneficial for DDI prediction. In addition, KGNN aggregates rich neighborhood information with a bias to learn both the high-order structures and the semantic relations of the KG.
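The core computation in KGNN-style DDI models can be sketched briefly: each drug aggregates the embeddings of its KG neighbors (targets, genes, other drugs), and the pair is scored from the two aggregated representations. The following is a simplified illustration with our own function names, not the published KGNN architecture.

import numpy as np

def drug_representation(drug, kg, emb, hops=1):
    """Aggregate a drug's KG neighborhood (targets, genes, other drugs)."""
    frontier, collected = {drug}, [emb[drug]]
    for _ in range(hops):
        nxt = set()
        for h, _, t in kg:
            if h in frontier:
                collected.append(emb[t])
                nxt.add(t)
            elif t in frontier:
                collected.append(emb[h])
                nxt.add(h)
        frontier = nxt
    return np.mean(collected, axis=0)

def ddi_score(d1, d2, kg, emb):
    """Probability-like interaction score for a drug pair."""
    z1, z2 = drug_representation(d1, kg, emb), drug_representation(d2, kg, emb)
    return 1.0 / (1.0 + np.exp(-float(z1 @ z2)))     # sigmoid of the dot product

rng = np.random.default_rng(0)
entities = ["drugA", "drugB", "targetP", "geneG"]
emb = {e: rng.normal(size=6) for e in entities}      # toy entity embeddings
kg = [("drugA", "binds", "targetP"), ("drugB", "binds", "targetP"),
      ("targetP", "expressed_by", "geneG")]
print(round(ddi_score("drugA", "drugB", kg, emb), 3))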
D. RESEARCH IN OTHER AREAS
GNN-based knowledge graphs also have applications in many other fields. Table 8 lists the recently proposed models, covering scenarios of Fake News Detection (FND), Fault Localization (FL), Image-Text Matching (ITM), Personalized Review Generation (PRG), Situational Awareness (SA), Knowledge Tracing (KT), and Power System Network Topology Identification (PSNTI).
DEAP-FAKED [76] encodes news content using an NLP-based approach and then identifies, extracts, and maps the named entities to a KG. Finally, the entities in the KG are encoded using a GNN-based technique. This approach can achieve better results while utilizing only the news articles' titles and handling bias. The Optical Network Fault Localization approach [77] designed an alarm KG to assist network administrators in exploring and visualizing the relationships between alarms. A GGNN-based method was then put forward to reason about the associations between alarms and detect the root alarm. It is experimentally verified that the knowledge graph helps to construct an easy-to-understand alarm knowledge system, which leads to good accuracy.
Three methods are used in the Few-Shot KG-To-Text Generation Model [80]: representation alignment to help bridge the semantic gap between KG encodings and PLMs, relation-biased KG linearization to derive input representations, and multi-task learning to learn how the KG and the text correspond to each other. All of these assist in producing effective semantic representations in both few-shot and fully supervised settings. In the News Knowledge Driven Graph Neural Network (NKD-GNN) [81], the KG was built using named entities extracted from the TopNews dataset, and the relationships between those entities were explored across the KG. Therefore, by examining all relationships and implicitly assuming relationships between named entities in the news knowledge graph, this model is able to choose the best candidate for each placeholder in the news image template caption. The KG-Enhanced Review Generation Model [79] adopts Caps-GNN to learn graph capsules that encode the underlying characteristics of the HKG. The generation process contains aspect sequence generation and sentence generation. This model performs better than all the baselines, largely because KG information is included in the multi-stage generation process. The SA GNN [74] discussed four ideas for making predictions with collective AI and proposed a GNN framework that jointly learns object representations from multiple agents. Since each AI has a unique, incomplete, and noisy view of the knowledge graph, multiple AI agents are required to produce a forecast collectively.
The Graph-Based Knowledge Tracing approach [78] reformulated the knowledge tracing task as a time-series node-level classification problem in a GNN. This approach exhibits more interpretable predictions because it directly models the knowledge state for each concept and further models the edge weights using K separate neural networks for K edge types. The graph neural network can serve as a technological tool for inferring missing information among the entities in a knowledge graph; therefore, the two can be merged and adopted for topology identification. The Power System Network Topology Identification approach [75] contains an additional process for inferring conflicting information. Meanwhile, the knowledge inference over contradictory information is conducted on the basis of the GNN.

V. CONCLUSION
The knowledge graph, a form of data representation that uses a graph structure to model the connections between things, has attracted much attention and faces many challenges. This paper depicted the importance and necessity of knowledge graph embedding and how KGE is used to solve KG problems. We presented a thorough review of existing GNN-based approaches that mainly focus on four types of KG tasks, i.e., link prediction, knowledge graph alignment, knowledge graph reasoning, and node classification. We went over the specifics of the models as well as the benefits and contributions of such strategies. After that, this work focused on how KG methods based on GNNs can be applied to practical application areas such as recommender systems, question answering, and drug-drug interaction. To the best of our knowledge, this is the first complete survey of graph neural network technology for knowledge graphs. We believe that the investigation of KGs using GNNs will receive increasing attention in the near future.

ACKNOWLEDGMENT
The authors would like to thank Universiti Teknikal Malaysia Melaka for supporting this research.
REFERENCES
[1] S. Sagiroglu and D. Sinanc, "Big data: A review," in Proc. CTS, San Diego, CA, USA, 2013, pp. 42–47.
[2] D. Fensel, U. Şimşek, K. Angele, E. Huaman, E. Kärle, O. Panasiuk, I. Toma, J. Umbrich, and A. Wahler, Knowledge Graphs. Cham, Switzerland: Springer, 2020, pp. 1–10.
[3] B. Jiang, L. Tan, Y. Ren, and F. Li, "Intelligent interaction with virtual geographical environments based on geographic knowledge graph," ISPRS Int. J. Geo-Inf., vol. 8, no. 10, p. 428, Sep. 2019.
[4] D. Ringler and H. Paulheim, "One knowledge graph to rule them all? Analyzing the differences between DBpedia, YAGO, Wikidata & co.," in Proc. Joint German/Austrian Conf. Artif. Intell., Dortmund, Germany, 2017, pp. 366–372.
[5] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, "DBpedia: A nucleus for a web of open data," in The Semantic Web. Berlin, Germany: Springer, 2007, pp. 722–735.
[6] H. Paulheim, "Knowledge graph refinement: A survey of approaches and evaluation methods," Semantic Web, vol. 8, no. 3, pp. 489–508, Dec. 2016.
[7] J. Pujara, H. Miao, L. Getoor, and W. W. Cohen, "Large-scale knowledge graph identification using PSL," in Proc. AAAI-FSS, Arlington, TX, USA, 2013, pp. 1–4.
[8] C. Wang, M. Gao, X. He, and R. Zhang, "Challenges in Chinese knowledge graph construction," in Proc. 31st IEEE Int. Conf. Data Eng. Workshops, Apr. 2015, pp. 59–61.
[9] A. Rossi, D. Barbosa, D. Firmani, A. Matinata, and P. Merialdo, "Knowledge graph embedding for link prediction: A comparative analysis," ACM Trans. Knowl. Discovery Data, vol. 15, no. 2, pp. 1–49, Apr. 2021.
[10] Z. Sun, W. Hu, Q. Zhang, and Y. Qu, "Bootstrapping entity alignment with knowledge graph embedding," in Proc. 27th Int. Joint Conf. Artif. Intell., Stockholm, Sweden, Jul. 2018, pp. 4396–4402.
[11] X. Chen, S. Jia, and Y. Xiang, "A review: Knowledge reasoning over knowledge graph," Expert Syst. Appl., vol. 141, Mar. 2020, Art. no. 112948.
[12] N. Guan, D. Song, and L. Liao, "Knowledge graph embedding with concepts," Knowl.-Based Syst., vol. 164, pp. 38–44, Jan. 2019.
[13] Z. Liu, M. Sun, and Y. Lin, "Knowledge representation learning: A review," J. Comput. Res. Develop., vol. 53, no. 2, pp. 247–261, 2016.
[14] X. Zhao, W. Zeng, J. Tang, W. Wang, and F. M. Suchanek, "An experimental study of state-of-the-art entity alignment approaches," IEEE Trans. Knowl. Data Eng., vol. 34, no. 6, pp. 2610–2625, Jun. 2022.
[15] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, "The graph neural network model," IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 61–80, Jan. 2008.
[16] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, "Graph neural networks: A review of methods and applications," AI Open, vol. 1, pp. 57–81, Apr. 2021.
[17] I. Portugal, P. Alencar, and D. Cowan, "The use of machine learning algorithms in recommender systems: A systematic review," Expert Syst. Appl., vol. 97, pp. 205–227, May 2018.
[18] W. Ahmed, A. Dasan, and A. P. Babu, "Developing an intelligent question answering system," Int. J. Educ. Manage. Eng., vol. 7, no. 6, pp. 50–61, Nov. 2017.
[19] B. Percha and R. B. Altman, "Informatics confronts drug–drug interactions," Trends Pharmacolog. Sci., vol. 34, no. 3, pp. 178–184, Mar. 2013.
[20] S. Auer, V. Kovtun, M. Prinz, A. Kasprzik, M. Stocker, and M. E. Vidal, "Towards a knowledge graph for science," in Proc. 8th Int. Conf. Web Intell., Mining Semantics, Novi Sad, Serbia, Jun. 2018, pp. 1–6.
[21] Z. Liu and J. Zhou, "Introduction to graph neural networks," Synth. Lectures Artif. Intell. Mach. Learn., vol. 14, no. 2, pp. 1–127, Mar. 2020.
[22] Y. Zhang, Q. Fang, S. Qian, and C. Xu, "Multi-modal multi-relational feature aggregation network for medical knowledge representation learning," in Proc. 28th ACM Int. Conf. Multimedia, Seattle, WA, USA, Oct. 2020, pp. 3956–3965.
[23] Z. Zhang, F. Zhuang, H. Zhu, Z. Shi, H. Xiong, and Q. He, "Relational graph neural network with hierarchical attention for knowledge graph completion," in Proc. AAAI Conf. Artif. Intell., New York, NY, USA, 2020, pp. 9612–9619.
[24] M. Zhao, W. Jia, and Y. Huang, "Attention-based aggregation graph networks for knowledge graph information transfer," in Proc. PAKDD, Singapore, 2020, pp. 542–554.
[25] Y. Zhang, X. Chen, Y. Yang, A. Ramamurthy, B. Li, Y. Qi, and L. Song, "Efficient probabilistic logic reasoning with graph neural networks," 2020, arXiv:2001.11850.
[26] K. Ji, B. Hui, and G. Luo, "Graph attention networks with local structure awareness for knowledge graph completion," IEEE Access, vol. 8, pp. 224860–224870, 2020.
[27] G. Salha, S. Limnios, R. Hennequin, V.-A. Tran, and M. Vazirgiannis, "Gravity-inspired graph autoencoders for directed link prediction," in Proc. 28th ACM Int. Conf. Inf. Knowl. Manage., Beijing, China, Nov. 2019, pp. 589–598.
[28] S. Liu, B. Grau, I. Horrocks, and E. Kostylev, "INDIGO: GNN-based inductive knowledge graph completion using pair-wise encoding," in Proc. Adv. Neural Inf. Process. Syst., 2021, pp. 2034–2045.
[29] K. Teru, E. Denis, and W. Hamilton, "Inductive relation prediction by subgraph reasoning," in Proc. 37th Int. Conf. Mach. Learn., 2020, pp. 9448–9457.
[30] M. Zhang and Y. Chen, "Link prediction based on graph neural networks," in Proc. 32nd NeurIPS, Montreal, QC, Canada, 2018, pp. 5171–5181.
[31] S. Wang, X. Wei, C. N. N. D. Santos, Z. Wang, R. Nallapati, A. Arnold, B. Xiang, P. S. Yu, and I. F. Cruz, "Mixed-curvature multi-relational graph neural network for knowledge graph completion," in Proc. Web Conf., Ljubljana, Slovenia, Apr. 2021, pp. 1761–1771.
[32] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling, "Modeling relational data with graph convolutional networks," in Proc. ESWC, Crete, Greece, 2018, pp. 593–607.
[33] X. Liu, H. Tan, Q. Chen, and G. Lin, "RAGAT: Relation aware graph attention network for knowledge graph completion," IEEE Access, vol. 9, pp. 20840–20849, 2021.
[34] Z. Wang, Z. Ren, C. He, P. Zhang, and Y. Hu, "Robust embedding with multi-level structures for link prediction," in Proc. 28th Int. Joint Conf. Artif. Intell., Macao, China, Aug. 2019, pp. 5240–5246.
[35] Z. Zhang, J. Huang, and Q. Tan, "Association rules enhanced knowledge graph attention network," Knowl.-Based Syst., vol. 239, Mar. 2022, Art. no. 108038.
[36] Z. Li, H. Liu, Z. Zhang, T. Liu, and N. N. Xiong, "Learning knowledge graph embedding with heterogeneous relation attention networks," IEEE Trans. Neural Netw. Learn. Syst., early access, Feb. 19, 2021, doi: 10.1109/TNNLS.2021.3055147.
[37] Z. Ning, Z. Qiao, H. Dong, Y. Du, and Y. Zhou, "LightCAKE: A lightweight framework for context-aware knowledge graph embedding," in Proc. PAKDD, 2021, pp. 181–193.
[38] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. D. Melo, C. Gutierrez, S. Kirrane, J. E. L. Gayo, R. Navigli, and S. Neumaier, "Knowledge graphs," Synth. Lect. Comput. Archit., vol. 12, no. 2, pp. 1–257, Nov. 2021.
[39] B. D. Trisedya, J. Qi, and R. Zhang, "Entity alignment between knowledge graphs using attribute embeddings," in Proc. AAAI Conf. Artif. Intell., Honolulu, HI, USA, 2019, pp. 297–304.
[40] Q. Zhu, H. Wei, B. Sisman, D. Zheng, C. Faloutsos, X. L. Dong, and J. Han, "Collective multi-type entity alignment between knowledge graphs," in Proc. Web Conf., Taipei, Taiwan, Apr. 2020, pp. 2241–2252.
[41] E. Faerman, O. Voggenreiter, F. Borutta, T. Emrich, M. Berrendorf, and M. Schubert, "Graph alignment networks with node matching scores," in Proc. NeurIPS, Vancouver, BC, Canada, 2019, pp. 1–7.
[42] Z. Sun, C. Wang, W. Hu, M. Chen, J. Dai, W. Zhang, and Y. Qu, "Knowledge graph alignment network with gated multi-hop neighborhood aggregation," in Proc. AAAI Conf. Artif. Intell., New York, NY, USA, 2020, pp. 222–229.
[43] Z. Wang, J. Yang, and X. Ye, "Knowledge graph alignment with entity-pair embedding," in Proc. Conf. Empirical Methods Natural Lang. Process. (EMNLP), Punta Cana, Dominican Republic, 2020, pp. 1672–1680.
[44] X. Mao, W. Wang, H. Xu, M. Lan, and Y. Wu, "MRAEA: An efficient and robust entity alignment approach for cross-lingual knowledge graph," in Proc. 13th Int. Conf. Web Search Data Mining, Houston, TX, USA, Jan. 2020, pp. 420–428.
[45] Y. Cao, Z. Liu, C. Li, Z. Liu, J. Li, and T.-S. Chua, "Multi-channel graph neural network for entity alignment," 2019, arXiv:1908.09898.
[46] L. Guo, W. Wang, Z. Sun, C. Liu, and W. Hu, "Decentralized knowledge graph representation learning," 2020, arXiv:2010.08114.
[47] Y. Yan, L. Liu, Y. Ban, B. Jing, and H. Tong, "Dynamic knowledge graph alignment," in Proc. AAAI Conf. Artif. Intell., 2021, pp. 4564–4572.
[48] Y. Yue, C. Yang, Y. Wang, P. G. C. N. Senarathne, J. Zhang, M. Wen, and D. Wang, "A multilevel fusion system for multirobot 3-D mapping using heterogeneous sensors," IEEE Syst. J., vol. 14, no. 1, pp. 1341–1352, Mar. 2020.
[49] Y. Tao, Y. Li, and Z. Wu, "Temporal link prediction via reinforcement learning," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Toronto, ON, Canada, Jun. 2021, pp. 3470–3474.
[50] G. Wan, S. Pan, C. Gong, C. Zhou, and G. Haffari, "Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning," in Proc. 29th Int. Joint Conf. Artif. Intell., Yokohama, Japan, Jul. 2020, pp. 1926–1932.
[51] Q. Wang, Y. Hao, and F. Chen, "Deepening the IDA* algorithm for knowledge graph reasoning through neural network architecture," Neurocomputing, vol. 429, pp. 101–109, Mar. 2021.
[52] X. Xu, W. Feng, Y. Jiang, X. Xie, Z. Sun, and Z.-H. Deng, "Dynamically pruned message passing networks for large-scale knowledge graph reasoning," 2019, arXiv:1909.11334.
[53] K. Nie, K. Zeng, and Q. Meng, "Knowledge reasoning method for military decision support knowledge graph mixing rule and graph neural networks learning together," in Proc. Chin. Autom. Congr. (CAC), Shanghai, China, Nov. 2020, pp. 4013–4018.
[54] X. Zhao, Y. Jia, A. Li, R. Jiang, K. Chen, and Y. Wang, "Target relational attention-oriented knowledge graph reasoning," Neurocomputing, vol. 461, pp. 577–586, Oct. 2021.
[55] L. Li, Z. Bi, H. Ye, S. Deng, H. Chen, and H. Tou, "Text-guided legal knowledge graph reasoning," in Proc. CCKS, Guangzhou, China, 2021, pp. 27–39.
[56] A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler, T. Schardl, and C. Leiserson, "EvolveGCN: Evolving graph convolutional networks for dynamic graphs," in Proc. AAAI Conf. Artif. Intell., New York, NY, USA, 2020, pp. 5363–5370.
[57] Y. Ren, B. Liu, C. Huang, P. Dai, L. Bo, and J. Zhang, "Heterogeneous deep graph infomax," 2019, arXiv:1911.08538.
[58] F. Hu, Y. Zhu, S. Wu, L. Wang, and T. Tan, "Hierarchical graph convolutional networks for semi-supervised node classification," 2019, arXiv:1902.06667.
[59] Y. Liu, S. Yang, Y. Xu, C. Miao, M. Wu, and J. Zhang, "Contextualized graph attention network for recommendation with item knowledge graph," IEEE Trans. Knowl. Data Eng., early access, May 24, 2021, doi: 10.1109/TKDE.2021.3082948.
[60] Y. Wang, Z. Liu, Z. Fan, L. Sun, and P. S. Yu, "DSKReG: Differentiable sampling on knowledge graph for recommendation with relational GNN," in Proc. 30th ACM Int. Conf. Inf. Knowl. Manage., Oct. 2021, pp. 3513–3517.
[61] J. Ouyang, Y. Jin, and X. Zhou, "Dual knowledge multimodal network for recommender system," to be published.
[62] C. Chen, M. Zhang, W. Ma, Y. Liu, and S. Ma, "Jointly non-sampling learning for knowledge graph enhanced recommendation," in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retr., Xi'an, China, Jul. 2020, pp. 189–198.
[63] X. Wang, T. Huang, D. Wang, Y. Yuan, Z. Liu, X. He, and T.-S. Chua, "Learning intents behind interactions with knowledge graph for recommendation," in Proc. Web Conf., Ljubljana, Slovenia, Apr. 2021, pp. 878–887.
[64] W. Zhang, Y. Cao, and C. Xu, "SARC: Split-and-recombine networks for knowledge-based recommendation," in Proc. IEEE 31st Int. Conf. Tools Artif. Intell. (ICTAI), Portland, OR, USA, Nov. 2019, pp. 652–659.
[65] Z. Li, Z. Cui, S. Wu, X. Zhang, and L. Wang, "Fi-GNN: Modeling feature interactions via graph neural networks for CTR prediction," in Proc. 28th ACM Int. Conf. Inf. Knowl. Manage., Beijing, China, Nov. 2019, pp. 539–548.
[66] C.-Y. Tai, M.-R. Wu, Y.-W. Chu, and S.-Y. Chu, "GraphSW: A training protocol based on stage-wise training for GNN-based recommender model," 2019, arXiv:1908.05611.
[67] H. Wang, F. Zhang, M. Zhang, J. Leskovec, M. Zhao, W. Li, and Z. Wang, "Knowledge-aware graph neural networks with label smoothness regularization for recommender systems," in Proc. 25th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, Anchorage, AK, USA, Jul. 2019, pp. 968–977.
[68] C. Gao, X. Wang, X. He, and Y. Li, "Graph neural networks for recommender system," in Proc. 15th ACM Int. Conf. Web Search Data Mining, Phoenix, AZ, USA, Feb. 2022, pp. 1623–1625.
[69] S. Vivona and K. Hassani, "Relational graph representation learning for open-domain question answering," 2019, arXiv:1910.08249.
[70] M. Yasunaga, H. Ren, A. Bosselut, P. Liang, and J. Leskovec, "QA-GNN: Reasoning with language models and knowledge graphs for question answering," 2021, arXiv:2104.06378.
[71] C. Zhao, C. Xiong, X. Qian, and J. Boyd-Graber, "Complex factoid question answering with a free-text knowledge graph," in Proc. Web Conf., Taipei, Taiwan, Apr. 2020, pp. 1205–1216.
[72] T. Ma, J. Shang, C. Xiao, and J. Sun, "GENN: Predicting correlated drug-drug interactions with graph energy neural networks," 2019, arXiv:1910.02107.
[73] X. Lin, Z. Quan, Z.-J. Wang, T. Ma, and X. Zeng, "KGNN: Knowledge graph neural network for drug-drug interaction prediction," in Proc. 29th Int. Joint Conf. Artif. Intell., Yokohama, Japan, Jul. 2020, pp. 2739–2745.
[74] M. Jiang, "Improving situational awareness with collective artificial intelligence over knowledge graphs," Proc. SPIE, vol. 11413, Apr. 2020, Art. no. 114130.
[75] C. Wang, J. An, and G. Mu, "Power system network topology identification based on knowledge graph and graph neural network," Frontiers Energy Res., vol. 8, p. 403, Feb. 2021.
[76] M. Mayank, S. Sharma, and R. Sharma, "DEAP-FAKED: Knowledge graph based approach for fake news detection," 2021, arXiv:2107.10648.
[77] Z. Li, Y. Zhao, Y. Li, S. Rahman, F. Wang, X. Xin, and J. Zhang, "Fault localization based on knowledge graph in software-defined optical networks," J. Lightw. Technol., vol. 39, no. 13, pp. 4236–4246, Apr. 8, 2021.
[78] H. Nakagawa, Y. Iwasawa, and Y. Matsuo, "Graph-based knowledge tracing: Modeling student proficiency using graph neural network," in Proc. IEEE/WIC/ACM Int. Conf. Web Intell., Thessaloniki, Greece, Oct. 2019, pp. 156–163.
[79] J. Li, S. Li, W. X. Zhao, G. He, Z. Wei, N. J. Yuan, and J.-R. Wen, "Knowledge-enhanced personalized review generation with capsule graph neural network," in Proc. 29th ACM Int. Conf. Inf. Knowl. Manage., Ireland, Oct. 2020, pp. 735–744.
[80] J. Li, T. Tang, W. X. Zhao, Z. Wei, N. J. Yuan, and J.-R. Wen, "Few-shot knowledge graph-to-text generation with pretrained language models," 2021, arXiv:2106.01623.
[81] Z. Yumeng, Y. Jing, G. Shuo, and L. Limin, "News image-text matching with news knowledge graph," IEEE Access, vol. 9, pp. 108017–108027, 2021.

ZI YE received the bachelor's degree in mathematics and statistical science from University College London, U.K., in 2009, and the master's degree in applied statistics from the University of Oxford, U.K., in 2010. She is currently pursuing the Ph.D. degree with Universiti Teknikal Malaysia Melaka. Her research interests include artificial intelligence and machine learning.

FENGYAN SONG received the M.S. degree from the Department of Physics, Peking University. He is proficient in quantum field theory and has recently moved into the field of artificial intelligence.

GOH ONG SING is currently an Assistant Vice Chancellor at the Office of Industry and Community Network. His research interests include the development of intelligent agents, machine learning and speech technology, conversational robots, and mobile services. He has led research grants funded by the Malaysian Government's Intensified Research.