Using Graph Neural Networks for Social Recommendations
Dharahas Tallapally 1, John Wang 2, Katerina Potika 1,* and Magdalini Eirinaki 2
1 Department of Computer Science, San José State University, San José, CA 95192, USA;
[email protected]
2 Department of Computer Engineering, San José State University, San José, CA 95192, USA;
[email protected] (J.W.); [email protected] (M.E.)
* Correspondence: [email protected]; Tel.: +1-408-924-5134
Abstract: Recommender systems have revolutionized the way users discover and engage with
content. Moving beyond the collaborative filtering approach, most modern recommender systems
leverage additional sources of information, such as context and social network data. Such data
can be modeled using graphs, and the recent advances in Graph Neural Networks have led to the
prominence of a new family of graph-based recommender system algorithms. In this work, we
propose the RelationalNet algorithm, which models not only user–item and user–user relationships
but also item–item relationships as graphs and uses them as input to the recommendation process.
The rationale for utilizing item–item interactions is to enrich the item embeddings by leveraging the
similarities between items. By using Graph Neural Networks (GNNs), RelationalNet incorporates
social influence and similar item influence into the recommendation process and captures more
accurate user interests, especially when traditional methods fall short due to data sparsity. Such
models improve the accuracy and effectiveness of recommendation systems by leveraging social
connections and item interactions. Results demonstrate that RelationalNet outperforms current
state-of-the-art social recommendation algorithms.
Keywords: social recommendation algorithm; graph neural networks; recommender systems; social
network; influence diffusion
In recent years, deep learning (DL) models have emerged as the dominant underlying
architecture for RSs, garnering substantial interest in both academic research and industrial
applications [4–9]. The allure of deep learning lies in its ability to capture complex and
non-linear relationships between users and items. By leveraging sophisticated neural
architectures, DL models excel at capturing intricate user–item dynamics, thus enhancing
the accuracy and relevance of recommendations. Additionally, these models offer the
flexibility to seamlessly integrate diverse data sources, such as contextual information,
textual data, and visual cues, thereby enriching the recommendation process with a wealth
of information.
Within the realm of DL algorithms, there exists a distinct category known as graph
learning-based methods, which offer a unique perspective in RSs. In these methods, RS
data are represented and analyzed through the lens of graphs. Specifically, the interactions
between users and items can be depicted as interconnected nodes in a graph, where the
links reflect the relationships between them. By leveraging graph-based representations,
RSs gain the advantage of incorporating structured external information, such as social
relationships among users, into the recommendation process. This integration of graph
learning provides a unified framework for modeling the diverse and abundant data present
in RSs.
Early explorations in graph learning-based RSs have focused on utilizing graph em-
bedding techniques to capture the relationships between nodes. These techniques can be
further categorized into factorization-based methods, distributed representation-based
methods, and neural embedding-based methods [10]. Their main aim is to learn mean-
ingful representations of nodes in the graph that capture their inherent relationships and
characteristics.
Recently, there has been a surge of interest in employing Graph Neural Networks
(GNNs) for recommendation tasks, owing to their exceptional ability to learn from graph-
structured data [8,11–13]. GNN-based recommendation models have attracted significant
attention due to their capacity to effectively capture the complex relationships and depen-
dencies among users, items, and other relevant features within the graph structure. By
leveraging the expressive power of GNNs, these models hold great promise for enhancing
the accuracy and effectiveness of recommender systems.
The utilization of graph learning techniques in RSs provides a valuable avenue for
leveraging the rich and interconnected nature of user–item interactions. By incorporating
graph structures and employing advanced methods like GNNs, RSs can effectively harness
the power of heterogeneous and interconnected data sources to generate more accurate and
personalized recommendations. As the field continues to advance, graph learning-based
methods are poised to play a pivotal role in the evolution of RSs, offering novel approaches
to address the challenges posed by diverse and interconnected data.
In this paper, we propose RelationalNet, an architecture that leverages GNNs to repre-
sent the relationships between users and items using various graph forms, which in turn
are fed into the prediction process to yield more accurate recommendations. In particular,
the proposed architecture encompasses the collaborative filtering notion of user–item and
item–item connections, while also incorporating the users’ social network relationships
(user–user). Such a rich and robust representation allows for more accurate predictions,
even in the case of sparse user–item input data, which is most often the main characteristic
of such systems, and can be used to address the cold-start problem, as item–item connec-
tions are inherent to the application domain and not dependent on the users’ ratings (or
absence thereof). Our approach is an extension of the successful DiffNet++ architecture [13]
that introduces additional types of embeddings. Via experimental evaluation using the
Yelp dataset, we demonstrate that our approach not only outperforms more established
graph-based social recommendation models but also improves on the recently proposed
DiffNet++ in generating accurate top-n recommendations for the user.
The rest of the paper is organized as follows: in Section 2, we review in detail the state
of the art (SOTA) in the areas of social recommenders, GNNs, and their intersection; in
Algorithms 2023, 16, 515 3 of 18
2. Related Work
With the widespread adoption of online social platforms, social RSs have emerged
as a highly promising approach that leverages social networks among users to enhance
recommendation performance [14–16]. Grounded in sociological concepts, such as ho-
mophily and social influence [17], this field of study operates under the premise that
users’ preferences are more profoundly shaped by those of their interconnected peers
than by those of unfamiliar users [18]. Tang et al. [15] give a narrow definition of social
recommendation as “any recommendation with online social relations as an additional
input, i.e., augmenting an existing recommendation engine with additional social signals”.
Researchers have long recognized the influence of social connections on individuals, high-
lighting the phenomenon of similar preferences among social neighbors as information
diffuses within social networks [1,2,11,19–23]. Social regularization [1,2] has been shown to
be effective in social recommendation scenarios, operating under the assumption that users
with similar preferences exhibit shared latent preferences within popular latent factor-based
models [24].
In the realm of social recommendations, GNNs have emerged as a powerful tool for
capturing the intricate relationships between users, items, and other contextual features,
such as time and location [11–13,22]. The incorporation of GNNs into social recommenda-
tion models allows for a comprehensive understanding of the complex dynamics present
in social networks, resulting in more accurate and contextually relevant recommenda-
tions. Recent studies investigate various aspects, such as modeling user–item interactions,
capturing social influence, incorporating contextual information, and addressing scala-
bility challenges. Through a deeper understanding of these developments, researchers
and practitioners can gain insights into the potential benefits and challenges associated
with integrating GNNs into social recommendation frameworks, thereby fostering further
innovation and advancements in this exciting research area.
Ying et al. [22] introduced PinSage, a novel framework based on GNNs, designed
specifically for personalized feed recommendations. The motivation behind PinSage stems
from the scalability limitations observed in traditional CF methods. PinSage revolutionizes
the landscape of personalized feeds (news) recommendations by constructing a graph
representation that encompasses both items and users. Leveraging the expressive power
of GNNs, PinSage efficiently learns personalized feed representations for each user. This
graph-based approach enables the model to capture the complex relationships between
items and users, thereby facilitating accurate and relevant pin recommendations. PinSage
innovatively combines content-based and collaborative filtering approaches.
In their paper, Fan et al. [11] presented GraphRec, an innovative recommendation
algorithm that leverages the power of graphs. They highlight the limitations of tradi-
tional recommendation techniques, particularly in dealing with the cold-start problem
and effectively capturing intricate user–item connections. GraphRec aims to overcome
these challenges by modeling user–item interactions and rating scores as a heterogeneous
graph, and by differentiating tie strengths to account for the heterogeneous strengths of
social relations.
Wu et al. [12] proposed Diffnet, a neural influence diffusion model for social recommendations.
Note that, in this context, the term refers to information diffusion in a social graph
(not to be confused with generative AI diffusion models). Diffnet utilizes a user's social
network data to provide personalized recommendations. Its neural architecture comprises
four main components: the embedding layer, the fusion layer, the layer-wise influence
diffusion layers, and the prediction layer. Once the influence diffusion process reaches
stability, the output layer predicts the final preference for a user–item pair. Compared
to other existing social recommendation models, the Diffnet architecture leverages both
user–item interaction data and social network information to enhance the recommendation
accuracy.
In subsequent research work, Wu et al. [13] introduced an enhanced version of the
Diffnet model, called Diffnet++. This enhanced model builds upon the neural influence
diffusion framework for social recommendations. In addition to learning user embeddings
through influence diffusion from their social network, Diffnet++ incorporates user interest
embeddings acquired through interest diffusion from user–item interactions. Each user's
social connections form a user–user graph, from which the user influence embeddings are
learned; similarly, each user's item interactions form a user–consumed-items graph, from
which the user interest embeddings are learned.
Diffnet++ incorporates a neural architecture consisting of four essential components:
the embedding layer, the fusion layer, the layer-wise influence diffusion layers, and the
prediction layer. To ensure the efficacy of the user embeddings in both influence and
interest diffusion graphs, a node attention layer is employed to selectively emphasize
the most relevant information from the surrounding connections. Subsequently, after
training the user–influence embeddings and user–interest embeddings separately, they
are aggregated in a graph attention layer to generate the final user embeddings from the
influence and interest perspectives. The model predicts the final preference for a user–item
pair once the influence and interest diffusion processes have reached a stable state. This
comprehensive architecture empowers the model to effectively capture both influence and
interest dynamics.
By incorporating item relations, the RelationalNet algorithm emphasizes both user and
item influences and interests by adding the layer-wise item diffusion layer. This extension
enables the algorithm to capture and leverage the complex relationships between users
and items. Consequently, the algorithm aims to provide enhanced user recommendations
by considering the individual interests of users and the intricate interplay between users
and items. The inclusion of the item–item graph and the item–consumed users graph in
RelationalNet further enriches the modeling capabilities, allowing for a more comprehen-
sive representation of the user–item ecosystem. Through the integration of these additional
relational graphs, the RelationalNet model aspires to deliver more accurate and contex-
tually relevant recommendations to users, considering a broader spectrum of influences
and connections.
3. Methodology
In this section, we discuss in detail the proposed architecture of the RelationalNet
model for social recommendations utilizing GNNs.
We represent the social graph $G_S$ using an adjacency matrix $S$ of size $\mathbb{R}^{m \times m}$, where each element $s_{u_1 u_2} = 1$ if user $u_1$ is
friends with/follows/trusts user $u_2$; otherwise, it is equal to 0. Note that in the case of
friendships (bidirectional edges), the adjacency matrix $S$ is symmetric, whereas in the case
of follows/trust (unidirectional edges), the matrix is asymmetric.
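As a sketch of this construction (with hypothetical toy edges), the adjacency matrix $S$ can be built so that follow/trust edges stay unidirectional while friendship edges are mirrored:

```python
import numpy as np

m = 4  # number of users
# Hypothetical edges: "follows/trust" is directed, "friendship" is undirected.
follows = [(0, 1), (2, 1)]
friends = [(0, 3)]

S = np.zeros((m, m), dtype=int)
for u1, u2 in follows:
    S[u1, u2] = 1                 # unidirectional edge: S may be asymmetric
for u1, u2 in friends:
    S[u1, u2] = S[u2, u1] = 1     # bidirectional edge: symmetric entries

print(np.array_equal(S, S.T))  # False: the follow edges break symmetry
```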
Similar to the user–user social connections, item–item relations are modeled as a graph
$G_T = (V, V_T)$, where $V_T$ represents the item–item connections (edges) between items
(nodes). We represent graph $G_T$ using an adjacency matrix $T$ of size $\mathbb{R}^{n \times n}$, where each
element $t_{i_1 i_2} = 1$ if item $i_1$ is similar to item $i_2$; otherwise, it is equal to 0. Consequently, this
adjacency matrix is symmetric. The similarity function is domain-based, e.g., common
characteristics or categories.
Additionally, every user $a$ is associated with a real-valued embedding, denoted as $x_a$,
in the user embedding matrix $X \in \mathbb{R}^{D \times m}$, where $D$ is the number of embedded features
(or dimensions) of each user. Similarly, every item $i$ is associated with a real-valued
embedding, denoted as $y_i$, in the item embedding matrix $Y \in \mathbb{R}^{D \times n}$.
As previously mentioned, the objective in CF RSs is to predict the unknown/unseen
ratings (or, more generally, preferences) of users for items, given as input a utility matrix
that includes the users' past ratings for items. In our work, we generalize this objective to
incorporate additional relations, as follows.
Given a utility matrix $R$ consisting of the ratings $r_{ui}$ of users $u$ for items $i$, the user–user
social network adjacency matrix $S$, the item–item network adjacency matrix $T$, and
the two associated real-valued embedding matrices for users ($X$) and items ($Y$), predict the
unknown/unseen preferences/ratings of users towards items, represented by the $m \times n$
matrix $\hat{R}$:

$\hat{R} = f(R, S, T, X, Y)$ (1)
Figure 1. RelationalNet algorithm with k layers. Top part: User–user graph and user–consumed–
items graph. Lower part: Item–consumed–users graph and item–item graph.
• Similarly, let $Q \in \mathbb{R}^{n \times D}$ represent the free latent embedding matrix of items with
$D$ dimensions. The embedding layer executes an index-selection operation to generate
the free latent vector $q_b$ of item $b$ from the $b$th row.
Figure 3. Embedding layer example for users’ free latent embedding matrix.
$l_i^{k+1} = \alpha_0\left(W^k \times \left[l_{T_i}^{k+1}, l_i^k\right]\right)$ (7)
Figure 5. Diffusion at the node attention layer: Layer 1 is from 1-hop neighbors, Layer 2 from
2-hop neighbors, and Layer 3 from 3-hop neighbors.
Similarly, the items consumed by each user and the users who consumed each item are
trained for $k$ layers using attention.
• Let each user $a$ have $\tilde{h}_a^k$ as the latent embedding at the $k$th layer; then user $a$'s latent
embedding for the consumed item nodes in $G_R$ at layer $k + 1$ is

$\tilde{h}_{V_a}^{k+1} = g\left(\tilde{h}_b^k \mid b \in V_a\right)$ (8)
$\tilde{l}_{S_i}^{k+1} = g\left(\tilde{l}_b^k \mid b \in S_i\right)$ (10)

where $S_i$ represents the users who consumed item $i$ and $g(x)$ is an attention function. The
item $i$ latent embeddings can be trained for layer $k + 1$ with a combination of $\tilde{l}_i^k$ and
the consumed users' latent embeddings $\tilde{l}_{S_i}^{k+1}$:
where MLP1 and MLP2 are multi-layer perceptrons used to capture the intricate
connections between item relations and user embeddings.
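The attention function $g(\cdot)$ in the aggregation equations is not fully specified here; a minimal sketch, assuming a simple concatenation-based scoring vector `w` (an illustrative stand-in for the learned attention parameters), could look like:

```python
import numpy as np

def softmax(z):
    z = z - z.max()        # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_aggregate(target, neighbors, w):
    """Sketch of g(.): score each neighbor against the target embedding,
    normalize with softmax, and return the attention-weighted sum."""
    scores = np.array([w @ np.concatenate([target, nb]) for nb in neighbors])
    alphas = softmax(scores)                      # attention weights sum to 1
    return (alphas[:, None] * np.asarray(neighbors)).sum(axis=0)

rng = np.random.default_rng(0)
D = 8
h_a = rng.normal(size=D)                          # user a's embedding at layer k
consumed = [rng.normal(size=D) for _ in range(5)] # item embeddings, b in V_a
h_next = attention_aggregate(h_a, consumed, rng.normal(size=2 * D))
print(h_next.shape)  # (8,)
```

In the actual model the scoring vector is learned jointly with the rest of the network; the sketch only illustrates the aggregation mechanics.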
Figure 6. Example of the graph attention at the $(k + 1)$th layer for the user embedding $u_a^{k+1}$ only.
$\min L(R, \hat{R}) = \sum_{a=1}^{M} \sum_{i \in D_a} \sigma\left(\hat{r}_{ai} - r_{ai}\right)$ (15)
4. Experimental Evaluation
4.1. Dataset
Similar to previous work, we employed the Yelp dataset [28] to evaluate our approach.
The Yelp dataset is ideal, as it includes not only ratings but also additional information
on businesses, reviews, check-ins, tips, and users. It contains data on various types of
businesses, including restaurants, bars, and cafes. These details include their names,
addresses, phone numbers, categories, ratings, operational hours, and reviews. Individual
users provide a star rating from 1 to 5 (5 being the best) and compose written evaluations
for businesses. Most importantly, the dataset incorporates social network information,
such as the followers of each Yelp user and the number of fans subscribed to a business.
To analyze the Yelp dataset, a preprocessing step converts star ratings of 3 or higher
to 1, indicating positive sentiment towards the business, and ratings below 3 to 0,
indicating negative sentiment. To obtain insights from
the written reviews, the gensim tool and the Word2vec [29] model are employed to generate
the embedding of each word. By doing so, it becomes possible to produce a feature vector
for each user by computing the mean of the trained word vectors that are correlated to their
reviews. A similar process is employed to create feature vectors for every item (business).
The feature vectors for both users and items serve as inputs to the model, denoted as X
and Y, respectively.
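A minimal sketch of this feature-construction step, with a hypothetical three-word vocabulary standing in for the trained Word2vec vectors:

```python
import numpy as np

# Hypothetical word vectors; in the paper these come from gensim's Word2Vec
# trained on the Yelp review corpus.
word_vecs = {"great": np.array([0.9, 0.1]),
             "pizza": np.array([0.2, 0.8]),
             "slow":  np.array([-0.5, 0.3])}

def user_feature_vector(review_tokens, word_vecs):
    """Feature vector = mean of the trained word vectors appearing in the
    user's reviews (tokens without a vector are skipped)."""
    vecs = [word_vecs[t] for t in review_tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

x_a = user_feature_vector(["great", "pizza", "unknownword"], word_vecs)
print(x_a)  # mean of [0.9, 0.1] and [0.2, 0.8], i.e. [0.55, 0.45]
```

The same averaging is applied over the reviews of each business to obtain the item feature vectors.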
Using the user followers’ information, we create the user–user graph. In this graph,
a link is established between user a and user b if user a follows user b, and the link is
assigned a weight of 1, which is used for creating the social adjacency matrix S. The dataset
also contains information on the items (businesses) and their categories. Businesses with
at least seven common categories are considered similar, and a link is assigned between
them to create a business–business (item–item) adjacency matrix T. The user–user and
item–item inputs S and T are respectively used for creating the graphs GS and GT for the
neural network to capture the graph relationships.
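The item–item graph construction can be sketched as follows, with made-up category sets (the seven-common-category threshold matches the setup described above):

```python
import numpy as np

# Hypothetical business -> category sets; the paper links two businesses
# when they share at least seven categories.
cats = [
    {"food", "restaurants", "pizza", "italian", "wine", "bars", "nightlife", "salads"},
    {"food", "restaurants", "pizza", "italian", "wine", "bars", "nightlife", "delis"},
    {"coffee", "cafes", "food"},
]
MIN_COMMON = 7

n = len(cats)
T = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        # Similarity rule: link businesses sharing >= MIN_COMMON categories.
        if len(cats[i] & cats[j]) >= MIN_COMMON:
            T[i, j] = T[j, i] = 1

print(T[0, 1], T[0, 2])  # 1 0 — businesses 0 and 1 share seven categories
```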
During the training and testing process, the dataset is filtered to exclude information
that may not be reliable or useful. Users with inadequate information, such as those with
fewer than ten reviews or ten followers, are removed from the dataset. By applying these
filters, the dataset is refined to ensure that only relevant and reliable information is utilized
for the analysis. Table 1 summarizes the statistics of the original Yelp dataset as well as the
filtered one used in our experiments.
Table 1. Statistics of the original and filtered Yelp dataset.

Dataset                         Yelp
Users                           1,987,897
Businesses                      150,346
Reviews                         6,990,280
Filtered Users                  15,519
Filtered Businesses             24,648
Filtered Reviews                815,777
Social Links (user–user)        836,186
Item Links (item–item)          196,010
Reviews Sparsity                0.213%
Avg. no. of ratings/user        37
Median no. of ratings/user      22
users graph GR0 , user–user graph GS , and item–item graph GT . The fixed input values are
utilized in various layers of the training model. In the training process, the hyperparameters
are fine-tuned using the validation dataset.
During each epoch, different mini-batches of users with varying sizes (100, 250, 500,
and 1000) are tested, and it is found that a batch size of 500 yields comparatively better
results than the other batch sizes. The Adam optimizer is used to optimize the model, with
initial learning rates from {0.001, 0.0025, 0.005} and learning-rate decay, to minimize the
loss function given by Equation (15). To reduce bias towards observed positives, a number
of sampled negative ratings are added for each user from randomly selected unrated items.
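The negative-sampling step can be sketched as follows (the function name and signature are illustrative):

```python
import random

def sample_negatives(rated_items, n_items, k, seed=0):
    """Sketch of the negative-sampling step: for one user, draw k unrated
    items uniformly at random and treat them as 0-rated examples."""
    rng = random.Random(seed)
    unrated = [i for i in range(n_items) if i not in rated_items]
    return rng.sample(unrated, min(k, len(unrated)))

negs = sample_negatives({1, 4, 7}, n_items=10, k=3)
print(all(i not in {1, 4, 7} for i in negs))  # True: only unrated items drawn
```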
The GNN models utilize the depth parameter K to gauge the impact of diffusion on
the overall model (as shown in Equations (12) and (13)). To evaluate the performance of
the model, it is trained with different values of K = 2, 3. The size D of the users' and items'
free embeddings is determined by the number of dimensions in the fusion layer
as well as the subsequent diffusion layers. For the fusion layers, we use a sigmoid function
as the non-linear function g( x ) to transform each value into the range (0, 1) (as shown in
Equations (2) and (3)). The output size of each layer is set to D = 64 dimensions.
$\mathrm{HR}@n = \frac{\sum_{u=1}^{m} H_u}{m}$ (18)
where Hu = 1 (i.e., a “hit”) when the algorithm manages to recommend at least one item
that was in the user’s original list (test set), and 0 otherwise; n is the size of the recommen-
dation list considered; and m is the total number of users in the test set. Essentially, HR
assesses whether the model is able to recommend at least one relevant item to the user.
For example, let us consider a test set of 100 users and their corresponding ground
truth sets of movies. The RS generates a list of top-5 movie recommendations for each
user. After evaluating the recommendations, it is found that 30 users have at least one
recommended movie in their ground truth sets. In this case, the hit rate would be calculated
as 30/100 = 0.3 or 30%.
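The worked example above can be reproduced in a few lines (user IDs and item sets are hypothetical):

```python
def hit_rate(recommendations, ground_truth):
    """HR@n: fraction of test users for whom at least one of the top-n
    recommended items appears in their ground-truth set."""
    hits = sum(1 for u, recs in recommendations.items()
               if set(recs) & ground_truth.get(u, set()))
    return hits / len(recommendations)

recs = {"u1": [1, 2, 3], "u2": [4, 5, 6], "u3": [7, 8, 9], "u4": [1, 9, 2]}
truth = {"u1": {2}, "u2": {10}, "u3": {11}, "u4": {9}}
print(hit_rate(recs, truth))  # 0.5 — u1 and u4 each have a hit
```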
A higher hit rate serves as an indicator of the effectiveness of an RS, as it suggests that
the system is successfully recommending items that match users’ preferences. However,
it is essential to recognize that the hit rate metric solely focuses on whether the recom-
mended items are present in the user’s ground truth set, without considering their ranking
or relevance. As a result, it is common practice to complement the hit rate with other
evaluation metrics to obtain a more comprehensive assessment of the performance of the
RS. By incorporating additional metrics, the evaluation process gains insights into the
system’s ability to provide accurate and highly relevant top-n recommendations to users.
$\mathrm{NDCG}@n = \frac{\mathrm{DCG}@n}{\mathrm{IDCG}@n}$ (19)
where n represents the position in the recommendation list. The Discounted Cumulative
Gain (DCG) is defined as:
$\mathrm{DCG}@n = \sum_{i=1}^{n} \frac{2^{rel_i} - 1}{\log_2(i + 1)}$ (20)
where reli is the score that indicates how relevant the item recommended at position i is,
and IDCG (Ideal Discounted Cumulative Gain) is the DCG score of the ideal (i.e., ground
truth) list of recommended items. The higher the NDCG score (closer to 1), the better
the recommender system is performing in terms of providing relevant and well-ranked
recommendations.
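Equations (19) and (20) translate directly into code; the relevance scores below are illustrative:

```python
import math

def dcg(rels):
    """DCG@n per Equation (20): sum of (2^rel - 1) / log2(i + 1) over positions."""
    return sum((2 ** rel - 1) / math.log2(i + 1)
               for i, rel in enumerate(rels, start=1))

def ndcg(rels):
    """NDCG@n = DCG of the recommended order / DCG of the ideal (sorted) order."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1]))  # < 1.0: the list is not ideally ordered
print(ndcg([3, 3, 2, 1, 0]))  # 1.0: already in ideal order
```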
In this paper, HR and NDCG are evaluated on the test data to determine how well the
models perform. By utilizing these top-n metrics, we can effectively gauge the ability of
the model to make accurate recommendations and compare the effectiveness of different
models.
Model                                        RMSE
SocialMF [24]                                2.1990
GraphRec [11]                                0.6640
Diffnet++ [13] (GNN Layers K = 2)            0.1676
Diffnet++ [13] (GNN Layers K = 3)            0.1668
RelationalNet Model (GNN Layers K = 2)       0.1701
RelationalNet Model (GNN Layers K = 3)       0.1674
Table 3. Hit rate (HR) for top-5, top-10, and top-15 recommendations.
Table 4 shows the results for the NDCG metrics for the different models, and again
the RelationalNet model with two layers has the best performance. We observe that
both RelationalNet variations perform better than those of Diffnet++ for all sizes of top-n
recommendation lists with up to 10% improvement for top-5 recommendations and an
average overall improvement of 6.2%.
Table 4. Normalized Discounted Cumulative Gain (NDCG) for top-5, top-10, and top-15 recommen-
dations.
Figure 7a,b illustrate the comparison of the model’s training loss with different learning
rates to analyze their performance under various hyperparameters. The visual representa-
tion of the loss function enables us to evaluate the model’s convergence rate. These figures
reveal that a learning rate of 0.0025 leads to a smoother decrease in training loss than a
learning rate of 0.005, indicating that the former offers better convergence. Moreover,
the RelationalNet model outperforms the Diffnet++ model in terms of training loss for
most learning rates, indicating its ability to learn from input data effectively and suggesting
that it can achieve higher accuracy than the Diffnet++ model.
Our findings illustrate the efficacy of the RelationalNet model in tackling the social
recommendations generation problem. RelationalNet has achieved accurate user preference
prediction in various (realistic) lengths of top-n recommendation lists and outperformed
several SOTA social recommender systems. The outcomes indicate that integrating GNN
layers to facilitate interest and influence diffusion leads to an enhancement in the recom-
mendation accuracy. The RelationalNet model can potentially enhance the recommendation
accuracy in social RSs and can be effectively used in real-world applications.
Figure 7. Training loss. (a) Training loss of the Diffnet++ model [13]. (b) Training loss of the
RelationalNet model.
Author Contributions: Conceptualization, D.T. and K.P.; methodology, D.T.; software, D.T.; formal
analysis, D.T. and K.P.; investigation, D.T., K.P. and M.E.; validation, J.W.; resources, D.T. and J.W.;
data curation, D.T. and J.W.; writing—original draft preparation, D.T., K.P. and M.E.; writing—review
and editing, J.W., K.P. and M.E.; visualization, D.T.; supervision, K.P. and M.E.; funding acquisition,
K.P.; project administration K.P. All authors have read and agreed to the published version of the
manuscript.
Funding: Research reported in this publication was supported by the Division of Research and
Innovation at San José State University under Award Number 22-RSG-08-034 for Katerina Potika.
The content is solely the responsibility of the authors and does not necessarily represent the official
views of San José State University.
Data Availability Statement: Publicly available datasets were analyzed in this study. This data can
be found here: https://www.yelp.com/dataset, accessed on 1 October 2022.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
CF Collaborative Filtering
DCG Discounted Cumulative Gain
DL Deep Learning
GNN Graph Neural Network
HR Hit Rate
IDCG Ideal Discounted Cumulative Gain
MLP Multi-Layer Perceptron
NDCG Normalized Discounted Cumulative Gain
NLP Natural Language Processing
RMSE Root Mean Square Error
RS Recommender System
SOTA State Of The Art
References
1. Ma, H.; Zhou, D.; Liu, C.; Lyu, M.R.; King, I. Recommender Systems with Social Regularization. In Proceedings of the Fourth
ACM International Conference on Web Search and Data Mining, WSDM ’11, Hong Kong, China, 9–12 February 2011; pp. 287–296.
[CrossRef]
2. Gulati, A.; Eirinaki, M. With a Little Help from My Friends (and Their Friends): Influence Neighborhoods for Social Recommen-
dations. In Proceedings of the World Wide Web Conference, WWW ’19, San Francisco, CA, USA, 13–17 May 2019; pp. 2778–2784.
[CrossRef]
3. Easley, D.; Kleinberg, J. Networks, Crowds, and Markets; Cambridge Books: Cambridge, UK, 2012.
4. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural Collaborative Filtering. In Proceedings of the 26th International
Conference on World Wide Web, WWW’17, Perth, Australia, 3–7 April 2017; pp. 173–182. [CrossRef]
5. Cheng, H.T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. Wide
& deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems,
Boston, MA, USA, 15 September 2016; pp. 7–10. [CrossRef]
6. Covington, P.; Adams, J.; Sargin, E. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM
Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; pp. 191–198. [CrossRef]
7. Zhang, S.; Yao, L.; Sun, A.; Tay, Y. Deep learning based recommender system: A survey and new perspectives. ACM Comput.
Surv. (CSUR) 2019, 52, 1–38. [CrossRef]
8. Wu, S.; Sun, F.; Zhang, W.; Xie, X.; Cui, B. Graph neural networks in recommender systems: A survey. ACM Comput. Surv. 2022,
55, 1–37. [CrossRef]
9. Tallapally, D.; Sreepada, R.S.; Patra, B.K.; Babu, K.S. User Preference Learning in Multi-Criteria Recommendations Using Stacked
Auto Encoders. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys ’18, Vancouver, BC, Canada, 2–7
October 2018; pp. 475–479. [CrossRef]
10. Wang, S.; Hu, L.; Wang, Y.; He, X.; Sheng, Q.Z.; Orgun, M.A.; Cao, L.; Ricci, F.; Yu, P.S. Graph Learning based Recommender
Systems: A Review. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Survey
Track, Montreal, QC, Canada, 19–26 August 2021; pp. 4644–4652. [CrossRef]
11. Fan, W.; Ma, Y.; Li, Q.; He, Y.; Zhao, E.; Tang, J.; Yin, D. Graph Neural Networks for Social Recommendation. In Proceedings of
the World Wide Web Conference, WWW ’19, San Francisco, CA, USA, 13–17 May 2019; pp. 417–426. [CrossRef]
12. Wu, L.; Sun, P.; Fu, Y.; Hong, R.; Wang, X.; Wang, M. A Neural Influence Diffusion Model for Social Recommendation. In
Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19,
Paris, France, 21–25 July 2019; pp. 235–244. [CrossRef]
13. Wu, L.; Li, J.; Sun, P.; Hong, R.; Ge, Y.; Wang, M. DiffNet++: A Neural Influence and Interest Diffusion Network for Social
Recommendation. IEEE Trans. Knowl. Data Eng. 2022, 34, 4753–4766. [CrossRef]
14. Jiang, M.; Cui, P.; Wang, F.; Zhu, W.; Yang, S. Scalable Recommendation with Social Contextual Information. IEEE Trans. Knowl.
Data Eng. 2014, 26, 2789–2802. [CrossRef]
15. Tang, J.; Hu, X.; Liu, H. Social recommendation: A review. Soc. Netw. Anal. Min. 2013, 3, 1113–1133. [CrossRef]
16. Eirinaki, M.; Gao, J.; Varlamis, I.; Tserpes, K. Recommender Systems for Large-Scale Social Networks: A review of challenges and
solutions. Future Gener. Comput. Syst. 2018, 78, 413–418. [CrossRef]
17. McPherson, M.; Smith-Lovin, L.; Cook, J.M. Birds of a Feather: Homophily in Social Networks. Annu. Rev. Sociol. 2001,
27, 415–444. [CrossRef]
18. Weng, J.; Lim, E.; Jiang, J.; He, Q. TwitterRank: Finding topic-sensitive influential twitterers. In Proceedings of the Third
International Conference on Web Search and Web Data Mining, WSDM 2010, New York, NY, USA, 4–6 February 2010; pp. 261–270.
[CrossRef]
19. Anagnostopoulos, A.; Kumar, R.; Mahdian, M. Influence and Correlation in Social Networks. In Proceedings of the 14th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, Las Vegas, NV, USA, 24–27 August 2008;
pp. 7–15. [CrossRef]
20. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and
applications. AI Open 2020, 1, 57–81. [CrossRef]
21. Micheli, A. Neural Network for Graphs: A Contextual Constructive Approach. IEEE Trans. Neural Netw. 2009, 20, 498–511.
[CrossRef] [PubMed]
22. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph Convolutional Neural Networks for Web-Scale
Recommender Systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining, London, UK, 19–23 August 2018; pp. 974–983. [CrossRef]
23. Gulati, A.; Eirinaki, M. Influence Propagation for Social Graph-based Recommendations. In Proceedings of the IEEE International
Conference on Big Data (IEEE BigData 2018), Seattle, WA, USA, 10–13 December 2018; pp. 2180–2189. [CrossRef]
24. Jamali, M.; Ester, M. A matrix factorization technique with trust propagation for recommendation in social networks. In
Proceedings of the Fourth ACM Conference on Recommender Systems, Barcelona, Spain, 26–30 September 2010; pp. 135–142.
[CrossRef]
25. Levy, O.; Goldberg, Y. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for
Computational Linguistics (Volume 2: Short Papers), Baltimore, MD, USA, 22–27 June 2014; pp. 302–308.
26. Rendle, S. Factorization Machines with LibFM. ACM Trans. Intell. Syst. Technol. 2012, 3, 1–22. [CrossRef]
27. Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian Personalized Ranking from Implicit Feedback. In
Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI ’09, Arlington, VA, USA, 18–21 June
2009; pp. 452–461.
28. Yelp. Yelp Dataset. Available online: https://www.yelp.com/dataset (accessed on 1 November 2023).
29. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. In Proceedings of the 1st
International Conference on Learning Representations, ICLR 2013, Scottsdale, AZ, USA, 2–4 May 2013.
30. Deshpande, M.; Karypis, G. Item-Based Top-N Recommendation Algorithms. ACM Trans. Inf. Syst. 2004, 22, 143–177. [CrossRef]
31. Chavan, N.; Potika, K. Higher-order link prediction using triangle embeddings. In Proceedings of the 2020 IEEE International
Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 4535–4544. [CrossRef]
32. Anjan, K.; Andreopoulos, W.; Potika, K. Prediction of higher-order links using global vectors and Hasse diagrams. In Proceedings
of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 4802–4811.
[CrossRef]
33. Eirinaki, M.; Moniz, N.; Potika, K. Threshold-Bounded Influence Dominating Sets for Recommendations in Social Networks. In
Proceedings of the 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and
Networking (SocialCom), Sustainable Computing and Communications (SustainCom), BDCloud-SocialCom-SustainCom 2016,
Atlanta, GA, USA, 8–10 October 2016; pp. 408–415. [CrossRef]
34. Kaple, M.; Kulkarni, K.; Potika, K. Viral Marketing for Smart Cities: Influencers in Social Network Communities. In Proceedings
of the Third IEEE International Conference on Big Data Computing Service and Applications, BigDataService 2017, Redwood
City, CA, USA, 6–9 April 2017; pp. 106–111. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.