
Steam Reviews Aspect-Based Sentiment Analysis

Thanh Nguyen and Chien Dinh


December 2024

1 Abstract
Due to Steam’s feature of specifying whether users recommend the game in a
review, it is easy to determine the overall sentiment from the review. However,
this sentiment may not be enough to determine which features a game gets
right or wrong. Instead, game developers may benefit from capturing players’
sentiments towards specific aspects. Aspect-based sentiment analysis (ABSA)
can aid developers’ work on future updates and games by helping them iden-
tify key strengths and areas that need targeted improvement. Furthermore,
Steam users may also utilize features created with aspect-based sentiment anal-
ysis to be quickly informed of strengths and concerns about a game and make
more informed purchase decisions. In this paper, we implement two approaches,
lexicon-based and machine learning, to perform aspect-based sentiment analy-
sis on Steam reviews and compare them to determine which may work best in
the context of game reviews. The comparison covers the results of each
approach, the setup requirements, and the speed. Our findings highlight the
strengths and limitations of each approach, specifically for aspect-based senti-
ment analysis for game reviews.

2 Introduction
Steam is the largest video game distribution platform globally, offering a vast
collection of games, reviews, and user-generated data. Its popularity generates a
massive repository of insights into user preferences, especially concerning game
features such as gameplay, graphics, and music. Analyzing this data presents
opportunities to understand and enhance both player satisfaction and developer
strategies.
Steam’s review system allows users to indicate their overall sentiment about
a game through binary classifications—whether they recommend the game or
not. While straightforward, this approach often overlooks the nuanced senti-
ments players express about specific features. Developers seeking to improve
their games must identify what resonates with players and what needs atten-
tion. For instance, a game with numerous reviews praising its art direction
might use this feedback to reinforce such strengths. Conversely, aspect-based
sentiment analysis (ABSA) can reveal critical issues like balancing problems or
performance flaws, guiding targeted improvements. This process is particularly
valuable for live-service games like Dota or Counter-Strike, where maintaining
player satisfaction is crucial for long-term engagement.
Players can also benefit from an analysis of game aspects. A potential
buyer seeking compelling storytelling or strong performance on limited hard-
ware could make more informed decisions with aspect-specific sentiment ratings
or summaries. While forums and reviews provide some of this information, a
structured analysis can streamline the decision-making process by highlighting
critical strengths and weaknesses quickly.
ABSA has been widely applied in various domains, including product re-
views, social media analysis, and customer feedback. Companies like Amazon
and Best Buy already incorporate this technology to enhance customer insights
and decision-making processes. However, Steam has yet to leverage ABSA to en-
hance its review ecosystem. By applying ABSA to Steam reviews, we aim to pro-
vide actionable insights for developers while offering users data-driven analyses
of game features. This dual benefit empowers both demographics—developers in
refining their products and players in making more informed purchase decisions.
The two primary approaches to ABSA are lexicon-based and machine learning-
based, with hybrid methods also emerging as a promising alternative. Nath and
Dwivedi (2024) [5] outline these techniques, including lexical, KNN, and BERT-
based ABSA, demonstrating their distinct advantages and challenges. In this
paper, we implement and compare these approaches to identify the most ef-
fective method for analyzing game reviews. To achieve this, we use the Steam
Video Game and Bundle Data from previous studies. Kang and McAuley (2018)
[4] introduced a self-attention-based sequential model for next-item recommen-
dations, using a dataset of Steam reviews from 2010 to 2018. Wan and McAuley
(2018) [8] developed a recommendation framework leveraging monotonic depen-
dency structures in user feedback, focusing on Australian users. Pathak, Gupta,
and McAuley (2017) [6] contributed another dataset centered on personalized
bundle recommendations. By building on these datasets, our work aims to
bridge the gap between ABSA methodologies and their application to video
game reviews.

3 Methods
3.1 Data Preprocessing
The dataset was provided in JSON format, with each line representing a user and
their associated reviews. We first processed the dataset using the ast module to
parse each line into a dictionary. Each dictionary was then converted into rows
of reviews for a Pandas data frame for further processing. The dataset contains
over 93 million reviews, giving us a robust foundation for our analysis.
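A minimal sketch of this parsing step follows. The file name and record fields (user_id, item_id, recommend, review) are assumptions based on the public Steam datasets; ast.literal_eval is used because each line is a Python-literal dictionary rather than strict JSON.

import ast
import pandas as pd

rows = []
with open("steam_reviews.json", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        user = ast.literal_eval(line)  # each line is a Python-literal dict
        for review in user.get("reviews", []):
            rows.append({
                "user_id": user.get("user_id"),
                "item_id": review.get("item_id"),
                "recommend": review.get("recommend"),
                "text": review.get("review"),
            })

df = pd.DataFrame(rows)
print(len(df), "reviews parsed")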

3.2 Lexicon-based Sentiment Analysis
3.2.1 Data Cleaning and Aspect Construction
Cleaning the data for this task involved several essential preprocessing steps to
ensure high-quality input for analysis. First, we identified and removed stop
words from the reviews using the SpaCy library [1]. Stop words, such as “the,”
“is,” and “and,” often add little meaning to the analysis and can be safely ex-
cluded to focus on more relevant terms. Next, contractions in the text (e.g.,
“can’t,” “won’t”) were expanded into their full forms (“cannot,” “will not”) us-
ing a predefined contraction dictionary. This dictionary was specifically sourced
from Rob Taylor’s repository, designed for analyzing Steam reviews [7]. By
expanding contractions, we standardized the text for consistent tokenization.
Finally, lemmatization was applied with SpaCy to reduce words to their base or
dictionary forms. For instance, words like “running,” “ran,” and “runs” were
converted to “run,” ensuring that different variants of a word were treated as a
single entity during analysis.
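The cleaning pipeline can be sketched as follows. The contraction map shown is a tiny excerpt standing in for the full dictionary from [7], and the spaCy model name is an illustrative choice.

import spacy

nlp = spacy.load("en_core_web_sm")  # requires the small English model

# Excerpt of the contraction map; the full dictionary comes from [7].
CONTRACTIONS = {"can't": "cannot", "won't": "will not", "it's": "it is"}

def clean_review(text: str) -> str:
    text = text.lower()
    # Expand contractions first so tokenization stays consistent.
    for short, full in CONTRACTIONS.items():
        text = text.replace(short, full)
    doc = nlp(text)
    # Drop stop words and punctuation, then lemmatize the remaining tokens.
    return " ".join(
        tok.lemma_ for tok in doc if not tok.is_stop and not tok.is_punct
    )

print(clean_review("I can't stop running this game, it's great!"))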

3.2.2 Aspect Construction and Term Selection


After cleaning the text, we moved on to constructing the aspects and associat-
ing relevant terms with them. This step began with creating a frequency table
of terms from the reviews, where we identified high-frequency words that we
felt were relevant to various aspects of a game. These aspects included cate-
gories such as gameplay, community, storytelling, visual, performance, audio,
and price. For example, terms like “gameplay,” “control,” and “combat” were
manually associated with the gameplay aspect, while “performance,” “frame
rate,” and “crash” were tied to performance.
The initial association of terms with aspects was done manually by evalu-
ating the context in which these terms appeared in the reviews. For instance,
“control” and “combat” were commonly used by players to describe their expe-
rience with in-game mechanics, so they were logically grouped under gameplay.
Similarly, words like “crash” and “frame rate” frequently occurred in discussions
about technical performance, making them relevant to the performance aspect.
To ensure broader coverage and consistency, we expanded these manually se-
lected term lists using a more systematic approach. Inspired by the methodology
of Intellica.AI [3], we constructed average word embeddings for each aspect’s
term list using fastText. By calculating the cosine similarity between these
embeddings and other terms in the dataset, we identified additional words that
were semantically similar to our initial lists but had not been explicitly included.
For instance, terms like “lag” and “stuttering” were added to the performance
aspect based on their similarity to “crash” and “frame rate.” This approach al-
lowed us to refine and expand our aspect dictionaries, ensuring that all relevant
terms were captured.
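A simplified sketch of this expansion step, assuming the pre-trained English fastText vectors (cc.en.300.bin) and an illustrative similarity threshold:

import fasttext
import numpy as np

model = fasttext.load_model("cc.en.300.bin")  # pre-trained English vectors

def avg_embedding(terms):
    # Average word embedding representing an aspect's term list.
    return np.mean([model.get_word_vector(t) for t in terms], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

aspect_terms = {"performance": ["performance", "framerate", "crash"]}
candidates = ["lag", "stuttering", "soundtrack"]  # high-frequency review terms

centroid = avg_embedding(aspect_terms["performance"])
for word in candidates:
    sim = cosine(centroid, model.get_word_vector(word))
    if sim > 0.5:  # threshold is an illustrative choice
        aspect_terms["performance"].append(word)
        print(f"added '{word}' ({sim:.2f})")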

3.2.3 Sentiment Analysis and Aspect Scoring
Once the aspect term lists were finalized, we analyzed the sentiment of each
sentence in the reviews. Using nltk’s lexicon-based sentiment analyzer tool,
Vader [2], we calculated the sentiment polarity (positive, negative, or neutral)
of sentences containing terms associated with specific aspects. If a sentence
included a term from an aspect’s dictionary, the sentiment score of that sentence
was assigned to the corresponding aspect. For example, if a sentence mentioned
“gameplay” in a positive context, that positive score was attributed to the
gameplay aspect.
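A minimal sketch of this scoring logic, with abbreviated aspect dictionaries and assuming the sentences have already been cleaned and lemmatized as in Section 3.2.1:

from collections import defaultdict

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk.tokenize import sent_tokenize

nltk.download("vader_lexicon", quiet=True)
nltk.download("punkt", quiet=True)
sia = SentimentIntensityAnalyzer()

ASPECT_TERMS = {  # abbreviated aspect dictionaries
    "gameplay": {"gameplay", "control", "combat"},
    "performance": {"performance", "crash", "lag"},
}

def score_review(review: str) -> dict:
    scores = defaultdict(list)
    for sentence in sent_tokenize(review):
        polarity = sia.polarity_scores(sentence)["compound"]
        words = set(sentence.lower().split())
        for aspect, terms in ASPECT_TERMS.items():
            if words & terms:  # the sentence mentions this aspect
                scores[aspect].append(polarity)
    return {a: sum(v) / len(v) for a, v in scores.items()}

print(score_review("The gameplay is fantastic. Sadly the performance is awful."))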
By combining manual selection, embedding-based expansion, and lexicon-
based sentiment analysis, we were able to create a robust framework for ana-
lyzing aspect-based sentiment in game reviews. This process ensured that our
analysis was both comprehensive and adaptable to the nuances of player feed-
back.

3.3 Machine Learning


3.3.1 Supervised Method
Our second approach uses a BERT-based model trained specifically for
aspect-based sentiment analysis. We trained the model using PyABSA [9], a
framework designed to help train ABSA models. The task is to identify the
aspect terms in a review and classify the sentiment towards each of them. We
labeled the review data accordingly by manually marking each aspect and its
sentiment; thankfully, PyABSA provides a GUI to speed up this process.
However, due to time constraints, only 200 reviews were labeled. As the
labeled dataset is small, we split the data into training and testing sets with
a ratio of 3 to 1. The task we selected is Aspect Term Extraction and
Sentiment Classification, and the featured model for this task is
FAST-LCF-ATEPC, which stands for fast local context focus aspect term
extraction and polarity classification and is an improved version of
LCF-ATEPC, introduced by Yang et al. [10] and Zeng et al. [11]. The model,
built upon BERT, uses a local context focus mechanism, which enhances
performance by concentrating on the local context around aspect terms. The
model extracts aspect terms and their corresponding sentiments from a review.
We then categorize each term into the aforementioned aspects using the same
cosine similarity method as in our lexicon-based method. Finally, we
experiment with training the model from scratch, training the model from a
checkpoint, and using the model directly for analysis.
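For illustration, a minimal inference sketch follows. It uses PyABSA's v1-style checkpoint manager; entry points and result fields differ across PyABSA versions, so the exact calls and keys below are assumptions rather than the definitive API.

from pyabsa import ATEPCCheckpointManager  # v1-style API; v2 differs

# Download and load a pre-trained English FAST-LCF-ATEPC checkpoint.
extractor = ATEPCCheckpointManager.get_aspect_extractor(
    checkpoint="english",
    auto_device=True,  # use a GPU if one is available
)

review = ("Sure, it suffered from a lackluster launch and lack of promised "
          "features, but it's lots of fun, and highly engrossing!")

# Extract aspect terms and predict a sentiment for each.
results = extractor.extract_aspect(inference_source=[review], pred_sentiment=True)
print(results[0]["aspect"], results[0]["sentiment"])  # assumed result fields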

3.3.2 Unsupervised Method


For our final method, we experimented with performing aspect-based senti-
ment analysis with a large language model. First, we split the
reviews into sentences and filtered out short sentences that may not indicate
any aspects’ sentiment. We then generated vector embeddings of each sentence
of the reviews using the sentence-transformers library, which uses an SBERT
model to create vector representations of sentences. The embeddings were then clus-
tered with the K-Means algorithm, with the goal of identifying the key topics of
these sentences related to different aspects of a game. The sentences closest to
the clusters’ centroids were selected as representatives of the aspects and inter-
preted for their sentiment. This step is crucial, as having the LLM analyze every
review would require considerable computation. Using K-Means to find repre-
sentative sentences helps to scale this approach to a very large amount of review
data. Finally, we set up a pipeline that passes the representative sentences to a
large language model (LLM) for aspect-based sentiment analysis and summa-
rization. We chose Llama 3.2 1B Instruct to leverage its text generation,
summarization, and sentiment extraction capabilities, inspired by Amazon’s
relatively new AI feature “Customers say”.
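A condensed sketch of this pipeline up to the LLM step; the SBERT encoder named below and the toy sentence list are illustrative choices.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

sentences = [
    "There's tons of items to craft, biomes to explore and bosses to kill.",
    "It's got a nice soundtrack, really fun to play with friends.",
    "But it keeps on crashing every time I try to open this world.",
    "The bosses are challenging but fair.",
]  # in the real run: every filtered sentence from a game's reviews

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # an SBERT encoder
embeddings = encoder.encode(sentences)

k = 2  # the full experiment uses k = 40 (see Section 4.3)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# The sentence nearest each centroid represents that cluster's topic;
# these representatives are what we pass to the LLM prompt.
closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, embeddings)
for idx in closest:
    print(sentences[idx])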

4 Results
With our first two methods, we performed aspect-based sentiment analysis on
three games’ reviews: Terraria, Sonic the Hedgehog 4 - Episode I, and No
Man’s Sky. These games were chosen based on their overall users’ sentiment at
the time of data collection—positive, negative, and mixed, respectively. This
allowed us to compare how these methods function for games with different
overall sentiments and subjectively gauge the performance of each method.

4.1 Lexicon-based Sentiment Analysis


Using the lexicon-based method, we calculated the average sentiment of each
predefined aspect for all three games on a scale from -1 to 1. We also classify
each sentiment score as negative (less than -0.33), positive (greater than 0.33),
or neutral (between -0.33 and 0.33) and count the sentiment labels for each aspect.
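In code, this bucketing rule is simply:

def label_sentiment(score: float) -> str:
    # Bucket an aspect's average VADER score using the cutoffs above.
    if score < -0.33:
        return "negative"
    if score > 0.33:
        return "positive"
    return "neutral"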
As shown in Figure 1, despite reviews being mostly positive, the average sen-
timent of each aspect of Terraria was not as high as we had expected. However,
we confirmed that the general aspect sentiment of Terraria was the highest by
calculating the sum of all aspect sentiments. Figure 2 shows that all the aspects
received positive sentiment.
Figure 3 shows that Sonic the Hedgehog 4 Episode 1’s reviews are overall
less positive, with only the sentiment towards the game’s performance dipping
below 0. We see that the general aspect sentiment of Sonic is the lowest among
the three games. Figure 4 shows that the sentiments towards the aspects of
Sonic the Hedgehog 4 Episode 1 are more neutral and negative compared to
Terraria.
No Man’s Sky reviews were very mixed when the game first launched, with
more leaning towards the negative side. With the developers’ dedication to im-
proving the game, the reviews have since shifted to be mostly positive. However,
at the time of data collection, the overall sentiment of all the reviews was still
mixed. As shown in Figure 5, the aspect sentiment towards No Man’s Sky falls

Figure 1: Aspect Sentiment Scores for Terraria

Figure 2: Aspect Distribution of Terraria lexicon method

between the two previous games. Figure 6 also shows that the sentiment towards
different features of No Man’s Sky is mostly neutral.

4.2 Supervised Method


For the model we trained with PyABSA, we ran tests directly on some sample
reviews of No Man’s Sky. When we trained the model from scratch, it could
only pick up some aspects as positive and barely any neutral or negative aspects.
This was a direct result of our labeled data being small and consisting mainly of
labeled positive aspects. If we trained on a pre-existing checkpoint, the model
produced great results, though it missed some specific aspects. When we initially

Figure 3: Aspect Sentiment Scores for Sonic

Figure 4: Aspect Distribution of Sonic lexicon method

tested our model trained with PyABSA on sample reviews of No Man’s
Sky, we noticed a difference between using a pre-trained checkpoint directly and
fine-tuning the checkpoint. On one hand, using a pre-trained checkpoint allowed
the model to identify relevant aspects and their sentiments with impressive ac-
curacy. It correctly recognized positive experiences and negative factors, though
it sometimes overlooked certain specific aspects. On the other hand, fine-tuning
the model with our dataset led to a much less balanced performance. Due to
our aforementioned problems with the dataset, the fine-tuned model ended up
detecting mostly positive aspects and struggled to pick up on many neutral or
negative sentiments. Below is one of the best results obtained from using the
fine-tuned model:

Figure 5: Aspect Sentiment Scores for No Man’s Sky

Figure 6: Aspect Distribution of No Man’s Sky lexicon method

I ’ m having a great <time:Positive Confidence:0.9864> with this one .


Sure , it suffered from a lackluster <launch:Negative Confidence:0.9392> ,
lack of promised <features:Neutral Confidence:0.4438> , and even after the
<Foundation Update:Neutral Confidence:0.9899> it ’ s not perfect , but it ’ s
lots of fun , and highly engrossing !

Here we think the model miscategorized “features” as neutral with a somewhat
low level of confidence. In contrast, using the pretrained model directly
gives us a better classification of “features”. We therefore decided to do our
analysis using the pretrained model directly. Here is the result from analyzing
the same review with it:

I ’ m having a great <time:Positive Confidence:0.9897> with this one . Sure , it suffered fr

Figure 7: Aspect-Based Sentiment Distribution of Terraria

Moving on to the broader results, Figure 7 presents the aspect-based senti-
ment distribution for the game Terraria. To put it simply, the “Community”
aspect received the strongest positive response, with nearly 1,000 positive re-
views praising this element. Next in line was “Gameplay,” which also saw sub-
stantial positive feedback, with over 200 positive sentiments. In comparison,
aspects like “audio,” “performance,” “price,” “storytelling,” and “visual” re-
ceived fewer mentions. Despite their relatively small numbers, “visual” and
“price” still stood out slightly, with more positive remarks than negative. Neg-
ative and neutral opinions were few and far between compared to the number
of positive ones. In other words, Terraria is overwhelmingly loved by players,
especially for its community and gameplay. There may still be room for im-
provement in areas like performance or audio, but these issues appear minor in
the big picture.
In contrast, the aspect-based sentiment distribution for Sonic, as shown in
Figure 8, tells a different story. “Gameplay” took a clear hit, with over 200
negative sentiments pointing to significant player dissatisfaction. Other areas
like “audio,” “visual,” and “performance” also drew considerable negative feed-
back, each surpassing 50 negative reviews. As a result, positive sentiments were
harder to find, with “audio,” “community,” and “visual” gathering around 50
positive reviews each. Neutral sentiments remained minimal, suggesting that
reviews were strongly polarized rather than mixed. Ultimately, the data indi-
cates that Sonic struggles to meet players’ expectations, especially in core areas
like gameplay, leaving significant room for improvement.
Looking at another perspective, Figure 9 focuses on No Man’s Sky, reveal-
ing a more complex sentiment. The “Community” and “Gameplay” aspects, while
mixed, are more on the positive side. This suggests that players may appreciate
the community and core gaming experience that No Man’s Sky offers. However,

Figure 8: Aspect-Based Sentiment Distribution of Sonic

Figure 9: Aspect-Based Sentiment Distribution of No Man’s Sky

the “Price” and “Performance” aspects weren’t as well-received, each drawing
substantial negative feedback compared to positive and neutral ones. “Price,”
in particular, stood out as the top concern for dissatisfied players. Other factors
like “Visual” and “Storytelling” garnered moderate praise, while “Audio” re-
ceived relatively fewer mentions, indicating that it wasn’t a major talking point.
Taken together, this pattern suggests that while No Man’s Sky succeeds in areas
that promote engagement and enjoyment, certain factors like cost and technical
performance could be improved.
Bringing it all together, the supervised learning analysis highlights clear
differences across Terraria, Sonic, and No Man’s Sky. On one side, Terraria
thrives thanks to its vibrant community and engaging gameplay, although it
could benefit from some improvement in performance and audio. On another
side, Sonic shows widespread dissatisfaction, especially in gameplay, audio, and
performance, signaling the need for substantial improvements. Meanwhile, No
Man’s Sky largely delights players with its community and gameplay, but stum-
bles when it comes to pricing and performance issues. In the end, these findings
emphasize the importance of balancing various aspects of a gaming experience
and addressing any shortcomings that may hinder player satisfaction.

4.3 Unsupervised Method


For our clustering method, we used the elbow technique to determine the best
number of clusters. Figure 10 shows the graph for determining the number of
clusters for Terraria’s reviews. We observed a similar trend in the other two
games’ reviews and thus chose 40 as the optimal number of clusters, as the
Sum of Squared Distances did not change significantly afterward, while still
extracting a diverse set of ideas from the reviews.

Figure 10: Elbow Method
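For reference, an elbow curve like Figure 10 can be produced along these lines, here with synthetic stand-in embeddings in place of the SBERT vectors:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 384))  # stand-in for SBERT sentence vectors

ks = range(5, 85, 5)
inertias = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    inertias.append(km.inertia_)  # sum of squared distances to centroids

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("Sum of Squared Distances")
plt.show()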

We found that the representative sentences of clusters gave us many different
aspects of a game. For example, we extracted discussions from Terraria’s reviews
about boss fights, replayability, soundtracks, game crashes, etc. Here are a few
sentences we obtained from our K-Means steps:
’The game has many unique weapons and armours with their own special attacks.’,
’There’s tons of items to craft, biomes to explore and bosses to kill.’,
’Its fun, challenging, interesting and it never fails to make you want to play it more!’,
’It’s got a nice soundtrack, really fun to play with friends, and it’s pretty addicting.’,
’but it keeps on crashing every time i try to open this certian world.’,
However, there were still some similar sentences, such as those simply ex-
pressing fondness for the game or comparisons with Minecraft. Additionally,
since we gave these unconnected sentences to the LLM, it may not have had the
full context for some sentences, resulting in inaccurate responses. For example,
in one experiment, it gave us the analysis: “Players appreciate the game’s ac-
cessibility, noting that they can complete the main story in under 200 hours.”
This is not true, as Terraria does not have a main story, and completing a game
in 200 hours is not typically considered accessible for the average player. Never-
theless, the model gave us coherent responses that could be insightful regarding
the features of the games and even output the correct overall sentiment of each
game. Here is a full result from running a different experiment on No Man’s Sky
after tuning our prompt for the LLM:

The game’s base-building aspect is seen as fun and engaging, with some players appreciating

While predicting the overall sentiment is not our task and is not a direct way
of determining the approach’s performance, it gives us a form of sanity check
to see if the model outputs results consistent with the known overall sentiment.

5 Discussion
Overall, we see a major speed difference between the approaches. To
demonstrate, we selected a top review from Farming Simulator and timed each
method on it. In our experiment, analyzing the review with the lexicon-based
sentiment analysis tool took 0.05 seconds, the supervised model took 1.3 seconds,
and the Llama 3.2 model took 7.7 seconds. The speed of the LLM is one of
the reasons we use K-Means to trim down the amount of data it has to process.
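A simple timing harness along these lines can reproduce such measurements; the helper below is illustrative rather than our exact script.

import time

def avg_seconds(fn, repeats=5):
    # Average wall-clock time of fn() over a few repeats.
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# e.g. avg_seconds(lambda: sia.polarity_scores(review_text)) for VADER, or
# avg_seconds(lambda: extractor.extract_aspect(inference_source=[review_text]))
# for the supervised ABSA model.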
Using the Vader sentiment analysis tool was the fastest and allowed us to
extract metrics regarding sentiments towards different aspects of a game using
the entire dataset. For example, as shown in Figure 5, audio received the highest
sentiment score, indicating that No Man’s Sky reviewers are especially happy
with the game’s sound design. Storytelling and visuals also performed well,
suggesting that players appreciate these elements of the game. Interestingly, the
sentiment towards performance was the lowest for all three games. Reviewers,
when they mention the performance aspect of the game, seem to view it in a
negative light. This can be explained by players generally not writing reviews
to praise the performance of a game, but criticizing it when they encounter
performance issues. Additionally, this method requires some knowledge of games
and their design aspects to effectively construct an aspect term list.
The unsupervised machine learning method with K-Means was easier to
set up, introduced the least human bias, and required minimal prior knowledge
regarding the data. It also provided the most natural yet still insightful analysis
with the help of an LLM. However, it was difficult to determine which aspects
were left out by selecting representative sentences.
Our specialized ABSA model approach required considerable time and effort
to label the data if we wanted to train the model from scratch or adapt it
to specific domains like game reviews. However, the model is potentially
better than both the lexicon tool and a pre-trained LLM for aspect extraction
and sentiment analysis. The supervised model’s results aligned much more
closely with our expectations from the overall sentiment of the three games
compared to the lexicon-based approach. Moreover, the model is faster than an
LLM for analyzing a single review. This means it can be used to analyze a huge
number of reviews in a reasonable amount of time.
Our lexicon-based approach and supervised model approach give us re-
sults in the form of metrics, while our K-Means clustering approach combined
with an LLM gives us summarized and coherent text about the reviews. These
two forms of result are fundamentally different, and it is hard to compare them
directly. However, we think these two approaches can be used in conjunction
with each other, as Best Buy evidently does. The metric rating of each aspect
can be a quick way for users to identify the strengths and weaknesses of a game,
while the summarized text is helpful for providing nuanced analysis for
interested users.

5.1 Limitations
Our analysis has some limitations that need to be acknowledged. First, the
sentiment scoring might not fully account for the complexity of natural lan-
guage. Reviews that include sarcasm, ambiguous expressions, or mixed senti-
ments could lead to misclassification, affecting the accuracy of the sentiment
analysis. This is especially relevant for highly nuanced or community-specific
language often found in gaming reviews.
Second, the lexicon-based method is limited by its reliance on predefined
aspect terms and the subsequent aspect assignments. While it captures general
trends, it may miss emerging or game-specific aspects not included in the initial
list, such as new features introduced through updates. Furthermore, words can
be assigned to different aspects depending on the context. For example, ”fps”
could mean ”first-person shooter,” related to gameplay, or ”frames per second,”
related to performance. Our method operates on a sentence-by-sentence basis,
which means it does not perform well when different aspects exist within a single
sentence.
The clustering approach, while useful for grouping reviews, depends heavily
on the selection of the optimal number of clusters. The Elbow Method provides
a heuristic, but K-Means may not always capture the true underlying structure
of the data. Furthermore, running an LLM can be computationally expensive,
and text generated by it should always be questioned for correctness.
The small labeled dataset for the supervised machine learning method limits
the model’s ability to generalize to unseen data. Additionally, as we were the
only people manually labeling the dataset, the training data may contain errors
and bias.

5.2 Future Work


Future work could address these limitations by improving both the dataset and
methodologies. Expanding the labeled dataset would allow for better training of
machine learning models, resulting in improved aspect extraction and sentiment
classification.
Lastly, future work could include more granular aspect analysis. For ex-
ample, breaking down ”performance” into subcategories like ”loading times,”
”frame rates,” and ”stability” could provide better insights. Incorporating user-
generated tags and metadata from reviews could potentially help identify emerg-
ing trends or concerns related to newly released content or updates.

References
[1] Matthew Honnibal and Ines Montani. “spaCy 2: Natural language un-
derstanding with Bloom embeddings, convolutional neural networks and
incremental parsing”. To appear. 2017.
[2] C. Hutto and Eric Gilbert. “VADER: A parsimonious rule-based model for
sentiment analysis of social media text”. In: Proceedings of the Inter-
national AAAI Conference on Web and Social Media 8.1 (May 2014),
pp. 216–225. doi: 10.1609/icwsm.v8i1.14550.
[3] Intellica.AI. Aspect-based sentiment analysis - everything you wanted to
know! Feb. 2020. url: https://intellica-ai.medium.com/aspect-based-
sentiment-analysis-everything-you-wanted-to-know-1be41572e238.
[4] Wang-Cheng Kang and Julian McAuley. Self-Attentive Sequential Recom-
mendation. 2018. arXiv: 1808.09781 [cs.IR]. url: https://arxiv.org/abs/
1808.09781.
[5] Deena Nath and Sanjay K. Dwivedi. “Aspect-based sentiment analysis:
approaches, applications, challenges and trends”. In: Knowl. Inf. Syst.
66.12 (Aug. 2024), pp. 7261–7303. issn: 0219-1377. doi: 10.1007/s10115-
024-02200-9. url: https://doi.org/10.1007/s10115-024-02200-9.
[6] Apurva Pathak, Kshitiz Gupta, and Julian McAuley. “Generating and
Personalizing Bundle Recommendations on Steam”. In: Proceedings of the
40th International ACM SIGIR Conference on Research and Development
in Information Retrieval. SIGIR ’17. Shinjuku, Tokyo, Japan: Association
for Computing Machinery, 2017, pp. 1073–1076. isbn: 9781450350228. doi:
10.1145/3077136.3080724. url: https://doi.org/10.1145/3077136.3080724.
[7] Rob Taylor. SA Steam Reviews. https://github.com/r0btaylor/SA_steam_
reviews. 2022.
[8] Mengting Wan and Julian McAuley. “Item recommendation on monotonic
behavior chains”. In: Proceedings of the 12th ACM Conference on Recom-
mender Systems. RecSys ’18. Vancouver, British Columbia, Canada: Asso-
ciation for Computing Machinery, 2018, pp. 86–94. isbn: 9781450359016.
doi: 10.1145/3240323.3240369. url: https://doi.org/10.1145/3240323.
3240369.
[9] Heng Yang, Chen Zhang, and Ke Li. “PyABSA: A Modularized Frame-
work for Reproducible Aspect-based Sentiment Analysis”. In: Proceedings
of the 32nd ACM International Conference on Information and Knowledge
Management. CIKM ’23. Birmingham, United Kingdom: Association for
Computing Machinery, 2023, pp. 5117–5122. isbn: 9798400701245. doi:
10.1145/3583780.3614752. url: https://doi.org/10.1145/3583780.3614752.
[10] Heng Yang et al. “A Multi-task Learning Model for Chinese-oriented
Aspect Polarity Classification and Aspect Term Extraction”. In: CoRR
abs/1912.07976 (2019). arXiv: 1912.07976. url: http://arxiv.org/abs/
1912.07976.
[11] Biqing Zeng et al. “LCF: A Local Context Focus Mechanism for Aspect-
Based Sentiment Classification”. In: Applied Sciences 9.16 (2019). issn:
2076-3417. doi: 10.3390/app9163389. url: https://www.mdpi.com/2076-
3417/9/16/3389.
