Citations to retracted articles made after retraction (i.e., post-retraction citations) can be problematic if the retracted articles are cited as legitimate work. To gain a deeper understanding of how problematic post-retraction citations are, we analyzed the sentiments expressed in 3,156 post-retraction citation contexts to determine whether the retracted articles were cited positively as legitimate work after their retractions. Our results showed that the vast majority of post-retraction citations treated retracted articles as legitimate work: 84.27% (2,660 of 3,156) of the post-retraction citation contexts lacked any acknowledgement of retraction and expressed positive sentiment. We also evaluated how well citation sentiment can be detected automatically, developing supervised machine learning models based on logistic regression, support vector machines (SVM), a convolutional neural network (CNN), and a bidirectional long short-term memory network (BiLSTM). The best-performing model was a CNN augmented with sentence embeddings and hand-crafted features (0.79 accuracy and 0.60 macro F1). Our findings indicate that detecting citation sentiment is a challenging task. The improvement obtained by augmenting the word-embedding model with additional features shows that sentence embeddings and hand-crafted features derived from text similarity and a sentiment lexicon capture additional sentiment cues.
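To make the classification task concrete, the sketch below shows a minimal bag-of-words logistic-regression sentiment classifier for citation contexts, trained by gradient descent. This is an illustration of the simplest model family mentioned in the abstract, not the authors' actual pipeline: the toy contexts, labels, learning rate, and epoch count are all invented for demonstration, and the real models used richer features (sentence embeddings, text-similarity and lexicon features) and far more data.

```python
# Minimal sketch: bag-of-words logistic regression for citation-context
# sentiment (1 = positive, 0 = negative). Toy data for illustration only.
import math
import re

def tokenize(text):
    # Lowercase and split into word tokens.
    return re.findall(r"[a-z']+", text.lower())

# Invented training examples of citation contexts with sentiment labels.
train = [
    ("Smith et al. provide compelling evidence for this mechanism.", 1),
    ("This result builds on the elegant analysis of Smith et al.", 1),
    ("The findings of Smith et al. have since been retracted.", 0),
    ("The conclusions of Smith et al. could not be reproduced.", 0),
]

# One weight per vocabulary token, plus a bias term under a reserved key.
vocab = {tok for text, _ in train for tok in tokenize(text)}
weights = {tok: 0.0 for tok in vocab}
weights["__bias__"] = 0.0

def predict_proba(text):
    # Sigmoid of (bias + sum of weights for tokens present in the context).
    z = weights["__bias__"] + sum(weights.get(t, 0.0) for t in tokenize(text))
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on the logistic loss.
LR = 0.5
for _ in range(200):
    for text, label in train:
        err = label - predict_proba(text)
        weights["__bias__"] += LR * err
        for tok in tokenize(text):
            weights[tok] += LR * err

# Score an unseen context built from tokens seen in training.
print(round(predict_proba("compelling evidence for this mechanism"), 2))
```

A real system would replace the raw token counts with the features described in the abstract and would need annotated citation contexts at a much larger scale, but the training loop and decision rule are the same shape.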