
Show simple item record

dc.contributor.author: Konstantinidis, Ioannis (en)
dc.date.accessioned: 2019-03-29T08:52:11Z
dc.date.available: 2019-03-30T01:00:14Z
dc.date.issued: 2019-03-29
dc.identifier.uri: https://repository.ihu.edu.gr//xmlui/handle/11544/29281
dc.rights: Default License
dc.subject: Machine Learning (en)
dc.subject: Misinformation (en)
dc.subject: Natural Language Processing (en)
dc.title: Disinformation Detection with Model Explanations (en)
heal.type: masterThesis (en_US)
heal.classification: Data Science (en)
heal.language: en (en_US)
heal.access: free (en_US)
heal.license: http://creativecommons.org/licenses/by-nc/4.0 (en_US)
heal.fileFormat: pdf (en_US)
heal.recordProvider: School of Science and Technology, MSc in Data Science (en_US)
heal.publicationDate: 2019-03-28
heal.bibliographicCitation: Ioannis Konstantinidis, Disinformation Detection with Model Explanations, School of Science and Technology, International Hellenic University, 2019 (en)
heal.abstract (en):
Disinformation on the web has become an important problem for society, and generally refers to inaccurate information that is intended to harm the public. It includes fake news, imposter and fabricated content, hoaxes, and other types of false information. Currently, most approaches to identifying such content rely on fact-checking agencies manually searching for evidence that supports or contradicts a news statement. These approaches have natural limitations because they require a great amount of manual labour. The need for automatic disinformation detection tools that can quickly detect disinformation at scale by its source has therefore been widely acknowledged. Recent research studies address this problem using a variety of Machine Learning and Natural Language Processing techniques that, in certain domains, appear to achieve high prediction accuracy. However, none of them provides explanations for its classifications; for a model to be trustworthy, it has to provide the user with information on why it classified a content item as true or false. Machine learning model interpretability is an open research challenge that aims to enhance model transparency and reduce bias and prejudice. Opening the “black box” of algorithmic tools also allows for a deeper understanding of the inherent characteristics of a domain, which can eventually lead to better approaches for mitigating its negative aspects.

This thesis investigates the problem of disinformation detection by implementing various fake news classification models that use only the textual content of articles, and subsequently evaluates state-of-the-art algorithms for explaining classifier decisions.
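The abstract outlines the approach at a high level: train classifiers on textual content alone, then apply a post-hoc explanation algorithm so the user can see why an item was labelled true or false. A minimal sketch of such a pipeline follows; the abstract names no specific models or explainers, so the TF-IDF plus logistic regression classifier, the LIME explainer, and the toy corpus below are illustrative assumptions, not the thesis's actual setup.

```python
# Illustrative sketch only: a text-only disinformation classifier with a
# post-hoc explanation. Model and explainer choices are assumptions; the
# abstract does not name the specific techniques used in the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus standing in for a labelled fake/real news dataset.
texts = [
    "Scientists confirm the vaccine passed its phase 3 trial.",
    "SHOCKING: celebrity reveals miracle cure doctors hide from you!",
    "Central bank raises interest rates by 25 basis points.",
    "One weird trick erases all your debt overnight, banks hate it!",
]
labels = [0, 1, 0, 1]  # 0 = true content, 1 = disinformation

# Classifier trained only on the textual content of each item.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model,
# producing per-word weights that show which terms pushed the
# prediction towards "true" or "disinformation".
explainer = LimeTextExplainer(class_names=["true", "disinformation"])
explanation = explainer.explain_instance(
    "Miracle cure SHOCKING: banks and doctors hide this weird trick!",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] for the local explanation
```

The per-word weights are the kind of classifier-decision explanation the abstract calls for: they tell the user which parts of the text drove the prediction, rather than only reporting a label.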
heal.sponsor: This thesis has received funding from the European Union's Horizon 2020 research and innovation programme (CoInform) under Grant Agreement No. 770302 (en)
heal.advisorName: Peristeras, Vassilios (en)
heal.committeeMemberName: Peristeras, Vassilios (en)
heal.committeeMemberName: Berberidis, Christos (en)
heal.committeeMemberName: Tjortjis, Christos (en)
heal.academicPublisher: IHU (en)
heal.academicPublisherID: ihu (en_US)
heal.numberOfPages: 90 (en_US)

