A new approach to detecting content anomalies in Wikipedia

From WikiPapers

A new approach to detecting content anomalies in Wikipedia is a 2013 conference paper, written in English by Sinanc D. and Yavanoglu U., published in Proceedings - 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013.

Abstract

The rapid growth of the web has made large amounts of data available, but that data is only useful if its content is well organized. Although Wikipedia is the biggest encyclopedia on the web, its quality is suspect because of its Open Editing Schema (OES). In this study, zoology and botany pages are selected from the English Wikipedia, their HTML content is converted to text, and an Artificial Neural Network (ANN) is used for classification in order to prevent disinformation or misinformation. After the training phase, irrelevant words about politics or terrorism are added to the content in proportion to the size of the text. In the interval between unsuitable content being added to a page and the moderators' intervention, the proposed system detects the error through incorrect categorization. The results show that once the added words reach 2% of the content, the anomaly rate begins to cross the 50% threshold.
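The detection idea in the abstract can be sketched in miniature. This is not the authors' implementation: a simple keyword-fraction rule stands in for their trained ANN, the topic vocabularies and sample texts are invented for illustration, and the 2% threshold is borrowed from the result quoted above.

```python
# Toy sketch: a page is flagged once injected off-topic words make up too
# large a share of its content. A keyword-fraction rule stands in for the
# paper's trained ANN; all vocabularies and texts below are invented.
TOPIC_VOCAB = {
    "zoology": {"lion", "tiger", "mammal", "predator", "habitat", "species"},
    "botany": {"fern", "pollen", "chlorophyll", "seed", "root", "stem"},
    "offtopic": {"terrorism", "election", "parliament", "attack", "campaign"},
}

def topic_fractions(text):
    """Fraction of the page's words that fall in each topic vocabulary."""
    words = text.lower().split()
    return {topic: sum(w in vocab for w in words) / len(words)
            for topic, vocab in TOPIC_VOCAB.items()}

def is_anomalous(text, threshold=0.02):
    """Flag the page once off-topic words exceed ~2% of its content,
    mirroring the crossover point reported in the abstract."""
    return topic_fractions(text)["offtopic"] > threshold

clean = "the lion is a large predator mammal whose habitat spans the savanna " * 6
tampered = clean + "terrorism election attack"

print(is_anomalous(clean))     # False: no off-topic words present
print(is_anomalous(tampered))  # True: injected words exceed the 2% threshold
```

In the paper the classifier is an ANN trained on clean zoology and botany pages, so the "expected category" is learned rather than hard-coded; the fixed vocabularies here only serve to make the dilution effect visible in a few lines.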
