Automatic quality assessment of content created collaboratively by web communities: a case study of Wikipedia

From WikiPapers

Automatic quality assessment of content created collaboratively by web communities: a case study of Wikipedia is a 2009 publication written in English by Daniel H. Dalip, Marcos A. Gonçalves, Marco Cristo, and Pável Calado.

Abstract

The old dream of a universal repository containing all human knowledge and culture is becoming possible through the Internet and the Web. Moreover, this is happening with the direct, collaborative participation of people. Wikipedia is a great example: it is an enormous repository of information with free access and free editing, created by the community in a collaborative manner. However, this large amount of information, made available democratically and virtually without any control, raises questions about its relative quality. In this work we explore a significant number of quality indicators, some of them proposed by us and used here for the first time, and study their capability to assess the quality of Wikipedia articles. Furthermore, we explore machine learning techniques to combine these quality indicators into a single assessment judgment. Through experiments, we show that the most important quality indicators are the easiest ones to extract, namely, textual features related to length, structure and style. We were also able to determine which indicators did not contribute significantly to the quality assessment. These were, coincidentally, the most complex features, such as those based on link analysis. Finally, we compare our combination method with a state-of-the-art solution and show significant improvements in terms of effective quality prediction.
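The general idea can be illustrated with a minimal sketch (not the authors' implementation): a handful of easily extracted textual indicators of length, structure and style are computed from an article's wikitext and combined by a standard regression model into a single quality score. The specific feature definitions, the toy data, and the use of scikit-learn's SVR below are assumptions made for illustration only.

# Minimal sketch, assuming hand-crafted textual features combined by a
# generic regressor. Feature choices and model are illustrative, not the
# paper's exact method.
import re
from sklearn.svm import SVR

def extract_features(wikitext: str) -> list[float]:
    """Compute simple length/structure/style indicators from raw wikitext."""
    words = wikitext.split()
    sentences = [s for s in re.split(r"[.!?]+", wikitext) if s.strip()]
    return [
        float(len(words)),                                    # length in words
        float(len(sentences)),                                # number of sentences
        len(words) / max(len(sentences), 1),                  # average sentence length (style)
        float(len(re.findall(r"^=+.+=+$", wikitext, re.M))),  # section headings (structure)
        float(wikitext.count("[[")),                          # internal wiki links
        float(wikitext.count("<ref")),                        # citation markers
    ]

# Toy (wikitext, quality score) pairs; in the study, labels come from
# Wikipedia's own article assessment classes.
training = [
    ("== History ==\nShort stub.<ref>src</ref>", 1.0),
    ("== History ==\n== Reception ==\nLonger text. More sentences here. "
     "[[Link]] and [[Another]].<ref>a</ref><ref>b</ref>", 3.0),
]

X = [extract_features(text) for text, _ in training]
y = [score for _, score in training]

model = SVR()  # combines the indicators into one quality estimate
model.fit(X, y)

new_article = "== Plot ==\nSome new article text. It has two sentences. [[Link]]"
print(model.predict([extract_features(new_article)]))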

References

This section requires expansion. Please help!

Cited by

This publication has 3 citations. Only those publications available in WikiPapers are shown here:

