On using crowdsourcing and active learning to improve classification performance
|Author(s)||Costa J., Silva C., Antunes M., Ribeiro B.|
|Published in||International Conference on Intelligent Systems Design and Applications, ISDA|
|Keyword(s)||Active Learning, Crowdsourcing, Support Vector Machines, Text Classification, Classification performance, Data sets, Mechanical Turk, Real-world problems, Wikipedia, Classification (of information), Computer operating systems, Intelligent systems, Systems analysis, Websites, Text processing|
On using crowdsourcing and active learning to improve classification performance is a 2011 conference paper written in English by Costa J., Silva C., Antunes M., Ribeiro B. and published in International Conference on Intelligent Systems Design and Applications, ISDA.
Crowdsourcing is an emerging approach to general-purpose problem solving. Over the past decade, it has taken the form of enlisting a crowd of humans to help solve problems, and a growing number of real-world projects rely on it, such as Wikipedia, Linux or Amazon Mechanical Turk. In this paper, we evaluate its suitability for classification, namely whether it can outperform state-of-the-art models when combined with active learning techniques. We propose two approaches based on crowdsourcing and active learning, and empirically evaluate the performance of a baseline Support Vector Machine when the examples selected by active learning are made available for labelling by a crowd in a web-based scenario. The proposed crowdsourcing active learning approach was tested on the Jester data set, a text humour classification benchmark, yielding promising improvements over the baseline results.
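The paper itself is not available here, but the general scheme it describes (margin-based active learning, with the selected examples labelled by a crowd instead of a single expert) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it stands in a simple perceptron-style linear classifier for the SVM, simulates the crowd as a majority vote of noisy workers, and uses toy 2-D data rather than the Jester benchmark; all function names and parameters are hypothetical.

```python
import random

random.seed(0)

def predict(w, x):
    # Linear decision score; |score| is the distance proxy to the boundary.
    return sum(wi * xi for wi, xi in zip(w, x))

def train(w, data, epochs=20, lr=0.1):
    # Perceptron-style updates as a simple stand-in for SVM training.
    for _ in range(epochs):
        for x, y in data:
            if y * predict(w, x) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def crowd_label(true_y, n_workers=5, worker_acc=0.8):
    # A crowdsourced label: majority vote of independent noisy workers.
    votes = [true_y if random.random() < worker_acc else -true_y
             for _ in range(n_workers)]
    return 1 if sum(votes) > 0 else -1

# Toy data: label is the sign of x0 - x1 (constant 1.0 acts as a bias feature).
points = [[random.uniform(-1, 1), random.uniform(-1, 1), 1.0] for _ in range(200)]
pool = [(x, 1 if x[0] - x[1] > 0 else -1) for x in points]

labeled = pool[:10]        # small expert-labelled seed set
unlabeled = pool[10:150]   # pool the active learner queries from
test = pool[150:]

w = train([0.0, 0.0, 0.0], labeled)

for _ in range(20):        # active learning rounds
    # Margin sampling: query the unlabeled point closest to the boundary,
    # then hand it to the crowd instead of an expert annotator.
    x, true_y = min(unlabeled, key=lambda xy: abs(predict(w, xy[0])))
    unlabeled.remove((x, true_y))
    labeled.append((x, crowd_label(true_y)))
    w = train(w, labeled)

accuracy = sum((predict(w, x) > 0) == (y > 0) for x, y in test) / len(test)
print(round(accuracy, 2))
```

The key design point mirrored here is that the active learner decides *which* examples are worth labelling (those nearest the decision boundary), while the crowd supplies the labels cheaply; aggregating several noisy workers by majority vote keeps label quality high enough for retraining.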
This publication has been cited 5 time(s), but no articles for the citing works are available in WikiPapers.