On using crowdsourcing and active learning to improve classification performance
Abstract Crowdsourcing is an emergent trend for general-purpose classification problem solving. Over the past decade, this notion has been embodied by enlisting a crowd of humans to help solve problems. There is a growing number of real-world problems that take advantage of this technique, such as Wikipedia, Linux or Amazon Mechanical Turk. In this paper, we evaluate its suitability for classification, namely whether it can outperform state-of-the-art models when combined with active learning techniques. We propose two approaches based on crowdsourcing and active learning and empirically evaluate the performance of a baseline Support Vector Machine when active learning examples are chosen and made available for classification to a crowd in a web-based scenario. The proposed crowdsourcing active learning approach was tested with the Jester data set, a text humour classification benchmark, resulting in promising improvements over baseline results.
Bibtextype inproceedings  +
Doi 10.1109/ISDA.2011.6121700  +
Has author Costa J. + , Silva C. + , Antunes M. + , Ribeiro B. +
Has extra keyword Active Learning + , Classification performance + , Crowdsourcing + , Dataset + , Mechanical turks + , Real-world problem + , Support vector + , Text classification + , Wikipedia + , Classification (of information) + , Computer operating systems + , Intelligent systems + , Support vector machines + , Systems analysis + , Websites + , Text processing +
Has keyword Active Learning + , Crowdsourcing + , Support Vector Machines + , Text Classification +
Isbn 9781457716751  +
Language English +
Number of citations by publication 0  +
Number of references by publication 0  +
Pages 469–474  +
Published in International Conference on Intelligent Systems Design and Applications, ISDA +
Title On using crowdsourcing and active learning to improve classification performance +
Type conference paper  +
Year 2011 +
Creation date 8 November 2014 03:20:38  +
Categories Publications without license parameter  + , Publications without remote mirror parameter  + , Publications without archive mirror parameter  + , Publications without paywall mirror parameter  + , Conference papers  + , Publications without references parameter  + , Publications  +
Modification date 8 November 2014 03:20:38  +
Date 2011  +