| recommender system|
recommender system is included as a keyword or extra keyword in 0 datasets, 0 tools, and 31 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Capturing scholar's knowledge from heterogeneous resources for profiling in recommender systems||Amini B.
|Expert Systems with Applications||English||2014||In scholars' recommender systems, knowledge acquisition for constructing profiles is crucial because profiles provide the fundamental information for accurate recommendation. Despite the availability of various knowledge resources, identifying and collecting extensive knowledge in an unobtrusive manner is not straightforward. In order to capture scholars' knowledge, some questions must be answered: what knowledge resource is appropriate for profiling, how knowledge items can be unobtrusively captured, and how heterogeneity among different knowledge resources should be resolved. To address these issues, we first model the scholars' academic behavior and extract different knowledge items diffused over the Web, including mediated profiles in digital libraries, and then integrate those heterogeneous knowledge items through Wikipedia. Additionally, we analyze the correlation between knowledge items and partition the scholars' research areas for multi-disciplinary profiling. Compared to the state of the art, the results of an empirical evaluation show the efficiency of our approach in terms of completeness and accuracy. © 2014 Elsevier Ltd. All rights reserved.||0||0|
|A bookmark recommender system based on social bookmarking services and wikipedia categories||Yoshida T.
|SNPD 2013 - 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing||English||2013||Social bookmarking services allow users to add bookmarks of web pages with freely chosen keywords as tags. Personalized recommender systems recommend new and useful bookmarks added by other users. We propose a new method to find similar users and to select relevant bookmarks in a social bookmarking service. Our method is lightweight, because it uses a small set of important tags for each user to find useful bookmarks to recommend. Our method is also powerful, because it employs the Wikipedia category database to deal with the diversity of tags among users. The evaluation using the Hatena bookmark service in Japan shows that our method significantly increases the number of relevant bookmarks recommended without a notable increase in irrelevant bookmarks.||0||0|
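The Yoshida et al. abstract above describes matching users by a small set of important tags, generalised through Wikipedia categories. A minimal sketch of that idea (the tag counts, the category map, and the Jaccard scoring are illustrative assumptions, not the paper's actual implementation):

```python
# Sketch: find similar users by comparing each user's top tags,
# generalised through a hypothetical Wikipedia category map.
# All data below is made up for illustration.
from collections import Counter

# Hypothetical tag -> Wikipedia category mapping.
CATEGORY = {
    "python": "Programming languages",
    "ruby": "Programming languages",
    "sushi": "Japanese cuisine",
    "ramen": "Japanese cuisine",
}

def top_tags(tag_counts, k=2):
    """Keep only each user's k most frequently used tags (the 'small set')."""
    return [t for t, _ in Counter(tag_counts).most_common(k)]

def user_similarity(tags_a, tags_b):
    """Jaccard similarity over category-generalised top tags."""
    cats_a = {CATEGORY.get(t, t) for t in tags_a}
    cats_b = {CATEGORY.get(t, t) for t in tags_b}
    return len(cats_a & cats_b) / len(cats_a | cats_b)

alice = top_tags({"python": 9, "sushi": 4, "travel": 1})
bob = top_tags({"ruby": 7, "ramen": 5, "opera": 1})
# "python"/"ruby" and "sushi"/"ramen" differ as tags but match as categories.
print(user_similarity(alice, bob))  # → 1.0
```

The category lookup is what handles the "diversity of tags among users" the abstract mentions: without it, Alice and Bob would share no tags at all.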
|A recommender system for wiki pages: Usage based rating approach||Agasta Adline A.L.
|International Conference on Recent Trends in Information Technology, ICRTIT 2012||English||2012||Online educational resources are in abundance, which has spurred the need for recommender systems to assist learners in identifying learning resources that suit their needs. Wikipedia plays a major role in the online delivery of educational resources. The open access and editing features of wikis allow anonymous users to include resources, which necessitates the rating of wiki resources. Recommender systems suggest to users the items that suit them best. In this paper we propose a recommender system for wiki pages, which uses certain measures and metrics to rate the quality of wiki resources. The proposed model categorizes wiki educational resources based on the purpose of usage.||0||0|
|A semantic approach to recommending text advertisements for images||Weinan Zhang
|RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems||English||2012||In recent years, more and more images have been uploaded and published on the Web. Along with text Web pages, images have been becoming important media to place relevant advertisements. Visual contextual advertising, a young research area, refers to finding relevant text advertisements for a target image without any textual information (e.g., tags). There are two existing approaches, advertisement search based on image annotation, and more recently, advertisement matching based on feature translation between images and texts. However, the state of the art fails to achieve satisfactory results due to the fact that recommended advertisements are syntactically matched but semantically mismatched. In this paper, we propose a semantic approach to improving the performance of visual contextual advertising. More specifically, we exploit a large high-quality image knowledge base (ImageNet) and a widely-used text knowledge base (Wikipedia) to build a bridge between target images and advertisements. The image-advertisement match is built by mapping images and advertisements into the respective knowledge bases and then finding semantic matches between the two knowledge bases. The experimental results show that semantic match outperforms syntactic match significantly using test images from Flickr. We also show that our approach gives a large improvement of 16.4% on the precision of the top 10 matches over previous work, with more semantically relevant advertisements recommended. Copyright © 2012 by the Association for Computing Machinery, Inc. (ACM).||0||0|
|A technique for suggesting related Wikipedia articles using link analysis||Markson C.
|Proceedings of the ACM/IEEE Joint Conference on Digital Libraries||English||2012||With more than 3.7 million articles, Wikipedia has become an important social medium for sharing knowledge. However, with this enormous repository of information, it can often be difficult to locate fundamental topics that support lower-level articles. By exploiting the information stored in the links between articles, we propose that related companion articles can be automatically generated to help further the reader's understanding of a given topic. This approach to a recommendation system uses tested link analysis techniques to present users with a clear path to related high-level articles, furthering the understanding of low-level topics.||0||0|
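The link-analysis idea in the Markson abstract above, i.e. surfacing higher-level companion articles by exploiting the links between articles, might be sketched like this (the toy link graph and the co-link scoring are assumptions for illustration; the paper's actual technique may differ):

```python
# Sketch: suggest higher-level companion articles for a given page by
# scoring articles that the page's own outlinks point to. A fundamental
# topic tends to be linked from many of a page's neighbors.
from collections import Counter

LINKS = {  # article -> articles it links to (toy data)
    "Quicksort": ["Sorting algorithm", "Recursion"],
    "Merge sort": ["Sorting algorithm", "Recursion"],
    "Sorting algorithm": ["Algorithm"],
    "Recursion": ["Algorithm", "Mathematics"],
}

def related(article, k=2):
    """Score candidates by how often the article's outlinks point to them."""
    scores = Counter()
    for neighbor in LINKS.get(article, []):
        for target in LINKS.get(neighbor, []):
            if target != article:
                scores[target] += 1
    return [a for a, _ in scores.most_common(k)]

# "Algorithm" is reached via both outlinks of "Quicksort", so it ranks first.
print(related("Quicksort"))  # → ['Algorithm', 'Mathematics']
```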
|Exploiting scholar's background knowledge to improve recommender system for digital libraries||Amini B.
|International Journal of Digital Content Technology and its Applications||English||2012||Recommender systems for digital libraries have received increasing attention since they assist scholars in finding the most appropriate articles for research purposes. Many research studies have recently been conducted to model user interests in order to suggest scientific articles based on the scholar's preferences. However, a major problem of such systems is that they do not subsume the user's background knowledge into the recommendation process, and scholars typically have to manually sift through irrelevant articles retrieved from digital libraries. Therefore, a challenging task is how to collect sufficient scholarly academic knowledge and exploit it in the personalization process in order to improve recommendation accuracy. To address this problem, a recommender framework that consolidates the scholar's background knowledge based on ontological modeling is proposed. The framework exploits Wikipedia as a lexicographic database for concept disambiguation and semantic concept mapping. A practical evaluation by a group of scholars over the CiteSeerX digital library indicates an improvement in the accuracy of the recommendation task.||0||0|
|Tasteweights: A visual interactive hybrid recommender system||Svetlin Bostandjiev
|RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems||English||2012||This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content. Copyright © 2012 by the Association for Computing Machinery, Inc. (ACM).||0||0|
|A probabilistic approach to semantic collaborative filtering using world knowledge||Lee J.-W.
|Journal of Information Science||English||2011||Collaborative filtering, which is a popular approach for developing recommendation systems, exploits the exact match of items that users have accessed. If the users access different items, they are considered as unlike-minded users even though they may actually be semantically like-minded. To solve this problem, we propose a semantic collaborative filtering model that represents the semantics of users' preferences and items with their corresponding concepts. In this work, we extend the Bayesian belief network (BBN)-based model because it provides a clear formalism for representing users' preferences and items with concepts. Because the conventional BBN-based model regards the index terms derived from items as concepts, it does not exploit domain knowledge. We have therefore extended this conventional model to exploit concepts derived from domain knowledge. A practical approach to exploiting domain knowledge is to use world knowledge such as the Open Directory Project web directory or the Wikipedia encyclopaedia. Through experiments, we show that our model outperforms other conventional collaborative filtering models while comparing the recommendation quality when using different world knowledge.||0||0|
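The semantic collaborative filtering idea in the Lee abstract above, treating users as like-minded when their items map to shared concepts even though the items themselves differ, can be sketched as follows (the item-to-concept mapping is a made-up stand-in for concepts derived from the Open Directory Project or Wikipedia, and this set-overlap test only illustrates the intuition, not the paper's Bayesian belief network model):

```python
# Sketch: concept-level matching for collaborative filtering. Users who
# accessed different items can still match if the items map to shared
# concepts. Item-to-concept assignments are illustrative assumptions.
CONCEPTS = {
    "item_a": {"machine learning"},
    "item_b": {"machine learning"},
    "item_c": {"cooking"},
}

def concept_profile(items):
    """Union of the concepts behind a user's accessed items."""
    profile = set()
    for item in items:
        profile |= CONCEPTS.get(item, set())
    return profile

def like_minded(items_u, items_v):
    """Compare exact item match against semantic (concept) match."""
    exact = bool(set(items_u) & set(items_v))
    semantic = bool(concept_profile(items_u) & concept_profile(items_v))
    return exact, semantic

# No item in common, but both users touch "machine learning":
# exact-match CF misses the similarity, the concept level catches it.
print(like_minded(["item_a"], ["item_b"]))  # → (False, True)
```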
|Categorising social tags to improve folksonomy-based recommendations||Ivan Cantador
|Journal of Web Semantics||English||2011||In social tagging systems, users have different purposes when they annotate items. Tags not only depict the content of the annotated items, for example by listing the objects that appear in a photo, or express contextual information about the items, for example by providing the location or the time in which a photo was taken, but also describe subjective qualities and opinions about the items, or can be related to organisational aspects, such as self-references and personal tasks. Current folksonomy-based search and recommendation models exploit the social tag space as a whole to retrieve those items relevant to a tag-based query or user profile, and do not take into consideration the purposes of tags. We hypothesise that a significant percentage of tags are noisy for content retrieval, and believe that the distinction of the personal intentions underlying the tags may be beneficial to improve the accuracy of search and recommendation processes. We present a mechanism to automatically filter and classify raw tags in a set of purpose-oriented categories. Our approach finds the underlying meanings (concepts) of the tags, mapping them to semantic entities belonging to external knowledge bases, namely WordNet and Wikipedia, through the exploitation of ontologies created within the W3C Linking Open Data initiative. The obtained concepts are then transformed into semantic classes that can be uniquely assigned to content- and context-based categories. The identification of subjective and organisational tags is based on natural language processing heuristics. We collected a representative dataset from Flickr social tagging system, and conducted an empirical study to categorise real tagging data, and evaluate whether the resultant tags categories really benefit a recommendation model using the Random Walk with Restarts method. The results show that content- and context-based tags are considered superior to subjective and organisational tags, achieving equivalent performance to using the whole tag space. © 2010 Elsevier B.V. All rights reserved.||0||0|
|Content-based recommendation algorithms on the Hadoop mapreduce framework||De Pessemier T.
|WEBIST 2011 - Proceedings of the 7th International Conference on Web Information Systems and Technologies||English||2011||Content-based recommender systems are widely used to generate personal suggestions for content items based on their metadata description. However, due to the required (text) processing of these metadata, the computational complexity of the recommendation algorithms is high, which hampers their application at large scale. This computational load reinforces the necessity of a reliable, scalable and distributed processing platform for calculating recommendations. Hadoop is such a platform that supports data-intensive distributed applications based on map and reduce tasks. Therefore, we investigated how Hadoop can be utilized as a cloud computing platform to solve the scalability problem of content-based recommendation algorithms. The various MapReduce operations, necessary for keyword extraction and generating content-based suggestions for the end-user, are elucidated in this paper. Experimental results on Wikipedia articles prove the appropriateness of Hadoop as an efficient and scalable platform for computing content-based recommendations.||0||0|
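The keyword-extraction step the De Pessemier abstract alludes to can be written as plain map and reduce functions that mimic Hadoop's mapper/reducer contract (the documents and stopword list below are illustrative assumptions; a real deployment would run these on a Hadoop cluster, not in-process):

```python
# Sketch: map and reduce phases for keyword extraction, written as
# ordinary functions to mimic Hadoop's MapReduce contract.
from collections import defaultdict

STOPWORDS = {"the", "a", "of"}  # illustrative stopword list

def map_phase(doc_id, text):
    """Mapper: emit (keyword, doc_id) pairs for each non-stopword term."""
    for word in text.lower().split():
        if word not in STOPWORDS:
            yield word, doc_id

def reduce_phase(pairs):
    """Reducer: group pairs by keyword into an inverted index."""
    index = defaultdict(set)
    for word, doc_id in pairs:
        index[word].add(doc_id)
    return index

docs = {"d1": "the history of Hadoop", "d2": "Hadoop the platform"}
pairs = [p for did, txt in docs.items() for p in map_phase(did, txt)]
index = reduce_phase(pairs)
print(sorted(index["hadoop"]))  # → ['d1', 'd2']
```

On Hadoop proper, the framework shuffles and sorts the mapper output by key before the reducer sees it; here the `defaultdict` grouping plays that role.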
|Leveraging semantic networks for personalized content in health recommender systems||Wiesner M.
|Proceedings - IEEE Symposium on Computer-Based Medical Systems||English||2011||Since the emergence of the Internet in the early 1990s, medical knowledge has been spreading around the globe increasingly fast. Though publicly available, determining its individual relevance is a difficult task for most non-professionals. Additionally, relationships between medical terms are hard to discover even for professionals. In this paper we present an approach showing how semantic query expansion can be exploited to enhance classic information retrieval (IR) techniques in order to gather health information artifacts for consumers. The approach is based on health-related semantic networks which are automatically generated from public resources such as Wikipedia. A scenario for integrating such networks is a so-called health recommender system (HRS), which can be embedded into a personal health record system (PHRS). This way, relevant personalized medical content can be delivered automatically to end users and owners of health records.||0||0|
|Using Wikipedia to boost collaborative filtering techniques||Gilad Katz
|Adapting recommender systems to the requirements of personal health record systems||Wiesner M.
|IHI'10 - Proceedings of the 1st ACM International Health Informatics Symposium||English||2010||In the future many people in industrialized countries will manage their personal health data electronically in centralized, reliable and trusted repositories - so-called personal health record systems (PHR). At this stage PHR systems still fail to satisfy the individual medical information needs of their users. Personalized recommendations could solve this problem. A first approach of integrating recommender system (RS) methodology into personal health records - termed health recommender system (HRS) - is presented. By exploitation of existing semantic networks like Wikipedia a health graph data structure is obtained. The data kept within such a graph represent health related concepts and are used to compute semantic distances among pairs of such concepts. A ranking procedure based on the health graph is outlined which enables a match between entries of a PHR system and health information artifacts. This way a PHR user will obtain individualized health information he might be interested in.||0||0|
|Human computer collaboration to improve annotations in semantic wikis||Boyer A.
|WEBIST 2010 - Proceedings of the 6th International Conference on Web Information Systems and Technology||English||2010||Semantic wikis are promising tools for producing structured and unstructured data. However, they suffer from a lack of user-provided semantic annotations, resulting in a loss of efficiency despite their high potential. We propose a system that suggests automatically computed annotations to users in peer-to-peer semantic wikis. Users only have to validate, complete, modify, refuse or ignore these suggested annotations. The annotation task therefore becomes easier, and more users will provide annotations. The system is based on collaborative filtering recommender systems: it exploits not the content of the pages but the usage users make of these pages. The resulting semantic wikis contain several kinds of annotations with different statuses: human-provided, computer-provided, or jointly human- and computer-provided annotations.||0||0|
|Supporting multi-agent reputation calculation in the Wikipedia Recommender System||Jensen C.D.||IET Information Security||English||2010||The Wikipedia is a web-based encyclopedia, written and edited collaboratively by Internet users. Over the past decade, the Wikipedia has experienced a dramatic growth in popularity and is considered by many to be the primary source of information on the Internet. The Wikipedia has an extremely open editorial policy that allows anybody to create or modify articles. This has resulted in a broad and detailed coverage of subjects, but it has also caused problems relating to the quality of articles. The Wikipedia Recommender System (WRS) was developed to help human users determine the credibility of an article based on feedback from other Wikipedia users. The WRS calculates a personalised rating for any Wikipedia article based on feedback (recommendations) provided by other Wikipedia users. As part of this process, WRS users are expected to provide their own feedback about the quality of Wikipedia articles that they have read. This makes the WRS a rating-based collaborative filtering system, which implements trust metrics to determine the weight of feedback from different recommenders. In this paper the authors describe the WRS, outlining some of the requirements and constraints that shaped the design of the system. The authors also provide a brief overview of the implementation of the WRS prototype. The WRS addresses the general problem of establishing trust in a collaboratively generated resource in a distributed multi-agent system, so the authors believe that the general architecture that underlies the WRS applies to many other applications in such systems.||0||0|
|TasTicWiki: A semantic wiki with content recommendation||Ruiz-Montiel M.
|CEUR Workshop Proceedings||English||2010||Wikis are a great tool inside the Social Web, as they provide the chance of creating collaborative knowledge in a quick way. Semantic wikis are becoming popular as Web technologies evolve: ontologies and semantic markup on the Web allow the generation of machine-readable information. Semantic wikis are often seen as small semantic webs as they provide support for enhanced navigation and searching of their contents, just what the standards of the Semantic Web aim to offer. Moreover, the great amount of information normally present inside wikis, or any web page, creates the necessity of some kind of filtering or personalized recommendation in order to lighten the search of interesting items. We have developed TasTicWiki, a novel semantic wiki engine which takes advantage of semantic information not only to enhance navigation and searching, but also to provide recommendation services.||0||0|
|The entanglement of trust and knowledge on the Web||Simon J.||Ethics and Information Technology||English||2010||In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may have a default to trust, we remain and should remain epistemically vigilant; we look out and need to look out for signs of insincerity and dishonesty in our attempts to know. A fundamental requirement for such vigilance is transparency: in order to critically assess epistemic agents, content and processes, we need to be able to access and address them. On the Web, this request for transparency becomes particularly pressing if (a) trust is placed in unknown human epistemic agents and (b) if it is placed in non-human agents, such as algorithms. I give examples of the entanglement between knowledge and trust on the Web and draw conclusions about the forms of transparency needed in such systems to support epistemically vigilant behaviour, which empowers users to become responsible and accountable knowers. © 2010 Springer Science+Business Media B.V.||0||0|
|Using Wikipedia to alleviate data sparsity issues in Recommender Systems||Loizou A.
|Proceedings - 2010 5th International Workshop on Semantic Media Adaptation and Personalization, SMAP 2010||English||2010||This paper proposes that Wikipedia can effectively be used in order to lessen the negative effects of data sparsity on the accuracy of recommendations produced by Recommender Systems, provided that domain resources available for recommendation can successfully be mapped to Wikipedia articles. Under the assumption that hyperlinks between Wikipedia articles convey latent semantic relationships between the concepts they represent, we argue that by representing domain resources as a set of interconnected Wikipedia articles the volume of information available to a recommender algorithm increases, enabling it to improve its performance. The approach is evaluated using two real world datasets, giving positive results.||0||0|
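The core assumption in the Loizou abstract above, that expanding an item's Wikipedia articles along hyperlinks increases overlap between otherwise sparse representations, can be illustrated with a toy graph (the link data and one-hop expansion are hypothetical stand-ins, not the paper's method):

```python
# Sketch: densifying item representations with Wikipedia hyperlinks.
# Each item is a set of Wikipedia articles; expanding that set along
# links gives sparse items more overlap. Toy data for illustration.
ARTICLE_LINKS = {
    "Jazz": {"Blues", "Improvisation"},
    "Blues": {"Jazz"},
    "Baroque music": {"Counterpoint"},
}

def expand(articles):
    """Add every article reachable by one hyperlink hop."""
    expanded = set(articles)
    for a in articles:
        expanded |= ARTICLE_LINKS.get(a, set())
    return expanded

def similarity(arts_a, arts_b):
    """Jaccard similarity over the expanded article sets."""
    a, b = expand(arts_a), expand(arts_b)
    return len(a & b) / len(a | b)

# The raw article sets {"Jazz"} and {"Blues"} are disjoint; only the
# hyperlink expansion makes the similarity non-zero.
print(similarity({"Jazz"}, {"Blues"}))  # → 0.6666666666666666 (Jaccard 2/3)
```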
|A web recommender system based on dynamic sampling of user information access behaviors||Jilin Chen
|Proceedings - IEEE 9th International Conference on Computer and Information Technology, CIT 2009||English||2009||In this study, we propose a Gradual Adaption Model for a Web recommender system. This model is used to track users' focus of interests and its transition by analyzing their information access behaviors, and recommend appropriate information. A set of concept classes are extracted from Wikipedia. The pages accessed by users are classified by the concept classes, and grouped into three terms of short, medium and long periods, and two categories of remarkable and exceptional for each concept class, which are used to describe users' focus of interests, and to establish reuse probability of each concept class in each term for each user by Full Bayesian Estimation as well. According to the reuse probability and period, the information that a user is likely to be interested in is recommended. In this paper, we propose a new approach by which short and medium periods are determined based on dynamic sampling of user information access behaviors. We further present experimental simulation results, and show the validity and effectiveness of the proposed system.||0||0|
|AAAI 2008 workshop reports||Anand S.S.
|AI Magazine||English||2009||AAAI was pleased to present the AAAI-08 Workshop Program, held Sunday and Monday, July 13-14, in Chicago, Illinois, USA. The program included the following 15 workshops: Advancements in POMDP Solvers; AI Education Workshop Colloquium; Coordination, Organizations, Institutions, and Norms in Agent Systems; Enhanced Messaging; Human Implications of Human-Robot Interaction; Intelligent Techniques for Web Personalization and Recommender Systems; Metareasoning: Thinking about Thinking; Multidisciplinary Workshop on Advances in Preference Handling; Search in Artificial Intelligence and Robotics; Spatial and Temporal Reasoning; Trading Agent Design and Analysis; Transfer Learning for Complex Tasks; What Went Wrong and Why: Lessons from AI Research and Applications; and Wikipedia and Artificial Intelligence: An Evolving Synergy. Copyright © 2009, Association for the Advancement of Artificial Intelligence. All rights reserved.||0||0|
|Knowledge infusion into content-based recommender systems||Giovanni Semeraro
De Gemmis M.
|RecSys'09 - Proceedings of the 3rd ACM Conference on Recommender Systems||English||2009||Content-based recommender systems try to recommend items similar to those a given user has liked in the past. The basic process consists of matching up the attributes of a user profile, in which preferences and interests are stored, with the attributes of a content object (item). Common-sense and domain-specific knowledge may be useful to give some meaning to the content of items, thus helping to generate more informative features than "plain" attributes. The process of learning user profiles could also benefit from the infusion of exogenous knowledge or open source knowledge, with respect to the classical use of endogenous knowledge (extracted from the items themselves). The main contribution of this paper is a proposal for knowledge infusion into content-based recommender systems, which suggests a novel view of this type of systems, mostly oriented to content interpretation by way of the infused knowledge. The idea is to provide the system with the "linguistic" and "cultural" background knowledge that hopefully allows a more accurate content analysis than classic approaches based on words. A set of knowledge sources is modeled to create a memory of linguistic competencies and of more specific world "facts", that can be exploited to reason about content as well as to support the user profiling and recommendation processes. The modeled knowledge sources include a dictionary, Wikipedia, and content generated by users (i.e. tags provided on items), while the core of the reasoning component is a spreading activation algorithm. Copyright 2009 ACM.||0||0|
|Using wikipedia content to derive an ontology for modeling and recommending web pages across systems||Chang P.-C.
|CEUR Workshop Proceedings||English||2009||In this work, we are building a cross-system recommender at the client side that uses the Wikipedia's content to derive an ontology for content and user modeling. We speculate the collaborative content of Wikipedia may cover many of the topical areas that people are generally interested in and the vocabulary may be closer to the general public users and updated sooner. Using the Wikipedia derived ontology as a shared platform to model web pages also addresses the issue of cross system recommendations, which generally requires a unified protocol or a mediator. Preliminary tests of our system may indicate that our derived ontology is a fair content model that maps an unknown webpage to its related topical categories. Once page topics can be identified, user models are formulated through analyzing usage pages. Eventually, we will formally evaluate the topicality-based user model. Copyright 2004 ACM.||0||0|
|Altruism, selfishness, and destructiveness on the social web||John Riedl||Lecture Notes in Computer Science||English||2008||Many online communities are emerging that, like Wikipedia, bring people together to build community-maintained artifacts of lasting value (CALVs). What is the nature of people's participation in building these repositories? What are their motives? In what ways is their behavior destructive instead of constructive? Motivating people to contribute is a key problem because the quantity and quality of contributions ultimately determine a CALV's value. We pose three related research questions: 1) How does intelligent task routing-matching people with work-affect the quantity of contributions? 2) How does reviewing contributions before accepting them affect the quality of contributions? 3) How do recommender systems affect the evolution of a shared tagging vocabulary among the contributors? We will explore these questions in the context of existing CALVs, including Wikipedia, Facebook, and MovieLens.||0||0|
|Applying Web 2.0 design principles in the design of cooperative applications||Pinkwart N.||Lecture Notes in Computer Science||English||2008||"Web 2.0" is a term frequently mentioned in media - apparently, applications such as Wikipedia, Social Network Services, Online Shops with integrated recommender systems, or Sharing Services like flickr, all of which rely on user's activities, contributions, and interactions as a central factor, are fascinating for the general public. This leads to a success of these systems that seemingly exceeds the impact of most "traditional" groupware applications that have emerged from CSCW research. This paper discusses differences and similarities between novel Web 2.0 tools and more traditional CSCW application in terms of technologies, system design and success factors. Based on this analysis, the design of the cooperative learning application LARGO is presented to illustrate how Web 2.0 success factors can be considered for the design of cooperative environments.||0||0|
|Article recommendation based on a topic model for Wikipedia Selection for Schools||Choochart Haruechaiyasak
|Lecture Notes in Computer Science||English||2008||The 2007 Wikipedia Selection for Schools is a collection of 4,625 selected articles from Wikipedia as educational for children. Users can currently access articles within the collection via two different methods: (1) by browsing on either a subject index or a title index sorted alphabetically, and (2) by following hyperlinks embedded within article pages. These two retrieval methods are considered static and subjected to human editors. In this paper, we apply the Latent Dirichlet Allocation (LDA) algorithm to generate a topic model from articles in the collection. Each article can be expressed by a probability distribution on the topic model. We can recommend related articles by calculating the similarity measures among the articles' topic distribution profiles. Our initial experimental results showed that the proposed approach could generate many highly relevant articles, some of which are not covered by the hyperlinks in a given article.||0||0|
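The LDA-based recommendation in the Haruechaiyasak abstract reduces to comparing per-article topic distributions. A minimal sketch (the three-topic distributions are placeholders for real LDA output, and cosine similarity stands in for whichever similarity measure the paper actually uses):

```python
# Sketch: recommend related articles by comparing per-article topic
# distributions, as an LDA model would produce. Made-up placeholder data.
import math

TOPIC_DIST = {  # article -> probability over 3 hypothetical topics
    "Photosynthesis": [0.8, 0.1, 0.1],
    "Chlorophyll": [0.7, 0.2, 0.1],
    "Roman Empire": [0.1, 0.1, 0.8],
}

def cosine(p, q):
    """Cosine similarity between two topic distributions."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

def recommend(article, k=1):
    """Rank the other articles by similarity of their topic profiles."""
    target = TOPIC_DIST[article]
    ranked = sorted(
        (other for other in TOPIC_DIST if other != article),
        key=lambda o: cosine(target, TOPIC_DIST[o]),
        reverse=True,
    )
    return ranked[:k]

# Both botany articles load on the same topic, so they recommend each other.
print(recommend("Photosynthesis"))  # → ['Chlorophyll']
```

Unlike the static subject index and embedded hyperlinks the abstract describes, this ranking surfaces related articles even when no editor has linked them.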
|Social software: Fun and games, or business tools?||Warr W.A.||Journal of Information Science||English||2008||This is the era of social networking, collective intelligence, participation, collaborative creation, and borderless distribution. Every day we are bombarded with more publicity about collaborative environments, news feeds, blogs, wikis, podcasting, webcasting, folksonomies, social bookmarking, social citations, collaborative filtering, recommender systems, media sharing, massive multiplayer online games, virtual worlds, and mash-ups. This sort of anarchic environment appeals to the digital natives, but which of these so-called 'Web 2.0' technologies are going to have a real business impact? This paper addresses the impact that issues such as quality control, security, privacy and bandwidth may have on the implementation of social networking in hide-bound, large organizations.||0||0|
|SuggestBot: Using intelligent task routing to help people find work in wikipedia||Dan Cosley
|International Conference on Intelligent User Interfaces, Proceedings IUI||English||2007||Member-maintained communities ask their users to perform tasks the community needs. From Slashdot, to IMDb, to Wikipedia, groups with diverse interests create community-maintained artifacts of lasting value (CALV) that support the group's main purpose and provide value to others. Said communities don't help members find work to do, or do so without regard to individual preferences, such as Slashdot assigning meta-moderation randomly. Yet social science theory suggests that reducing the cost and increasing the personal value of contribution would motivate members to participate more. We present SuggestBot, software that performs intelligent task routing (matching people with tasks) in Wikipedia. SuggestBot uses broadly applicable strategies of text analysis, collaborative filtering, and hyperlink following to recommend tasks. SuggestBot's intelligent task routing increases the number of edits by roughly four times compared to suggesting random articles. Our contributions are: 1) demonstrating the value of intelligent task routing in a real deployment; 2) showing how to do intelligent task routing; and 3) sharing our experience of deploying a tool in Wikipedia, which offered both challenges and opportunities for research. Copyright 2007 ACM.||0||1|
|Applications of knowledge currency||Carrillo C.
De la Rosa J.L.
|Internet and Information Systems in the Digital Age Challenges and Solutions - Proceedings of the 7th International Business Information Management Association Conference, IBIMA 2006||English||2006||This paper describes the state of the art for social currencies, cognitive capitalism and currencies on the World Wide Web, which is used to found a new currency based on knowledge. This new currency better reflects the wealth of nations throughout the world, supports more social economic activity and is better adapted to the challenges of digital business ecosystems. The basic premise of the new currency is a knowledge measurement pattern that is formulated as a new alternative social currency. It is a currency to facilitate the conservation and storage of knowledge, its organization and categorization, but chiefly its transference and exploitation. The proposal turns recommender agents into the new currency. An example is presented to show how recommender agents receive a new use, transforming them into objects of transaction. A further application presents the knowledge currency as a digital currency, through Citation Auctions in the research world.||0||0|
|Social currencies and knowledge currencies||Carrillo C.
De La Rosa J.L.
|Frontiers in Artificial Intelligence and Applications||English||2006||This paper describes the state of the art for social currencies and cognitive capitalism, which is used to found a new currency based on knowledge. This new currency better reflects the wealth of nations throughout the world, supports more social economic activity and is better adapted to the challenges of digital business ecosystems. The basic premise of the new currency is a knowledge measurement pattern that is formulated as a new alternative social currency. Therefore, it is a first step contributing to the worldwide evolution towards a knowledge society. It is a currency to facilitate the conservation and storage of knowledge, its organization and categorization, but chiefly its transference and exploitation. The proposal turns recommender agents into the new currency. An example is presented to show how recommender agents receive a new use, transforming them into objects of transaction.||0||0|