2010


This is a list of 8 events held and 1091 publications published in 2010.

Events

Name | City | Country | Date
RecentChangesCamp 2010 Canberra | Canberra | Australia | 11 August 2010
RecentChangesCamp 2010 Montreal | Montreal | Canada | 25 June 2010
Wiki Loves Monuments 2010 | | Netherlands | September 2010
WikiSym 2010 | Gdańsk | Poland | 7 July 2010
Wikimania 2010 | Gdańsk | Poland | 9 July 2010
Wikipedia CPOV Conference 2010 Amsterdam | Amsterdam | Netherlands | 26 March 2010
Wikipedia CPOV Conference 2010 Bangalore | Bangalore | India | 12 January 2010
Wikipedia CPOV Conference 2010 Leipzig | Leipzig | Germany | 24 September 2010


Publications

Title | Author(s) | Keyword(s) | Published in | Language | Abstract | R | C
"Be Nice": Wikipedia norms for supportive communication Joseph M. Reagle Collaboration
Communication
Prosocial
Supportive
Wikipedia
New Review of Hypermedia and Multimedia English Wikipedia is acknowledged to have been home to some bitter disputes. Indeed, conflict at Wikipedia is said to be "as addictive as cocaine". Yet, such observations are not cynical commentary but motivation for a collection of social norms. These norms speak to the intentional stance and communicative behaviors Wikipedians should adopt when interacting with one another. In the following pages, I provide a survey of these norms on the English Wikipedia and argue that they can be characterized as supportive based on Jack Gibb's classic communication article "Defensive Communication". 0 1
"Got You!": Automatic vandalism detection in wikipedia with web-based shallow syntactic-semantic modeling Wang W.Y.
McKeown K.R.
Coling 2010 - 23rd International Conference on Computational Linguistics, Proceedings of the Conference English Discriminating vandalism edits from non-vandalism edits in Wikipedia is a challenging task, as ill-intentioned edits can include a variety of content and be expressed in many different forms and styles. Previous studies are limited to rule-based methods and learning based on lexical features, lacking in linguistic analysis. In this paper, we propose a novel Web-based shallow syntactic-semantic modeling method, which utilizes Web search results as a resource and trains topic-specific n-tag and syntactic n-gram language models to detect vandalism. By combining basic task-specific and lexical features, we have achieved high F-measures using logistic boosting and logistic model tree classifiers, surpassing the results reported by major Wikipedia vandalism detection systems. 0 0
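
The core idea behind language-model-based vandalism detection of this kind can be sketched generically (this is an illustration of the technique, not the authors' system; all names here are invented): train an n-gram language model on trusted topic text and flag inserted text whose normalized log-probability is unusually low.

```python
# Minimal sketch: score inserted text against a bigram language model trained
# on trusted, topic-specific text. A low normalized log-probability is one
# signal a vandalism classifier could combine with lexical features.
import math
from collections import Counter

def train_bigram_lm(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(unigrams)

def avg_log_prob(tokens, unigrams, bigrams, vocab):
    # add-one smoothed bigram probabilities, averaged per token pair
    lps = [math.log((bigrams[(p, c)] + 1) / (unigrams[p] + vocab))
           for p, c in zip(tokens, tokens[1:])]
    return sum(lps) / max(len(lps), 1)

topic_text = "the cell membrane regulates transport into the cell".split()
uni, bi, v = train_bigram_lm(topic_text)
edit = "buy cheap pills online".split()
print(avg_log_prob(edit, uni, bi, v))  # low score suggests off-topic insertion
```
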
"Got you!": automatic vandalism detection in Wikipedia with web-based shallow syntactic-semantic modeling William Yang Wang
Kathleen R. McKeown
COLING English 0 0
"Post it notes": Students' perceptions on assessment and reflective learning in the foreign language learning process using wikis Mideros D.
Roberts N.
Autonomy
Motivation
Perceptions of assessment
Social learning
Wiki
IMSCI 2010 - 4th International Multi-Conference on Society, Cybernetics and Informatics, Proceedings English This paper describes the experience of a qualitative case study in which a wiki was implemented as a strategy for independent and interactive learning and practice of the receptive foreign language skills of reading and listening. The main objective was an in-depth exploration of students' perceptions of and responses to the implementation, paying particular attention to the influence that a small percentage of the overall course grade could have had on the students' active, passive, or disengaged response to the exercise. The study observed and analyzed the voices of a group of Level 11 Spanish students of the Spanish Degree Program at the University of the West Indies, St. Augustine Campus during the first semester of the academic year 2009-2010. The paper reflects on the benefits and challenges that Web 2.0 technology brings to the process of learning a foreign language, in this case Spanish, analyzing motivation and students' dis/engagement with their own learning process in higher education. 0 0
"What I know is...": Establishing credibility on wikipedia talk pages Meghan Oxley
Morgan J.T.
Mark Zachry
Brian Hutchinson
Computer-mediated communication
Computer-Supported Cooperative Work
Sociotechnical systems
Wiki
WikiSym 2010 English This poster presents a new theoretical framework and research method for studying the relationship between specific types of authority claims and the attempts of contributors to establish credibility in online, collaborative environments. We describe a content analysis method for coding authority claims based on linguistic and rhetorical cues in naturally occurring, text-based discourse. We present results from a preliminary analysis of a sample of Wikipedia talk page discussions focused on recent news events. This method provides a novel framework for capturing and understanding these persuasion-oriented behaviors, and shows potential as a tool for online communication research, including automated text analysis using trained natural language processing systems. 0 0
"What i know is...": establishing credibility on Wikipedia talk pages Meghan Oxley
Jonathan T. Morgan
Mark Zachry
Brian Hutchinson
Computer-mediated communication
Computer-Supported Cooperative Work
Sociotechnical systems
Wiki
WikiSym English 0 0
"Wikipedias" y biblioteca pública. Participar en la información local digital a través de "localpedias" José-Antonio Gómez-Hernández Wikipedia
Public libraries
Local digital content
Anuario ThinkEPI Spanish This paper justifies participation by public libraries in designing and publishing in “localpedias” as a way to promote collaboration in the creation of local content. For this purpose, the “localpedia” concept is explained and some of the main Spanish localpedia experiences described. Finally, some difficulties in consolidating this way of creating and sharing local knowledge are discussed. 3 0
13th international workshop on the web and databases - WebDB 2010 Dong X.L.
Naumann F.
SIGMOD Record English WebDB 2010, the 13th International Workshop on the Web and Databases, took place on June 6, 2010. Christian Bizer, cofounder of the DBpedia project, compared the Linked Data movement, which stems from the Semantic Web research area, with research in the field of Dataspaces. The research session entitled Linked data and Wikipedia featured papers entitled 'An agglomerative query model for discovery in linked data: semantics and approach' and 'XML-based RDF data management for efficient query processing'. The other sessions of the workshop included papers entitled 'Find your advisor: robust knowledge gathering from the Web', 'Redundancy-driven web data extraction and integration', and 'Using latent-structure to detect objects on the Web'. Topics such as 'Manimal: relational optimization for data-intensive programs' and 'Learning topical transition probabilities in click through data with regression models' were also discussed. 0 0
2010 3rd International Workshop on Managing Requirements Knowledge, MaRK'10: Foreword Maalej W.
Thurimella A.K.
Felfernig A.
2010 3rd International Workshop on Managing Requirements Knowledge, MaRK'10 English The third international workshop on managing requirements knowledge, MaRK'10, focuses on potentials and benefits of lightweight knowledge management approaches, such as ontologies, semantic Wikis and rationale management techniques, applied to requirements engineering. Novel ideas, emerging methodologies, frameworks and tools as well as industrial experiences for capturing, representing, sharing and reusing tacit knowledge in requirements engineering processes are discussed. Furthermore, the workshop will provide an interactive exchange platform between the knowledge management community, requirements engineering community and industrial practitioners. 0 0
A Content Analysis: How Wikipedia Talk Pages Are Used Jodi Schneider
Alexandre Passant
John G. Breslin
Web Science Conference English 0 0
A Cultural and Political Economy of Web 2.0 Robert W. Gehl English In this dissertation, I explore Web 2.0, an umbrella term for Web-based software and services such as blogs, wikis, social networking, and media sharing sites. This range of Web sites is complex, but is tied together by one key feature: the users of these sites and services are expected to produce the content included in them. That is, users write and comment upon blogs, produce the material in wikis, make connections with one another in social networks, and produce videos in media sharing sites. This has two implications. First, the increase of user-led media production has led to proclamations that mass media, hierarchy, and authority are dead, and that we are entering into a time of democratic media production. Second, this mode of media production relies on users to supply what was traditionally paid labor. To illuminate this, I explore the popular media discourses which have defined Web 2.0 as a progressive, democratic development in media production. I consider the pleasures that users derive from these sites. I then examine the technical structure of Web 2.0. Despite the arguments that present Web 2.0 as a mass appropriation of the means of media production, I have found that Web 2.0 site owners have been able to exploit users' desires to create content and control media production. Site owners do this by deploying a dichotomous structure. In a typical Web 2.0 site, there is a surface, where users are free to produce content and make affective connections, and there is a hidden depth, where new media capitalists convert user-generated content into exchange-values. Web 2.0 sites seek to hide exploitation of free user labor by limiting access to this depth. This dichotomous structure is made clearer if it is compared to the one Web 2.0 site where users have largely taken control of the products of their labor: Wikipedia. Unlike many other sites, Wikipedia allows users to see into and determine the legal, technical, and cultural depths of that site. I conclude by pointing to the different cultural formations made possible by eliminating the barrier between surface and depth in Web software architecture. 13 0
A Draw Plug-In for a Wiki Software Takashi Yamanoue Wiki
E-learning
Collaboration
SAINT English An experimental implementation of NetDraw, a drawing program that runs as a plug-in to wiki software, is presented. The drawing program of a computer-assisted teaching system was adapted to build NetDraw, and the first version took about three weeks to develop. NetDraw is a tool for collaboration through drawing and has been used in computer science classes at a university. Using NetDraw reduced the teacher's work in preparing classes. 3 5
A FAQ online system based on Wiki 2010 International Conference on E-Health Networking, Digital Ecosystems and Technologies, EDT 2010 English 0 0
A Requirements Maturity Measurement Approach based on SKLSEWiki Peng R.
Ye Q.
Ye M.
Requirement maturity measurement
Requirement negotiation
Wiki
Proceedings - International Computer Software and Applications Conference English With the development of IT, the scale and complexity of information systems have increased dramatically, and the number of stakeholders involved has grown sharply as a result. How to support requirements negotiation among large numbers of stakeholders has therefore become a focus of attention. Wiki, as a lightweight documentation and distributed collaboration platform, has demonstrated its capability in distributed requirements elicitation and documentation, though most efforts have gone into building friendly user interfaces and collaborative editing capabilities. In this paper, a new concept, requirement maturity, is proposed to represent the degree of stability a requirement has reached through the negotiation process. A Requirement Maturity Measurement Approach based on Wiki uses requirement maturity as a threshold to select requirements, so that requirements which have reached a stable status through full negotiation can be identified. A platform, SKLSEWiki, was developed to validate the approach. 0 0
A Semantic Approach for Question Classification using WordNet and Wikipedia Santosh K. Ray
Shailendra Singh
B. P. Joshi
Pattern Recognition Letters English Question Answering Systems, unlike search engines, provide answers to users' questions in succinct form, which requires prior knowledge of the user's expectations. The question classification module of a Question Answering System plays a very important role in determining those expectations. In the literature, incorrect question classification has been cited as one of the major factors behind the poor performance of Question Answering Systems, which underlines the importance of question classification module design. In this article, we propose a question classification method that exploits the powerful semantic features of WordNet and the vast knowledge repository of Wikipedia to describe informative terms explicitly. We trained our system over a standard set of 5500 questions (by UIUC) and then tested it over five TREC question collections. We compared our results with standard results reported in the literature and observed a significant improvement in the accuracy of question classification, which suggests the effectiveness of the method and is promising for open-domain question classification. Judging the correctness of an answer is another important issue in question answering, and in this article we extend question classification into a heuristic for answer validation. We propose a Web-based solution in which answers returned by open-domain Question Answering Systems are validated using online resources such as Wikipedia and Google. We applied several heuristics to the answer validation task and tested them against some popular web-based open-domain Question Answering Systems over a collection of 500 questions drawn from standard sources such as TREC, the World Book, and the World Factbook. The proposed method appears promising for automatic answer validation. 0 0
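
To give a flavor of the kind of WordNet semantic feature this abstract describes (a generic sketch, not the authors' exact method; `hypernym_features` is an invented name and NLTK's WordNet corpus is assumed to be installed): a question's head word can be expanded with its WordNet hypernyms, and those hypernyms fed as features to a classifier.

```python
# Sketch: expand a word into WordNet hypernym features for question
# classification (assumes nltk and its wordnet corpus are available).
from nltk.corpus import wordnet as wn

def hypernym_features(word, depth=3):
    feats = set()
    for syn in wn.synsets(word):
        frontier = [syn]
        for _ in range(depth):
            parents = [h for s in frontier for h in s.hypernyms()]
            feats.update(h.name() for h in parents)
            frontier = parents
    return feats

# "river" yields hypernyms such as stream.n.01 and body_of_water.n.01,
# which help map "Which river ..." questions toward a LOCATION class.
print(hypernym_features("river"))
```
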
A Spatial Hypertext Wiki for knowledge management 2010 International Symposium on Collaborative Technologies and Systems, CTS 2010 English 0 0
A Statistical Approach to the Impact of Featured Articles in Wikipedia Antonio J. Reinoso
Felipe Ortega
Jesús M. González-Barahona
Israel Herraiz
Wikipedia
Usage patterns
Traffic characterization
Quantitative analysis
KEOD English This paper presents an empirical study on the impact of featured articles on the attention that Wikipedia's articles attract, and how this behavior differs across editions of Wikipedia. The study is based on the analysis of the log lines registered by the Wikimedia Foundation Squid servers after sending the appropriate content in response to each request submitted by a Wikipedia user. The analysis covers the six most visited editions of Wikipedia and involved more than 4,100 million log lines corresponding to the traffic of September, October and November 2009. The methodology consisted mainly of parsing the requests sent by users and filtering them according to the study directives; relevant information fields were then stored in a database for persistence and further characterization. The main results of this paper are twofold: it shows how to use the traffic log to extract information about the use of Wikipedia, a novel research approach without precedent in the research community, and it analyzes whether the featured-article mechanism succeeds in attracting more attention. 6 0
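
The parsing-and-filtering step this abstract mentions can be illustrated with a small sketch. This assumes a simplified, hypothetical "<timestamp> <url>" log line format; real Squid log lines contain more fields.

```python
# Sketch: count per-edition article requests from simplified log lines
# of the hypothetical form "<timestamp> <url>".
from collections import Counter
from urllib.parse import urlparse, unquote

def article_counts(log_lines):
    counts = Counter()
    for line in log_lines:
        _, url = line.split(maxsplit=1)
        parsed = urlparse(url.strip())
        if parsed.hostname and parsed.path.startswith("/wiki/"):
            edition = parsed.hostname.split(".")[0]       # e.g. "en"
            title = unquote(parsed.path[len("/wiki/"):])  # e.g. "Main_Page"
            counts[(edition, title)] += 1
    return counts

logs = ["1254355200.123 http://en.wikipedia.org/wiki/Solar_eclipse"]
print(article_counts(logs))
```
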
A Tiny Adventure: The introduction of problem based learning in an undergraduate chemistry course Williams D.P.
Woodward J.R.
Symons S.L.
Davies D.L.
Chemistry
Group work
Mini-projects
Problem based learning
Projects
Transferable skills
Wiki
Chemistry Education Research and Practice English Year 1 of the chemistry degree at the University of Leicester has been significantly changed by the integration of a problem based learning (PBL) component into the introductory inorganic/physical chemistry module, "Chemical Principles". Small groups of 5-6 students were given a series of problems with real world scenarios and were then given the responsibility of planning, researching and constructing solutions to the problem on a group wiki hosted on the University's Virtual Learning Environment (VLE). The introduction of PBL to the course was evaluated both quantitatively and qualitatively. Class test and exam results were analysed and compared with those achieved in previous years (i.e. before the introduction of PBL). It was found that student performance was at least as good as it had been before the introduction of PBL. Retention figures after PBL had risen sharply (not one PBL student dropped out of the course during the first term). Student and staff feedback was also collected for qualitative analysis of the impact of the change. Combining these findings showed that students appeared to show an improvement in, and recognition of the acquisition of, transferable skills and that group work on immediate arrival at university (representing an opportunity to use social skills within an academic exercise) led to high student retention within the PBL cohort. 0 0
A Web metrics study on Taiwan Baseball Wiki using Google Analytics Journal of Educational Media and Library Science English 0 0
A Wiki with multiagent tracking, modeling, and coalition formation Proceedings of the National Conference on Artificial Intelligence English 0 0
A Wikipedia Matching Approach to Contextual Advertising Alexander Pak
Chin-Wan Chung
World Wide Web English Contextual advertising is an important part of today's Web. It provides benefits to all parties: Web site owners and an advertising platform share the revenue, advertisers receive new customers, and Web site visitors get useful reference links. The relevance of selected ads for a Web page is essential for the whole system to work. Problems such as homonymy and polysemy, low intersection of keywords and context mismatch can lead to the selection of irrelevant ads, so a simple keyword matching technique gives poor accuracy. In this paper, we propose a method for improving the relevance of contextual ads. We propose a novel "Wikipedia matching" technique that uses Wikipedia articles as "reference points" for ad selection, and we show how to combine our new method with existing solutions in order to increase the overall performance. An experimental evaluation based on a set of real ads and a set of pages from news Web sites is conducted. Test results show that our proposed method performs better than existing matching strategies and that using Wikipedia matching in combination with existing approaches provides up to a 50% lift in average precision. The TREC standard measure bpref-10 also confirms the positive effect of using Wikipedia matching for effective ad selection. 0 0
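
Reduced to its essence, the "Wikipedia matching" idea is to compare a page and an ad not by their raw keywords but by their similarity profiles against a common set of Wikipedia articles. A minimal scikit-learn sketch with toy data (the paper's actual weighting scheme is more elaborate):

```python
# Sketch: match ads to a page via similarity profiles over Wikipedia
# articles used as common "reference points" (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

wiki_articles = [
    "football is a team sport played with a spherical ball",
    "a mortgage is a loan used to purchase real estate",
]
page_text = ["latest football match results and league standings"]
ad_texts = ["buy football boots online", "low mortgage rates today"]

vec = TfidfVectorizer(stop_words="english")
wiki = vec.fit_transform(wiki_articles)

page_profile = cosine_similarity(vec.transform(page_text), wiki)
ad_profiles = cosine_similarity(vec.transform(ad_texts), wiki)
scores = cosine_similarity(page_profile, ad_profiles)[0]
print(scores)  # the football ad should outscore the mortgage ad
```
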
A Wikipédia e o discurso de/sobre o conhecimento Gláucia da Silva Henge IX Encontro do Círculo de Estudos Linguísticos do Sul Portuguese 1 0
A baseline approach for detecting sentences containing uncertainty Sang E.T.K. CoNLL-2010: Shared Task - Fourteenth Conference on Computational Natural Language Learning, Proceedings of the Shared Task English We apply a baseline approach to the CoNLL-2010 shared task data sets on hedge detection. Weights have been assigned to cue words marked in the training data based on their occurrences in certain and uncertain sentences. New sentences received scores that correspond with those of their best scoring cue word, if present. The best acceptance scores for uncertain sentences were determined using 10-fold cross validation on the training data. This approach performed reasonably on the shared task's biological (F=82.0) and Wikipedia (F=62.8) data sets. 0 0
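
This baseline is simple enough to sketch directly (names are ours; as the abstract says, the acceptance threshold would be tuned by cross-validation):

```python
# Sketch of the cue-weighting baseline: weight each cue by how often it
# occurs in uncertain sentences, then score a sentence by its best cue.
from collections import Counter

def train_cue_weights(data):
    # data: list of (cue_words_in_sentence, is_uncertain)
    total, uncertain = Counter(), Counter()
    for cues, unc in data:
        for c in cues:
            total[c] += 1
            if unc:
                uncertain[c] += 1
    return {c: uncertain[c] / total[c] for c in total}

def score(sentence_tokens, weights):
    return max((weights.get(t, 0.0) for t in sentence_tokens), default=0.0)

train = [(["may"], True), (["may"], True), (["is"], False)]
w = train_cue_weights(train)
print(score("it may rain".split(), w) >= 0.5)  # threshold tuned on held-out data
```
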
A cascade method for detecting hedges and their scope in natural language text Tang B.
Xiaolong Wang
Yuan B.
Fan S.
CoNLL-2010: Shared Task - Fourteenth Conference on Computational Natural Language Learning, Proceedings of the Shared Task English Detecting hedges and their scope in natural language text is very important for information inference. In this paper, we present a system based on a cascade method for the CoNLL-2010 shared task. The system consists of two components: one for detecting hedges and another for detecting their scope. For detecting hedges, we build a cascade subsystem: first, a conditional random field (CRF) model and a large-margin model are trained separately; then, we train another CRF model using the result of the first phase. For detecting the scope of hedges, a CRF model is trained according to the result of the first subtask. The experiments show that our system achieves an 86.36% F-measure on the biological corpus and a 55.05% F-measure on the Wikipedia corpus for hedge detection, and a 49.95% F-measure on the biological corpus for hedge scope detection. The 86.36% figure is the best result on the biological corpus for hedge detection. 0 0
A case study of wikis and student-designed games in physical education Hastie P.A.
Casey A.
Tarter A.-M.
Games
ICT
Physical education
Wiki
Technology, Pedagogy and Education English This paper reports on the incorporation of wiki technology within physical education. Boys from two classes at a school in the United Kingdom were divided into small teams and given the task of creating a new game in the same genre as football, hockey, netball or rugby. Each team had a wiki on which were recorded all the plans and developments of this game as it was being devised and refined. The teacher, an outside games expert and the school's librarian also had access to the wikis, which allowed for constant interaction between the participants outside class time. Interviews with the teacher, the librarian and the students revealed that the 24/7 classroom enabled by the ICT, together with an extended community of practice, resulted in a higher quality learning experience in physical education for the participants. Indeed, it was the belief of all concerned that the quality of the end game products would not have been possible without the ICT component. 0 0
A classification algorithm of signed networks based on link analysis Qu Z.
Yafang Wang
Wang J.
Zhang F.
Qin Z.
Node classification
Signed networks
Social network
2010 International Conference on Communications, Circuits and Systems, ICCCAS 2010 - Proceedings English In signed networks the links between nodes can be either positive (relations of friendship) or negative (relations of rivalry or confrontation), which is very useful for analyzing real social networks. After studying data sets from the Wikipedia and Slashdot networks, we find that the signs of links in these social networks can be used to classify nodes and to forecast, with high accuracy, the signs of links that will emerge in the future, using models established across these diverse data sets. Based on these models, the proposed algorithm provides insight into some of the underlying principles that can be extracted from signed links in networks. At the same time, the algorithm sheds light on social computing applications in which a person's attitude toward another can be predicted from evidence provided by the relationships of their surrounding friends. 0 0
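
The intuition behind sign prediction can be sketched with a balance-theory heuristic over common neighbors (the paper trains statistical models; this only shows the underlying idea, with invented names):

```python
# Sketch: predict the sign of edge (u, v) from structural balance --
# a friend of a friend tends to be a friend, an enemy of a friend an enemy.
def neighbors(sign, x):
    return {b for (a, b) in sign if a == x} | {a for (a, b) in sign if b == x}

def edge_sign(sign, a, b):
    return sign.get((a, b), sign.get((b, a), 0))

def predict_sign(sign, u, v):
    common = neighbors(sign, u) & neighbors(sign, v)
    score = sum(edge_sign(sign, u, w) * edge_sign(sign, w, v) for w in common)
    return 1 if score >= 0 else -1

signed_edges = {("u", "w"): 1, ("w", "v"): -1}  # +1 friendship, -1 rivalry
print(predict_sign(signed_edges, "u", "v"))      # -1: enemy of a friend
```
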
A cocktail approach to the VideoCLEF'09 linking task Raaijmakers S.
Versloot C.
De Wit J.
Lecture Notes in Computer Science English In this paper, we describe the TNO approach to the Finding Related Resources or linking task of VideoCLEF09. Our system consists of a weighted combination of off-the-shelf and proprietary modules, including the Wikipedia Miner toolkit of the University of Waikato. Using this cocktail of largely off-the-shelf technology allows for setting a baseline for future approaches to this task. 0 0
A combined approach to minimizing vandalisms on wikipedia Maneewongvatana S. ISCIT 2010 - 2010 10th International Symposium on Communications and Information Technologies English Vandalism is a major problem for Wikipedia and other content management systems with an open-edit policy. Due to its popularity, Wikipedia is attacked very regularly. Detecting and cleaning up this bad content requires a lot of human time to inspect the changes between revisions, time that should not be wasted on such a task. There have been many attempts to alleviate vandalism, and some early work proposed automatic vandalism detection based on various methods. In this work, we emphasize the user as a key parameter for detecting vandalism; together with a reputation system and a spell checker, user information can be used to flag or ban an update. We also evaluate the reduction in the time during which vandalized revisions remain visible to the public when some of the proposed approaches are implemented. 0 0
A common awareness and knowledge platform for studying and enabling independent living - CAPSIL Bennis C.
McGrath D.
Caulfield B.
Knapp B.
Coghlan N.
Ageing
Health care
ICT
Independent living
Monitoring systems
Pervasive health
Policy
Tool
Telemedicine
Wiki
Wireless networks
2010 4th International Conference on Pervasive Computing Technologies for Healthcare, Pervasive Health 2010 English The population of the world is growing older, and the balance of old to young is shifting so that by 2050 over 30% of the population is expected to be over 60 years old [1], with particularly high ratios of old to young in the EU, USA and Japan. CAPSIL is an FP7 Coordinating Support Action that incorporates a strategic international coalition of university and industrial partners that already have extensive teams developing hardware/software/knowledge solutions to independent living based on user requirements. CAPSIL has two fundamental goals: 1. To carry out an analysis of the state of the art with regard to technology, healthcare and public policy in the EU, US and Japan for enabling independent living for older adults and, based on this analysis, to develop a detailed roadmap for EU research to achieve effective and sustainable solutions for independent living. 2. To support aging research by proposing procedures to incorporate all of these diverse solutions into wiki entries (CAPSIL Wikis). It is our hope that these CAPSIL Wikis will enable researchers and the ICT industry to get the information they need to quickly and easily test solutions for prolonging independent living within many and various heterogeneous communities. In this paper we summarise the principal findings of the CAPSIL Roadmap and present an overview of the main research gaps and recommendations for policy and research development. Finally, we introduce the CAPSIL Wiki infrastructure. 0 0
A comparative analysis of the usage and infusion of wiki and non-wiki-based knowledge management systems Andrea Hester English Antecedents of adoption and diffusion in time-honored models such as the technology acceptance model and innovation diffusion theory may not provide sufficient measures for newer Web 2.0 technologies such as wikis. This research examines two potential extensions to the basic tenets of user acceptance: reciprocity expectation and personal innovativeness in information technology (IT). The research also examines an advancing technology: wiki technology-based knowledge management systems. Based on the results of an online survey, partial least squares analysis is used to evaluate the proposed model and provide comparative results for traditional knowledge management systems and wikis. Of the 170 respondents, 46 indicated wiki-based systems as their primary knowledge management system, while 124 indicated non-wiki-based systems as the primary system. The results indicate the set of factors influencing usage are different than the factors influencing infusion. Further, non-wiki-based versus newer wiki-based knowledge management systems have different sets of factors affecting usage and infusion. Both extensions to the base model have a greater impact on wiki-based systems as opposed to non-wiki-based systems. Reciprocity expectation was found to have a contradictory significant negative influence on infusion of wikis. Additionally, personal innovativeness in IT moderates the usage and infusion of wikis more so than for traditional knowledge management systems. These results support a more robust model for analyzing the utilization of such technologies. 0 0
A comparison of Web 2.0 tools in a doctoral course Meyer K.A. Blogs
Online discussions
Web 2.0 tools
Wiki
Internet and Higher Education English Adult, professional students in a doctoral-level course used Web 2.0 tools such as wikis, blogs, and online discussions to develop answers to six "Big Questions" related to higher education finance and also produced a research paper that used original data or the research literature to improve understanding of a specific topic. At the close of the course, students were asked to provide examples of learning for each question and each tool, and to evaluate the tools used. Bloom's Digital Taxonomy was used to evaluate levels of learning. Results indicated that the level of learning mirrored that of the Big Question or was at higher levels when students used new tools. Wikis generated objections from students who did not care for group work, although others found it a good collaborative tool. Blogs were more acceptable, but online discussions were preferred because of the interaction and sharing among students. Research papers allowed students to learn material of their own interest and to do so in depth. 0 0
A comparison of approaches for geospatial entity extraction from Wikipedia Daryl Woodward
Jeremy Witmer
Jugal Kalita
Proceedings - 2010 IEEE 4th International Conference on Semantic Computing, ICSC 2010 English We target in this paper the challenge of extracting geospatial data from the article text of the English Wikipedia. We present the results of a Hidden Markov Model (HMM) based approach to identifying location-related named entities in our corpus of Wikipedia articles, which are primarily about battles and wars due to their high geospatial content. The HMM NER process drives a geocoding and resolution process, whose goal is to determine the correct coordinates for each place name (often referred to as grounding). We compare our results to a previously developed data structure and algorithm for disambiguating place names that can have multiple coordinates. We demonstrate an overall F-measure of 79.63% for identifying and geocoding place names. Finally, we compare the results of the HMM-driven process to earlier work using a Support Vector Machine. 0 0
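
A common grounding heuristic for ambiguous place names, of the general kind this line of work compares, is to pick the candidate coordinates closest in total great-circle distance to the places already resolved in the same article. A minimal sketch (function names are ours):

```python
# Sketch: disambiguate a place name by picking the candidate coordinates
# closest (in total great-circle distance) to other places in the article.
import math

def haversine_km(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def resolve(candidates, anchors):
    return min(candidates, key=lambda c: sum(haversine_km(c, a) for a in anchors))

# "Paris" in an article about the Battle of France vs. Paris, Texas:
candidates = [(48.86, 2.35), (33.66, -95.56)]
anchors = [(49.49, 0.11), (50.85, 4.35)]  # Le Havre, Brussels
print(resolve(candidates, anchors))       # -> (48.86, 2.35)
```
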
A comparison of generated Wikipedia profiles using social labeling and automatic keyword extraction Russell T.
Bongwon Suh
Chi E.H.
ICWSM 2010 - Proceedings of the 4th International AAAI Conference on Weblogs and Social Media English In many collaborative systems, researchers are interested in creating representative user profiles. In this paper, we are particularly interested in using social labeling and automatic keyword extraction techniques for generating user profiles. Social labeling is a process in which users manually tag other users with keywords. Automatic keyword extraction is a technique that selects the most salient words to represent a user's contribution. We apply each of these two profile generation methods to highly active Wikipedia editors and their contributions, and compare the results. We found that profiles generated through social labeling match the profiles generated via automatic keyword extraction, and vice versa. The results suggest that user profiles generated from one method can be used as a seed or bootstrapping proxy for the other method. 0 0
A content-based image retrieval system based on unsupervised topological learning Rogovschi N.
Grozavu N.
Clustering
Content-based image retrieval
Self-organizing maps
Topological learning
Proc. - 6th Intl. Conference on Advanced Information Management and Service, IMS2010, with ICMIA2010 - 2nd International Conference on Data Mining and Intelligent Information Technology Applications English The Internet offers its users an ever-increasing amount of information. Among this, multimodal data (images, text, video, sound) are widely requested by users, and there is a strong need for effective ways to process and manage them. Most existing algorithms/frameworks only annotate images and perform search over these annotations, possibly combined with clustering results, but most of them do not allow quick browsing of the images. Even if search is very fast, when the number of images is very large the system must give the user the possibility to browse the data. In this paper, an image retrieval system is presented, including a detailed description of the lwo-SOM (local weighting observations Self-Organizing Map) approach and a new interactive learning process using user information/response. We also show the use of unsupervised learning on an image dataset: we do not have labels at our disposal and do not take into account the text accompanying the images. The real dataset used contains 17,812 images extracted from Wikipedia pages, each of which is characterized by its color and texture. 0 0
A contribution-based framework for the creation of semantically-enabled web applications Rico M.
Camacho D.
Corcho O.
Contributely-collaborative systems
Semantic web applications
Semantic web technologies
Wiki-based applications
Information Sciences English We present Fortunata, a wiki-based framework designed to simplify the creation of semantically-enabled web applications. This framework facilitates the management and publication of semantic data in web-based applications, to the extent that application developers do not need to be skilled in client-side technologies, and promotes application reuse by fostering collaboration among developers by means of wiki plugins. We illustrate the use of this framework with two Fortunata-based applications named OMEMO and VPOET, and we evaluate it with two experiments performed with usability evaluators and application developers respectively. These experiments show a good balance between the usability of the applications created with this framework and the effort and skills required by developers. 0 0
A course wiki: Challenges in facilitating and assessing student-generated learning content for the humanities classroom Journal of General Education English 0 0
A dynamic environment for distance learning Chatzisawa S.
Tsoulouhas G.
Georgiadou A.
Karakos A.
Asynchronous distance learning
Learning management systems
Moodle
Wiki
CSEDU 2010 - 2nd International Conference on Computer Supported Education, Proceedings English Research was conducted on Learning Management Systems (LMS), with 'Moodle' found to be an efficient tool for a dynamic distance learning environment. Because Moodle is an open source program, a programmer can modify and extend it, which made it possible to develop a chat module that allows online communication in real time. Chat takes a 'question - answer' form, and a characteristic of this module is a virtual user. This user will be employed in future development, together with data mining technology, to answer students' questions. In addition, wikis were embedded into the system due to their wide range of applications in education. Both types of wikis, closed and open, were included; 'MediaWiki' was used to create the open wiki. Additional tools in the environment enable users to upload files and create quizzes, forums, calendars, questionnaires and more, aiming at the creation of a dynamic environment for distance learning. The program was tested in a high school as an asynchronous learning tool, in order to ensure that it is functional and easy to use. 0 0
A fielded wiki for personality genetics Finn Årup Nielsen Wiki
Neuroinformatics
Genetics
Bioinformatics
Meta-analysis
WikiSym English (poster summary): A fielded wiki (a highly structured wiki) for genetic association studies with personality traits is described that features easy entry, on-the-fly meta-analysis of effect sizes, and forest and funnel plotting with export of data in different formats. (paper abstract): I describe a fielded wiki, where a Web form interface allows the entry, analysis and visualization of results from scientific papers in the personality genetics domain. Papers in this domain typically report the mean and standard deviation of multiple personality trait scores from statistics on human subjects grouped based on genotype. The wiki organizes the basic data in a single table with fixed columns, each row recording statistical values with respect to a specific personality trait reported in a specific paper with a specific genotype group. From this basic data, hard-coded meta-analysis routines compute individual and combined effect sizes. The meta-analytic results are displayed in on-the-fly computed hyperlinked graphs and tables. Revision control on the basic data tracks changes, and data may be exported to comma-separated files or in a MediaWiki template format. 7 1
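
The kind of on-the-fly meta-analysis described here, turning per-genotype means and standard deviations into individual and combined effect sizes, can be sketched with the standard Cohen's d and fixed-effect (inverse-variance) formulas; the exact computations used by the wiki are not specified in this abstract.

```python
# Sketch: standard effect-size computation and fixed-effect pooling of the
# sort a personality-genetics wiki could run over its data table.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def var_d(d, n1, n2):
    # large-sample approximation to the variance of Cohen's d
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def fixed_effect(effects):
    # effects: list of (d, variance); inverse-variance weighted mean
    weights = [1 / v for _, v in effects]
    return sum(w * d for w, (d, _) in zip(weights, effects)) / sum(weights)

d1 = cohens_d(12.1, 3.0, 40, 11.0, 3.2, 38)   # study 1: two genotype groups
d2 = cohens_d(12.5, 2.8, 55, 11.9, 3.1, 60)   # study 2
print(fixed_effect([(d1, var_d(d1, 40, 38)), (d2, var_d(d2, 55, 60))]))
```
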
A five-year study of on-campus Internet use by undergraduate biomedical students Terry Judd
Gregor Kennedy
Computers and Education English This paper reports on a five-year study (2005-2009) of biomedical students' on-campus use of the Internet. Internet usage logs were used to investigate students' sessional use of key websites and technologies. The most frequented sites and technologies included the university's learning management system, Google, email and Facebook. Email was the primary method of electronic communication. However, its use declined over time, with a steep drop in use during 2006 and 2007 appearing to correspond with the rapid uptake of the social networking site Facebook. Both Google and Wikipedia gained in popularity over time while the use of other key information sources, including the library and biomedical portals, remained low throughout the study. With the notable exception of Facebook, most 'Web 2.0' technologies attracted little use. The 'Net Generation' students involved in this study were heavy users of generalist information retrieval tools and key online university services, and preferred to use externally hosted tools for online communication. These and other findings have important implications for the selection and provision of services by universities. 0 0
A flexible content repository to enable a peer-to-peer-based wiki Concurrency Computation Practice and Experience English 0 0
A framework for BM25F-based XML retrieval Itakura K.Y.
Clarke C.L.A.
BM25
BM25F
Book search
Wikipedia
XML retrieval
SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval English We evaluate a framework for BM25F-based XML element retrieval. The framework gathers contextual information associated with each XML element into an associated field, which we call a characteristic field. The contents of the element and the contents of the characteristic field are then treated as distinct fields for BM25F weighting purposes. Evidence supporting this framework is drawn from both our own experiments and experiments reported in related work. 0 0
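
The BM25F scheme referred to here combines per-field term frequencies, each normalized by its own field length, before a single saturation step per term. A compact sketch (simplified; real systems handle parameters and statistics more carefully), using the element body and its "characteristic field" as the two fields:

```python
# Sketch: simple BM25F scoring -- per-field length-normalized term
# frequencies are weighted, summed, then saturated once per term.
def bm25f_score(query_terms, doc_fields, w, b, avg_len, idf, k1=1.2):
    # doc_fields: {field: token list}; w, b, avg_len: per-field parameters
    score = 0.0
    for t in query_terms:
        tf = 0.0
        for f, tokens in doc_fields.items():
            norm = 1 + b[f] * (len(tokens) / avg_len[f] - 1)
            tf += w[f] * tokens.count(t) / norm
        score += (tf / (k1 + tf)) * idf.get(t, 0.0)
    return score

doc = {"element": "xml retrieval with bm25f".split(),
       "characteristic": "information retrieval evaluation".split()}
params = dict(w={"element": 1.0, "characteristic": 2.0},
              b={"element": 0.75, "characteristic": 0.75},
              avg_len={"element": 5.0, "characteristic": 4.0},
              idf={"retrieval": 1.3, "bm25f": 2.1})
print(bm25f_score(["retrieval", "bm25f"], doc, **params))
```
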
A framework for automatic semantic annotation of Wikipedia articles Pipitone A.
Pirrone R.
Linguistic pattern matching
Ontology
Semantic annotation
Semantic wiki
SWAP 2010 - 6th Workshop on Semantic Web Applications and Perspectives English Semantic wikis represent a novelty in the field of semantic technologies. Nowadays, there are many important "non-semantic" wiki sources, such as the Wikipedia encyclopedia. A big challenge is to turn existing wiki sources into semantic wikis. In this way, a new generation of applications can be designed to browse, search, and reuse wiki contents, while reducing loss of data. The core of this problem is the extraction of semantic sense and annotation from text. In this paper a hierarchical framework for automatic semantic annotation of plain text is presented, finalized to the use of Wikipedia pages as an information source. The strategy is based on disambiguation of plain text using both a domain ontology and linguistic pattern matching methods. The main steps are: TOC extraction from the original page, content annotation for each section using linguistic rules, and semantic wiki generation. The complete framework is outlined and an application scenario is presented. 0 0
A framework for co-classification of articles and users in Wikipedia LeBo Liu
Tan P.-N.
Link-based classification
Wikipedia
Proceedings - 2010 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2010 English The massive size of Wikipedia and the ease with which its content can be created and edited has made Wikipedia an interesting domain for a variety of classification tasks, including topic detection, spam detection, and vandalism detection. These tasks are typically cast into a link-based classification problem, in which the class label of an article or a user is determined from its content-based and link-based features. Prior works have focused primarily on classifying either the editors or the articles (but not both). Yet there are many situations in which the classification can be aided by knowing collectively the class labels of the users and articles (e.g., spammers are more likely to post spam content than non-spammers). This paper presents a novel framework to jointly classify the Wikipedia articles and editors, assuming there are correspondences between their classes. Our experimental results demonstrate that the proposed co-classification algorithm outperforms classifiers that are trained independently to predict the class labels of articles and editors. 0 0
A framework for the assessment of wiki-based collaborative learning activities Meishar-Tal H.
Schencks M.
Assessment
Collaborative Authoring
Collaborative Learning
E-Assessment
Wiki
International Journal of Virtual and Personal Learning Environments English This paper discusses the pedagogical and technological aspects of assessing wiki-based collaborative learning activities. The first part of the paper presents a general framework of collaborative learning assessment. The framework is based on four aspects of assessment, characterized by four questions: who, what, how and by whom. The second part of the paper concentrates on the analysis of the applicability of the assessment framework in wikis. A systematic analysis of MediaWiki's reports is conducted in order to discuss the requisite information required for a well-balanced and effective assessment process. Finally, a few suggestions are raised for further improvements of the wiki's reports. 0 0
A framework of collaborative adaptation authoring Nurjanah D.
Davis H.C.
Tiropanis T.
Adaptation
Adaptive Educational Hypermedia
Computer-supported Collaborative Work
Semantic Web Technology
The Collaborative Adaptation Authoring Approach
Wiki
Proceedings of the 6th International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2010 English Adaptive Educational Hypermedia systems (AEH) enhance learning by adaptation and personalisation. As a consequence, wide ranging knowledge and learning content are needed. Problems then emerge in the provision of suitable authoring tools to carry out the authoring process, which is complex and time consuming. Based on the fact that former research studies on authoring have identified drawbacks in collaboration, usability, efficiency, or interoperability, this paper proposes an approach for collaborative adaptation authoring for adaptive learning. The intended approach aims at improving authoring for AEH systems by allowing many people to participate and enhancing authors' interaction. The novelty of the approach lies in how the domain knowledge which has been semantically defined is enriched, and in the application of Computer Support Collaborative Work (CSCW). This approach adopts the advantages of existing semantic web technology and wiki-based authoring tools used to develop domain knowledge; its output is then enriched with pedagogy-related knowledge including adaptation. The output of the system is intended to be delivered in an existing AEH system. 0 0
A hedgehop over a max-margin framework using hedge cues Georgescul M. CoNLL-2010: Shared Task - Fourteenth Conference on Computational Natural Language Learning, Proceedings of the Shared Task English In this paper, we describe the experimental settings we adopted in the context of the 2010 CoNLL shared task for detecting sentences containing uncertainty. The classification results reported on are obtained using discriminative learning with features essentially incorporating lexical information. Hyper-parameters are tuned for each domain: using BioScope training data for the biomedical domain and Wikipedia training data for the Wikipedia test set. By allowing an efficient handling of combinations of large-scale input features, the discriminative approach we adopted showed highly competitive empirical results for hedge detection on the Wikipedia dataset: our system is ranked as the first with an F-score of 60.17%. 0 0
A high-precision approach to detecting hedges and their scopes Kilicoglu H.
Bergler S.
CoNLL-2010: Shared Task - Fourteenth Conference on Computational Natural Language Learning, Proceedings of the Shared Task English We extend our prior work on speculative sentence recognition and speculation scope detection in biomedical text to the CoNLL-2010 Shared Task on Hedge Detection. In our participation, we sought to assess the extensibility and portability of our prior work, which relies on linguistic categorization and weighting of hedging cues and on syntactic patterns in which these cues play a role. For Task 1B, we tuned our categorization and weighting scheme to recognize hedging in biological text. By accommodating a small number of vagueness quantifiers, we were able to extend our methodology to detecting vague sentences in Wikipedia articles. We exploited constituent parse trees in addition to syntactic dependency relations in resolving hedging scope. Our results are competitive with those of closed-domain trained systems and demonstrate that our high-precision oriented methodology is extensible and portable. 0 0
A human and social sciences wiktionary in a peer-to-peer network Khelifa L.N.
Mezil R.
Si-Mohammed A.
Bouabana-Tebibel T.
Human and social sciences wiktionary
Peer-to-peer
Semantic MediaWiki
Semantic wiki
2010 International Conference on Machine and Web Intelligence, ICMWI 2010 - Proceedings English This paper presents the integration of a multicultural and multilingual wiktionary in the human and social sciences into a peer-to-peer network. This on-line dictionary was developed as part of the FSP project to allow researchers from both sides of the Mediterranean Sea to exchange and share knowledge in the human and social sciences domain. The present extension allows off-line collaborative editing, scalability, management of inter-wiki links, an advanced search that constructs the global sheet by interrogating specific peers and, finally, a wiki page replication strategy to ensure data availability. The system architecture and the prototype are presented. 0 0
A hypersocial-interactive model of Wiki-mediated writing: Collaborative writing in a fan & gamer community Rik Hunter English In this dissertation I argue that writing is a technologically- and socially-inflected activity, and the particular patterns of collaborative writing found on the World of Warcraft Wiki (WoWWiki) are the result of the interactions between MediaWiki's affordances and the social practices operating in this context. In other contexts, collaborative writing can more closely resemble the "conventional ethos" (Knobel and Lankshear, 2007) of more individualistic notions of authorship often tied to print. With writing projects such as WoWWiki, we can observe a dramatic shift in notions of textual ownership and production towards the communal and collaborative, and I suggest the patterns of collaboration found on WoWWiki are evidence of a larger technocultural shift signaling new conditions for literacy. In the midst of this shift, the meaning of "collaboration," "authorship," and "audience" is redefined.

Following my introductory chapter, I use textual analysis to examine the talk pages of several WoWWiki featured articles for particular patterns of language use and to identify what WoWWikians focus their attention on in the process of writing articles. I argue that collaboration on WoWWiki poses a challenge to models of face-to-face writing groups and offers unique patterns of collaboration.

I then contend that WoWWiki's writing practices are entering a society where the idea of the single author has been strong. Nevertheless, I find evidence of a shared model of text production and collaborative notion of authorship; further, collaboration is disrupted by those who hold author-centric perspectives.

Next, I argue that our previous models of audience and writing previously developed around print and, later, hypertext are inadequate because they cannot account for roles readers can take and how writers and readers interact on a wiki. With this new arrangement in collaborative writing evident on WoWWiki, I develop the hypersocial-interactive model of wiki-mediated writing.

I conclude by reviewing this dissertation's main arguments regarding wiki-mediated collaborative writing, after which I explore the implications of using wikis for writing instruction. Finally, I discuss the limitations of this study and consider directions for future research on voluntary collaborative wiki-mediated writing.
22 0
A lucene and maximum entropy model based hedge detection system Long Chen
Di Eugenio B.
CoNLL-2010: Shared Task - Fourteenth Conference on Computational Natural Language Learning, Proceedings of the Shared Task English This paper describes the approach to hedge detection we developed in order to participate in the CoNLL-2010 shared task. A supervised learning approach is employed in our implementation. Hedge cue annotations in the training data are used as the seed to build a reliable hedge cue set. A Maximum Entropy (MaxEnt) model is used as the learning technique to determine uncertainty. By making use of Apache Lucene, we are able to do fuzzy string matching to extract hedge cues and to incorporate part-of-speech (POS) tags in hedge cues. Not only can our system determine the certainty of a sentence, but it is also able to find all the hedges it contains. Our system was ranked third on the Wikipedia dataset. In later experiments with different parameters, we further improved our results, with a 0.612 F-score on the Wikipedia dataset and a 0.802 F-score on the biological dataset. 0 0
A machine learning approach to link prediction for interlinked documents Kc M.
Chau R.
Hagenbuchner M.
Tsoi A.C.
Lee V.
Lecture Notes in Computer Science English This paper explains how a recently developed machine learning approach, namely the Probability Measure Graph Self-Organizing Map (PM-GraphSOM), can be used for the generation of links between referenced or otherwise interlinked documents. This new generation of SOM models is capable of projecting generic graph structured data onto a fixed-sized display space. Such a mechanism is normally used for dimension reduction, visualization, or clustering purposes. This paper shows that the PM-GraphSOM training algorithm "inadvertently" encodes relations that exist between the atomic elements in a graph. If the nodes in the graph represent documents, and the links in the graph represent the reference (or hyperlink) structure of the documents, then it is possible to obtain a set of links for a test document whose link structure is unknown. A significant finding of this paper is that the described approach is scalable in that links can be extracted in linear time. It will also be shown that the proposed approach is capable of predicting the pages which would be linked to a new document, and of predicting the links to other documents from a given test document. The approach is applied to web pages from Wikipedia, a relatively large XML text database consisting of many referenced documents. 0 0
A method for category similarity calculation in Wikis Cheong-Iao Pang
Robert P. Biuk-Aghai
Wiki
Category similarity
WikiSym English Wikis, such as Wikipedia, allow their authors to assign categories to articles in order to better organize related content. This paper presents a method to calculate similarities between categories, illustrated by a calculation for the top-level categories in the Simple English version of Wikipedia. 5 2
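
Whatever the specific measure (the paper defines its own), category similarity calculations of this general kind reduce to comparing the sets of pages associated with each category, for example with a Jaccard coefficient:

```python
# Sketch: one generic way to compare wiki categories -- Jaccard similarity
# over the sets of articles each category contains (toy data).
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

categories = {
    "Science": {"Physics", "Biology", "Chemistry"},
    "Nature": {"Biology", "Ecology", "Chemistry"},
}
print(jaccard(categories["Science"], categories["Nature"]))  # 0.5
```
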
A method for obtaining semantic facets of music tags Sordo M.
Gouyon F.
Sarmento L.
Last.fm
Music tagging
Social music
Wikipedia
CEUR Workshop Proceedings English Music folksonomies have inherently loose and open semantics, which hampers their use in structured browsing and recommendation. In this paper, we present a method for automatically obtaining a set of semantic facets underlying a folksonomy of music tags. The semantic facets are anchored upon the structure of Wikipedia, a dynamic repository of universal knowledge. We illustrate the relevance of the obtained facets for the description of tags. 0 0
A methodology for producing improved focused elements Crouch C.J.
Crouch D.B.
Bhirud D.
Poluri P.
Polumetla C.
Sudhakar V.
Lecture Notes in Computer Science English This paper reports the results of our experiments to consistently produce highly ranked focused elements in response to the Focused Task of the INEX Ad Hoc Track. The results of these experiments, performed using the 2008 INEX collection, confirm that our current methodology (described herein) produces such elements for this collection. Our goal for 2009 is to apply this methodology to the new, extended 2009 INEX collection to determine its viability in this environment. (These experiments are currently underway.) Our system uses our method for dynamic element retrieval [4], working with the semi-structured text of Wikipedia [5], to produce a rank-ordered list of elements in the context of focused retrieval. It is based on the Vector Space Model [15]; basic functions are performed using the Smart experimental retrieval system [14]. Experimental results are reported for the Focused Task of both the 2008 and 2009 INEX Ad Hoc Tracks. 0 0
A model for open semantic hyperwikis Philip Boulain
Shadbolt N.
Gibbins N.
Open Hypermedia
Semantic web
Wiki
Lecture Notes in Computer Science English Wiki systems have developed over the past years as lightweight, community-editable, web-based hypertext systems. With the emergence of semantic wikis such as Semantic MediaWiki [6], these collections of interlinked documents have also gained a dual role as ad-hoc RDF [7] graphs. However, their roots lie in the limited hypertext capabilities of the World Wide Web [1]: embedded links, without support for features like composite objects or transclusion. Collaborative editing on wikis has been hampered by redundancy; much of the effort spent on Wikipedia is used keeping content synchronised and organised [3]. We have developed a model for a system, which we have prototyped and are evaluating, which reintroduces ideas from the field of hypertext to help alleviate this burden. In this paper, we present a model for what we term an 'open semantic hyperwiki' system, drawing from both past hypermedia models and the informal model of modern semantic wiki systems. An 'open semantic hyperwiki' is a reformulation of the popular semantic wiki technology in terms of the long-standing field of hypermedia, which highlights and resolves the omissions of hypermedia technology made by the World Wide Web and the applications built around its ideas. In particular, our model supports first-class linking, where links are managed separately from nodes. This is enhanced by the system's ability to embed links into other nodes and separate them out again, allowing for a user editing experience similar to HTML-style embedded links while still gaining the advantages of separate links. We add to this transclusion, which allows for content sharing by including the content of one node in another, and edit-time transclusion, which allows users to edit pages containing shared content without the need to follow a sequence of indirections to find the actual text they wish to modify. Our model supports more advanced linking mechanisms, such as generic links, which allow words in the wiki to be used as link endpoints. The development of this model has been driven by our prior experimental work on the limitations of existing wikis and user interaction. We have produced a prototype implementation which provides first-class links, transclusion, and generic links. 0 0
A monolingual tree-based translation model for sentence simplification Zhu Z.
Bernhard D.
Iryna Gurevych
Coling 2010 - 23rd International Conference on Computational Linguistics, Proceedings of the Conference English In this paper, we consider sentence simplification as a special form of translation, with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems. 0 0
A multiple-stage framework for related entity finding: FDWIM at TREC 2010 entity track Dingquan Wang
Wu Q.
Hejie Chen
Niu J.
NIST Special Publication English This paper describes a multiple-stage retrieval framework for the task of related entity finding in the TREC 2010 Entity Track. In the document retrieval stage, a search engine is used to improve retrieval accuracy. In the entity extraction and filtering stage, we extract entities with NER tools, Wikipedia and text pattern recognition; a stoplist and other rules are then employed to filter the entities. Deep mining of authority pages proves effective in this stage. In the entity ranking stage, many factors are considered, including keywords from the narrative, PageRank, and combined results of corpus-based association rules and a search engine. In the final stage, an improved feature-based algorithm is proposed for entity homepage detection. 0 0
A negative category based approach for Wikipedia document classification Meenakshi Sundaram Murugeshan
K. Lakshmi
Saswati Mukherjee
Wikipedia documents
XML classification
Cosine
Document classification
Feature selection
Fractional similarity
Initial descriptions
Negative categories
Profile creation
Similarity measure
Unstructured text
Int. J. Knowl. Eng. Data Min. English 0 0
A new approach for collaborative knowledge management: A unified conceptual model for collaborative knowledge management Kim D.J.
Andrew Yang T.
Collaborative knowledge management
E-collaboration
Unified model for collaborative knowledge management
Wikipedia
16th Americas Conference on Information Systems 2010, AMCIS 2010 English With the advancement of new communication and virtualization technologies, various tools and models have been proposed for enabling effective management of the e-collaboration processes related to the creation, sharing, and presentation of collective knowledge. From a theoretical perspective, two significant aspects of collaborative knowledge management have been considered: (a) the internal processes of collaborative knowledge creation and sharing, which occur not only within individual knowledge workers but also among them (collaboration); (b) the effective design of human-computer interfaces facilitating the internal processes, by providing functionalities for the knowledge workers to comprehend, conceptualize, and cooperate in knowledge creation and sharing through e-collaboration processes, including the effective presentation of the generated knowledge on the website. Although several studies exist in related areas, there is at present no single conceptual model that can be applied toward assessing both the interface layer and the internal processes of collaborative knowledge creation and sharing in distributed ICT-based work contexts. This gap has motivated us to propose a conceptual model, namely the Unified Collaborative Knowledge Management (UCKM) model, which can be used to design and evaluate the overall knowledge management process, including the underlying sub-processes, the presentation of knowledge, and the human-computer interfaces. 0 0
A new approach for integrating teams in multidisciplinary project based learning Romero G.
Martinez M.L.
Marquez J.D.J.
Perez J.M.
Active learning
Collaborative
Multidisciplinary
Project based learning
Wiki environment
Procedia - Social and Behavioral Sciences English This paper describes the experience of collaboration among students and teachers in order to develop multidisciplinary projects and to reproduce, as closely as possible, a team's integration into a company environment. A new methodology based on student interaction and content development in a Wiki environment has been developed. Students and teachers have participated with enthusiasm, owing to the well-distributed workload and the ease of use of the selected platform, for which only an internet-connected computer is needed to create and discuss the multidisciplinary projects. The quality of the developed projects has improved dramatically due to the integration of the results obtained from the different teams. © 2010 Elsevier Ltd. All rights reserved. 0 0
A new digital library retrieval model based on Wiki technology CCTAE 2010 - 2010 International Conference on Computer and Communication Technologies in Agriculture Engineering English 0 0
A new method to compute the word relevance in news corpus Jinpan L.
Liang H.
Xin L.
Mingmin X.
Wei L.
Component
News corpus
Term co-occurrence
Wikipedia
Word relatedness
Proceedings - 2010 2nd International Workshop on Intelligent Systems and Applications, ISA 2010 English In this paper we propose a new method to compute the relevance of terms in a news corpus. Based on the characteristics of news corpora, we first propose that the corpus be divided into different channels. Second, making use of the features of news documents, we divide term co-occurrence into two cases, co-occurrence in the news title and co-occurrence in the news text, and use different methods to compute each. Finally, we introduce the web corpus Wikipedia to overcome some shortcomings of the news corpus. 0 0
A novel literature retrieval model of digital library based on wiki technology Applied Mechanics and Materials English 0 0
A novel weighting scheme for efficient document indexing and classification Tahayna B.
Ayyasamy R.K.
Alhashmi S.
Eu-Gene S.
Feature subset selection
Genetic algorithms
Support vector machines
Term weighting scheme
Wikipedia
Proceedings 2010 International Symposium on Information Technology - Engineering Technology, ITSim'10 English In this paper we propose and illustrate the effectiveness of a new topic-based document classification method. The proposed method utilizes Wikipedia, a large-scale Web encyclopaedia that has high-quality, huge-scale articles and a category system. Wikipedia is used, via an N-gram technique, to transform the document from a "bag of words" into a "bag of concepts". Based on this transformation, a novel concept-based weighting scheme (denoted Conf.idf) is proposed to index the text in the flavor of the traditional tf.idf indexing scheme. Moreover, a genetic-algorithm-based support vector machine optimization method is used for feature subset and instance selection. Experimental results showed that the proposed weighting scheme outperforms the traditional indexing and weighting scheme. 0 0
A perfect match for reasoning, explanation, and reason maintenance: OWL 2 RL and semantic Wikis Kotowski J.
Bry F.
CEUR Workshop Proceedings English Reasoning in wikis has so far focused mostly on expressiveness and tractability and has neglected the related issues of updates and explanation. In this demo, we show reasoning, explanation, and incremental updates in the KiWi wiki and argue that it is a perfect match for OWL 2 RL reasoning. Explanation nicely complements the "work-in-progress" focus of wikis by explaining how information was derived, and thus helps users to easily discover and remove sources of inconsistencies. Incremental updates are necessary to minimize reasoning times in a frequently changing wiki environment. 0 0
A proposal of analogical reasoning based on structural mapping and image schemas Kaneko Y.
Okada K.
Ito S.
Nomura T.
Takagi T.
SCIS and ISIS 2010 - Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems English Analogical reasoning is reasoning that interprets newly given unknown facts based on already-known facts that have been learned, and it is considered a high-level human thinking process. We point out problems of fuzzy approximate reasoning and propose a new type of analogical reasoning method based on structural mapping and image schemas. The relations extracted between base facts map structural characteristics onto the target fact to obtain reasoned results. Experiments using Wikipedia as a corpus have shown good performance. 0 0
A qualitative analysis of sub-degree students commentary styles and patterns in the context of gender and peer e-feedback Leung K.
Chan M.
Maxwell G.
Poon T.
Chinese writing
Peer e-feedback
Student learning
Wiki-supported learning
Lecture Notes in Computer Science English While research interest is building in the role and effectiveness of electronic peer feedback (Peer e-Feedback) in the context of L1/L2 English writing, that of Chinese language education at sub-degree level has been neglected. This paper seeks to address this shortfall by examining how sub-degree students at a Hong Kong Community College respond to peer roles in the context of e-feedback on written work in a Wiki-supported Chinese language class. The work focuses on identifying the predominant commentary styles employed in a Wiki-supported peer-reviewed writing environment (WPWE) and also gives attention to the question of gender, probing the features and scope, similarities and differences displayed between female and male students. Among the patterns identified was the tendency to produce feedback of the following kinds, in descending order: (1) offering a solution; (2) identification of a problem/good point; (3) explanation; (4) localization; and (5) elaboration. Some gender differences emerged, e.g. males tended to offer 'specific suggestions' more readily than female students. Interestingly and importantly, both genders demonstrated an inability and/or reluctance to request elaboration, evidence that some well-designed training may be desirable before conducting online peer-reviewed writing activities. It was evident, too, that positive feedback outnumbered negative feedback even when some helpful corrective criticism was clearly needed and appropriate. Overall, the many positives far outweighed some negatives in the educational value of Peer e-feedback as a useful tool in Chinese language education. The study also showed that there is a need to further refine and clearly define some of the terminology now appearing in this important area of research. 0 0
A random walk framework to compute textual semantic similarity: A unified model for three benchmark tasks Majid Yazdani
Andrei Popescu-Belis
Proceedings - 2010 IEEE 4th International Conference on Semantic Computing, ICSC 2010 English A network of concepts is built from Wikipedia documents using a random walk approach to compute distances between documents. Three algorithms for distance computation are considered: hitting/commute time, personalized page rank, and truncated visiting probability. In parallel, four types of weighted links in the document network are considered: actual hyperlinks, lexical similarity, common category membership, and common template use. The resulting network is used to solve three benchmark semantic tasks - word similarity, paraphrase detection between sentences, and document similarity - by mapping pairs of data to the network, and then computing a distance between these representations. The model reaches state-of-the-art performance on each task, showing that the constructed network is a general, valuable resource for semantic similarity judgments. 0 0
A ranking approach to target detection for automatic link generation He J.
Maarten de Rijke
Disambiguation
Learning to rank
Link generation
Wikipedia
SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval English We focus on the task of target detection in automatic link generation with Wikipedia, i.e., given an N-gram in a snippet of text, find the relevant Wikipedia concepts that explain or provide background knowledge for it. We formulate the task as a ranking problem and investigate the effectiveness of learning to rank approaches and of the features that we use to rank the target concepts for a given N-gram. Our experiments show that learning to rank approaches outperform traditional binary classification approaches. Also, our proposed features are effective both in binary classification and learning to rank settings. 0 0
A recursive approach to entity ranking and list completion using entity determining terms, qualifiers and prominent n-grams Ramanathan M.
Rajagopal S.
Karthik V.
Murugeshan M.S.
Saswati Mukherjee
Entity Determining Terms (EDTs)
Entity Ranking
List Completion
Named Qualifiers
Prominent n-grams
Qualifiers
Wikipedia tags
Lecture Notes in Computer Science English This paper presents our approach for the INEX 2009 Entity Ranking track, which consists of two subtasks, viz. Entity Ranking and List Completion. Retrieving the correct entities according to the user query is a three-step process: extracting the required information from the query and the provided categories; extracting the relevant documents, which may be either prospective entities or intermediate pointers to prospective entities, by making use of the structure available in the Wikipedia corpus; and finally ranking the resultant set of documents. We extract the Entity Determining Terms (EDTs), Qualifiers and prominent n-grams from the query, strategically exploit the relation between the extracted terms and the structure and connectedness of the corpus to retrieve links which are highly likely to be entities, and then use a recursive mechanism for retrieving relevant documents through Lucene search. Our ranking mechanism combines various approaches that make use of category information, links, titles and WordNet information, initial descriptions and the text of the document. 0 0
A resource allocation framework for collective intelligence system engineering Vergados D.J.
Ioanna Lykourentzou
Kapetanios E.
Collective intelligence
Model design
Resource allocation
System engineering
Proceedings of the International Conference on Management of Emergent Digital EcoSystems, MEDES'10 English In this paper, we present a framework for engineering collective intelligence systems that will be used by web communities. The proposed framework enables the development of community-driven, self-regulating CI systems, which adapt their functionality to the activity and goals of the web community. The above engineering methodology is applied to the design of a popular web system, namely Wikipedia, to illustrate the way the functionality of the latter could be improved, in terms of better and more prompt production of quality articles. The preliminary evaluation results of this application, obtained through simulation modeling, are promising. Copyright 2010 ACM. 0 0
A retrieval method for earth science data based on integrated use of Wikipedia and domain ontology Masashi Tatedoko
Toshiyuki Shimizu
Akinori Saito
Masatoshi Yoshikawa
Lecture Notes in Computer Science English Due to recent advancements in observation technologies and progress in information technologies, the total amount of earth science data has increased at an explosive pace. However, it is not easy to search and discover earth science data because earth science requires a high degree of expertise. In this paper, we propose a retrieval method for earth science data which can be used by non-experts such as scientists from other fields, or students interested in earth science. In order to retrieve relevant data sets from a query, which may not include technical terminology, supplementary terms are extracted by utilizing knowledge bases: Wikipedia and a domain ontology. We evaluated our method using actual earth science data. The data, queries, and relevance assessments for our experiments were created by earth science researchers. The results of our experiments show that our method achieves good recall and precision. 0 0
A retrieval method for earth science data based on integrated use of wikipedia and domain ontology Masashi Tatedoko
Toshiyuki Shimizu
Akinori Saito
Masatoshi Yoshikawa
DEXA English 0 0
A self-supervised approach for extraction of attribute-value pairs from wikipedia articles Brandao W.C.
Moura E.S.
Silva A.S.
Ziviani N.
Lecture Notes in Computer Science English Wikipedia is the largest encyclopedia on the web and has been widely used as a reliable source of information. Researchers have been extracting entities, relationships and attribute-value pairs from Wikipedia and using them in information retrieval tasks. In this paper we present a self-supervised approach for autonomously extracting attribute-value pairs from Wikipedia articles. We apply our method to the Wikipedia automatic infobox generation problem and outperform a method presented in the literature by 21.92% in precision, 26.86% in recall and 24.29% in F1. 0 0
A semantic approach for question classification using WordNet and Wikipedia Santosh Kumar Ray
Shailendra Singh
B.P. Joshi
Pattern Recognition Letters Question Answering Systems, unlike search engines, provide answers to users' questions in a succinct form, which requires prior knowledge of the user's expectations. The question classification module of a Question Answering System plays a very important role in determining the expectations of the user. In the literature, incorrect question classification has been cited as one of the major factors for the poor performance of Question Answering Systems, which emphasizes the importance of question classification module design. In this article, we propose a question classification method that exploits the powerful semantic features of WordNet and the vast knowledge repository of Wikipedia to describe informative terms explicitly. We trained our system on a standard set of 5500 questions (by UIUC) and then tested it on five TREC question collections. We compared our results with some standard results reported in the literature and observed a significant improvement in the accuracy of question classification. The question classification accuracy suggests the effectiveness of the method, which is promising in the field of open-domain question classification. Judging the correctness of the answer is an important issue in the field of question answering. In this article, we extend question classification as one of the heuristics for answer validation. We propose a World Wide Web based solution for answer validation where answers returned by open-domain Question Answering Systems can be validated using online resources such as Wikipedia and Google. We applied several heuristics to the answer validation task and tested them against some popular web based open-domain Question Answering Systems over a collection of 500 questions collected from standard sources such as TREC, the World Book, and the World Factbook. The proposed method seems to be promising for the automatic answer validation task. © 2010 Elsevier B.V. All rights reserved. 0 0
A semantic geographical knowledge wiki system mashed up with Google Maps Gao Y.
Gao S.
Li R.
Yuanyuan Liu
Geo-ontology
Google Maps
Mashup
Semantic wiki
Science China Technological Sciences English A wiki system is a typical Web 2.0 application that provides a bi-directional platform for users to collaborate and share much useful information online. Unfortunately, computers cannot understand wiki pages in plain text well. The user-generated geographical content in wiki systems cannot be manipulated properly and efficiently unless the geographical semantics is explicitly represented. In this paper, a geographical semantic wiki system, Geo-Wiki, is introduced to solve this problem. Geo-Wiki is a semantic geographical knowledge-sharing web system based on geographical ontologies, so that computers can parse and store multi-source geographical knowledge. Moreover, Geo-Wiki, mashed up with map services, enriches the representation and helps users to find spatial distribution patterns, and thus can serve geospatial decision-making by customizing the Google Maps APIs. 0 0
A semantic wiki alerting environment incorporating credibility and reliability evaluation Ulicny B.
Matheus C.J.
Kokar M.M.
Credibility
Entity/relation extraction
Event tracking
Gangs
Media monitoring
Reliability
Semantic analysis
CEUR Workshop Proceedings English In this paper, we describe a system that semantically annotates streams of reports about transnational criminal gangs in order to automatically produce models of the gangs' membership and activities in the form of a semantic wiki. A gang ontology and semantic inferencing are used to annotate the reports and supplement entity and relationship annotations based on the local document context. Reports in the datastream are annotated for reliability and credibility in the proof-of-concept system. 0 0
A semantic wiki for the engineering of diagnostic guideline knowledge Hatko R.
Jochen Reutelshoefer
Joachim Baumeister
Frank Puppe
CEUR Workshop Proceedings English This paper presents a wiki environment for the modelling of Computer Interpretable Guidelines (CIGs) using the graphical language DiaFlux. We describe a wiki-driven development process that uses stepwise formalization and allows domain specialists to carry out most of the knowledge acquisition themselves. The applicability of the approach is demonstrated by a project in which a collaboration of clinicians is developing a guideline for sepsis diagnosis and treatment. 0 0
A semantic wiki framework for reconciling conflict collaborations based on selecting consensus choice Journal of Universal Computer Science English 0 0
A semi-automatic method for domain ontology extraction from Portuguese language Wikipedia's categories Xavier C.C.
De Lima V.L.S.
Ontology
Semi-automatic ontology extraction
Wikipedia
Lecture Notes in Computer Science English The increasing need for ontologies and the difficulties of manual construction have given rise to initiatives proposing methods for automatic and semi-automatic ontology learning. In this work we present a semi-automatic method for domain ontology extraction from Wikipedia's categories. In order to validate the method, we conducted a case study in which we implemented a prototype generating a Tourism ontology. The results are evaluated against a manually built gold standard, reporting 79.51% precision and 91.95% recall, comparable to results found in the literature for other languages. 0 0
A semi-supervised key phrase extraction approach: Learning from title phrases through a document semantic network Deyi Li
Li S.
Li W.
Weiping Wang
Qu W.
ACL 2010 - 48th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference English It is a fundamental and important task to extract key phrases from documents. Generally, phrases in a document are not independent in delivering the content of the document. In order to capture and make better use of their relationships in key phrase extraction, we suggest exploring the Wikipedia knowledge to model a document as a semantic network, where both n-ary and binary relationships among phrases are formulated. Based on a commonly accepted assumption that the title of a document is always elaborated to reflect the content of a document and consequently key phrases tend to have close semantics to the title, we propose a novel semi-supervised key phrase extraction approach in this paper by computing the phrase importance in the semantic network, through which the influence of title phrases is propagated to the other phrases iteratively. Experimental results demonstrate the remarkable performance of this approach. 0 0
A statistical approach to the impact of featured articles in Wikipedia Reinoso A.J.
Felipe Ortega
Gonzalez-Barahona J.M.
Israel Herraiz
Quantitative analysis
Traffic characterization
Usage patterns
Wikipedia
KEOD 2010 - Proceedings of the International Conference on Knowledge Engineering and Ontology Development English This paper presents an empirical study on the impact of featured articles on the attention that Wikipedia's articles attract, and how this behavior differs across different editions of Wikipedia. The study is based on the analysis of the log lines registered by the Wikimedia Foundation Squid servers after having sent the appropriate content in response to each request submitted by a Wikipedia user. The analysis covers the six most visited editions of Wikipedia and involved more than 4,100 million log lines corresponding to the traffic of September, October and November 2009. The methodology of work mainly consisted of parsing the requests sent by users and subsequently filtering them according to the study directives. Relevant information fields were finally stored in a database for persistence and further characterization. The main results of this paper are twofold: it shows how to use the traffic log to extract information about the use of Wikipedia, a novel research approach without precedent in the research community, and it analyzes whether the featured article mechanism succeeds in attracting more attention or not. 0 0
A structured Wikipedia for mathematics: Mathematics in a web 2.0 world Hong Lin Mathematics
Online collaboration
Organization
Web 2.0 technologies
ICSOFT 2010 - Proceedings of the 5th International Conference on Software and Data Technologies English In this paper, we propose a new idea for developing a collaborative online system for storing mathematical work similar to Wikipedia, but much more suitable for storing mathematical results and concepts. The main idea proposed in this paper is to design a system that would allow users to store mathematics in a structured manner, which would make related work easier to find. The proposed system would have users use indentation to add a hierarchical structure to mathematical results and concepts entered into the system. The hierarchical structure provided by the indentation of results and concepts would provide users with additional search functionality useful for finding related work. Additionally, the system would automatically link related results by using the structure provided by users, and also provide other useful functionality. The system would be flexible in terms of letting users decide how much structure to add to each mathematical result or concept to ensure that contributors are not overly burdened with having to add too much structure to each result. The system proposed in this paper serves as a starting point for discussion on new ideas to organize mathematical results and concepts, and many open questions remain for new research. 0 0
A student-centered collaborative learning environment for developing communication skills in engineering education Requena-Carrion J.
Alonso-Atienza F.
Guerrero-Curieses A.
Rodriguez-Gonzalez A.B.
Collaborative learning
Communication skills
Poster session
Project based learning
Wiki
2010 IEEE Education Engineering Conference, EDUCON 2010 English Communication skills development is one of the main goals of engineering education. We propose an integrated student-centered collaborative learning environment for developing communication skills, using project-based learning methods and peer assessment. In our learning environment, projects are assigned to small groups of students under teacher supervision, documented in a wiki-editing tool and presented during a public poster session. By combining wiki entries and poster presentations, we intend to help students (1) gain access to the projects of their peers and share their results, (2) analyze and critically comment on the projects of their peers and provide them with feedback, and (3) enhance their writing and oral skills. Previous experiences encourage us to promote this integrated learning environment. Wiki environments allowed students to improve the quality of their projects and to develop a critical attitude towards their own projects and the projects of their peers. The poster session was found to be more dynamic than traditional oral presentations. Students engaged in a more open and critical manner with the projects of their peers, and students presenting their project had the chance to improve the quality of their presentation on the fly, by presenting their work several times over the duration of the session. In future courses, we will implement a learning environment that combines both wiki-based and poster session approaches. We expect that the implementation of both approaches will help to develop the communication skills of engineering students. 0 0
A survival modeling approach to biomedical search result diversification using wikipedia Xiaoshi Yin
Jimmy Xiangji Huang
Xiaofeng Zhou
Zhoujun Li
Biomedical IR
Diversity
Survival modeling
SIGIR English 0 0
A taxonomy of Wiki genres in enterprise settings Erika Shehan Poole
Jonathan Grudin
Enterprise wiki
Pedia
Taxonomy
Wiki
Workplace
WikiSym English 0 0
A tool to support authoring of learning designs in the field of sustainable energy education Karetsos S.
Haralampopoulos D.
Collaborative authoring
Learning design
Ontology
Semantic wiki
Sustainable energy
Proceedings of the IADIS International Conference WWW/Internet 2010 English Sustainable energy education is anticipated to contribute to the solution of a series of challenges related generally to the dominant developmental models and particularly to energy production and consumption. In this contribution we examine the integration of an ontology-based framework for learning design in the area of sustainable energy education with a semantic wiki, as a web-based tool supporting the collaborative construction, improvement and exchange/dissemination of learning designs in the field of sustainable energy education. We conjecture that the semantic wiki technology platform constitutes an adequate environment, in terms of its usability and expressiveness, for building and supporting a community of practice in this area. 0 0
A unique web resource for physiology, ecology and the environmental sciences: PrometheusWiki Sack L.
Cornwell W.K.
Santiago L.S.
Barbour M.M.
Choat B.
Evans J.R.
Munns R.
Nicotra A.
Methods
Protocols
Standardisation
Web publishing
Wiki
Functional Plant Biology English PROtocols, METHods, Explanations and Updated Standards Wiki (PrometheusWiki, http://www.publish.csiro.au/prometheuswiki/) is a new open access, fully searchable web resource that contains protocols and methods for plant physiology, ecology and environmental sciences. Contributions can be uploaded by anyone in the community, with attributed authorship, and are open for wiki-style comment. This resource allows the gathering in one place of methods, links to published methods and detailed protocols used by leading laboratories around the world, with annotation. As a web resource, PrometheusWiki is continually evolving and updatable, easily and rapidly searchable and highly accessible. It will also enhance communication, allowing multimedia description of protocols and techniques, with spreadsheet tools, slide shows and video files easily integrated into the text. This resource is anticipated to lead to strong benefits in standardising methods, improving access to training for students and professionals, promoting collaborations and expanding the cutting edge of research. 0 0
A wiki based system to produce high quality teaching materials Proceedings of the 5th Iberian Conference on Information Systems and Technologies, CISTI 2010 English 0 0
A wiki for Mizar: motivation, considerations, and initial prototype Josef Urban
Jesse Alama
Piotr Rudnicki
Herman Geuvers
AISC'10/MKM'10/Calculemus English 0 0
A wiki way to wisdom Chemical Engineer English 0 0
A wiki-based collective intelligence approach to formulate a Body of Knowledge (BOK) for a new discipline Yoshifumi Masunaga
Yoshiyuki Shoji
Kazunari Ito
Body of knowledge (BOK)
BOK constructor
Collective intelligence
Discipline
Semantic MediaWiki (SMW)
Wiki
WikiSym 2010 English This paper describes a wiki-based collective intelligence approach to provide a system environment that enables users to formulate a body of knowledge (BOK) for a new discipline, such as social informatics. When the targeted discipline is mature, for example, computer science, its BOK can be straightforwardly formulated by a task force using a top-down approach. However, in the case of a new discipline, it is presumed that nobody has a comprehensive understanding of it; therefore, the formulation of a BOK in such a field can be carried out using a bottom-up approach. In other words, a collective intelligence approach supporting such work seems promising. This paper proposes BOK+, a novel BOK formulation principle for new disciplines. To realize this principle, the BOK Constructor is designed and prototyped, with Semantic MediaWiki (SMW) used to provide its basic functions. The BOK Constructor consists of a BOK Editor, SMW, Uploader, and BOK Miner. Most of the fundamental functions of the BOK Constructor, with the exception of the BOK Miner, were implemented. We validated that the BOK Constructor serves its intended purpose. 0 0
A wiki-based platform for technoeconomic analysis of lignocellulosic ethanol biorefineries 10AIChE - 2010 AIChE Annual Meeting, Conference Proceedings English 0 0
A wiki-oriented on-line dictionary for human and social sciences Khelifa L.
Lammari N.
Fadili H.
Akoka J.
Human and social sciences
Multicultural wiktionary
Semantic wiki
CEUR Workshop Proceedings English The aim of this paper is to contribute to the construction of a human and social sciences (HSS) on-line dictionary. The latter is wiki-oriented. It takes into account the multicultural aspect of the HSS as well as the ISO 1951 international standard. This standard has been defined to harmonize the presentation of specialized/general and multilingual/monolingual dictionaries into a generic structure independent of the publishing medium. The proposed Wiktionary will allow HSS researchers to exchange and share their knowledge regardless of their geographical location of work and/or residence. After the conceptual description of this dictionary and the presentation of the rules for mapping to semantic wiki concepts, the paper presents an overview of the prototype that has been developed. 0 0
AVBOT: detección y corrección de vandalismos en Wikipedia Emilio J. Rodríguez-Posada NovATIca Spanish 0 2
AWESOME computing: Using corpus data to tailor a community environment for dissertation writing Dimitrova V.
Neagle R.
Bajanki S.
Lau L.
Boyle R.
Dissertation writing
Ill-defined domains
Learning communities
Social semantic web
Lecture Notes in Computer Science English This demonstration will present a novel community environment, the 'AWESOME Dissertation Environment (ADE)', which uses semantic wikis to implement the pedagogical approach of 'social scaffolding'. ADE was developed within an interdisciplinary UK research project called AWESOME (Academic Writing Empowered by Social Online Mediated Environments) which involved the universities of Leeds, Coventry and Bangor. The environment was instantiated in several domains: Education, Fashion and Design, Philosophy and Religious Studies, and an Academic Writing Centre. Following both the encouraging feedback from the trial instantiations and the challenges faced in deploying the environment in practice, we conducted a second stage of the project which aimed at adapting the ADE to dissertation writing in computing. Following the lessons learnt from the first stage, we then took a systematic approach to tailoring the existing community environment to meet dissertation writing needs in a specific domain and in a particular educational practice. 0 0
Academics and Wikipedia: Reframing Web 2.0+ as a disruptor of traditional academic power-knowledge arrangements H. Eijkman Campus-Wide Information Systems Purpose - There is much hype about academics' attitude to Wikipedia. This paper seeks to go beyond anecdotal evidence by drawing on empirical research to ascertain how academics respond to Wikipedia and the implications these responses have for the take-up of Web 2.0+. It aims to test the hypothesis that Web 2.0+, as a platform built around the socially constructed nature of knowledge, is inimical to conventional power-knowledge arrangements in which academics are traditionally positioned as the key gatekeepers to knowledge. Design/methodology/approach - The research relies on quantitative and qualitative data to provide an evidence-based analysis of the attitudes of academics towards the student use of Wikipedia and towards Web 2.0+. These data were provided via an online survey made available to a number of universities in Australia and abroad. As well as the statistical analysis of quantitative data, qualitative data were subjected to thematic analysis using relational coding. Findings - The data by and large demonstrate that Wikipedia continues to be a divisive issue among academics, particularly within the soft sciences. However, Wikipedia is not as controversial as popular publicity would lead one to believe. Many academics use it extensively though cautiously themselves, and therefore tend to support a cautious approach to its use by students. However, evidence supports the assertion that there is an implicit if not explicit awareness among academics that Wikipedia, and possibly by extension Web 2.0+, are disruptors of conventional academic power-knowledge arrangements. Practical implications - It is clear that academics respond differently to the disruptive effects that Web 2.0+ has on the political economy of academic knowledge construction. Contrary to popular reports, responses to Wikipedia are not overwhelmingly focused on resistance but encompass both cautious and creative acceptance. It is becoming equally clear that the increasing uptake of Web 2.0+ in higher education makes it inevitable that academics will have to address the political consequences of this reframing of the ownership and control of academic knowledge production. Originality/value - The paper demonstrates originality and value by providing a unique, evidence-based insight into the different ways in which academics respond to Wikipedia as an archetypal Web 2.0+ application and by positioning Web 2.0+ within the political economy of academic knowledge construction. 0 0
Access and annotation of archaeological corpus via a semantic wiki Leclercq E.
Savonnet M.
CEUR Workshop Proceedings English Semantic wikis have shown their ability to support knowledge management and collaborative authoring. They are particularly appropriate for scientific collaboration. This paper details the main concepts and the architecture of WikiBridge, a semantic wiki, and its application in the archaeological domain. Archaeologists' work is primarily document-centric. Adding meta-information in the form of annotations has proved useful for enhancing search. WikiBridge combines models and ontologies to increase data consistency within the wiki. Moreover, it allows several types of annotations: simple annotations, n-ary relations and recursive annotations. The consistency of these annotations is checked synchronously using structural constraints and/or asynchronously using domain constraints. 0 0
Accessibility and usability of a collaborative e-learning application Bozza A.
Mesiti M.
Valtolina S.
Dini S.
Ribaudo M.
Accessibility and usability evaluation
Design for all
Wiki system
CSEDU 2010 - 2nd International Conference on Computer Supported Education, Proceedings English VisualPedia is a collaborative environment proposed to facilitate the development of educational objects designed for all students, including students with different forms of disability. In this paper we briefly introduce VisualPedia and then report on our experience evaluating the accessibility and usability of the system prototype we have developed so far. We also discuss possible future improvements that became evident after experimentation with end users. 0 0
Accessible organizational elements in wikis with Model-Driven Development Bittar T.J.
Lobato L.L.
Fortes R.P.M.
Neto D.F.
Accessibility
Information architecture
Model-driven development (MDD)
Wiki
SIGDOC 2010 - Proceedings of the 28th ACM International Conference on Design of Communication English A wiki is a collaborative web tool for promoting rapid publication of information, allowing users to edit, add or revise content through a web browser. Despite the various benefits offered by the use of wikis, there is no guarantee that their content will be well structured. This occurs especially because of the flexibility and ease of creating and referencing pages, and also because of the difficulty of graphically visualizing the information architecture. In this paper we propose a Model-Driven Development (MDD) approach that supports the creation of graphical models of namespaces to generate structured wiki code. In addition, this approach aims to include accessibility features in the models, following official W3C guidelines such as WCAG and ATAG, allowing access by a wider range of users. Copyright 2010 ACM. 0 0
Accuracy estimate and optimization techniques for SimRank computation Dmitry Lizorkin
Pavel Velikhov
Maxim Grinev
Denis Turdakov
VLDB Journal The measure of similarity between objects is a very useful tool in many areas of computer science, including information retrieval. SimRank is a simple and intuitive measure of this kind, based on a graph-theoretic model. SimRank is typically computed iteratively, in the spirit of PageRank. However, existing work on SimRank lacks accuracy estimation of iterative computation and has discouraging time complexity. In this paper, we present a technique to estimate the accuracy of computing SimRank iteratively. This technique provides a way to find out the number of iterations required to achieve a desired accuracy when computing SimRank. We also present optimization techniques that improve the computational complexity of the iterative algorithm from O(n^4) in the worst case to min(O(n*l), O(n^3/log2 n)), with n denoting the number of objects, and l denoting the number of object-to-object relationships. We also introduce a threshold sieving heuristic and its accuracy estimation that further improves the efficiency of the method. As a practical illustration of our techniques, we computed SimRank scores on a subset of the English Wikipedia corpus, consisting of the complete set of articles and category links. © Springer-Verlag 2009. 0 0
Achieving high precisions with peer-to-peer is possible Winter J.
Kuhne G.
Distributed Search
Efficiency
INEX
Large-Scale
XML-Retrieval
Lecture Notes in Computer Science English Until recently, centralized stand-alone solutions had no problem coping with the load of storing, indexing and searching the small test collections used for evaluating search results at INEX. However, searching the new large-scale Wikipedia collection of 2009 requires many more resources, such as processing power, RAM, and index space. It is hence more important than ever to consider efficiency issues when performing XML-Retrieval tasks on such a big collection. On the other hand, the rich markup of the new collection is an opportunity to exploit the given structure and obtain a more efficient search. This paper describes our experiments using distributed search techniques based on XML-Retrieval. Our aim is to improve both effectiveness and efficiency; we have thus submitted search results to both the Efficiency Track and the Ad Hoc Track. In our experiments, the collection, index, and search load are split over a peer-to-peer (P2P) network to gain more efficiency in terms of load balancing when searching large-scale collections. Since the bandwidth consumption between searching peers has to be limited in order to achieve a scalable, efficient system, we exploit XML structure to reduce the number of messages sent between peers. In spite of mainly aiming at efficiency, our search engine SPIRIX achieved quite high precisions and made it into the top-10 systems (focused task). It ranked 7th in the Ad Hoc Track (59%) and came first in terms of precision in the Efficiency Track (both categories of topics). For the first time at INEX, a P2P system achieved an official search quality comparable with the top-10 centralized solutions! 0 0
Acquiring semantic context for events from online resources Oliveirinha J.
Pereira F.
Alves A.
Context-aware
Events
Information extraction
Meaning of places
Proceedings of the 3rd International Workshop on Location and the Web, LocWeb 2010 English During the last few years, the amount of online descriptive information about places and their dynamics has reached a considerable volume for many cities in the world. Such enriched information can now support semantic analysis of space, particularly with respect to what exists there and what happens there. We present a methodology to automatically label places according to events that happen there. To achieve this we use Information Extraction techniques applied to online Web 2.0 resources such as Zvents and Boston Calendar. Wikipedia is also used as a resource to semantically enrich the tag vectors initially extracted. We describe the process by which these semantic vectors are obtained, present results of experimental analysis, and validate these with Amazon Mechanical Turk and a set of algorithms. To conclude, we discuss the strengths and weaknesses of the methodology. Copyright 2010 ACM. 0 0
Acquiring thesauri from wikis by exploiting domain models and lexical substitution Claudio Giuliano
Alfio Massimiliano Gliozzo
Aldo Gangemi
Kateryna Tymoshenko
ESWC English 0 0
Activity theoretical framework for wiki-based collaborative content creation 2010 International Conference on Management and Service Science, MASS 2010 English 0 0
Adapting recommender systems to the requirements of personal health record systems Wiesner M.
Pfeifer D.
Graph theory
Health care
Information needs
Knowledge mining
Recommender system
Relevance computation
Wikipedia
IHI'10 - Proceedings of the 1st ACM International Health Informatics Symposium English In the future, many people in industrialized countries will manage their personal health data electronically in centralized, reliable and trusted repositories, so-called personal health record systems (PHR). At this stage, PHR systems still fail to satisfy the individual medical information needs of their users. Personalized recommendations could solve this problem. A first approach to integrating recommender system (RS) methodology into personal health records, termed a health recommender system (HRS), is presented. By exploiting existing semantic networks such as Wikipedia, a health graph data structure is obtained. The data kept within such a graph represent health-related concepts and are used to compute semantic distances among pairs of such concepts. A ranking procedure based on the health graph is outlined which enables a match between entries of a PHR system and health information artifacts. In this way, a PHR user will obtain individualized health information he or she might be interested in. 0 0
Adaptive ranking of search results by considering user's comprehension Makoto Nakatani
Adam Jatowt
Katsumi Tanaka
Adaptive ranking
Comprehensibility
User interaction
Web search
Data mining
Proceedings of the 4th International Conference on Ubiquitous Information Management and Communication ICUIMC 10 English Given a search query, conventional Web search engines provide users with the same ranking although users' comprehension levels can differ. It is often difficult, especially for non-expert users, to find comprehensible Web pages in the list of search results. In this paper, we propose a method of adaptively ranking search results by considering the user's comprehension level. The main issues are (a) estimating the comprehensibility of Web pages and (b) estimating the user's comprehension level. In our method, the comprehensibility of each search result is computed using a readability index and technical terms extracted from Wikipedia. The user's comprehension level is estimated from the user's feedback about the difficulty of search results that they have viewed. We implement a prototype system and evaluate the usefulness of our approach through user experiments. 0 0
Adhocratic Governance in the Internet Age: A Case of Wikipedia Piotr Konieczny Adhocracy
Governance
Wikipedia
Journal of Information Technology and Politics English In recent years, a new realm has appeared for the study of political and sociological phenomena: the Internet. This article will analyze the decision-making processes of one of the largest online communities, Wikipedia. Founded in 2001, Wikipedia, now among the top-10 most popular sites on the Internet, has succeeded in attracting and organizing millions of volunteers and creating the world's largest encyclopedia. To date, however, little study has been done of Wikipedia's governance. There is substantial confusion about its decision-making structure. The organization's governance has been compared to many decision-making and political systems, from democracy to dictatorship, from bureaucracy to anarchy. It is the purpose of this article to go beyond the earlier simplistic descriptions of Wikipedia's governance in order to advance the study of online governance, and of organizations more generally. As the evidence will show, while Wikipedia's governance shows elements common to many traditional governance models, it appears to be closest to the organizational structure known as adhocracy. 0 2
Adoption of social software for collaboration Lei Zhang Blogs
Collaboration
IT adoption
Social software
Wiki
Proceedings of the International Conference on Management of Emergent Digital EcoSystems, MEDES'10 English This doctoral research explores how social software can be used to support work collaboration. A case study approach with mixed methods is adopted in this study. Social network analysis and statistical analysis provide complementary support to qualitative analysis. The UK public sector was chosen as the research context. Users are individuals who are knowledge workers in distributed and cross-boundary groups. The asynchronous social software applications studied are blogs and wikis. This paper first describes the major contributions made in the research findings. Next, it identifies the implications of this study for adoption theory, mixed methodology and practice. Finally, having taken into consideration the limitations of the study, some recommendations are proposed for further research. 0 0
Algorithm Visualization: The state of the field Shaffer C.A.
Cooper M.L.
Alon A.J.D.
Akbar M.
Stewart M.
Ponce S.
Edwards S.H.
Algorithm animation
Algorithm Visualization
Algoviz wiki
Community
Data structure visualization
Free and open source software
ACM Transactions on Computing Education English We present findings regarding the state of the field of Algorithm Visualization (AV) based on our analysis of a collection of over 500 AVs. We examine how AVs are distributed among topics, who created them and when, their overall quality, and how they are disseminated. There does exist a cadre of good AVs and active developers. Unfortunately, we found that many AVs are of low quality, and coverage is skewed toward a few easier topics. This can make it hard for instructors to locate what they need. There are no effective repositories of AVs currently available, which puts many AVs at risk for being lost to the community over time. Thus, the field appears in need of improvement in disseminating materials, propagating known best practices, and informing developers about topic coverage. These concerns could be mitigated by building community and improving communication among AV users and developers. 0 0
Aligning WordNet synsets and wikipedia articles Fernando S.
Stevenson M.
AAAI Workshop - Technical Report English This paper examines the problem of finding articles in Wikipedia to match noun synsets in WordNet. The motivation is that these articles enrich the synsets with much more information than is already present in WordNet. Two methods are used. The first is title matching, following redirects and disambiguation links. The second is information retrieval over the set of articles. The methods are evaluated over a random sample set of 200 noun synsets which were manually annotated. With 10 candidate articles retrieved for each noun synset, the methods achieve recall of 93%. The manually annotated data set and the automatically generated candidate article sets are available online for research purposes. Copyright © 2010, Association for the Advancement of Artificial Intelligence. All rights reserved. 0 0
An Efficient Method for Tagging a Query with Category Labels Using Wikipedia towards Enhancing Search Engine Results Milad Alemzadeh
Fakhri Karray
Web Query
Tag
Category Labelling
Wikipedia
WI-IAT English 0 0
An N-gram-and-wikipedia joint approach to natural language identification Yang X.
Liang W.
N-Gram
Natural language identification
TextTiling algorithm
Wikipedia
2010 4th International Universal Communication Symposium, IUCS 2010 - Proceedings English Natural Language Identification is the process of detecting and determining in which language or languages a given piece of text is written. As one of the key steps in Computational Linguistics/Natural Language Processing (NLP) tasks, such as Machine Translation, Multi-lingual Information Retrieval and Processing of Language Resources, Natural Language Identification has drawn widespread attention and extensive research, making it one of the few relatively well studied sub-fields in the whole NLP field. However, various problems remain far from resolved in this field. Current non-computational approaches require that researchers possess sufficient prior linguistic knowledge about the languages to be identified, while current computational (statistical) approaches demand a large-scale training set for each language to be identified. The drawbacks of both are apparent: few computer scientists are equipped with sufficient knowledge of linguistics, and the size of the training set may grow endlessly in pursuit of higher accuracy and the ability to process more languages. Also, faced with multi-lingual documents on the Internet, neither approach can render satisfactory results. To address these problems, this paper proposes a new approach to Natural Language Identification. It exploits N-Gram frequency statistics to segment a piece of text in a language-specific fashion, and then takes advantage of Wikipedia to determine the language used in each segment. Multiple experiments have demonstrated that this approach renders satisfactory results, especially with multi-lingual documents. 0 0
An automatic acquisition of domain knowledge from list-structrued text in baidu encyclopedia Wu W.
Liu T.
Hu H.
Du X.
Baidu encyclopedia
Knowledge Extraction
List wrapper
2010 4th International Universal Communication Symposium, IUCS 2010 - Proceedings English We propose a novel method which can automatically extract new concepts and semantic relations between concepts, in order to support domain ontology evolution. We collect the corpus from a free Chinese encyclopedia called Baidu encyclopedia, which is similar to Wikipedia. We locate lists in the Baidu encyclopedia and extract domain knowledge from those lists. Furthermore, we use a knowledge assessor to ensure the validity of the extracted knowledge. In the experiments, we make a practical attempt to evolve the Chinese Law Ontology (CLO V0), and show that our method can improve the completeness and coverage of CLO V0. 0 0
An efficient web-based wrapper and annotator for tabular data Amin M.S.
Jamil H.
Information extraction
Missing column name annotation
Wrapper
International Journal of Software Engineering and Knowledge Engineering English In the last few years, several works in the literature have addressed the problem of data extraction from web pages. The importance of this problem derives from the fact that, once extracted, the data can be handled in a way similar to instances of a traditional database, which in turn can facilitate web data integration and various other domain-specific applications. In this paper, we propose a novel table extraction technique that works on web pages generated dynamically from a back-end database. The proposed system can automatically discover the table structure by mining relevant patterns from web pages in an efficient way, and can generate regular expressions for the extraction process. Moreover, the proposed system can assign intuitive column names to the columns of the extracted table by leveraging the Wikipedia knowledge base for the purpose of table annotation. To improve the accuracy of the assignment, we exploit the structural homogeneity of the column values and their co-location information to weed out less likely candidates. This approach requires no human intervention and experimental results have shown its accuracy to be promising. Moreover, the wrapper generation algorithm works in linear time. 0 0
An empirical analysis on how learners interact in wiki in a graduate level online course Wen-Hao D. Huang
Kazuaki Nakazawa
English As Web 2.0 emerging technologies are gaining momentum in higher education, educators as well as students are finding new ways to integrate them for teaching and learning. Technologies such as blogs, wikis and multimedia-sharing utilities have been used to teach various subject matters. This trend not only creates new opportunities for us to afford collaborative learning processes but also generates research inquiries that demand that we empirically examine those technologies' pedagogical impact against existing theoretical frameworks. By doing so, we are able to validate Web 2.0 technologies' systematic integration into instructional settings while innovating the learning process for new generations of learners. Therefore, this exploratory mixed-method case study, situated in a 10-week online graduate level course, investigated the perceived interaction levels between learner-learner and learner-instructor in using PBwiki for weekly reading assignments. Based on quantitative responses from 16 participants, learners perceived a significantly higher level of instructional interaction with their peers than they did with the instructor. Their qualitative responses further identified their weekly activity patterns in accomplishing the Wiki assignments and provided rationales for their interaction level perceptions. This case study concluded that educators should remove all communication modalities external to the Wiki environments to provide authentic Wiki-collaboration experiences for learners. 0 0
An evaluation of medical knowledge contained in Wikipedia and its use in the LOINC database Jeff Friedlin
Clement J McDonald
Journal of the American Medical Informatics Association: JAMIA The Logical Observation Identifiers Names and Codes (LOINC) database contains 55,000 terms composed of more atomic components called parts. LOINC carries more than 18,000 distinct parts. It is necessary to have definitions/descriptions for each of these parts to assist users in mapping local laboratory codes to LOINC. It is believed that much of this information can be obtained from the internet; the first effort was with Wikipedia. This project focused on 1705 laboratory analytes (the first part in the LOINC laboratory name). Of the 1705 parts queried, 1314 matching articles were found in Wikipedia. Of these, 1299 (98.9%) were perfect matches that exactly described the LOINC part, 15 (1.14%) were partial matches (the description in Wikipedia was related to the LOINC part, but did not describe it fully), and 102 (7.76%) were mismatches. The current release of RELMA and LOINC includes Wikipedia descriptions of LOINC parts obtained as a direct result of this project. 0 0
An evidence-based approach to collaborative ontology development Tonkin E.
Pfeiffer H.D.
Hewson A.
Proceedings of the International Symposium on Matching and Meaning Automated Development, Evolution and Interpretation of Ontologies - A Symposium at the AISB 2010 Convention English The development of ontologies for various purposes is now a relatively commonplace process. A number of different approaches towards this aim are evident; empirical methodologies, giving rise to data-driven procedures; self-reflective (innate) methodologies, resulting in artifacts that are based on intellectual understanding; collaborative approaches, which result in the development of an artifact representing a consensus viewpoint. We compare and contrast these approaches through two parallel use cases, in work that is currently ongoing. The first explores a case study in creation of a knowledge base from raw, semi-structured information available on the Web. This makes use of text and data mining approaches from various sources of information, including semi-formally structured metadata, interpreted using methods drawn from statistical analysis, and data drawn from crowd-sourced resources such as Wikipedia. The second explores ontology development in the area of physical computing, specifically, context-awareness in ubiquitous computing, and focuses on exploring the significant impact of an evidence-led approach. Both examples are chosen from domains in which automated extraction of information is a significant use case for the resulting ontology. In the first case, automated extraction takes the form of indexing for search and browse of the archived data. In the second, the predominant use cases relate to context-awareness. Via these examples, we identify a core set of design principles for software platforms that bring together evidence from each of these processes, exploring participatory development of ontologies intended for use in domains in which empirical evidence and user judgment are allied. 0 0
An exploration of learning to link with Wikipedia: features, methods and training collection Jiyin He
Maarten de Rijke
INEX English 0 0
An exploration of learning to link with wikipedia: Features, methods and training collection He J.
Maarten de Rijke
Lecture Notes in Computer Science English We describe our participation in the Link-the-Wiki track at INEX 2009. We apply machine learning methods to the anchor-to-best-entry-point task and explore the impact of the following aspects of our approaches: features, learning methods as well as the collection used for training the models. We find that a learning to rank-based approach and a binary classification approach do not differ a lot. The new Wikipedia collection which is of larger size and which has more links than the collection previously used, provides better training material for learning our models. In addition, a heuristic run which combines the two intuitively most useful features outperforms machine learning based runs, which suggests that a further analysis and selection of features is necessary. 0 0
An exploratory study of online social networking within a doctorate of education program Beutel D.
Gray L.
Beames S.
Klenowski V.
Ehrich L.
Kapitzke C.
Blended learning
Doctoral education
Professional doctorate
Wiki
International Journal of Learning English The professional doctorate is a degree that is specifically designed for professionals investigating real-world problems and relevant issues for a profession, industry and/or the community. The exploratory study on which this paper is based sought to track the scholarly skill development of a cohort of professional doctoral students who commenced their course in January 2008 at an Australian university. Via an initial survey and two focus groups held six months apart, the study aimed to determine if there had been any qualitative shifts in students' understandings, expectations and perceptions regarding their developing knowledge and skills. Three key findings that emerged from this study were: (i) the appropriateness of using a blended learning approach in this professional doctoral program; (ii) the challenges of using wikis as an online technology for creating communities of practice; and (iii) the transition from professional to scholar is a process that requires the guided support inherent in the design of this particular doctorate of education program. 0 0
An inside view: credibility in Wikipedia from the perspective of editors H. Francke
O. Sundin
Information Research Introduction. The question of credibility in participatory information environments, particularly Wikipedia, has been much debated. This paper investigates how editors on Swedish Wikipedia consider credibility when they edit and read Wikipedia articles. Method. The study builds on interviews with 11 editors on Swedish Wikipedia, supported by a document analysis of policies on Swedish Wikipedia. Analysis. The interview transcripts have been coded qualitatively according to the participants' use of Wikipedia and what they take into consideration in making credibility assessments. Results. The participants use Wikipedia for purposes where it is not vital that the information is correct. Their credibility assessments are mainly based on authorship, verifiability, and the editing history of an article. Conclusions. The situations and purposes for which the editors use Wikipedia are similar to other user groups, but they draw on their knowledge as members of the network of practice of Wikipedians to make credibility assessments, including knowledge of certain editors and of the MediaWiki architecture. Their assessments have more similarities to those used in traditional media than to assessments springing from the wisdom of crowds. 0 1
An intelligent system for semi-automatic evolution of ontologies Ramezani M.
Witschel H.F.
2010 IEEE International Conference on Intelligent Systems, IS 2010 - Proceedings English Ontologies are an important part of the Semantic Web as well as of many intelligent systems. However, the traditional expert-driven development of ontologies is time-consuming and often results in incomplete and inappropriate ontologies. In addition, since ontology evolution is not controlled by end users, it may take too long for a conceptual change in the domain to be reflected in the ontology. In this paper, we present a recommendation algorithm in a Web 2.0 platform that supports end users to collaboratively evolve ontologies by suggesting semantic relations between new and existing concepts. We use the Wikipedia category hierarchy to evaluate our algorithm and our experimental results show that the proposed algorithm produces high quality recommendations. 0 0
An ontology for I&C knowledge using trees of Porphyry Dourgnon-Hanoune A.
Dang T.
Salaun P.
Bouthors V.
I&C
Knowledge management
Ontology
Terminology
Wiki
IEEE International Conference on Industrial Informatics (INDIN) English EDF (Électricité de France) is a more than 60-year-old company. Its main business is to generate electricity from nuclear, hydraulic and fossil-fired power plants, wind turbines, etc. Most of them have been in service for decades. Therefore, the knowledge should be preserved: deep knowledge about how the plants have been built as well as dynamic knowledge about the process operation. We have structured the deep knowledge in an ontology. The necessity to transmit formal knowledge has led us to introduce trees of Porphyry in our ontology. 0 0
Analysing collaboration in OPAALS' wiki: A comparative study among collaboration networks Colugnati F.A.B.
Lopes L.C.R.
Collaborative process
Social network analysis
Virtual communities
Wiki
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering English This work aims to analyse the wiki tool from OPAALS as a collaborative environment. To achieve that, methods from social network analysis and statistics are employed. The analysis is compared with other collaboration networks. The results obtained here show the evolution of the tool and that the adoption was successful. 0 0
Analysis of implicit relations on wikipedia: Measuring strength through mining elucidatory objects Xiaodan Zhang
Yasuhito Asano
Masatoshi Yoshikawa
Generalized flow
Link analysis
Relation
Data mining
Lecture Notes in Computer Science English We focus on measuring relations between pairs of objects in Wikipedia whose pages can be regarded as individual objects. Two kinds of relations between two objects exist: in Wikipedia, an explicit relation is represented by a single link between the two pages for the objects, and an implicit relation is represented by a link structure containing the two pages. Previously proposed methods are inadequate for measuring implicit relations because they use only one or two of the following three important factors: distance, connectivity, and co-citation. We propose a new method reflecting all the three factors by using a generalized maximum flow. We confirm that our method can measure the strength of a relation more appropriately than these previously proposed methods do. Another remarkable aspect of our method is mining elucidatory objects, that is, objects constituting a relation. We explain that mining elucidatory objects opens a novel way to deeply understand a relation. 0 0
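For intuition, the connectivity factor described above can be approximated with a plain maximum flow between two pages, treating the edges that carry flow as candidate elucidatory objects. This is a simplified sketch only: the paper uses a generalized maximum flow with gains, which ordinary max flow (here via networkx) does not capture, and the toy link graph and capacities below are invented for illustration:

```python
import networkx as nx

# Toy directed "link graph": nodes are Wikipedia pages, edges are links.
# In the paper, a generalized flow model with gains weights the paths;
# plain max flow with fixed capacities only approximates that intuition.
G = nx.DiGraph()
edges = [
    ("Mozart", "Vienna", 1.0), ("Mozart", "Salzburg", 1.0),
    ("Vienna", "Beethoven", 0.5), ("Salzburg", "Austria", 0.5),
    ("Austria", "Beethoven", 0.25), ("Mozart", "Opera", 1.0),
]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

def relation_strength(graph, src, dst):
    """Max-flow value from src to dst as a crude proxy for implicit-relation strength."""
    value, flow = nx.maximum_flow(graph, src, dst)
    # Edges carrying flow lie on the relation's paths: candidate elucidatory objects.
    carriers = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]
    return value, carriers

strength, carriers = relation_strength(G, "Mozart", "Beethoven")
print(strength)   # 0.75: two weak paths, via Vienna and via Salzburg/Austria
print(carriers)
```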
Analysis of implicit relations on wikipedia: measuring strength through mining elucidatory objects Xinpeng Zhang
Yasuhito Asano
Masatoshi Yoshikawa
Generalized flow
Link analysis
Relation
Data mining
DASFAA English 0 0
Analysis of structural relationships for hierarchical cluster labeling Muhr M.
Roman Kern
Michael Granitzer
Cluster labeling
Statistical methods
Structural information
Topic hierarchies
SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval English Cluster label quality is crucial for browsing topic hierarchies obtained via document clustering. Intuitively, the hierarchical structure should influence the labeling accuracy. However, most labeling algorithms ignore such structural properties and therefore, the impact of hierarchical structures on the labeling accuracy is yet unclear. In our work we integrate hierarchical information, i.e. sibling and parent-child relations, in the cluster labeling process. We adapt standard labeling approaches, namely Maximum Term Frequency, Jensen-Shannon Divergence, χ2 Test, and Information Gain, to make use of those relationships and evaluate their impact on 4 different datasets, namely the Open Directory Project, Wikipedia, TREC Ohsumed and the CLEF IP European Patent dataset. We show that hierarchical relationships can be exploited to increase labeling accuracy especially on high-level nodes. 0 0
Analyzing student collaborations in a wiki-based science curriculum Vanessa L. Peters
James D. Slotta
ICLS English 0 0
Analyzing the Creative Editing Behavior of Wikipedia Editors: Through Dynamic Social Network Analysis Takashi Iba
Keiichi Nemoto
Bernd Peters
Peter A. Gloor
Procedia - Social and Behavioral Sciences 0 0
Analyzing the creative editing behavior of wikipedia editors through dynamic social network analysis Takashi Iba
Keiichi Nemoto
Bernd Peters
Gloor P.A.
Coolfarmer
Dynamic Social Network Analysis
Egoboosting
Wikipedia
Procedia - Social and Behavioral Sciences English This paper analyzes editing patterns of Wikipedia contributors using dynamic social network analysis. We have developed a tool that converts the edit flow among contributors into a temporal social network. We are using this approach to identify the most creative Wikipedia editors among the few thousand contributors who make most of the edits amid the millions of active Wikipedia editors. In particular, we identify the key category of "coolfarmers", the prolific authors starting and building new articles of high quality. Towards this goal we analyzed the 2580 featured articles of the English Wikipedia where we found two main article types: (1) articles of narrow focus created by a few subject matter experts, and (2) articles about a broad topic created by thousands of interested incidental editors. We then investigated the authoring process of articles about a current and controversial event. There we found two types of editors with different editing patterns: the mediators, trying to reconcile the different viewpoints of editors, and the zealots, who are adding fuel to heated discussions on controversial topics. As a second category of editors we look at the "egoboosters", people who use Wikipedia mostly to showcase themselves. Understanding these different patterns of behavior gives important insights about the cultural norms of online creators. In addition, identifying and policing egoboosters has the potential to increase the quality of Wikipedia. People best suited to enforce culture-compliant behavior of egoboosters through exemplary behavior and active intervention are the highly regarded coolfarmers introduced above. 0 2
Annoki: A MediaWiki-based collaboration platform Brendan Tansey
Eleni Stroulia
Access control
Collaboration
Contribution analysis
MediaWiki extensions
Team management
Web 2.0
Wiki
Proceedings - International Conference on Software Engineering English Communication plays a vital role throughout all the activities of software engineering processes. As Web 2.0 paradigms concentrate on communication, collaboration, and information sharing, it is only natural that these applications should become part of the software engineering toolkit. In this paper, we describe Annoki, our collaboration platform built on top of the popular wiki software MediaWiki. Annoki supports collaboration by improving the organization of, managing access to, assisting in the creation of, and graphically displaying information about content stored on the wiki. We follow our description of Annoki with a discussion of the current users of Annoki, the largest of whom is the Software Engineering Research Lab at the University of Alberta, where it is used to manage research and development software engineering activities on a daily basis. 0 0
Annotate Wikipedia with Flickr images: Concepts and case study Jie Xiao
Qi Tian
Annotation
Geo information
Social community
Tag
Wikipedia
Proceedings of the 2nd International Conference on Internet Multimedia Computing and Service, ICIMCS'10 English Wikipedia, as an open editable resource, provides reliable knowledge and taxonomy. In contrast to its rich textual information, Wikipedia lacks visual illustrations, like images and animations. Can we visually annotate Wikipedia concepts and provide representative images according to its taxonomy? The huge amount of online social media, such as the tagged images in Flickr, is a good visual resource. Nevertheless, the noisy nature of the tags hinders its use. Based on the observation that images are often collected by groups with a common interest or topic, we propose a framework to visually annotate Wikipedia via social community. The contribution of our work is two-fold: (i) we diversely enrich Wikipedia with images based on its taxonomy; (ii) we introduce community effort to overcome the noisy nature of tags in harvesting images. This work shows our concept and community data collection of the proposed system. 0 0
Annotating and searching web tables using entities, types and relationships Limaye G.
Sarawagi S.
Soumen Chakrabarti
Proceedings of the VLDB Endowment English Tables are a universal idiom to present relational data. Billions of tables on Web pages express entity references, attributes and relationships. This representation of relational world knowledge is usually considerably better than completely unstructured, free-format text. At the same time, unlike manually-created knowledge bases, relational information mined from "organic" Web tables need not be constrained by availability of precious editorial time. Unfortunately, in the absence of any formal, uniform schema imposed on Web tables, Web search cannot take advantage of these high-quality sources of relational information. In this paper we propose new machine learning techniques to annotate table cells with entities that they likely mention, table columns with types from which entities are drawn for cells in the column, and relations that pairs of table columns seek to express. We propose a new graphical model for making all these labeling decisions for each table simultaneously, rather than make separate local decisions for entities, types and relations. Experiments using the YAGO catalog, DBPedia, tables from Wikipedia, and over 25 million HTML tables from a 500 million page Web crawl uniformly show the superiority of our approach. We also evaluate the impact of better annotations on a prototype relational Web search tool. We demonstrate clear benefits of our annotations beyond indexing tables in a purely textual manner. 0 0
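A toy illustration of the column-typing subtask described above: infer a column's type by letting each cell vote with the types of the entities it matches in a catalog. The paper solves entity, type and relation labeling jointly with a graphical model; this local majority vote is only the baseline intuition, and the mini-catalog below is hypothetical:

```python
from collections import Counter

# Hypothetical mini-catalog standing in for YAGO/DBpedia: entity -> set of types.
CATALOG = {
    "Paris": {"City"}, "Berlin": {"City"}, "Tokyo": {"City"},
    "France": {"Country"}, "Germany": {"Country"}, "Japan": {"Country"},
}

def annotate_column(cells):
    """Vote a column type from the types of entities its cells likely mention.
    The paper's graphical model makes these decisions jointly; this is the
    purely local baseline."""
    votes = Counter()
    for cell in cells:
        for t in CATALOG.get(cell, ()):
            votes[t] += 1
    return votes.most_common(1)[0][0] if votes else None

table = [["Paris", "France"], ["Berlin", "Germany"], ["Tokyo", "Japan"]]
columns = list(zip(*table))
print([annotate_column(col) for col in columns])  # ['City', 'Country']
```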
Anomalies in ontologies with rules Joachim Baumeister
Seipel D.
Evaluation
Ontology engineering
OWL
RIF-BLD
SWRL
Verification
Journal of Web Semantics English For the development of practical semantic applications, ontologies are commonly used with rule extensions. Prominent examples of semantic applications are not only Semantic Wikis and Semantic Desktops, but also advanced Web Services and agents. The application of rules increases the expressiveness of the underlying knowledge in many ways. Likewise, the integration not only creates new challenges for the design process of such ontologies, but existing evaluation methods also have to cope with the extension of ontologies by rules. Since the verification of OWL ontologies with rule extensions is not tractable in general, we propose to verify ontologies at the symbolic level by using a declarative approach: with the new language Datalog*, known anomalies can be easily specified and tested in a compact manner. We introduce supplements to existing verification techniques to support the design of ontologies with rule enhancements, and we focus on the detection of anomalies that especially occur due to the combined use of rules and ontological definitions. 0 0
Answer reliability on Q&A sites Pnina Shachaf Askville
CQA
Crowd sourcing
Q and A sites
Quality
Social reference
Web 2.0
WikiAnswers
Wikipedia Reference Desk
Yahoo! Answers
16th Americas Conference on Information Systems 2010, AMCIS 2010 English Similar to other Web 2.0 platforms, user-created content on question answering (Q&A) sites raises concerns about information quality. However, it is possible that some of these sites provide accurate information while others do not. This paper evaluates and compares answer reliability on four Q&A sites. Content analysis of 1,522 transactions from Yahoo! Answers, Wiki Answers, Askville, and the Wikipedia Reference Desk, reveals significant differences in answer quality among these sites. The most popular Q&A site (that attracts the largest numbers of users, questions, and answers) provides the least accurate, complete, and verifiable information. 0 0
Análisis de la incorporación de una plataforma wiki a la docencia de la asignatura "nuevas tecnologías de la información" Antonio José Reinoso Peinado Wiki
Wiki engine
MediaWiki
Wiki platforms
E-learning
Information Technologies
IT
Revista de Docencia Universitaria Spanish This paper describes the study carried out in order to analyze and evaluate the use of a wiki-based platform as a supporting element in the student learning process. The wiki platform is also analyzed as a tool providing services in the teaching of subjects related to the new Information and Communication Technologies (ICT). Moreover, this work focuses on the metrics needed to establish use and behavioural patterns which allow us to characterize user-platform relationships and may help describe students' attitudes when facing tasks that demand cooperative and collaborative effort. 6 0
Application of social software in college education Pan Q. Blogs
College education
Social software
Wiki
Proceedings - 2010 International Conference on Artificial Intelligence and Education, ICAIE 2010 English Social software is a newborn thing in the process of network socialization, it makes learners and software feature set in one body, provides good support for learning, and it makes learning and the transformation of knowledge complement with each other. This article describes the concept of social software and its classification, and expounds its application in high school education, in the hope that learners can effectively use social software to achieve optimum learning outcomest. 0 0
Application of web 2.0 technologies in e-learning context Wan L. Blogs
E-learning
Model
Web 2.0
Wiki
2010 International Conference on Networking and Digital Society, ICNDS 2010 English Web 2.0 is defined as the collective set of Internet-based tools such as wikis, blogs, web based applications, social networking sites and so on. The use of Web 2.0 is a new era in the practice of e-learning. In this paper, the author firstly introduced the landscape of Web 2.0 and their application in educational activities. Then the author analyzed three current models of using Web 2.0 theory and technologies in e-learning context. Based on the analysis of the three models, the author proposed an integrated framework of using web 2.0 technologies in e-learning 2.0. The framework consist of Web 2.0 tools, e-learning 2.0 application and e-learning 2.0 learning modes. The framework will help researchers understand the web based e-learning architecture. 0 0
Applications of ontologies in collaborative software development Happel H.-J.
Maalej W.
Seedorf S.
Knowledge sharing
Ontology
Semantic development environment
Semantic wiki
Software engineering semantic web
Collaborative Software Engineering English Making distributed teams more efficient is one main goal of Collaborative Software Development (CSD) research. To this end, ontologies, which are models that capture a shared understanding of a specific domain, provide key benefits. Ontologies have formal, machine-interpretable semantics that allow to define semantic mappings for heterogeneous data and to infer implicit knowledge at run-time. Extending development infrastructures and software architectures with ontologies (of problem and solution domains) will address coordination and knowledge sharing challenges in activities such as documentation, requirements specificationrequirements specification, component reuse, error handling, and test case management. The purpose of this article is to provide systematic account of how ontologies can be applied in CSD, and to describe benefits of both existing applications such as "semantic wikis" as well as visionary scenarios such as a "Software Engineering Semantic Web". 0 0
Applying wikipedia-based explicit semantic analysis for query-biased document summarization Yunqing Zhou
Zhongqi Guo
Peng Ren
Yong Yu
Explicit semantic analysis
Machine learning
Query-biased summary
Wikipedia
ICIC English 0 0
Approaches for automatically enriching wikipedia Zareen Syed
Tim Finin
AAAI Workshop - Technical Report English We have been exploring the use of Web-derived knowledge bases through the development of Wikitology - a hybrid knowledge base of structured and unstructured information extracted from Wikipedia augmented by RDF data from DBpedia and other Linked Open Data resources. In this paper, we describe approaches that aid in enriching Wikipedia and thus the resources that derive from Wikipedia such as the Wikitology knowledge base, DBpedia, Freebase and Powerset. 0 0
Arquitectura en territorios informados y transparentes. Una wiki en la escuela de arquitectura Javier Fernández García Architecture
Territory
Immersion
Wiki
Teaching
Revista de Docencia Universitaria Spanish The framework of our research is based on the spaces of information, formation and cohabitation configured by the citizens of the online territory. Web 2.0 has granted us new tools with which to build spaces. Since the establishment in August 2006 of the platform CityWiki, we have conducted an investigation into the architectures of these places, which shape a common space without hierarchy or police; action, thus, occurs strictly from the freedom of responsible inhabitants. As defined by M. Castells (2004), this cohabitation may only exist in a multiple place, where the dissimilarity of dwellers is "unified by the common belief in the value of sharing". Moreover, in this laboratory, we encourage a new research and teaching activity with an attitude 2.0 inspired by these basic principles of the World Wide Web.

In the first part of this article, we present the empirically evolving theoretical background that frames our current research. The main objective of this part is to show that wiki territories are informed and transparent, able to augment, and never substitute, our reality with new layers of information. In the second part, our innovative teaching experience during 2007/08 is described and analyzed in these terms.
2 0
As relações de poder entre editores da Wikipédia Paulo Henrique Souto Maior Serrano French semiotics
Wikipedia
Conflict
Talk page
IX Encontro do Círculo de Estudos Linguísticos do Sul Portuguese The collaborative encyclopedia Wikipedia is guided by several policies, recommendations and standards, developed by its community of users from five basic principles: 1) encyclopedism, 2) the neutral point of view, 3) free licensing, 4) a code of conduct, and 5) no firm rules (Wikipedia: 2009b). This article analyses, through Greimasian and tensive semiotics, how these five principles are applied in the discussion pages of conflicting entries. 0 0
Assessment of wiki-supported collaborative learning in higher education 2010 9th International Conference on Information Technology Based Higher Education and Training, ITHET 2010 English 0 0
Associating semantics to multilingual tags in folksonomies (poster) Garcia-Silva A.
Gracia J.
Corcho O.
CEUR Workshop Proceedings English Tagging systems are nowadays a common feature in web sites where user-generated content plays an important role. However, the lack of semantics and multilinguality hampers information retrieval processes based on folksonomies. In this paper we propose an approach to bring semantics to multilingual folksonomies. This approach includes a sense disambiguation activity and takes advantage of knowledge generated by the masses in the form of articles, redirection and disambiguation links, and translations in Wikipedia. We use DBpedia[2] as a semantic resource to define the tag meanings. 0 0
Aufbau eines linguistischen Korpus aus den Daten der englischen Wikipedia Markus Fuchs Corpus
Database
Wikipedia
Proceedings of the Conference on Natural Language Processing 2010 (KONVENS 10) German 0 0
Auto-organização e processos editoriais na Wikipédia: uma análise à luz de Michel Debrun Carlos Frederico de Brito d’Andréa Leitura e escrita em movimento Portuguese 0 1
Automated Query Learning with Wikipedia and Genetic Programming Pekka Malo
Pyry Siitari
Ankur Sinha
English Most of the existing information retrieval systems are based on the bag-of-words model and are not equipped with common world knowledge. Work has been done towards improving the efficiency of such systems by using intelligent algorithms to generate search queries; however, not much research has been done in the direction of incorporating human- and society-level knowledge in the queries. This paper is one of the first attempts where such information is incorporated into the search queries using Wikipedia semantics. The paper presents an essential shift from conventional token-based queries to concept-based queries, leading to an enhanced efficiency of information retrieval systems. To efficiently handle the automated query learning problem, we propose the Wikipedia-based Evolutionary Semantics (Wiki-ES) framework where concept-based queries are learnt using a co-evolving evolutionary procedure. Learning concept-based queries using an intelligent evolutionary procedure yields significant improvement in performance, which is shown through an extensive study using Reuters newswire documents. Comparison of the proposed framework is performed with other information retrieval systems. The concept-based approach has also been implemented on other information retrieval systems to justify the effectiveness of a transition from token-based queries to concept-based queries. 0 1
Automatic evaluation of topic coherence Newman D.
Lau J.H.
Grieser K.
Baldwin T.
NAACL HLT 2010 - Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Proceedings of the Main Conference English This paper introduces the novel task of topic coherence evaluation, whereby a set of words, as generated by a topic model, is rated for coherence or interpretability. We apply a range of topic scoring models to the evaluation task, drawing on WordNet, Wikipedia and the Google search engine, and existing research on lexical similarity/relatedness. In comparison with human scores for a set of learned topics over two distinct datasets, we show a simple co-occurrence measure based on point-wise mutual information over Wikipedia data is able to achieve results for the task at or nearing the level of inter-annotator correlation, and that other Wikipedia-based lexical relatedness methods also achieve strong results. Google produces strong, if less consistent, results, while our results over WordNet are patchy at best. 0 0
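A minimal sketch of the PMI-based coherence score that the entry above reports as most effective. In the paper, co-occurrence counts come from sliding windows over Wikipedia; the toy document set here merely stands in for that corpus:

```python
import math
from itertools import combinations

def pmi_coherence(topic_words, docs):
    """Mean pairwise PMI of topic words. Co-occurrence is counted per document
    here; the paper counts within sliding windows over Wikipedia text."""
    n = len(docs)
    occurs = {w: sum(1 for d in docs if w in d) for w in topic_words}
    score, pairs = 0.0, 0
    for w1, w2 in combinations(topic_words, 2):
        co = sum(1 for d in docs if w1 in d and w2 in d)
        if co and occurs[w1] and occurs[w2]:
            score += math.log((co / n) / ((occurs[w1] / n) * (occurs[w2] / n)))
        pairs += 1
    return score / pairs

docs = [{"space", "nasa", "launch"}, {"space", "orbit", "nasa"},
        {"market", "stock", "trade"}, {"stock", "market", "price"}]
print(pmi_coherence(["space", "nasa", "orbit"], docs))   # coherent -> higher score
print(pmi_coherence(["space", "stock", "orbit"], docs))  # mixed topic -> lower score
```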
Automatic generation of semantic fields for annotating web images Gang Wang
Chua T.S.
Ngo C.-W.
Wang Y.C.
Coling 2010 - 23rd International Conference on Computational Linguistics, Proceedings of the Conference English The overwhelming amounts of multimedia contents have triggered the need for automatically detecting the semantic concepts within the media contents. With the development of photo sharing websites such as Flickr, we are able to obtain millions of images with user-supplied tags. However, user tags tend to be noisy, ambiguous and incomplete. In order to improve the quality of tags to annotate web images, we propose an approach to build Semantic Fields for annotating the web images. The main idea is that the images are more likely to be relevant to a given concept, if several tags to the image belong to the same Semantic Field as the target concept. Semantic Fields are determined by a set of highly semantically associated terms with high tag co-occurrences in the image corpus and in different corpora and lexica such as WordNet and Wikipedia. We conduct experiments on the NUS-WIDE web image corpus and demonstrate superior performance on image annotation as compared to the state-of-the-art approaches. 0 0
Automatic word sense disambiguation based on document networks D.Yu. Turdakov
S.D. Kuznetsov
Programming and Computer Software In this paper, a survey of works on word sense disambiguation is presented, and the method used in the Texterra system is described. The method is based on calculation of semantic relatedness of Wikipedia concepts. Comparison of the proposed method and the existing word sense disambiguation methods on various document collections is given. 0 0
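The core selection rule of relatedness-based disambiguation can be sketched in a few lines: pick the candidate sense that is most related, on average, to the unambiguous concepts in the context. The relatedness table below is hypothetical; Texterra derives such scores from Wikipedia's link structure, which this flat lookup only imitates:

```python
def disambiguate(candidate_senses, context_concepts, relatedness):
    """Choose the sense most related, on average, to unambiguous context concepts."""
    def avg_rel(sense):
        return sum(relatedness(sense, c) for c in context_concepts) / len(context_concepts)
    return max(candidate_senses, key=avg_rel)

# Hypothetical relatedness scores between Wikipedia concepts.
REL = {
    ("Jaguar (animal)", "Rainforest"): 0.8, ("Jaguar (animal)", "Predator"): 0.9,
    ("Jaguar (car)", "Rainforest"): 0.1, ("Jaguar (car)", "Predator"): 0.05,
}
rel = lambda a, b: REL.get((a, b), 0.0)

senses = ["Jaguar (animal)", "Jaguar (car)"]
print(disambiguate(senses, ["Rainforest", "Predator"], rel))  # -> 'Jaguar (animal)'
```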
Automatically acquiring a semantic network of related concepts Szumlanski S.
Gomez F.
Common sense knowledge
Knowledge acquisition
Lexical semantics
Semantic networks
Semantic relatedness
International Conference on Information and Knowledge Management, Proceedings English We describe the automatic construction of a semantic network, in which over 3000 of the most frequently occurring monosemous nouns in Wikipedia (each appearing between 1,500 and 100,000 times) are linked to their semantically related concepts in the WordNet noun ontology. Relatedness between nouns is discovered automatically from co-occurrence in Wikipedia texts using an information theoretic inspired measure. Our algorithm then capitalizes on salient sense clustering among related nouns to automatically disambiguate them to their appropriate senses (i.e., concepts). Through the act of disambiguation, we begin to accumulate relatedness data for concepts denoted by polysemous nouns, as well. The resultant concept-to-concept associations, covering 17,543 nouns, and 27,312 distinct senses among them, constitute a large-scale semantic network of related concepts that can be conceived of as augmenting the WordNet noun ontology with related-to links. 0 0
Automatically suggesting topics for augmenting text documents Robert West
Doina Precup
Joelle Pineau
Data mining
Eigenarticles
Principal component analysis
Topic suggestion
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English We present a method for automated topic suggestion. Given a plain-text input document, our algorithm produces a ranking of novel topics that could enrich the input document in a meaningful way. It can thus be used to assist human authors, who often fail to identify important topics relevant to the context of the documents they are writing. Our approach marries two algorithms originally designed for linking documents to Wikipedia articles, proposed by Milne and Witten [15] and West et al. [22]. While neither of them can suggest novel topics by itself, their combination does have this capability. The key step towards finding missing topics consists in generalizing from a large background corpus using principal component analysis. In a quantitative evaluation we conclude that our method achieves the precision of human editors when input documents are Wikipedia articles, and we complement this result with a qualitative analysis showing that the approach also works well on other types of input documents. 0 0
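A compact sketch of the "generalize with PCA" step described above: learn principal components ("eigenarticles") from a background corpus of topic-indicator vectors, reconstruct an input document from those components, and suggest topics whose reconstructed weight exceeds their observed weight. This is an interpretation of the idea under stated assumptions, not the paper's implementation; the tiny clustered corpus and topic names are invented:

```python
import numpy as np

topics = ["physics", "quantum", "relativity", "cooking", "baking"]
# Rows: background documents; columns: topic indicators (1 = topic is linked).
# Deliberately tiny, clustered toy data standing in for a Wikipedia-derived corpus.
X = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

# PCA via SVD on the centered corpus; rows of Vt are the "eigenarticles".
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
components = Vt[:1]  # one component suffices for this two-cluster toy corpus

def suggest(doc_vec, top=1):
    """Reconstruct the document from the principal components; absent topics
    whose reconstructed weight most exceeds their observed weight are suggested."""
    recon = mu + (doc_vec - mu) @ components.T @ components
    gain = recon - doc_vec
    order = np.argsort(-gain)
    return [topics[i] for i in order[:top] if doc_vec[i] == 0]

# A document linking physics and quantum but missing relativity.
print(suggest(np.array([1.0, 1.0, 0.0, 0.0, 0.0])))  # -> ['relativity']
```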
Automatically weighting tags in XML collection Liu D.
Wan C.
Long Chen
Xiaojiang Liu
Tag weighting model
Topic generalization
XML retrieval
International Conference on Information and Knowledge Management, Proceedings English In XML retrieval, nodes with different tags play different roles in XML documents and then tags should be reflected in the relevance ranking. An automatic method is proposed in this paper to infer the weights of tags. We first investigate 15 features about tags, and then select five of them based on the correlations between these features and manual tag weights. Using these features, a tag weight assignment model, ATG, is designed. We evaluate the performance of ATG on two real data sets, IEEECS and Wikipedia from two different perspectives. One is to evaluate the quality of the model by measuring the correlation between weights generated by our model and those given by experts. The other is to test the effectiveness of the model in improving retrieval performance. Experimental results show that the tag weights generated by ATG are highly correlated with the manually assigned weights and the ATG model improves retrieval effectiveness significantly. 0 0
BabelNet: Building a Very Large Multilingual Semantic Network Simone Ponzetto Roberto Navigli Wikipedia
Knowledge acquisition
Semantic networks
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), Uppsala, Sweden BabelNet, a very large, wide-coverage multilingual semantic network, is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition Machine Translation is also applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource. 0 0
BabelNet: Building a very large multilingual semantic network Roberto Navigli
Ponzetto S.P.
ACL 2010 - 48th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference English In this paper we present BabelNet - a very large, wide-coverage multilingual semantic network. The resource is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition Machine Translation is also applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource. 0 0
Based on the Wiki technology to realize open function of the aircraft digital maintenance platform ICCASM 2010 - 2010 International Conference on Computer Application and System Modeling, Proceedings English 0 0
Best-effort semantic document search on GPUs Byna S.
Meng J.
Raghunathan A.
Chakradhar S.
Cadambi S.
Best-effort computing
CUDA
Dependency relaxation
Document search
GPGPU
Supervised semantic indexing
International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS English Semantic indexing is a popular technique used to access and organize large amounts of unstructured text data. We describe an optimized implementation of semantic indexing and document search on manycore GPU platforms. We observed that a parallel implementation of semantic indexing on a 128-core Tesla C870 GPU is only 2.4X faster than a sequential implementation on an Intel Xeon 2.4GHz processor. We ascribe the less than spectacular speedup to a mismatch in the workload characteristics of semantic indexing and the unique architectural features of GPUs. Compared to the regular numerical computations that have been ported to GPUs with great success, our semantic indexing algorithm (the recently proposed Supervised Semantic Indexing algorithm called SSI) has interesting characteristics - the amount of parallelism in each training instance is data-dependent, and each iteration involves the product of a dense matrix with a sparse vector, resulting in random memory access patterns. As a result, we observed that the baseline GPU implementation significantly under-utilizes the hardware resources (processing elements and memory bandwidth) of the GPU platform. However, the SSI algorithm also demonstrates unique characteristics, which we collectively refer to as the "forgiving nature" of the algorithm. These unique characteristics allow for novel optimizations that do not strive to preserve numerical equivalence of each training iteration with the sequential implementation. In particular, we consider best-effort computing techniques, such as dependency relaxation and computation dropping, to suitably alter the workload characteristics of SSI to leverage the unique architectural features of the GPU. We also show that the realization of dependency relaxation and computation dropping concepts on a GPU is quite different from how one would implement these concepts on a multicore CPU, largely due to the distinct architectural features supported by a GPU. Our new techniques dramatically enhance the amount of parallel workload, leading to much higher performance on the GPU. By optimizing data transfers between CPU and GPU, and by reducing GPU kernel invocation overheads, we achieve further performance gains. We evaluated our new GPU-accelerated implementation of semantic document search on a database of over 1.8 million documents from Wikipedia. By applying our novel performance-enhancing strategies, our GPU implementation on a 128-core Tesla C870 achieved a 5.5X acceleration as compared to a baseline parallel implementation on the same GPU. Compared to a baseline parallel TBB implementation on a dual-socket quad-core Intel Xeon multicore CPU (8-cores), the enhanced GPU implementation is 11X faster. Compared to a parallel implementation on the same multi-core CPU that also uses data dependency relaxation and dropping computation techniques, our enhanced GPU implementation is 5X faster. 0 0
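The "computation dropping" idea above can be illustrated on the CPU in a few lines: skip the columns of the matrix whose corresponding input entries are negligible, trading exact numerical equivalence for less, more regular work. This sketch only conveys the best-effort principle; it is not the paper's CUDA implementation:

```python
import numpy as np

def matvec_drop(W, x, drop_below=1e-2):
    """Dense-matrix x sparse-vector product with computation dropping:
    columns whose input magnitude falls below a threshold are skipped,
    in the spirit of the paper's best-effort strategy (CPU illustration only)."""
    keep = np.flatnonzero(np.abs(x) >= drop_below)
    return W[:, keep] @ x[keep]

rng = np.random.default_rng(42)
W = rng.standard_normal((4, 10000))
x = np.zeros(10000)
x[rng.choice(10000, size=50, replace=False)] = rng.standard_normal(50) * 0.5
x[:5] = 1e-4  # tiny entries that computation dropping will skip

exact = W @ x
approx = matvec_drop(W, x)
print(np.max(np.abs(exact - approx)))  # small error from the dropped work
```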
Beyond Wikipedia: Coordination and Conflict in Online Production Groups Aniket Kittur
Robert E. Kraut
Wiki
Wikipedia
Coordination
Conflict
Social computing
Collective intelligence
Distributed cognition
Collaboration
Online production
Computer-Supported Cooperative Work English Online production groups have the potential to transform the way that knowledge is produced and disseminated. One of the most widely used forms of online production is the wiki, which has been used in domains ranging from science to education to enterprise. We examined the development of and interactions between coordination and conflict in a sample of 6811 wiki production groups. We investigated the influence of four coordination mechanisms: intra-article communication, inter-user communication, concentration of workgroup structure, and policy and procedures. We also examined the growth of conflict, finding the density of users in an information space to be a significant predictor. Finally, we analyzed the effectiveness of the four coordination mechanisms on managing conflict, finding differences in how each scaled to large numbers of contributors. Our results suggest that coordination mechanisms effective for managing conflict are not always the same as those effective for managing task quality, and that designers must take into account the social benefits of coordination mechanisms in addition to their production benefits. 0 4
Beyond Wikipedia: how good a reference source are medical wikis? Paula Younger English The purpose of this paper is to examine the case for using subject (medical) wikis as a reference tool. The paper summarises the content of ganfyd and WikiMD, comparing their ethos and approach to information. It describes some other medical and health wikis in brief. As their audience is somewhat more specialised, medical wikis, currently in their infancy, cover topics in more depth than Wikipedia, but coverage remains patchy. They may be of particular use to those without access to expensive resources such as UpToDate who require a short literature review or overview of a topic. Wikis at present are best used as a signpost to other resources with tighter editorial control. The assessment of the subject wikis is brief and the analysis of wikis as a reference tool is largely drawn from general literature, not medical. This assessment provides exposure of subject wikis as a potential reference tool. The paper highlights the existence of subject wikis as a potentially more in-depth tool than Wikipedia. 0 0
Beyond the legacy of the Enlightenment? Online encyclopaedias as digital heterotopias J. Haider
O. Sundin
First Monday This article explores how we can understand contemporary participatory online encyclopaedic expressions, particularly Wikipedia, in their traditional role as a continuation of the Enlightenment ideal, as well as in the distinctly different space of the Internet. Firstly, we position these encyclopaedias in a historical tradition. Secondly, we assign them a place in contemporary digital networks which marks them out as sites in which Enlightenment ideals of universal knowledge take on a new shape. We argue that the Foucauldian concept of heterotopia, that is, special spaces that exist within society, can, when transferred online, serve to understand Wikipedia and similar participatory online encyclopaedias in their role as unique spaces for the construction of knowledge, memory and culture in late modern society. 0 1
Beyond vandalism: Wikipedia trolls Pnina Shachaf
Noriko Hara
Journal of Information Science English Research on trolls is scarce, but their activities challenge online communities; one of the main challenges of the Wikipedia community is to fight against vandalism and trolls. This study identifies Wikipedia trolls’ behaviours and motivations, and compares and contrasts hackers with trolls; it extends our knowledge about this type of vandalism and concludes that Wikipedia trolls are one type of hacker. This study reports that boredom, attention seeking, and revenge motivate trolls; they regard Wikipedia as an entertainment venue, and find pleasure from causing damage to the community and other people. Findings also suggest that trolls’ behaviours are characterized as repetitive, intentional, and harmful actions that are undertaken in isolation and under hidden virtual identities, involving violations of Wikipedia policies, and consisting of destructive participation in the community.
0 1
BinRank: Scaling dynamic authority-based search using materialized subgraphs Heasoo Hwang
Andrey Balmin
Berthold Reinwald
Erik Nijkamp
IEEE Transactions on Knowledge and Data Engineering Dynamic authority-based keyword search algorithms, such as ObjectRank and personalized PageRank, leverage semantic link information to provide high quality, high recall search in databases and the Web. Conceptually, these algorithms require a query-time PageRank-style iterative computation over the full graph. This computation is too expensive for large graphs, and not feasible at query time. Alternatively, building an index of precomputed results for some or all keywords involves very expensive preprocessing. We introduce BinRank, a system that approximates ObjectRank results by utilizing a hybrid approach inspired by materialized views in traditional query processing. We materialize a number of relatively small subsets of the data graph in such a way that any keyword query can be answered by running ObjectRank on only one of the subgraphs. BinRank generates the subgraphs by partitioning all the terms in the corpus based on their co-occurrence, executing ObjectRank for each partition using the terms to generate a set of random walk starting points, and keeping only those objects that receive non-negligible scores. The intuition is that a subgraph that contains all objects and links relevant to a set of related terms should have all the information needed to rank objects with respect to one of these terms. We demonstrate that BinRank can achieve subsecond query execution time on the English Wikipedia data set, while producing high-quality search results that closely approximate the results of ObjectRank on the original graph. The Wikipedia link graph contains about 10^8 edges, which is at least two orders of magnitude larger than what prior state of the art dynamic authority-based search systems have been able to demonstrate. Our experimental evaluation investigates the trade-off between query execution time, quality of the results, and storage requirements of BinRank. 0 0
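The query-time side of the idea can be sketched with networkx: restrict the graph to a small subgraph around the query's seed nodes, then run personalized PageRank, the iterative core of algorithms like ObjectRank, on that subgraph only. Note that BinRank materializes its subgraphs offline from term co-occurrence bins; the naive neighborhood expansion below is just an illustrative stand-in:

```python
import networkx as nx

# Toy graph; in BinRank the subgraph ("bin") is materialized offline for a set
# of co-occurring terms, so the query-time computation touches far fewer nodes.
G = nx.barabasi_albert_graph(2000, 3, seed=1)

def query_time_rank(graph, seed_nodes, hops=2, alpha=0.85):
    """Approximate authority ranking: keep only the neighborhood of the query's
    seed nodes, then run personalized PageRank on that subgraph."""
    keep = set(seed_nodes)
    frontier = set(seed_nodes)
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph.neighbors(f)} - keep
        keep |= frontier
    sub = graph.subgraph(keep)
    personalization = {n: (1.0 if n in seed_nodes else 0.0) for n in sub}
    return nx.pagerank(sub, alpha=alpha, personalization=personalization)

scores = query_time_rank(G, seed_nodes=[0, 5])
print(sorted(scores, key=scores.get, reverse=True)[:5])  # top-ranked nodes
```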
BioSnowball: Automated population of wikis Xiaojiang Liu
Zaiqing Nie
Yu N.
Wen J.-R.
Bootstrapping
Fact extraction
Markov Logic Networks
Summarization
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining English Internet users regularly have the need to find biographies and facts of people of interest. Wikipedia has become the first stop for celebrity biographies and facts. However, Wikipedia can only provide information for celebrities because of its neutral point of view (NPOV) editorial policy. In this paper we propose an integrated bootstrapping framework named BioSnowball to automatically summarize the Web to generate Wikipedia-style pages for any person with a modest web presence. In BioSnowball, biography ranking and fact extraction are performed together in a single integrated training and inference process using Markov Logic Networks (MLNs) as its underlying statistical model. The bootstrapping framework starts with only a small number of seeds and iteratively finds new facts and biographies. As biography paragraphs on the Web are composed of the most important facts, our joint summarization model can improve the accuracy of both fact extraction and biography ranking compared to decoupled methods in the literature. Empirical results on both a small labeled data set and a real Web-scale data set show the effectiveness of BioSnowball. We also empirically show that BioSnowball outperforms the decoupled methods. 0 0
Bots Nicht-menschliche Mitglieder der Wikipedia-Gemeinschaft Robin D. Fink
Tobias Liboschik
German 0 0
Bridging domains using world wide knowledge for transfer learning Evan Wei Xiang
Bin Cao
Derek Hao Hu
Qiang Yang
IEEE Transactions on Knowledge and Data Engineering A major problem of classification learning is the lack of ground-truth labeled data. It is usually expensive to label new data instances for training a model. To solve this problem, domain adaptation in transfer learning has been proposed to classify target domain data by using some other source domain data, even when the data may have different distributions. However, domain adaptation may not work well when the differences between the source and target domains are large. In this paper, we design a novel transfer learning approach, called BIG (Bridging Information Gap), to effectively extract useful knowledge in a worldwide knowledge base, which is then used to link the source and target domains for improving the classification performance. BIG works when the source and target domains share the same feature space but different underlying data distributions. Using the auxiliary source data, we can extract a bridge that allows cross-domain text classification problems to be solved using standard semisupervised learning algorithms. A major contribution of our work is that with BIG, a large amount of worldwide knowledge can be easily adapted and used for learning in the target domain. We conduct experiments on several real-world cross-domain text classification tasks and demonstrate that our proposed approach can outperform several existing domain adaptation approaches significantly. 0 0
Building Bilingual Parallel Corpora Based on Wikipedia Mehdi Mohammadi
Nasser GhasemAghaee
Parallel corpora
Sentence alignment
Wikipedia
ICCEA English 0 1
Building a Collaborative Peer-to-Peer Wiki System on a Structured Overlay Gérald Oster
Rubén Mondéjar
Pascal Molli
Sergiu Dumitriu
Computer Networks English The ever-growing demand for digital information raises the need for content distribution architectures providing high storage capacity, data availability and good performance. While many simple solutions for scalable distribution of quasi-static content exist, there are still no approaches that can ensure both scalability and consistency for the case of highly dynamic content, such as the data managed inside wikis. We propose a peer-to-peer solution for distributing and managing dynamic content, that combines two widely studied technologies: Distributed Hash Tables (DHT) and optimistic replication. In our “universal wiki” engine architecture (UniWiki), on top of a reliable, inexpensive and consistent DHT-based storage, any number of front-ends can be added, ensuring both read and write scalability, as well as suitability for large-scale scenarios. The implementation is based on Damon, a distributed AOP middleware, thus separating distribution, replication, and consistency responsibilities, and also making our system transparently usable by third party wiki engines. Finally, UniWiki has proven viable and fairly efficient in large-scale scenarios. 0 0
Building an online course based on semantic Wiki for hybrid learning Yanyan Li
Yuanyuan Liu
Collaborative learning
Hybrid learning
Online course
Semantic wiki
Lecture Notes in Computer Science English By combining properties of wikis with Semantic Web technologies, Semantic Wikis emerged with semantic enhancements. Building upon a Semantic Wiki, this paper designs and develops an online course integrated with face-to-face instruction to support hybrid learning. Compared with general online courses, the course has three outstanding features. First, taking the learning object as its basic building block, the course organizes learning content in a structured, coherent and flexible way. Second, it motivates learners to be actively engaged in the collaborative learning process by allowing convenient course authoring and editing as well as adequate interaction. Third, it enables smart resource access through the provision of intelligent facilities, such as semantic search, relational navigation, course management, etc. 0 0
Building ontological models from Arabic Wikipedia: a proposed hybrid approach Nora I. Al-Rajebah
Hend S. Al-Khalifa
AbdulMalik S. Al-Salman
Arabic Wikipedia
Knowledge representation
Ontology
Ontology engineering
Relation extraction
IiWAS English 0 0
Building taxonomy of web search intents for name entity queries Xiaoshi Yin
Shah S.
Query clustering
Web search intent
Proceedings of the 19th International Conference on World Wide Web, WWW '10 English A significant portion of web search queries are name entity queries. The major search engines have been exploring various ways to provide better user experiences for name entity queries, such as showing "search tasks" (Bing search) and showing direct answers (Yahoo!, Kosmix). In order to provide the search tasks or direct answers that can satisfy most popular user intents, we need to capture these intents, together with relationships between them. In this paper we propose an approach for building a hierarchical taxonomy of the generic search intents for a class of name entities (e.g., musicians or cities). The proposed approach can find phrases representing generic intents from user queries, and organize these phrases into a tree, so that phrases indicating equivalent or similar meanings are on the same node, and the parent-child relationships of tree nodes represent the relationships between search intents and their sub-intents. Three different methods are proposed for tree building, which are based on directed maximum spanning tree, hierarchical agglomerative clustering, and pachinko allocation model. Our approaches are purely based on search logs, and do not utilize any existing taxonomies such as Wikipedia. With the evaluation by human judges (via Mechanical Turk), it is shown that our approaches can build trees of phrases that capture the relationships between important search intents. 0 0
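As a rough illustration of the clustering-based variant named above, the following minimal sketch groups intent phrases by word overlap using single-linkage agglomeration; the phrase list and similarity threshold are hypothetical, and this is a simplification rather than the authors' implementation.
 # Single-linkage agglomeration of intent phrases by word overlap (Python).
 def jaccard(a, b):
     a, b = set(a.split()), set(b.split())
     return len(a & b) / len(a | b)
 
 def agglomerate(phrases, threshold=0.5):
     clusters = [[p] for p in phrases]
     merged = True
     while merged:
         merged = False
         for i in range(len(clusters)):
             for j in range(i + 1, len(clusters)):
                 # single linkage: merge if any cross-cluster pair is similar
                 if any(jaccard(p, q) >= threshold
                        for p in clusters[i] for q in clusters[j]):
                     clusters[i].extend(clusters.pop(j))
                     merged = True
                     break
             if merged:
                 break
     return clusters
 
 print(agglomerate(["lyrics", "song lyrics", "tour dates", "concert tour"]))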
Bumpy, caution with merging: An exploration of tagging in a geowiki Torre F.
Sheppard S.A.
Reid Priedhorsky
Loren Terveen
Bicycling
Geowikis
Online community
Tags
Wiki
Proceedings of the 16th ACM International Conference on Supporting Group Work, GROUP'10 English We introduced tags into the Cyclopath geographic wiki for bicyclists. To promote the creation of useful tags, we made tags wiki objects, giving ownership of tag applications to the community, not to individuals. We also introduced a novel interface that lets users fine-tune their routing preferences with tags. We analyzed the Cyclopath tagging vocabulary, the relationship of tags to existing annotation techniques (notes and ratings), and the roles users take on with respect to tagging, notes, and ratings. Our findings are: two distinct tagging vocabularies have emerged, one around each of the two main types of geographic objects in Cyclopath; tags and notes have overlapping content but serve distinct purposes; users employ both ratings and tags to express their route-finding preferences, and use of the two techniques is moderately correlated; and users are highly specialized in their use of tags and notes. These findings suggest new design opportunities, including semi-automated methods to infer new annotations in a geographic context. 0 0
Business student collaborative work supported by Moodle wiki CSEDU 2010 - 2nd International Conference on Computer Supported Education, Proceedings English 0 0
C-Link: Concept linkage in knowledge repositories Cowling P.
Remde S.
Hartley P.
Stewart W.
Stock-Brooks J.
Woolley T.
AAAI Spring Symposium - Technical Report English When searching a knowledge repository such as Wikipedia or the Internet, the user doesn't always know what they are looking for. Indeed, it is often the case that a user wishes to find information about a concept that was completely unknown to them prior to the search. In this paper we describe C-Link, which provides the user with a method for searching for unknown concepts which lie between two known concepts. C-Link does this by modeling the knowledge repository as a weighted, directed graph where nodes are concepts and arc weights give the degree of "relatedness" between concepts. An experimental study was undertaken with 59 participants to investigate the performance of C-Link compared to standard search approaches. Statistical analysis of the results shows great potential for C-Link as a search tool. 0 0
Caching search engine results over incremental indices Blanco R.
Bortnikov E.
Junqueira F.P.
Lempel R.
Telloli L.
Hugo Zaragoza
Real-time indexing
Search engine caching
SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval English A Web search engine must update its index periodically to incorporate changes to the Web. We argue in this paper that index updates fundamentally impact the design of search engine result caches, a performance-critical component of modern search engines. Index updates lead to the problem of cache invalidation: invalidating cached entries of queries whose results have changed. Naïve approaches, such as flushing the entire cache upon every index update, lead to poor performance and, in fact, render caching futile when the frequency of updates is high. Solving the invalidation problem efficiently corresponds to predicting accurately which queries will produce different results if re-evaluated, given the actual changes to the index. To obtain this property, we propose a framework for developing invalidation predictors and define metrics to evaluate invalidation schemes. We describe concrete predictors using this framework and compare them against a baseline that uses a cache invalidation scheme based on time-to-live (TTL). Evaluation over Wikipedia documents using a query log from the Yahoo search engine shows that selective invalidation of cached search results can lower the number of unnecessary query evaluations by as much as 30% compared to a baseline scheme, while returning results of similar freshness. In general, our predictors enable fewer unnecessary invalidations and fewer stale results compared to a TTL-only scheme for similar freshness of results. 0 0
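The invalidation problem described above can be made concrete with a minimal sketch: a result cache combining the TTL baseline with selective invalidation driven by a pluggable predictor. The term-overlap predictor below is a crude hypothetical stand-in for the paper's predictors, not their actual models.
 import time
 
 class ResultCache:
     def __init__(self, ttl_seconds=300):
         self.ttl = ttl_seconds
         self.entries = {}  # query -> (results, timestamp)
 
     def get(self, query):
         hit = self.entries.get(query)
         if hit is None:
             return None
         results, ts = hit
         if time.time() - ts > self.ttl:  # TTL expiry (baseline scheme)
             del self.entries[query]
             return None
         return results
 
     def put(self, query, results):
         self.entries[query] = (results, time.time())
 
     def invalidate(self, predict_stale, changed_docs):
         # selective invalidation: drop only queries flagged by the predictor
         for query in [q for q in self.entries if predict_stale(q, changed_docs)]:
             del self.entries[query]
 
 def term_overlap_predictor(query, changed_docs):
     # hypothetical predictor: stale if a changed document shares a query term
     terms = set(query.split())
     return any(terms & set(doc.split()) for doc in changed_docs)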
Capabilities and roles of enterprise wikis in organizational communication Christian Wagner
Schroeder A.
Business communication
Collaborative authoring
Content refactoring
Media capability
Wiki
Technical Communication English Purpose: The article alerts technical communicators to wiki technology, an emerging new medium that allows dispersed groups to create shared content via collaborative editing and different-time communication. Wiki-based collaborative content creation enables new communication practices and thereby challenges several assumptions of existing media choice theories. Method: Analysis of empirical evidence from 32 published case descriptions and reports to evaluate wiki technology in a corporate context based on the defining characteristics of three media choice theories (i.e., media richness theory, theory of media synchronicity, and common ground theory). Results: Wikis meet or exceed capabilities of several other communication and collaboration media, and thus provide a credible alternative to other business communication technologies currently in use. Further, distinct media capabilities of wikis are not fully represented by current media choice theories, suggesting the need to extend media choice theories to recognize these unique capabilities. Conclusion: The unique features of enterprise wikis enable new collaboration practices and challenge some of the core theoretical assumptions of media choice theories. The refactoring capability of wikis is identified as a unique feature that enables new forms of collaboration and communication in organizations. An implementation that wishes to successfully leverage wiki-enabled collaboration opportunities must carefully consider challenges of human interaction such as free-riding, or conflict of values. 0 0
Categorising Social Tags to Improve Folksonomy-based Recommendations Ivan Cantador
Ioannis Konstas
Joemon M. Jose
Web Semantics: Science, Services and Agents on the World Wide Web 0 0
Centroid-based classification enhanced with Wikipedia Abdullah Bawakid
Mourad Oussalah
Categorization
Classification
Component
Semantics
Text enrichment
Wikipedia
Proceedings - 9th International Conference on Machine Learning and Applications, ICMLA 2010 English Most traditional text classification methods employ Bag of Words (BOW) approaches relying on the word frequencies existing within the training corpus and the testing documents. Recently, studies have examined using external knowledge to enrich the text representation of documents. Some have focused on using WordNet, which suffers from different limitations including the available number of words, synsets and coverage. Other studies used different aspects of Wikipedia instead. Depending on the features being selected and evaluated and the external knowledge being used, a balance between recall, precision, noise reduction and information loss has to be struck. In this paper, we propose a new Centroid-based classification approach relying on Wikipedia to enrich the representation of documents through the use of Wikipedia's concepts, categories structure, links, and article text. We extract candidate concepts for each class with the help of Wikipedia and merge them with important features derived directly from the text documents. Different variations of the system were evaluated and the results show improvements in the performance of the system. 0 0
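A minimal sketch of the centroid-classification core, assuming documents are represented as term-frequency vectors; the paper's Wikipedia enrichment step, abstracted away here, would add concept features to these vectors before the centroids are computed.
 from collections import Counter
 import math
 
 def centroid(docs):
     # average the term-frequency vectors of a class's training documents
     total = Counter()
     for d in docs:
         total.update(d)
     return {w: c / len(docs) for w, c in total.items()}
 
 def cosine(v1, v2):
     dot = sum(v1[w] * v2.get(w, 0.0) for w in v1)
     n1 = math.sqrt(sum(x * x for x in v1.values()))
     n2 = math.sqrt(sum(x * x for x in v2.values()))
     return dot / (n1 * n2) if n1 and n2 else 0.0
 
 def classify(doc, centroids):
     # assign the class whose centroid is closest in cosine similarity
     return max(centroids, key=lambda label: cosine(doc, centroids[label]))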
Changes in middle school students' six contemporary learning abilities (6-CLAs) through project-based design of web-games and social media use Rebecca Reynolds Digital & information literacy
Games
Social media
Wiki
Proceedings of the ASIST Annual Meeting English This poster presents findings on student development of contemporary learning abilities among 14 middle school students enrolled in a year-long elective game design class. The study measures students' change in attitudes towards the activities in which they participate, through their responses to a self-report survey of frequency, motivation, and self-reported knowledge. T-test statistics were used to analyze pre- and post-program differences, resulting in several statistically significant increases. The program and its outcomes have implications for digital literacy learning interventions that can be implemented in formal and informal learning environments with youth. 0 0
Chapter 3: Search for knowledge Gerhard Weikum Lecture Notes in Computer Science English There are major trends to advance the functionality of search engines to a more expressive semantic level. This is enabled by the advent of knowledge-sharing communities such as Wikipedia and the progress in automatically extracting entities and relationships from semistructured as well as natural-language Web sources. In addition, Semantic-Web-style ontologies, structured Deep-Web sources, and Social-Web networks and tagging communities can contribute towards a grand vision of turning the Web into a comprehensive knowledge base that can be efficiently searched with high precision. This vision and position paper discusses opportunities and challenges along this research avenue. The technical issues to be looked into include knowledge harvesting to construct large knowledge bases, searching for knowledge in terms of entities and relationships, and ranking the results of such queries. 0 0
Characterizing and modeling the dynamics of online popularity Jacob Ratkiewicz
Santo Fortunato
Alessandro Flammini
Filippo Menczer
Alessandro Vespignani
Physical Review Letters Online popularity has an enormous impact on opinions, culture, policy, and profits. We provide a quantitative, large scale, temporal analysis of the dynamics of online content popularity in two massive model systems: the Wikipedia and an entire country's Web space. We find that the dynamics of popularity are characterized by bursts, displaying characteristic features of critical systems such as fat-tailed distributions of magnitude and interevent time. We propose a minimal model combining the classic preferential popularity increase mechanism with the occurrence of random popularity shifts due to exogenous factors. The model recovers the critical features observed in the empirical analysis of the systems analyzed here, highlighting the key factors needed in the description of popularity dynamics. 0 3
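The proposed model translates directly into a short simulation combining popularity-proportional increments with occasional random exogenous shifts; the parameter values below are illustrative assumptions, not the fitted values reported in the paper.
 import random
 
 def simulate(n_items=500, steps=50000, shift_prob=0.01):
     popularity = [1] * n_items
     for _ in range(steps):
         if random.random() < shift_prob:
             # exogenous shift: a random item receives a burst of attention
             popularity[random.randrange(n_items)] += random.randint(10, 100)
         else:
             # preferential increase: choose an item proportionally to popularity
             i = random.choices(range(n_items), weights=popularity)[0]
             popularity[i] += 1
     return popularity
 
 pop = simulate()
 print(max(pop), sorted(pop)[len(pop) // 2])  # heavy tail: max far above median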
Chart pruning for fast lexicalised-grammar parsing YanChun Zhang
Ahn B.-G.
Clark S.
Van Wyk C.
Curran J.R.
Rimell L.
Coling 2010 - 23rd International Conference on Computational Linguistics, Proceedings of the Conference English Given the increasing need to process massive amounts of textual data, efficiency of NLP tools is becoming a pressing concern. Parsers based on lexicalised grammar formalisms, such as TAG and CCG, can be made more efficient using supertagging, which for CCG is so effective that every derivation consistent with the supertagger output can be stored in a packed chart. However, wide-coverage CCG parsers still produce a very large number of derivations for typical newspaper or Wikipedia sentences. In this paper we investigate two forms of chart pruning, and develop a novel method for pruning complete cells in a parse chart. The result is a wide-coverage CCG parser that can process almost 100 sentences per second, with little or no loss in accuracy over the baseline with no pruning. 0 0
Chatting in the Wiki: synchronous-asynchronous integration Robert P. Biuk-Aghai
Keng Hong Lei
Asynchronous
Communication
Instant messaging
Synchronous
Wiki
WikiSym English Wikis have become popular platforms for collaborative writing. The traditional production mode has been remote asynchronous and supported by wiki systems geared toward both asynchronous writing and asynchronous communication. However, many people have come to rely on synchronous communication in their daily work. This paper first discusses aspects of synchronous and asynchronous activity and communication and then proposes an integration of synchronous communication facilities in wikis. A prototype system developed by the authors is briefly presented. 1 1
Chemical information media in the chemistry lecture hall: A comparative assessment of two online encyclopedias Korosec L.
Limacher P.A.
Luthi H.P.
Brandle M.P.
Chemical information
Quality assessment
Römpp Online
Student survey
Wikipedia
Chimia English The chemistry encyclopedia Römpp Online and the German universal encyclopedia Wikipedia were assessed by first-year university students on the basis of a set of 30 articles about chemical thermodynamics. Criteria with regard to both content and form were applied in the comparison; 619 ratings (48% participation rate) were returned. While both encyclopedias obtained very good marks and performed nearly equally with regard to their accuracy, the average overall mark for Wikipedia was better than for Römpp Online, which obtained lower marks with regard to completeness and length. Analysis of the results and participants' comments shows that students attach importance to completeness, length and comprehensibility rather than accuracy, and also attribute less value to the availability of sources which validate an encyclopedia article. Both encyclopedias can be promoted as a starting reference to access a topic in chemistry. However, it is recommended that instructors should insist that students do not rely solely on encyclopedia texts, but use and cite primary literature in their reports. 0 1
Chinese characters conversion system based on lookup table and language model Li M.-H.
Wu S.-H.
Yang P.-C.
Ku T.
Chinese character conversion
Language model
Lookup table
Wikipedia
Proceedings of the 22nd Conference on Computational Linguistics and Speech Processing, ROCLING 2010 Chinese The character sets used in China and Taiwan are both Chinese, but they are divided into simplified and traditional Chinese characters. There is a large amount of information exchange between China and Taiwan through books and the Internet. To provide readers with a convenient reading environment, character conversion between simplified and traditional Chinese is necessary. The conversion between simplified and traditional Chinese characters has two problems: one-to-many ambiguity and term usage problems. Since there are many traditional Chinese characters that have only one corresponding simplified character, when converting simplified Chinese into traditional Chinese, the system will face the one-to-many ambiguity. Also, there are many terms that have different usages between the two Chinese societies. This paper focuses on designing an extensible conversion system that can take advantage of community knowledge by accumulating lookup tables through Wikipedia to tackle the term usage problem, and that can integrate a language model to disambiguate the one-to-many ambiguity. The system can reduce the cost of proofreading of character conversion for books, e-books, or online publications. The extensible architecture makes it easy to improve the system with new training data. 1 0
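The two-stage design can be sketched as follows: a lookup table proposes traditional-character candidates, and a language model scores whole conversions to resolve the one-to-many mappings. The tiny table and the bigram scores are hypothetical stand-ins for the Wikipedia-derived resources the paper accumulates.
 from itertools import product
 
 # one-to-many lookup table entries (simplified -> traditional candidates)
 table = {"发": ["發", "髮"], "面": ["面", "麵"]}
 
 def lm_score(chars, bigram_logprob):
     # sum of bigram log-probabilities, with a flat penalty for unseen pairs
     return sum(bigram_logprob.get((a, b), -10.0) for a, b in zip(chars, chars[1:]))
 
 def convert(sentence, bigram_logprob):
     # enumerate candidate conversions (fine for short spans in a sketch)
     options = [table.get(ch, [ch]) for ch in sentence]
     candidates = ["".join(c) for c in product(*options)]
     return max(candidates, key=lambda c: lm_score(c, bigram_logprob))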
Classifying Wikipedia articles into NE's using SVM's with threshold adjustment Iman Saleh
Kareem Darwish
Aly Fahmy
NEWS English 0 0
ClassroomWiki: A Collaborative Wiki for Instructional Use with Multiagent Group Formation Nobel khandaker
Leen-Kiat Soh
Collaborative learning tool
Multiagent systems.
IEEE Trans. Learn. Technol. English 0 0
ClassroomWiki: a wiki for the classroom with multiagent tracking, modeling, and group formation Nobel Khandaker
Leen-Kiat Soh
Coalition formation
Multiagent
Wiki
AAMAS English 0 0
Co-creation of value in IT service processes using semantic MediaWiki Schmidt R.
Frank Dengler
Kieninger A.
Co-authorship
Process
SD-Logic
Semantic MediaWiki
Service
Lecture Notes in Business Information Processing English Enterprises are replacing their own IT systems with services provided by external providers. This provisioning of services may be done in an industrialized way, separating the service provider from the consumer. However, using industrialized services diminishes the capability to differentiate from competitors. To counter this, collaborative service processes based on the co-creation of value between service providers and prosumers are of great importance. The approach presented shows how the co-creation of value in IT service processes can profit from social software, using the example of Semantic MediaWiki. 0 0
Co-star: A co-training style algorithm for hyponymy relation acquisition from structured and unstructured text Oh J.-H.
Yamada I.
Kentaro Torisawa
Saeger S.D.
Coling 2010 - 23rd International Conference on Computational Linguistics, Proceedings of the Conference English This paper proposes a co-training style algorithm called Co-STAR that acquires hyponymy relations simultaneously from structured and unstructured text. In Co- STAR, two independent processes for hyponymy relation acquisition - one handling structured text and the other handling unstructured text - collaborate by repeatedly exchanging the knowledge they acquired about hyponymy relations. Unlike conventional co-training, the two processes in Co-STAR are applied to different source texts and training data. We show the effectiveness of this algorithm through experiments on large scale hyponymy-relation acquisition from Japanese Wikipedia and Web texts. We also show that Co-STAR is robust against noisy training data. 0 0
Cognitive abilities and the measurement of world wide web usability Campbell S.G.
Norman K.L.
Proceedings of the Human Factors and Ergonomics Society English Usability of an interface is an emergent property of the system and the user; it does not exist independently of either one. For this reason, characteristics of the user which affect his or her performance on a task can affect the apparent usability of the interface in a usability study. We propose and investigate, using a Wikipedia information-seeking task, a model relating spatial abilities and performance measures for system usability. In the context of World Wide Web (WWW) site usability, we found that spatial visualization ability and system experience predicted system effectiveness measures, while spatial orientation ability, spatial visualization ability, and general computer experience predicted system efficiency measures. We suggest possible extensions and further tests of this model. 0 0
Coisas velhas em coisas novas: novas “velhas tecnologias” Pedro Demo Generative internet
Hacker spirit
Libertarianism
New technologies
Abuse of freedom
Innovation
Continuities
Ciência da Informação Portuguese The objective of this article is to present an up-to-date discussion of the extraordinary technological innovations, mainly the new technologies, underlining both ruptures and continuities. Technologies are supposed to present a sense of convergence as well as continuities. Hackers and others who propose free software are in favor of liberty and liberation, considering the computer and the internet as arenas of freedom. This is only partly correct, because these hackers who consider themselves libertarians submit themselves to narrow-minded structures of power (for example, autocratic bosses). The internet is state-bound instead of worldwide. France has imposed changes in the contents of sites. China does not allow a free flow of information. The initial aura of liberty, granted by a computer architecture open to customization and formatting, is strongly contested by illegal and immoral flows, by the intrusion of spam and marketing, as well as by virus contamination. The so-called "generative internet" is losing ground on account of the pressure of users who want guaranteed end products that are easier to handle and avoid abuses of freedom. The case of Wikipédia is remarkable. Continuous editing wars unsettle the environment (although this does not hinder the production of a large and original encyclopedia). 13 0
Collaborating and delivering literature search results to clinical teams using Web 2.0 tools Damani S.
Fulton S.
Collaboration
Connotea
Delicious
EndNote Web
Online reference management tools
SharePoint
Social bookmarking
Web 2.0
Wiki
Medical Reference Services Quarterly English This article describes the experiences of librarians at the Research Medical Library embedded within clinical teams at The University of Texas MD Anderson Cancer Center and their efforts to enhance communication within their teams using Web 2.0 tools. Pros and cons of EndNote Web, Delicious, Connotea, PBWorks, and SharePoint are discussed. 0 0
Collaboration at a distance: Using a wiki to create a collaborative learning environment for distance education and on-campus students in a social work course Journal of Teaching in Social Work English 0 0
Collaborative approaches to resolving difficult ILL borrowing requests: Using a working group and a wiki for knowledge sharing Journal of Interlibrary Loan, Document Delivery and Electronic Reserve English 0 0
Collaborative editing and linking of astronomy vocabularies using semantic mediawiki Chalmers S.
Gray N.
Iadh Ounis
Gray A.
CEUR Workshop Proceedings English The International Virtual Observatory Alliance (IVOA) comprises 17 Virtual Observatory (VO) projects and facilitates the creation, coordination and collaboration of standards promoting the use and reuse of astronomical data archives. The Semantics working group in the IVOA has repurposed five existing vocabularies (modelled using SKOS), capturing concepts within specific areas of astronomy expertise and applications. A major task, however, is to promote the uptake of these semantic representations within the astronomy community and, further, to let astronomers model (and in turn create links from) their own custom vocabularies to use these existing definitions. In this paper we show how Semantic MediaWiki (SMW) can be used to support expert interaction in the lifecycle of vocabulary creation, linking, and maintenance. 0 0
Collaborative educational geoanalytics applied to large statistics temporal data Jern M. Blogs
Collaborative time animation
Collaborative work
Geovisual analytics
Information and geovisualization
Learning
MediaWiki
Statistical data
Storytelling
CSEDU 2010 - 2nd International Conference on Computer Supported Education, Proceedings English Recent advances in Web 2.0 graphics technologies have the potential to make a dramatic impact on developing collaborative geovisual analytics that analyse, visualize, communicate and present official statistics. In this paper, we introduce novel "storytelling" means for the experts to first explore large, temporal and multidimensional statistical data, then collaborate with colleagues and finally embed dynamic visualization into Web documents e.g. HTML, Blogs or MediaWiki to communicate essential gained insight and knowledge. The aim is to let the analyst (author) explore data and simultaneously save important discoveries and thus enable sharing of gained insights over the Internet. Through the story mechanism facilitating descriptive metatext, textual annotations hyperlinked through the snapshot mechanism and integrated with interactive visualization, the author can let the reader follow the analyst's way of logical reasoning. This emerging technology could in many ways change the terms and structures for learning. 0 0
Collaborative knowledge discovery & marshalling for intelligence & security applications Cowell A.J.
Jensen R.S.
Gregory M.L.
Ellis P.
Fligg K.
McGrath L.R.
O'Hara K.
Bell E.
Annotation
Illicit trafficking
Knowledge elicitation
Natural Language Processing
Nuclear materials
Semantic wiki
ISI 2010 - 2010 IEEE International Conference on Intelligence and Security Informatics: Public Safety and Security English This paper discusses the Knowledge Encapsulation Framework, a flexible, extensible evidence-marshalling environment built upon a natural language processing pipeline and exposed to users via an open-source semantic wiki. We focus our discussion on applications of the framework to intelligence and security applications, specifically, an instantiation of the KEF environment for researching illicit trafficking in nuclear materials. 0 0
Collaborative knowledge evaluation with a semantic Wiki: WikiDesign ICEIS 2010 - Proceedings of the 12th International Conference on Enterprise Information Systems English 0 0
Collaborative learning in wikis Chang Y.-K.
Morales-Arroyo M.A.
Than H.
Tun Z.
Zhe Wang
Collaborative learning
Constructive learning
Wiki
Education for Information English Wikis are a supporting tool for pupils' learning and collaboration. Tasks such as cooperative authoring, joint workbook creation, document review, group assignments, reflection notes and others have been tried out using wikis as a facilitating tool [1]. However, few studies have reported how students actually perceive some well-claimed benefits. This study investigated the perception of learning activities facilitated by wikis, and the effectiveness of several roles wikis might play in constructive and collaborative learning. This study tried to answer the following questions. How do students perceive a wiki as a learning tool? How does a wiki support constructive learning skills? How does a wiki support students' collaborative learning skills? How does collaboration in a wiki facilitate students' content learning and project work? The study was conducted using a survey method to examine the perception of wiki usage and collaborative and constructive learning. In the reported study, a questionnaire was used to gather data from 92 graduate students. The results suggest that using wikis was perceived to enhance collaborative knowledge building among students, but it did not contribute much to learning the subject matter, although students were more involved in the learning process than with conventional teaching methods. In other words, it indicates that students may not obtain a better return on investment for the time spent using a wiki as a learning tool. While the wiki did contribute to enriching the learning experience, further study is needed to investigate how to link the learning process with learning outcomes using this type of collaboration tool. 0 0
Collaborative modeling with semantic MediaWiki Frank Dengler
Happel H.-J.
Modeling
Process models
Semantic Wikis
UML
WikiSym 2010 English Modeling is an important aspect of information system development, allowing for abstract descriptions of systems and processes. Therefore, models are often characterized as communication artifacts between different stakeholders in a development process. However, modeling as such has turned out to be a specialist activity, requiring skills in arcane modeling languages and complex tools. In this paper, we suggest and present an approach for collaborative, Wiki-based modeling of process models and UML (class-)diagrams. While other web-based "lightweight" modeling tools are available, our approach consequently follows the Wiki-paradigm and allows us to semantically process the modeled information building upon Semantic MediaWiki. 0 0
Collaborative structuring of knowledge by experts and the public Morris T.
Daniel Mietchen
Citizendium
Expertise
Open education
Open governance
Open knowledge
Open science
Semantic web
Wiki
CEUR Workshop Proceedings English There is much debate on how public participation and expertise can be brought together in collaborative knowledge environments. One of the experiments addressing the issue directly is Citizendium. In seeking to harvest the strengths (and avoid the major pitfalls) of both user-generated wiki projects and traditional expert-approved reference works, it is a wiki to which anybody can contribute using their real names, while those with specific expertise are given a special role in assessing the quality of content. Upon fulfillment of a set of criteria like factual and linguistic accuracy, lack of bias, and readability by non-specialists, these entries are forked into two versions: a stable (and thus citable) approved "cluster" (an article with subpages providing supplementary information) and a draft version, the latter to allow for further development and updates. We provide an overview of how Citizendium is structured and what it offers to the open knowledge communities, particularly to those engaged in education and research. Special attention will be paid to the structures and processes put in place to provide for transparent governance, to encourage collaboration, to resolve disputes in a civil manner and by taking into account expert opinions, and to facilitate navigation of the site and contextualization of its contents. 0 1
Collective cross-document relation extraction without labelled data Yao L.
Riedel S.
McCallum A.
EMNLP 2010 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference English We present a novel approach to relation extraction that integrates information across documents, performs global inference and requires no labelled text. In particular, we tackle relation extraction and entity identification jointly. We use distant supervision to train a factor graph model for relation extraction based on an existing knowledge base (Freebase, derived in parts from Wikipedia). For inference we run an efficient Gibbs sampler that leads to linear time joint inference. We evaluate our approach both for an in-domain (Wikipedia) and a more realistic out-of-domain (New York Times Corpus) setting. For the in-domain setting, our joint model leads to 4% higher precision than an isolated local approach, but has no advantage over a pipeline. For the out-of-domain data, we benefit strongly from joint modelling, and observe improvements in precision of 13% over the pipeline, and 15% over the isolated baseline. 0 0
Collective knowledge engineering with semantic wikis Nalepa G.J. Knowledge engineering
Knowledge evaluation
Semantic wiki
Journal of Universal Computer Science English In this paper, the application of semantic wikis as a knowledge engineering tool in a collaborative environment is considered, and selected aspects of semantic wikis are discussed. The main apparent limitation of existing semantic wikis is the lack of an expressive knowledge representation mechanism. Building a knowledge base with a semantic wiki becomes complicated because of its collective nature, where a number of users collaborate in the knowledge engineering process. A need for knowledge evaluation and analysis facilities becomes clear. The paper discusses a new semantic wiki architecture called PlWiki. The most important concept is to provide strong knowledge representation and reasoning based on Horn clauses. The idea is to use Prolog clauses on the lower level to represent facts and relations, as well as to define rules on top of them. On the other hand, a higher-level Semantic Web layer using RDF support is provided. This allows for compatibility with Semantic MediaWiki while offering improved representation and reasoning capabilities. Another important idea is to provide an extension to an already available, flexible wiki solution (DokuWiki) instead of modifying an existing wiki engine. Using the presented architecture it is possible to analyze rule-based knowledge stored in the wiki. 0 0
Collective wisdom: Information growth in wikis and blogs Sanmay Das
Malik Magdon-Ismail
Collective intelligence
Social network
Proceedings of the ACM Conference on Electronic Commerce English Wikis and blogs have become enormously successful media for collaborative information creation. Articles and posts accrue information through the asynchronous editing of users who arrive both seeking information and possibly able to contribute information. Most articles stabilize to high quality, trusted sources of information representing the collective wisdom of all the users who edited the article. We propose a model for information growth which relies on two main observations: (i) as an article's quality improves, it attracts visitors at a faster rate (a rich get richer phenomenon); and, simultaneously, (ii) the chances that a new visitor will improve the article drops (there is only so much that can be said about a particular topic). Our model is able to reproduce many features of the edit dynamics observed on Wikipedia and on blogs collected from LiveJournal; in particular, it captures the observed rise in the edit rate, followed by 1/t decay. 0 0
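The model's two observations translate into a small simulation; the linear visit rate and the linearly vanishing improvement probability below are illustrative functional forms, not the paper's exact specification.
 import random
 
 def simulate_article(max_quality=100, horizon=10000):
     q, edits = 1, []
     for t in range(horizon):
         visit_prob = min(1.0, q / 50.0)  # (i) better articles attract more visits
         if random.random() < visit_prob:
             improve_prob = 1.0 - q / max_quality  # (ii) less is left to add
             if random.random() < improve_prob:
                 q += 1
                 edits.append(t)
     return edits  # edit rate first rises, then decays as quality saturates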
Combination of evidence for effective web search Dong Nguyen
Callan J.
NIST Special Publication English In this paper we describe Carnegie Mellon University's submission to the TREC 2010 Web Track. Our baseline run combines different methods, of which the spam prior and mixture model in particular were found the most effective. We also experimented with expansion over the Wikipedia corpus and found that picking the right Wikipedia articles for expansion can improve performance substantially. Furthermore, we did preliminary experiments with combining expansion over the Wikipedia corpus with expansion over the top-ranked web pages. 0 0
Combining process model and semantic wiki Albers A.
Ebel B.
Sauter C.
Documentation
IPeM
Process
Semantics
Wiki
11th International Design Conference, DESIGN 2010 English Increasing product complexity, global markets and shorter product life cycles are only a few reasons why the development of new products is a challenging task. A lot of knowledge is needed for, and generated in, product development processes. In this paper an approach is presented which suggests a combination of the integrated product engineering model (iPeM) and semantic wikis for supporting knowledge management in product development. To obtain a valid assessment of the practicability and usability of the approach, the implemented wiki system was used during an industrial predevelopment project. 0 0
Combining text/image in WikipediaMM task 2009 Moulin C.
Barat C.
Lemaitre C.
Gery M.
Ducottet C.
Largeron C.
Lecture Notes in Computer Science English This paper reports our multimedia information retrieval experiments carried out for the ImageCLEF Wikipedia task 2009. We extend our previous multimedia model defined as a vector of textual and visual information based on a bag of words approach [6]. We extract additional textual information from the original Wikipedia articles and we compute several image descriptors (local colour and texture features). We show that combining linearly textual and visual information significantly improves the results. 0 0
Combining wikipedia-based concept models for cross-language retrieval Benjamin Roth
Dietrich Klakow
Crosslanguage information retrieval
Explicit semantic analysis
Latent dirichlet allocation
Machine translation
IRFC English 0 0
Combining wikis and screen capture videos as a part of information systems science course Makkonen P. Connectivism
Constructivist learning
Learning of information systems
Problem-based learning
Screen capture video
Web-based learning environment
Wiki
16th Americas Conference on Information Systems 2010, AMCIS 2010 English This paper describes the combination of wikis and screen capture videos as a complementary addition to conventional lectures in an information management and information systems development course. Our basis was collaborative problem-based learning with the problems defined by students. The idea was that students were expected to find concepts or issues from our lecture material which are not well-defined or clarified for them. Our intention was that in this way we could run collaborative learning under the principles of the Jigsaw method. In this technique different students create presentations on different themes and the students teach each other by using these presentations. The students composed a Windows Media Player video focusing on the self-defined problems of a subject area. This was followed by a seminar on our wiki in which the students familiarized themselves with the videos of other students. The approach was beneficial for learning in many ways. 0 0
Comparing Methods for Single Paragraph Similarity Analysis B. Stone
S. Dennis
P. J. Kwantes
Topics in Cognitive Science The focus of this paper is two-fold. First, similarities generated from six semantic models were compared to human ratings of paragraph similarity on two datasets: 23 World Entertainment News Network paragraphs and 50 ABC newswire paragraphs. Contrary to findings on smaller textual units such as word associations (Griffiths, Tenenbaum, & Steyvers, 2007), our results suggest that when single paragraphs are compared, simple nonreductive models (word overlap and vector space) can provide better similarity estimates than more complex models (LSA, Topic Model, SpNMF, and CSM). Second, various methods of corpus creation were explored to facilitate the semantic models' similarity estimates. Removing numeric and single characters, and also truncating document length, improved performance. Automated construction of smaller Wikipedia-based corpora proved to be very effective, even improving upon the performance of corpora that had been chosen for the domain. Model performance was further improved by augmenting corpora with dataset paragraphs. 0 0
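The two simple models that fared well are easy to state precisely; the following is a minimal sketch assuming lowercase whitespace tokenization in place of the paper's actual preprocessing.
 from collections import Counter
 import math
 
 def word_overlap(p1, p2):
     a, b = set(p1.lower().split()), set(p2.lower().split())
     return len(a & b) / min(len(a), len(b)) if a and b else 0.0
 
 def cosine(p1, p2):
     # vector-space similarity over raw term-frequency vectors
     v1, v2 = Counter(p1.lower().split()), Counter(p2.lower().split())
     dot = sum(v1[w] * v2[w] for w in v1)
     norm = (math.sqrt(sum(c * c for c in v1.values())) *
             math.sqrt(sum(c * c for c in v2.values())))
     return dot / norm if norm else 0.0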
Computational Methods for Historical Research on Wikipedia's Archives Jonathan Cohen Wikipedia Archive
Data mining
Geocoding
Spatial Data Analysis
E-Research: A Journal of Undergraduate Work English This paper presents a novel study of geographic information implicit in the English Wikipedia archive. This project demonstrates a method to extract data from the archive with data mining, map the global distribution of Wikipedia editors through geocoding in GIS, and proceed with a spatial analysis of Wikipedia use in metropolitan cities. 0 0
Computing semantic relatedness between named entities using Wikipedia Hongyan Liu
Yirong Chen
Semantic relatedness
Web mining
Proceedings - International Conference on Artificial Intelligence and Computational Intelligence, AICI 2010 English In this paper the authors suggest a novel approach that uses Wikipedia to measure the semantic relatedness between Chinese named entities, such as names of persons, books, software, etc. The relatedness is measured through articles in Wikipedia that are related to the named entities. The authors select a set of "definition words", which are hyperlinks from these articles, and then compute the relatedness between two named entities as the relatedness between two sets of definition words. The authors propose two ways to measure the relatedness between two definition words: by Wiki articles related to the words or by categories of the words. The proposed approaches are compared with several baseline models through experiments. The experimental results show that this method yields satisfactory results. 0 0
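The set-based variant reduces to an overlap measure between definition-word sets; the sketch below assumes Jaccard overlap (the paper's exact weighting may differ) and hypothetical example sets.
 def relatedness(def_words_a, def_words_b):
     # overlap between the definition-word sets mined from the two articles
     a, b = set(def_words_a), set(def_words_b)
     return len(a & b) / len(a | b) if a | b else 0.0
 
 print(relatedness({"novel", "author", "dynasty"},
                   {"novel", "film", "author"}))  # 0.5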
Concept neighbourhoods in knowledge organisation systems Priss U.
Old L.J.
Advances in Knowledge Organization English This paper discusses the application of concept neighbourhoods (in the sense of formal concept analysis) to knowledge organisation systems. Examples are provided using Roget's Thesaurus, WordNet and Wikipedia categories. 0 0
Conceptual hierarchical clustering of documents using Wikipedia knowledge Gerasimos Spanakis
Georgios Siolas
Andreas Stafylopatis
Lecture Notes in Electrical Engineering English In this paper, we propose a novel method for conceptual hierarchical clustering of documents using knowledge extracted from Wikipedia. A robust and compact document representation is built in real-time using the Wikipedia API. The clustering process is hierarchical and creates cluster labels which are descriptive and important for the examined corpus. Experiments show that the proposed technique greatly improves over the baseline approach. 0 0
Connecting semantic MediaWiki to different triple stores using RDF2Go Schied M.
Kostlbacher A.
Wolff C.
Rdf2go
Semantic MediaWiki
Triple store connector
CEUR Workshop Proceedings English This article describes a generic triple store connector for the popular Semantic MediaWiki software to be used with different triple stores like Jena or Sesame. Using RDF2Go as an abstraction layer it is possible to easily exchange triple stores. This ongoing work is part of the opendrugwiki project, a semantic wiki for distributed pharmaceutical research groups. 0 0
Considering Adaptation and the "Function" of Traits in the Classroom, Using Wiki Tools Evolution: Education and Outreach English 0 0
Consistency without concurrency control in large, dynamic systems Mihai Letia
Nuno Preguica
Marc Shapiro
Operating Systems Review (ACM) English Replicas of a commutative replicated data type (CRDT) eventually converge without any complex concurrency control. We validate the design of a non-trivial CRDT, a replicated sequence, with performance measurements in the context of Wikipedia. Furthermore, we discuss how to eliminate a remaining scalability bottleneck: Whereas garbage collection previously required a system-wide consensus, here we propose a flexible two-tier architecture and a protocol for migrating between tiers. We also discuss how the CRDT concept can be generalised, and its limitations. 0 0
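For readers unfamiliar with CRDTs, the convergence idea can be seen in a structure far simpler than the replicated sequence validated in the paper; the grow-only set below is a standard textbook CRDT, not the authors' design: updates commute and merges are idempotent, so replicas converge without locks or consensus.
 class GSet:
     def __init__(self):
         self.items = set()
 
     def add(self, x):
         self.items.add(x)  # local update, no coordination needed
 
     def merge(self, other):
         # join is set union: commutative, associative, idempotent
         self.items |= other.items
 
 r1, r2 = GSet(), GSet()
 r1.add("a"); r2.add("b")
 r1.merge(r2); r2.merge(r1)
 assert r1.items == r2.items == {"a", "b"}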
Consolidating tools for model evaluation Olesen H.R.
Chang C.J.
ASTM
Atmospheric dispersion models
Boot
Kincaid
Model evaluation
Model Validation Kit
Sigplot
Wiki
International Journal of Environment and Pollution English An overview is provided of some central tools and data sets that are currently available for evaluation of atmospheric dispersion models. The paper serves as a guide to the Model Validation Kit, which was introduced as early as 1993 but has undergone a recent revision. The Model Validation Kit is a package of field data sets and software for model evaluation, plus various supplementary materials. Further, the paper outlines the main features of a corresponding package that implements the evaluation methodology of the American Society for Testing and Materials (ASTM), as specified in its standard guide D6589 on statistical evaluation of dispersion models. The paper gives a review of the features and limitations of the two packages. 0 0
Constructing Commons in the Cultural Environment MJ Madison
BM Frischmann
KJ Strandburg
Cornell Law Review This Article sets out a framework for investigating sharing and resource-pooling arrangements for information- and knowledge-based works. We argue that adapting the approach pioneered by Elinor Ostrom and her collaborators to commons arrangements in the natural environment provides a template for examining the construction of commons in the cultural environment. The approach promises to lead to a better understanding of how participants in commons and pooling arrangements structure their interactions in relation to the environments in which they are embedded, in relation to the information and knowledge resources that they produce and use, and in relation to one another. Some examples of the types of arrangements we have in mind are patent pools (such as the Manufacturer's Aircraft Association), open source software development projects (such as Linux), Wikipedia, the Associated Press, certain jamband communities, medieval guilds, and modern research universities. These examples are illustrative and far from exhaustive. Each involves a constructed cultural commons worthy of independent study, but independent studies get us only so far. A more systematic approach is needed. An improved understanding of cultural commons is critical for obtaining a more complete perspective on intellectual property doctrine and its interactions with other legal and social mechanisms for governing creativity and innovation, in particular, and information and knowledge production, conservation, and consumption, generally. We propose an initial framework for evaluating and comparing the contours of different commons arrangements. The framework will allow us to develop an inventory of structural similarities and differences among cultural commons in different industries, disciplines, and knowledge domains and shed light on the underlying contextual reasons for such differences. Structural inquiry into a series of case studies will provide a basis for developing theories to explain the emergence, form, and stability of the observed variety of cultural commons and, eventually, to design models to explicate and inform institutional design. The proposed approach would draw upon case studies from a whole range of disciplines. Among other things, we argue that theoretical approaches to constructed cultural commons must consider the creation and use of pooled resources, internal licensing conditions, management of external relationships, and institutional forms, along with the degree of collaboration among members, sharing of human capital, degrees of integration among participants, and any specified purpose of the arrangement. 0 0
Construction of a domain ontological structure from Wikipedia Xavier C.C.
De Lima V.L.S.
STIL 2009 - 2009 7th Brazilian Symposium in Information and Human Language Technology Portuguese Data extraction from Wikipedia for ontology construction, enrichment and population is an emerging research field. This paper describes a study on the automatic extraction of an ontological structure containing hyponymy and location relations from Wikipedia's Tourism category in Portuguese, illustrated with an experiment and an evaluation of its results. 0 0
Content filtering and the new censorship Edwards L. Filtering
Internet child pornography
IWF
Privatized censorship
4th International Conference on Digital Society, ICDS 2010, Includes CYBERLAWS 2010: The 1st International Conference on Technical and Legal Aspects of the e-Society English Since the famous Time magazine cover of 1995, nation states have been struggling to control access to adult and illegal material on the Internet. In recent years, strategies for such control have shifted from the use of traditional policing (largely ineffective in a transnational medium) to the use of takedown and especially filtering, applied by ISPs enrolled as "privatized censors" by the state. The role of the IWF in the UK has become a pivotal case study of how state and private interests have interacted to produce effective but non-transparent and non-accountable censorship, even in a Western democracy. The IWF's role has recently been significantly questioned after a stand-off with Wikipedia in December 2008. This paper will set the IWF's recent acts in the context of a massive increase in global filtering of Internet content, and suggest the creation of a Speech Impact Assessment process which might inhibit the growth of unchecked censorship. 0 0
Context-based term identification and extraction for ontology construction Goh H.-N.
Kiu C.-C.
Ontology construction
Taxonomy
Term identification and extraction
Wikipedia
Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering, NLP-KE 2010 English Ontology construction often requires a domain-specific corpus for conceptualizing the domain knowledge; specifically, an association of terms, relations between terms, and related instances. Identifying a list of significant terms is a vital task in constructing a practical ontology. In this paper, we present the use of a context-based term identification and extraction methodology for ontology construction from text documents. The methodology uses a taxonomy and Wikipedia to support automatic term identification and extraction from structured documents, under the assumption that candidate terms for a topic are often associated with its topic-specific keywords. A hierarchical relationship of super-topics and sub-topics is defined by the taxonomy; meanwhile, Wikipedia is used to provide context and background knowledge for the topics defined in the taxonomy, guiding the term identification and extraction. The experimental results show that the context-based term identification and extraction methodology is viable for defining topic concepts and their sub-concepts when constructing an ontology, and that it remains viable in small-corpus settings. 0 0
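A minimal sketch of the context-based filtering idea, assuming document-level keyword matching; the notion of topic-specific context keywords (e.g. gathered from the topic's Wikipedia article) follows the abstract, while the matching rule and threshold are simplifying assumptions.
 def extract_terms(candidates, doc_text, topic_keywords, min_hits=2):
     doc = doc_text.lower()
     # the document provides context for the topic if enough of the
     # topic's context keywords occur in it
     hits = sum(1 for kw in topic_keywords if kw.lower() in doc)
     if hits < min_hits:
         return []
     # keep candidate terms that actually occur in the contextual document
     return [t for t in candidates if t.lower() in doc]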
Conversation support system for people with language disorders - Making topic lists from Wikipedia Yamane Y.
Ishida H.
Hattori F.
Yasuda K.
Conversation support
Symbiotic computing
Wikipedia
Proceedings of the 9th IEEE International Conference on Cognitive Informatics, ICCI 2010 English A conversation support system for people with language disorders is proposed. Although the existing conversation support system "Raku-raku Jiyu Kaiwa" (Easy Free Conversation) is effective, it has insufficient topic words and a rigid topic list structure. To solve these problems, this paper proposes a method that makes topic lists from Wikipedia's millions of topic words. Experiments using the proposed topic list showed that subject utterances increased and the variety of spoken topics was expanded. 0 0
Converting a historical architecture encyclopedia into a semantic knowledge base René Witte
Ralf Krestel
Thomas Kappler
Lockemann P.C.
IEEE Intelligent Systems English The Handbook on Architecture (Handbuch der Architektur) was perhaps one of the most ambitious publishing projects ever. Like a 19th-century Wikipedia, it attempted nothing less than a full account of all architectural knowledge available at the time, both past and present. It covers topics from Greek temples to contemporary hospitals and universities; from the design of individual construction elements such as window sills to large-scale town planning; from physics to design; from planning to construction. It also discusses architectural history and styles and a multitude of other topics, such as building conception, statics, and interior design. 0 0
Cooperation and Cognition in Wikipedia Articles - A data-driven, philosophical and exploratory study R. Jesus Center for Philosophy of Science and Nature Studies, University of Copenhagen English Wikipedia has created and harnessed new social and work dynamics, which can provide insight into specific aspects of cognition, as amplified by a multitude of editors and their ping-pong style of editing, spatial and time flexibility within unique technology-community fostering features. Wikipedia's motto "The Free Encyclopedia That Anyone Can Edit" is analyzed to reveal human, technological and value actors within a theoretical context of distributed cognition, cooperation and technological agency. In the Data-driven studies using data from Wiki log pages, network visualization and bicliques are used and developed to focus closer on the process of collaboration in articles and meta-articles, and inside the article "Prisoner's dilemma" and the policy article "Neutral Point of View". The several tools used reveal clusters of interest, dense areas of coordination, a blend between coordination and direct editing work, and point to Wikipedia's dynamic stability in content and form. In the philosophical-cognitive studies, a distinction between Cognition for Planning and Cognition for Improvising is proposed to account for Wikipedia's success and mode of editing whereby many small edits are used for its improvement. In the exploratory part an installation of a 'live-Wiki' 'Our Coll/nn/ective Minds' piece reflects on several aspects of Wikis, free culture, open source, Do-It-Yourself by engaging in the debate in a more creative and participative form. These studies contribute to constructing an ecology of the article, a vision of humanities bottom-up, and a better understanding of cooperation and cognition within sociotechnological networks. 0 0
Coordination and division of labor in open content communities: The role of template messages in Wikipedia Alessandro Rossi
Loris Gaio
Den Besten M.
Dalle J.-M.
Proceedings of the Annual Hawaii International Conference on System Sciences English Though largely spontaneous and loosely regulated, the process of peer production within online communities is also supplemented by additional coordination mechanisms. In this respect, we study an emergent organizational practice of the Wikipedia community, the use of template messages, which seems to act as effective and parsimonious coordination device to signal quality concerns or other issues that need to be addressed. We focus on the template "NPOV", which signals breaches on the fundamental policy of neutrality of Wikipedia articles, and we show how and to what extent putting such template on a page affects the editing process. We notably find that intensity of editing increases immediately after the "NPOV" template appears and that controversies about articles which have received the attention of a more limited group of editors before they were tagged as controversial have a lower chance to be treated quickly. 0 1
CorpWiki: A self-regulating wiki to promote corporate collective intelligence through expert peer matching Ioanna Lykourentzou
Katerina Papadaki
Dimitrios J. Vergados
Despina Polemi
Vassili Loumos
English One of the main challenges that organizations face nowadays is the efficient use of individual employee intelligence, through machine-facilitated understanding of the collected corporate knowledge, to develop their collective intelligence. Web 2.0 technologies, like wikis, can be used to address this issue. Nevertheless, their application in corporate environments is limited, mainly due to their inability to ensure knowledge creation and assessment in a timely and reliable manner. In this study we propose CorpWiki, a self-regulating wiki system for the effective acquisition of high-quality knowledge content. Inserted articles undergo a quality assessment control by a large number of corporate peer employees. If the quality is inadequate, CorpWiki uses a novel expert peer matching (EPM) algorithm, based on feed-forward neural networks, which searches the human network of the organization to select the most appropriate peer employee to improve the quality of the article. Performance evaluation results, obtained through simulation modeling, indicate that CorpWiki improves the final quality levels of the inserted articles as well as the time and effort required to reach them. The proposed system, combining machine-learning intelligence with the individual intelligence of peer employees, aims to create new inferences regarding corporate issues, thus promoting collective organizational intelligence. 0 0
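To picture the kind of routing such an EPM step performs, here is a toy feed-forward scorer. The three features, the network shape, and the (untrained) random weights are all our assumptions, not the paper's algorithm; a real deployment would train the weights on historical assignment outcomes.

```python
# Illustrative expert-peer-matching sketch: a small feed-forward network
# scores each employee on assumed features (topic expertise, past review
# quality, available capacity); the top-scoring peer gets the article.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # hidden layer (untrained)
W2, b2 = rng.normal(size=4), 0.0               # output layer (untrained)

def score(features):
    h = np.tanh(features @ W1 + b1)  # hidden activations
    return float(h @ W2 + b2)        # scalar suitability score

# One feature row per employee: [topic_expertise, review_quality, capacity]
employees = {"alice": [0.9, 0.8, 0.3], "bob": [0.4, 0.9, 0.9]}
best = max(employees, key=lambda e: score(np.array(employees[e])))
print("route article to:", best)
```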
Corrigendum to "Wikipedia workload analysis for decentralized hosting" [Computer Networks 53 (11) (2009) 1830-1845] (DOI: 10.1016/j.comnet.2009.02.019) Guido Urdaneta
Guillaume Pierre
Maarten van Steen
Computer Networks 0 0
Creating a Wikipedia-based Persian-English word association dictionary Rahimi Z.
Shakery A.
Association dictionary
Information retrieval
Wikipedia
Data mining
2010 5th International Symposium on Telecommunications, IST 2010 English One of the most important issues in cross-language information retrieval is how to cross the language barrier between the query and the documents. Different translation resources have been studied for this purpose. In this research, we study using Wikipedia for query translation by constructing a Wikipedia-based bilingual association dictionary. We use English and Persian Wikipedia inter-language links to align related titles and then mine word-by-word associations between the two languages from the extracted alignments. We use the mined word association dictionary for translating queries in Persian-English cross-language information retrieval. Our experimental results on the Hamshahri corpus show that the proposed method is effective in extracting word associations and that the Persian Wikipedia is a promising translation resource. Using the association dictionary, we can improve the pure dictionary-based method, where the only translation resource is a bilingual dictionary, by 33.6%, and its recall by 26.2%. 0 0
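The mining step described here lends itself to a compact sketch. The code below is our own simplification, not the paper's method: it assumes title pairs aligned via inter-language links, splits titles on whitespace, and ranks target words for each source word by a plain conditional co-occurrence score.

```python
# Illustrative sketch: mine word-by-word associations from aligned
# Wikipedia title pairs by counting source/target word co-occurrence,
# then ranking target candidates by P(target_word | source_word).
from collections import Counter, defaultdict

def mine_associations(aligned_titles):
    """aligned_titles: list of (source_title, target_title) strings."""
    cooc = defaultdict(Counter)
    src_freq = Counter()
    for src, tgt in aligned_titles:
        src_words, tgt_words = src.lower().split(), tgt.lower().split()
        for s in src_words:
            src_freq[s] += 1
            for t in tgt_words:
                cooc[s][t] += 1
    # Keep the top 3 target words per source word with their scores.
    return {s: [(t, c / src_freq[s]) for t, c in cnt.most_common(3)]
            for s, cnt in cooc.items()}

# Toy data (transliterated stand-ins for Persian titles):
pairs = [("barnamenevisi computer", "computer programming"),
         ("olum computer", "computer science")]
print(mine_associations(pairs)["computer"])
```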
Creating and exploiting a Web of semantic data Tim Finin
Zareen Syed
Information extraction
Knowledge base
Semantic web
Wikipedia
ICAART 2010 - 2nd International Conference on Agents and Artificial Intelligence, Proceedings English Twenty years ago Tim Berners-Lee proposed a distributed hypertext system based on standard Internet protocols. The Web that resulted fundamentally changed the ways we share information and services, both on the public Internet and within organizations. That original proposal contained the seeds of another effort that has not yet fully blossomed: a Semantic Web designed to enable computer programs to share and understand structured and semi-structured information easily. We will review the evolution of the idea and technologies to realize a Web of Data and describe how we are exploiting them to enhance information retrieval and information extraction. A key resource in our work is Wikitology, a hybrid knowledge base of structured and unstructured information extracted from Wikipedia. 0 0
Creative Commons International: The International License Porting Project Catharina Maracke Jipitec When Creative Commons (CC) was founded in 2001, the core Creative Commons licenses were drafted according to United States copyright law. Since their first introduction in December 2002, Creative Commons licenses have been enthusiastically adopted by many creators, authors, and other content producers, not only in the United States but in many other jurisdictions as well. Global interest in the CC licenses prompted a discussion about the need for national versions of the CC licenses. To best address this need, the international license porting project ("Creative Commons International", formerly known as "International Commons") was launched in 2003. Creative Commons International works to port the core Creative Commons licenses to different copyright legislations around the world. The porting process includes both linguistically translating the licenses and legally adapting them to a particular jurisdiction, such that they are comprehensible and legally enforceable in the local jurisdiction while retaining the same key elements. Since its inception, Creative Commons International has found many supporters all over the world. With Finland, Brazil, and Japan as the first completed jurisdiction projects, experts around the globe have followed their lead and joined the international collaboration with Creative Commons to adapt the licenses to their local copyright law. This article aims to present an overview of the international porting process and to explain and clarify the international license architecture, its legal and promotional aspects, and its most recent challenges. 0 0
Crew: cross-modal resource searching by exploiting Wikipedia Chen Liu
Beng C. Ooi
Anthony K. H. Tung
Dongxiang Zhang
Multi-modal
Web 2.0
Wikipedia
English In Web 2.0, users have generated and shared massive amounts of resources in various media formats, such as news, blogs, audio, photos and videos. The abundance and diversity of these resources call for better integration to improve their accessibility. A straightforward approach is to link the resources via tags, so that resources from different modalities sharing the same tag are connected in a graph structure. This naturally motivates a new kind of information retrieval system, named cross-modal resource search, in which, given a query object from any modality, all related resources from other modalities can be retrieved in a convenient manner. However, due to tag homonymy and synonymy, such an approach returns results of low quality, because resources that share a tag without being semantically related are directly connected as well. In this paper, we propose to build the resource graph and perform query processing by exploiting Wikipedia. We construct a concept middleware between the layer of tags and the layer of resources to fully capture the semantic meaning of the resources. Such a cross-modal search system based on Wikipedia, named Crew, is built and demonstrates promising search results. 0 0
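A tiny sketch of the concept-middleware idea follows; the disambiguation table, function names, and example resources are invented for illustration and are not Crew's implementation. The point is that tags are resolved to Wikipedia concepts using a resource's textual context, so homonymous tags no longer connect unrelated resources.

```python
# Toy concept middleware: map each tag to a Wikipedia concept by word
# overlap with the resource's context, then compare concepts (not raw
# tags) when building the resource graph.
import re

TAG_TO_CONCEPTS = {  # hypothetical disambiguation candidates per tag
    "jaguar": ["Jaguar (animal)", "Jaguar Cars"],
}

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def disambiguate(tag, context):
    # Pick the candidate concept whose words overlap most with the context.
    return max(TAG_TO_CONCEPTS.get(tag, [tag]),
               key=lambda c: len(tokens(c) & tokens(context)))

resources = [
    ("photo1", "jaguar", "big cat in the amazon animal reserve"),
    ("news2", "jaguar", "new cars unveiled by jaguar this year"),
]
concepts = {rid: disambiguate(tag, ctx) for rid, tag, ctx in resources}
# photo1 and news2 share the tag "jaguar" but map to different concepts,
# so they are not connected in the resource graph.
print(concepts)
```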
Cross-cultural analysis of the Wikipedia community Noriko Hara
Pnina Shachaf
Khe Foon Hew
Wikipedia
Communities of practice
Cross cultural aspects
Non English languages
User behavior
Journal of the American Society for Information Science and Technology
English This article reports a cross-cultural analysis of four Wikipedias in different languages and demonstrates their roles as communities of practice (CoPs). Prior research on CoPs and on the Wikipedia community often lacks cross-cultural analysis. Despite the fact that over 75% of Wikipedia is written in languages other than English, research on Wikipedia primarily focuses on the English Wikipedia and tends to overlook Wikipedias in other languages. This article first argues that Wikipedia communities can be analyzed and understood as CoPs. Second, norms of behaviors are examined in four Wikipedia languages (English, Hebrew, Japanese, and Malay), and the similarities and differences across these four languages are reported. Specifically, typical behaviors on three types of discussion spaces (talk, user talk, and Wikipedia talk) are identified and examined across languages. Hofstede's dimensions of cultural diversity as well as the size of the community and the function of each discussion area provide lenses for understanding the similarities and differences. As such, this article expands the research on online CoPs through an examination of cultural variations across multiple CoPs and increases our understanding of Wikipedia communities in various languages. 0 4
Cross-language information retrieval using meta-language index construction and structural queries Jadidinejad A.H.
Mahmoudi F.
Lecture Notes in Computer Science English Structural query languages allow expert users to represent their information needs richly, but unfortunately their complexity makes them impractical in Web search engines. Automatically detecting the concepts in an unstructured user information need and generating a richly structured, multilingual equivalent query is an ideal solution. We utilize Wikipedia as a large concept repository, along with some state-of-the-art algorithms for extracting Wikipedia concepts from the user's information need. This process is called "query wikification". Our experiments on the TEL corpus at CLEF 2009 achieve +23% and +17% improvements in mean average precision and recall against the baseline. Our approach is unique in that it improves both precision and recall, two measures where improving one often hurts the other. 0 0
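One plausible reading of "query wikification" is a longest-match lookup of query spans against Wikipedia article titles. The sketch below illustrates that reading only; the title index and the #concept() query syntax are invented stand-ins, not taken from the paper.

```python
# Greedy longest-match wikification: find the longest query spans that
# match known Wikipedia titles and wrap them as concept units of a
# structured query; unmatched words pass through unchanged.
WIKI_TITLES = {"world war ii", "pacific ocean", "war"}  # hypothetical index

def wikify(query):
    words, i, parts = query.lower().split(), 0, []
    while i < len(words):
        for j in range(len(words), i, -1):          # try longest span first
            span = " ".join(words[i:j])
            if span in WIKI_TITLES:
                parts.append(f'#concept("{span}")')
                i = j
                break
        else:
            parts.append(words[i])
            i += 1
    return " ".join(parts)

print(wikify("battles of World War II in the Pacific Ocean"))
# battles of #concept("world war ii") in the #concept("pacific ocean")
```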
Cross-language plagiarism detection Martin Potthast
Alberto Barrón-Cedeño
Benno Stein
Paolo Rosso
Language Resources and Evaluation 0 0
Cross-language retrieval using link-based language models Benjamin Roth
Dietrich Klakow
Cross-Language Information Retrieval
Language modeling
LDA
Wikipedia
SIGIR 2010 Proceedings - 33rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval English We propose a cross-language retrieval model that is solely based on Wikipedia as a training corpus. The main contributions of our work are: 1. a translation model based on linked text in Wikipedia, together with an associated term weighting method; 2. a combination scheme that interpolates the link translation model with retrieval based on Latent Dirichlet Allocation. On the CLEF 2000 data we achieve an improvement over the best German-English system at the bilingual track (non-significant) and an improvement over a baseline based on machine translation (significant). 0 0
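The second contribution, the combination scheme, reduces to a linear interpolation of two document scores, score(d, q) = lambda * P_link(d, q) + (1 - lambda) * P_LDA(d, q). The sketch below shows only that mixing idea, with made-up component scores and an arbitrary lambda; it is not the paper's trained model.

```python
# Linear interpolation of two retrieval models' scores, as a sketch.
def interpolated_score(doc, query, link_model, lda_model, lam=0.7):
    return lam * link_model(doc, query) + (1 - lam) * lda_model(doc, query)

# Toy component models returning fabricated probabilities:
link = lambda d, q: {"doc1": 0.02, "doc2": 0.05}[d]
lda  = lambda d, q: {"doc1": 0.04, "doc2": 0.01}[d]

ranked = sorted(["doc1", "doc2"],
                key=lambda d: interpolated_score(d, "auto", link, lda),
                reverse=True)
print(ranked)  # ['doc2', 'doc1'] with these toy scores
```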
Cross-lingual analysis of concerns and reports on crimes in blogs Hiroyuki Nakasaki
Abe Y.
Takehito Utsuro
Kawada Y.
Tomohiro Fukuhara
Kando N.
Yoshioka M.
Hiroshi Nakagawa
Yoji Kiyota
Blog feed retrieval
Crime reports
Cross-lingual blog analysis
Wikipedia
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering English Among the domains and topics whose issues are frequently argued in the blogosphere, the domain of crime is one of the most seriously discussed by various kinds of bloggers. Such information on crimes in blogs is especially valuable for outsiders from abroad who are not familiar with the cultures and crimes of foreign countries. This paper proposes a framework for cross-lingually analyzing people's concerns, reports, and experiences regarding crimes in their own blogs. In the retrieval of blog feeds/posts, we take two approaches, focusing on various types of bloggers such as experts in the crime domain and victims of criminal acts. 0 0
Crowdsourcing a Wikipedia Vandalism Corpus Martin Potthast Wikipedia
Vandalism detection
Evaluation
Corpus
SIGIR English We report on the construction of the PAN Wikipedia vandalism corpus, PAN-WVC-10, using Amazon’s Mechanical Turk. The corpus compiles 32 452 edits on 28 468 Wikipedia articles, among which 2 391 vandalism edits have been identified. 753 human annotators cast a total of 193 022 votes on the edits, so that each edit was reviewed by at least 3 annotators, and the achieved level of agreement was analyzed in order to label each edit as “regular” or “vandalism.” The corpus is available free of charge. 6 1
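For readers curious how redundant votes become labels, here is one simple aggregation scheme, majority vote with an agreement threshold. The threshold and the "needs-review" fallback are our illustration; the corpus's actual labeling procedure analyzes agreement in more depth.

```python
# Aggregate redundant annotator votes into a single edit label.
from collections import Counter

def label_edit(votes, min_agreement=2/3):
    """votes: list of 'regular' / 'vandalism' strings from annotators."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    agreement = n / len(votes)
    return label if agreement >= min_agreement else "needs-review"

print(label_edit(["vandalism", "vandalism", "regular"]))  # vandalism
print(label_edit(["vandalism", "regular"]))               # needs-review
```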
Crowdsourcing and Open Access: Collaborative Techniques for Disseminating Legal Materials and Scholarship Timothy K. Armstrong Open Access
Peer Production
Crowdsourcing
Online Communities
Distributed Proofreaders
Wikipedia
Wikisource
Santa Clara Computer and High Technology Law Journal English This short essay surveys the state of open access to primary legal source materials (statutes, judicial opinions and the like) and legal scholarship. The ongoing digitization phenomenon (illustrated, although by no means typified, by massive scanning endeavors such as the Google Books project and the Library of Congress's efforts to digitize United States historical documents) has made a wealth of information, including legal information, freely available online, and a number of open-access collections of legal source materials have been created. Many of these collections, however, suffer from similar flaws: they devote too much effort to collecting case law rather than other authorities, they overemphasize recent works (especially those originally created in digital form), they do not adequately hyperlink between related documents in the collection, their citator functions are haphazard and rudimentary, and they do not enable easy user authentication against official reference sources. The essay explores whether some of these problems might be alleviated by enlarging the pool of contributors who are working to bring paper records into the digital era. The same "peer production" process that has allowed far-flung communities of volunteers to build large-scale informational goods like the Wikipedia encyclopedia or the Linux operating system might be harnessed to build a digital library. The essay critically reviews two projects that have sought to "crowdsource" proofreading and archiving of texts: Distributed Proofreaders, a project frequently held up as a model in the academic literature on peer production; and Wikisource, a sister site of Wikipedia that improves on Distributed Proofreaders in a number of ways. The essay concludes by offering a few illustrations meant to show the potential for using Wikisource as an open-access repository for primary source materials and scholarship, and considers some possible drawbacks of the crowdsourced approach. 4 1
Crowdsourcing semantic content: A model and two applications Angelo Di Iorio
Alberto Musetti
Silvio Peroni
Fabio Vitali
Authoring
Community
Interfaces
Ontology
Template
3rd International Conference on Human System Interaction, HSI'2010 - Conference Proceedings English While the original design of wikis was mainly focused on a completely open free-form text model, semantic wikis have since moved towards a more structured editing model: users are guided to create ontological data in addition to text by using ad-hoc editing interfaces. This paper introduces OWiki, a framework for creating ontological content within wikis that are not natively semantic. Ontology-driven forms and templates are the key concepts of the system, which allows even inexperienced users to create consistent semantic data with little effort. Multiple and very different instances of OWiki are presented here. The expressive power and flexibility of OWiki proved to be the right trade-off for deploying authoring environments in such different domains, ensuring both editing freedom and semantic data consistency. 0 0
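The form-to-ontology mapping at the core of such systems can be pictured in a few lines. The field names, property identifiers, and triple output below are invented for illustration and are not OWiki's implementation.

```python
# Toy ontology-driven form: a template maps form fields to ontology
# properties, so user input becomes consistent semantic statements.
PERSON_FORM = {               # hypothetical template for a "Person" class
    "name": "foaf:name",
    "birthplace": "dbo:birthPlace",
}

def form_to_triples(subject, form, values):
    """Turn validated form input into (subject, property, value) triples."""
    return [(subject, form[field], value)
            for field, value in values.items() if field in form]

triples = form_to_triples("wiki:Ada_Lovelace", PERSON_FORM,
                          {"name": "Ada Lovelace", "birthplace": "London"})
for t in triples:
    print(t)
```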
Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework Buecheler T.
Sieg J.H.
Fuchslin R.M.
Pfeifer R.
Artificial Life XII: Proceedings of the 12th International Conference on the Synthesis and Simulation of Living Systems, ALIFE 2010 English The lonely researcher trying to crack a problem in her office still plays an important role in fundamental research. However, a vast exchange, often with participants from different fields, takes place in modern research activities and projects. In the "Research Value Chain" (a simplified depiction of the Scientific Method as a process, used for the analyses in this paper), interactions between researchers and other individuals (intentional or not), within or outside their respective institutions, can be regarded as occurrences of Collective Intelligence. "Crowdsourcing" (Howe 2006) is a special case of such Collective Intelligence. It leverages the wisdom of crowds (Surowiecki 2004) and is already changing the way groups of people produce knowledge, generate ideas and make them actionable. A very famous example of a Crowdsourcing outcome is the distributed encyclopedia Wikipedia. Published research agendas ask how techniques addressing "the crowd" can be applied to non-profit environments, namely universities and fundamental research in general. This paper discusses how the non-profit "Research Value Chain" can potentially benefit from Crowdsourcing. Further, a research agenda is proposed that investigates a) the applicability of Crowdsourcing to fundamental science and b) the impact of distributed agent principles from Artificial Intelligence research on the robustness of Crowdsourcing. Insights and methods from different research fields will be combined, such as complex networks, spatially embedded interacting agents or swarms, and dynamic networks. Although the ideas in this paper essentially outline a research agenda, preliminary data from two pilot studies show that non-scientists can support scientific projects with high-quality contributions. Intrinsic motivators (such as "fun") are present, which suggests that individuals are not (only) contributing to such projects with a view to large monetary rewards. 1 0
Crowdsourcing: How and Why Should Libraries Do It? R. Holley D-Lib Magazine English The definition and purpose of crowdsourcing and its relevance to libraries are discussed, with particular reference to the Australian Newspapers service, FamilySearch, Wikipedia, Distributed Proofreaders, Galaxy Zoo and The Guardian MP's Expenses Scandal. These services have harnessed thousands of digital volunteers who transcribe, create, enhance and correct text, images and archives. Known facts about crowdsourcing are presented, and helpful tips and strategies for libraries beginning to crowdsource are given. 0 0
"Cursed with self-awareness": gender-bending Suzanne M. Daughton Subversion Women's Studies in Communication 0 0
DIY eBooks: Collaborative publishing made easy Battle S.
Fabio Vitali
Angelo Di Iorio
Bernius M.
Henderson T.
Choudhury M.
Document metadata
EBooks
Semantic web
Wiki
Proceedings of SPIE - The International Society for Optical Engineering English Print is undergoing a revolution as significant as the invention of the printing press. The emergence of ePaper is a major disruption for the printing industry, defining a new medium with the potential to redefine publishing in a way that is as different from today's Web as the Web is from traditional print. In this new eBook ecosystem we see users not just as consumers of eBooks, but as active prosumers able to collaboratively create, customize and publish their own eBooks. We describe a transclusive, collaborative publishing framework for the web. 0 0
DSMW: A distributed infrastructure for the cooperative edition of semantic wiki documents DocEng2010 - Proceedings of the 2010 ACM Symposium on Document Engineering English 0 0
DSMW: Distributed Semantic MediaWiki Hala Skaf-Molli
Gérôme Canals
Pascal Molli
Lecture Notes in Computer Science English DSMW is an extension of Semantic MediaWiki (SMW) that allows the creation of a network of SMW servers sharing common semantic wiki pages. DSMW users can create communication channels between servers and use a publish-subscribe approach to manage change propagation. DSMW synchronizes concurrent updates of shared semantic pages to ensure their consistency. It offers new collaboration modes to semantic wiki users and supports dataflow-oriented processes. 0 0
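A toy publish-subscribe loop, loosely modeled on the description above, is sketched below. Class and method names are ours, and the last-writer-wins merge is a deliberate simplification of DSMW's actual consistency mechanism for concurrent updates.

```python
# Minimal pub-sub change propagation between wiki servers (illustrative).
class Channel:
    def __init__(self):
        self.subscribers = []
    def publish(self, page, text):
        for server in self.subscribers:
            server.apply(page, text)

class WikiServer:
    def __init__(self, name):
        self.name, self.pages = name, {}
    def subscribe(self, channel):
        channel.subscribers.append(self)
    def edit(self, channel, page, text):
        self.pages[page] = text
        channel.publish(page, text)  # push the change to subscribed peers
    def apply(self, page, text):
        self.pages[page] = text      # naive last-writer-wins replacement

feed = Channel()
a, b = WikiServer("A"), WikiServer("B")
b.subscribe(feed)
a.edit(feed, "Main_Page", "v1 of shared semantic page")
print(b.pages["Main_Page"])  # replica on B now matches A
```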
Dandelion: supporting coordinated, collaborative authoring in Wikis Changyan Chi
Michelle X. Zhou
Min Yang
Wenpeng Xiao
Yiqin Yu
Xiaohua Sun
Awareness
Collaborative authoring
Coordination
Conference on Human Factors in Computing Systems English 0 1