|Related keyword(s)||image bank|
encyclopedia is included as a keyword or extra keyword in 0 datasets, 0 tools and 42 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Postures d’opposition à Wikipédia en milieu intellectuel en France||Alexandre Moatti||Wikipédia, objet scientifique non identifié||French||23 November 2015||6||0|
|La connaissance est un réseau: Perspective sur l’organisation archivistique et encyclopédique||Martin Grandjean||Les Cahiers du Numérique||French||2014||Network analysis does not revolutionize our objects of study; it revolutionizes the researcher's perspective on them. Organized as a network, information becomes relational. It potentially makes possible the creation of new knowledge, as with an encyclopedia whose links between entries weave a web that can be analyzed in terms of structural characteristics, or with an archival directory whose hierarchy is fundamentally altered by an index recomposing the information-exchange network within a group of people. On the basis of two examples of tools for managing, preserving and promoting knowledge, the online encyclopedia Wikipedia and the archives of the Intellectual Cooperation of the League of Nations, this paper discusses the relationship between researchers and their object understood as a whole.|
|Wikipedia and encyclopedic production||Loveland J.||New Media and Society||English||2013||Wikipedia is often presented within a foreshortened or idealized history of encyclopedia-making. Here we challenge this viewpoint by contextualizing Wikipedia and its modes of production on a broad temporal scale. Drawing on examples from Roman antiquity onward, but focusing on the years since 1700, we identify three forms of encyclopedic production: compulsive collection, stigmergic accumulation, and corporate production. While each could be characterized as a discrete period, we point out the existence of significant overlaps in time as well as with the production of Wikipedia today. Our analysis explores the relation of editors, their collaborators, and their modes of composition with respect to changing notions of authorship and originality. Ultimately, we hope our contribution will help scholars avoid ahistorical claims about Wikipedia, identify historical cases germane to the social scientist's concerns, and show that contemporary questions about Wikipedia have a lifespan exceeding the past decade.||0||0|
|Assessing the accuracy and quality of Wikipedia entries compared to popular online encyclopaedias||Imogen Casebourne||English||2 August 2012||8||0|
|Wikipédia, un projet hors normes ?||Rémi Bachelet||Responsabilité & Environnement (Annales des Mines)||French||24 July 2012||Wikipedia and ISO both represent a crystallization of knowledge, whether know-how (ISO) or encyclopedic knowledge (Wikipedia). Both are founded on the search for consensus and on collaboration in the form of written texts. From the outset, Wikipedia adopted rules, starting with its five founding principles. Its growth led to the development of a meta space (e.g., talk pages) whose operation required codification.||2||0|
|Wikipedia de la A a la W||Tomás Saorín-Pérez||Editorial UOC||Spanish||July 2012||Wikipedia is a reality that works, even if in theory it might seem an unrealizable dream. A handful of enthusiasts has redefined the classic concept of the encyclopedia from scratch and built the most widely used reference source in history. Is it of sufficient quality? The answer is yes, and to justify it one must look closely at the mechanisms it is equipped with, which allow it to reach whatever level of quality is desired by combining the effort of thousands of self-organized volunteer editors. Wikipedia is at once content and people. It is time to get to know it from the inside and to strengthen its commitment to open knowledge from cultural, scientific and educational institutions. Participating in Wikipedia is a way of learning from this incredible global laboratory of the social construction of organized information.||0||0|
|Open Source Production of Encyclopedias: Editorial Policies at the Intersection of Organizational and Epistemological Trust||De Laat P.B.||Social Epistemology||English||2012||The ideas behind open source software are currently applied to the production of encyclopedias. A sample of six English text-based, neutral-point-of-view, online encyclopedias of the kind are identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion; that is, from granting unlimited trust to limited trust. Their conceptions of the proper role for experts are also explored and it is argued that to a great extent they determine editorial policies. Subsequently, internal discussions about quality guarantee at Wikipedia are rendered. All indications are that review and "super-review" of new edits will become policy, to be performed by Wikipedians with a better reputation. Finally, while for encyclopedias the issue of organizational trust largely coincides with epistemological trust, a link is made with theories about the acceptance of testimony. It is argued that both non-reductionist views (the "acceptance principle" and the "assurance view") and reductionist ones (an appeal to background conditions, and a-newly defined-"expertise view") have been implemented in editorial strategies over the past decade.||0||0|
|Proteopedia: Exciting advances in the 3D encyclopedia of biomolecular structure||Prilusky J.||NATO Science for Peace and Security Series A: Chemistry and Biology||English||2012||Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia ( http://www.proteopedia.org ) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically-generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and availability of biological assemblies of protein structures alongside their asymmetric units. © 2012 Springer Science+Business Media B.V.||0||0|
|The people's encyclopedia under the gaze of the sages: a systematic review of scholarly research on Wikipedia||Chitu Okoli, Finn Årup Nielsen||English||2012||Wikipedia has become one of the ten most visited sites on the Web, and the world’s leading source of Web reference information. Its rapid success has inspired hundreds of scholars from various disciplines to study its content, communication and community dynamics from various perspectives. This article presents a systematic review of scholarly research on Wikipedia. We describe our detailed, rigorous methodology for identifying over 450 scholarly studies of Wikipedia. We present the WikiLit website (http://wikilit.referata.com), where most of the papers reviewed here are described in detail. In the major section of this article, we then categorize and summarize the studies. An appendix features an extensive list of resources useful for Wikipedia researchers.||15||1|
|Using Wikipedia for extracting hierarchy and building geo-ontology||Ngo Q.-H.||International Journal of Web Information Systems||English||2012||Purpose - This paper aims to serve two main purposes: first, it seeks to provide an overview of the location hierarchy from the highest divisions (continents) to the lowest divisions (wards, villages) in reality and in Wikipedia pages. Secondly, it aims to introduce an approach to building a geographical ontology from Wikipedia. Design/methodology/approach - The paper first reviews existing applications which extract information from Wikipedia and use it as a data resource to develop natural language processing tools. The paper also reviews the structure of Wikipedia pages which show a location's information. Based on the analysis, the paper then proposes an approach to extract the location hierarchy as well as geographical characteristics for the geo-ontology. The approach also rebuilds the relations between locations in the ontology. Findings - Existing location name systems are mainly based on probabilistic locations, which are mined from the data, and they lack the administrative relations between locations for full levels and all countries and territories. The literature review on geographical hierarchy and on using Wikipedia for natural language processing tasks offers an approach to build a geographical ontology from Wikipedia pages. The proposed approach is believed to be the first which provides a full geo-ontology for all countries. Practical implications - The paper builds a geo-ontology with full levels for all countries and territories. The administrative relations between locations are needed for real-world applications. Originality/value - The comprehensive overview of existing work on geo-ontology provides a valuable reference for researchers and system developers in related research communities. The proposed approach to build a geographical ontology using Wikipedia offers a promising alternative for building a knowledge system from a free online multi-language encyclopedia.||0||0|
|How Accurate is Wikipedia?||Natalie Wolchover||LiveScience||English||24 January 2011||Numerous studies have rated Wikipedia's accuracy. On the whole, the web encyclopedia is fairly reliable, but Life's Little Mysteries' own small investigation produced mixed results.||0||1|
|Bancos de imágenes para proyectos enciclopédicos: el caso de Wikimedia Commons||Tomás Saorín-Pérez||El profesional de la información||Spanish||2011||This paper presents the characteristics and functionalities of the Wikimedia Commons image databank shared by all Wikipedia projects. The process of finding images and illustrating Wikipedia articles is also explained, along with how to add images to the bank. The role of cultural institutions in promoting free and open cultural heritage content is highlighted.||5||1|
|Human gene/protein synonym dictionary from WikiLinks||Wagholikar K.||2011 ACM Conference on Bioinformatics, Computational Biology and Biomedicine, BCB 2011||English||2011||Many genes and proteins have alternate names (synonyms) in scientific literature, posing a challenge to effectively organize and exchange information. To address this issue, there have been several initiatives to collate the synonyms into dictionaries. Biothesaurus is an extensive dictionary derived from multiple authoritative sources. Despite its extensive coverage, there are still some synonyms not covered by Biothesaurus. Wikipedia could be a useful source of the missing synonyms, as it has a diverse set of contributors in comparison with the authoritative resources that constitute Biothesaurus. This paper reports a feasibility study of using WikiLinks to find synonyms that are not currently covered by Biothesaurus. Wikipedia pages containing the word gene or protein were included in this study. 121 candidate synonyms were extracted from WikiLinks referencing 7,339 (16%) human genes. This number is significant, given that Biothesaurus has been earlier evaluated to have a coverage of 87%. Hence, WikiLinks were found to be a useful source for collating gene synonyms that are not recorded in authoritative databases. Biothesaurus was evaluated to cover 52% of the extracted candidate synonyms not documented in NCBI. The current study will be extended in scope to cover all genes and to extract synonyms from free text in Wikipedia pages.||0||0|
|The PlanetMath Encyclopedia||Joseph Corneli||MathWikis||English||2011||The history of PlanetMath.org is discussed, tracing its inception, stabilization, and some defining challenges. Research and outreach efforts that have been conducted in the course of work on the PlanetMath project are reviewed, and the scope and reach of the resource are discussed. Recent developments are indicated briefly. Some remarks evaluating PlanetMath’s trajectory and content conclude the paper.||0||0|
|The intelligible as a new world? Wikipedia versus the eighteenth-century Encyclopédie||Perovic S.||Paragraph||English||2011||For some time now, certain theorists have been urging us to move beyond textbased understandings of culture to consider the impact of new media on the structure and organization of knowledge. This article, however, reconsiders the usual priority given to digital media by comparing Wikipedia, the free, user led online Encyclopedia, with Diderot and D'Alembert's eighteenth-century Encyclopédie. It begins by suggesting that the dichotomy between information system and text is not sufficient for describing the differences between the two. It then considers more closely the type of critical thinking presupposed by the Encyclopédie. It concludes by raising the question of the role of judgement in making sense of any encyclopedia in a modern world in which knowledge systems only coexist on the condition of being partially blind to one another.||0||0|
|Mediating at the student-wikipedia intersection||Rand A.D.||Journal of Library Administration||English||2010||Wikipedia is a free online encyclopedia. The encyclopedia is openly edited by registered users. Wikipedia editors can edit their own and others' entries, and some abuse of this editorial power has been unveiled. Content authors have also been criticized for publishing less than accurate content. Educators and students acknowledge casual use of Wikipedia in spite of its perceived inaccuracies. Use of the online encyclopedia as a reference resource in scholarly papers is still debated. The increasing popularity of Wikipedia has led to an influx of research articles analyzing the validity and content of the encyclopedia. This study provides an analysis of relevant articles on academic use of Wikipedia. This analysis attempts to summarize the status of Wikipedia in relation to the scope (breadth) and depth of its contents and looks at content validity issues that are of concern to the use of Wikipedia for higher education. The study seeks to establish a reference point from which educators can make informed decisions about scholarly use of Wikipedia as a reference resource. © A. D. Rand.||0||0|
|Mental discipline||Grier D.A.||Computer||English||2010||The practices of engineering and computer science are influenced by the same forces that shape manual labor and office work. Occasionally, it's useful to reassess our skills and question the value of our training.||0||0|
|Ontological parsing of encyclopedia information||Bocharov V.||Lecture Notes in Computer Science||English||2010||Semi-automatic ontology learning from an encyclopedia is presented, with primary focus on the syntactic and semantic analysis of definitions.||0||0|
|The spirit of combination||Grier D.A.||Computer||English||2010||We find new ideas by starting from where we are and asking the simple question, "Where can we go from here?"||0||0|
|Wikipedia, heterotopi och versioner av kulturella minnen||Haider J.||Human IT||Swedish||2010||The article draws together studies on encyclopaedic expressions throughout history with Foucault's notion of heterotopia, i.e. actually existing utopias or 'other', particular spaces that exist besides society's regular spaces and which work according to their own rules. It explores how we can understand contemporary online encyclopaedias, specifically Wikipedia, as digital heterotopias. For this it investigates Wikipedia as an archive for our cultural memory in its different - and sometimes contested - versions. In conclusion, participatory online encyclopaedias are framed as a continuation of an Enlightenment ideal as well as distinct networked spaces that are made possible through the affordances of the Internet. © Författarna. Publicerad av Högskolan i Borås.||0||0|
|WikipediaViz: Conveying article quality for casual wikipedia readers||Fanny Chevalier||IEEE Pacific Visualization Symposium 2010, PacificVis 2010 - Proceedings||English||2010||As Wikipedia has become one of the most used knowledge bases worldwide, the problem of the trustworthiness of the information it disseminates becomes central. With WikipediaViz, we introduce five visual indicators integrated to the Wikipedia layout that can keep casual Wikipedia readers aware of important meta-information about the articles they read. The design of WikipediaViz was inspired by two participatory design sessions with expert Wikipedia writers and sociologists who explained the clues they used to quickly assess the trustworthiness of articles. According to these results, we propose five metrics for maturity and quality assessment of Wikipedia articles and their accompanying visualizations to provide the readers with important clues about the editing process at a glance. We also report and discuss the results of the user studies we conducted. Two preliminary pilot studies show that all our subjects trust Wikipedia articles almost blindly. With the third study, we show that WikipediaViz significantly reduces the time required to assess the quality of articles while maintaining a good accuracy.||0||0|
|Textual curators and writing machines: authorial agency in encyclopedias, print to digital||Krista A. Kennedy||English||July 2009||Wikipedia is often discussed as the first of its kind: the first massively collaborative, Web-based encyclopedia that belongs to the public domain. While it’s true that wiki technology enables large-scale, distributed collaborations in revolutionary ways, the concept of a collaborative encyclopedia is not new, and neither is the idea that private ownership might not apply to such documents. More than 275 years ago, in the preface to the 1728 edition of his Cyclopædia, Ephraim Chambers mused on the intensely collaborative nature of the volumes he was about to publish. His thoughts were remarkably similar to contemporary intellectual property arguments for Wikipedia, and while the composition processes involved in producing these texts are influenced by the available technologies, they are also unexpectedly similar. This dissertation examines issues of authorial agency in these two texts and shows that the “Author Construct” is not static across eras, genres, or textual technologies. In contrast to traditional considerations of the poetic author, the encyclopedic author demonstrates a different form of authorial agency that operates within strict genre conventions and does not place a premium on originality. This and related variations challenge contemporary ideas concerning the divide between print and digital authorship as well as the notion that new media intellectual property arguments are without historical precedent.||25||0|
|Creating "the Wikipedia of pros and cons"||Brooks Lindsay||WikiSym||English||2009||0||0|
|Editing encyclopedias for fun and aggravation||Ross J.I.||Publishing Research Quarterly||English||2009||This collaborative, retrospective autoethnography begins by offering an overview of the encyclopedias with which we have been involved, as both contributors and consulting editors, over the past decade. We then review our strategies for recruiting authors and maintaining their interest to ensure the highest quality entries; it also covers the mechanics of processing these entries. Next, we discuss the actual and perceived benefits of editing an encyclopedia, the most significant issues we encountered, and our solutions. Finally, we contextualize the previous information in light of recent changes in the scholarly publishing industry.||0||0|
|Enciclopédias na web 2.0: colaboração e moderação na Wikipédia e Britannica Online||Carlos Frederico de Brito d’Andréa||Em Questão: Revista da Faculdade de Biblioteconomia e Comunicação da UFRGS||Portuguese||2009||This paper compares the editorial policies of Wikipedia and Britannica Online, with a focus on openness to public participation and on the mechanisms of moderation that allow or limit the collective editing of articles. The description and analysis of these mechanisms consider the technical resources that enable collaboration and validation of information, the rules of internal content management, and the community of users. Despite some common features shared by both projects, we point out fundamental differences in these initiatives, such as valuing expertise (Britannica) or the engagement of users (Wikipedia).||16||2|
|Publishing in hard times||Jovanovich P.||Publishing Research Quarterly||English||2009||The former head of Pearson Education, McGraw-Hill Book Publishing, and Harcourt Brace Jovanovich presents an assessment of the state of publishing during this recession. The paper analyses actions taken by publishers in previous downturns, recommends approaches to dealing with recessions that maintain long-term health of the business. Recessions can mask more profound problems, such as happened in the newspaper industry and as is occurring in trade publishing. The paper also analyzes outlook for college and school publishing.||0||0|
|Collaboration in context: Comparing article evolution among subject disciplines in Wikipedia||Katherine Ehmann and Jamshid Beheshti||2008||This exploratory study examines the relationships between article and Talk page contributions and their effect on article quality in Wikipedia. The sample consisted of three articles each from the hard sciences, soft sciences, and humanities, whose talk page and article edit histories were observed over a five-month period and coded for contribution types. Richness and neutrality criteria were then used to assess article quality and results were compared within and among subject disciplines. This study reveals variability in article quality across subject disciplines and a relationship between Talk page discussion and article editing activity. Overall, results indicate the initial article creator’s critical role in providing a framework for future editing as well as a remarkable stability in article content over time.||0||2|
|Comparison of Wikipedia and other encyclopedias for accuracy, breadth, and depth in historical articles||Lucy Holman Rector||Reference Services Review||English||2008||Purpose – This paper seeks to provide reference librarians and faculty with evidence regarding the comprehensiveness and accuracy of Wikipedia articles compared with respected reference resources.
Design/methodology/approach – This content analysis evaluated nine Wikipedia articles against comparable articles in Encyclopaedia Britannica, The Dictionary of American History and American National Biography Online in order to compare Wikipedia's comprehensiveness and accuracy. The researcher used a modification of a stratified random sampling and a purposive sampling to identify a variety of historical entries and compared each text in terms of depth, accuracy, and detail.
Findings – The study did reveal inaccuracies in eight of the nine entries and exposed major flaws in at least two of the nine Wikipedia articles. Overall, Wikipedia's accuracy rate was 80 percent compared with 95-96 percent accuracy within the other sources. This study does support the claim that Wikipedia is less reliable than other reference resources. Furthermore, the research found at least five unattributed direct quotations and verbatim text from other sources with no citations.
Research limitations/implications – More research must be undertaken to analyze Wikipedia entries in other disciplines in order to judge the source's accuracy and overall quality. This paper also shows the need for analysis of Wikipedia articles' histories and editing process.
Practical implications – This research provides a methodology for further content analysis of Wikipedia articles. Originality/value – Although generalizations cannot be made from this paper alone, the paper provides empirical data to support concerns regarding the accuracy and authoritativeness of Wikipedia.
|Learning to Trust the Crowd: Some Lessons from Wikipedia||F. Xavier Olleros||MCETECH||English||2008||0||3|
|The importance of link evidence in Wikipedia||Jaap Kamps||Lecture Notes in Computer Science||English||2008||Wikipedia is one of the most popular information sources on the Web. The free encyclopedia is densely linked. The link structure in Wikipedia differs from the Web at large: internal links in Wikipedia are typically based on words naturally occurring in a page, and link to another semantically related entry. Our main aim is to find out if Wikipedia's link structure can be exploited to improve ad hoc information retrieval. We first analyse the relation between Wikipedia links and the relevance of pages. We then experiment with use of link evidence in the focused retrieval of Wikipedia content, based on the test collection of INEX 2006. Our main findings are: First, our analysis of the link structure reveals that the Wikipedia link structure is a (possibly weak) indicator of relevance. Second, our experiments on INEX ad hoc retrieval tasks reveal that if the link evidence is made sensitive to the local context we see a significant improvement of retrieval effectiveness. Hence, in contrast with earlier TREC experiments using crawled Web data, we have shown that Wikipedia's link structure can help improve the effectiveness of ad hoc retrieval.||0||0|
|Toward an Epistemology of Wikipedia||Don Fallis||Journal of the American Society for Information Science and Technology, No. 10||English||2008||Wikipedia (the "free online encyclopedia that anyone can edit") is having a huge impact on how a great many people gather information about the world. So, it is important for epistemologists and information scientists to ask whether or not people are likely to acquire knowledge as a result of having access to this information source. In other words, is Wikipedia having good epistemic consequences? After surveying the various concerns that have been raised about the reliability of Wikipedia, this paper argues that the epistemic consequences of people using Wikipedia as a source of information are likely to be quite good. According to several empirical studies, the reliability of Wikipedia compares favorably to the reliability of traditional encyclopedias. Furthermore, the reliability of Wikipedia compares even more favorably to the reliability of those information sources that people would be likely to use if Wikipedia did not exist (viz., websites that are as freely and easily accessible as Wikipedia). In addition, Wikipedia has a number of other epistemic virtues (e.g., power, speed, and fecundity) that arguably outweigh any deficiency in terms of reliability. Even so, epistemologists and information scientists should certainly be trying to identify changes (or alternatives) to Wikipedia that will bring about even better epistemic consequences. This paper suggests that, in order to improve Wikipedia, we need to clarify what our epistemic values are and we need a better understanding of why Wikipedia works as well as it does.||0||4|
|Locapedias: Generación de contenido local de manera colaborativa||Alfredo Romeo Molina||IX Jornadas de Gestión de la Información||Spanish||November 2007||Wikipedia has become the biggest encyclopedia ever made. With more than four million articles written in hundreds of languages, Wikipedia is nowadays one of the five best-known sites on the internet. In 2004, following the Wikipedia model, Alfredo Romeo suggested the launch of “locapedias”, based on the voluntary and collaborative contributor model, with the aim of creating the biggest knowledge centre ever written about a local area. In 2005 the first locapedia, Cordobapedia, was founded. About two years later, 12 locapedias can be found in Spain, with about 20,000 local articles written collaboratively. In time, locapedias will probably come to represent for cities and regions what Wikipedia has achieved globally: the largest reference website for local knowledge in any city with a locapedia. For locapedias, having public institutions such as libraries or local archives head these projects could be the guarantee needed to consolidate a movement of free local knowledge creation, a reference for the society towards which we are unstoppably heading: the knowledge society.||1||1|
|Wiki-Philosophizing in a Marketplace of Ideas: Evaluating Wikipedia’s Entries on Seven Great Minds||George Bragues||SSRN Electronic Journal||English||3 April 2007||A very conspicuous part of the new participatory media, Wikipedia has emerged as the Internet's leading source of all-purpose information, the volume and range of its articles far surpassing that of its traditional rival, the Encyclopedia Britannica. This has been accomplished by permitting virtually anyone to contribute, either by writing an original article or editing an existing one. With almost no entry barriers to the production of information, the result is that Wikipedia exhibits a perfectly competitive marketplace of ideas. It has often been argued that such a marketplace is the best guarantee that quality information will be generated and disseminated. We test this contention by examining Wikipedia's entries on seven top Western philosophers. These entries are evaluated against the consensus view elicited from four academic reference works in philosophy. Wikipedia's performance turns out to be decidedly mixed. Its average coverage rate of consensus topics is 52%, while the median rate is 56%. A qualitative analysis uncovered no outright errors, though there were significant omissions. The online encyclopedia's harnessing of the marketplace of ideas, though not unimpressive, fails to emerge as clearly superior to the traditional alternative of relying on individual expertise for information.||7||4|
|A tale of information ethics and encyclopædias; Or, is Wikipedia just another internet scam?||Gorman G.E.||Online Information Review||English||2007||Purpose - This paper looks at the question of the accuracy of content in Wikipedia and other internet encyclopædias. Design/methodology/approach - By looking at other sources, the paper considers whether the information contained within Wikipedia can be relied on to be accurate. Findings - Wikipedia poses as an encyclopædia when by no stretch of the definition can it be termed such; therefore, it should be subject to regulation. Originality/value - The paper highlights the issue that, without regulation, content cannot be relied on to be accurate.||0||2|
|Relation extraction from Wikipedia using subtree mining||Nguyen D.P.T.||Proceedings of the National Conference on Artificial Intelligence||English||2007||The exponential growth and reliability of Wikipedia have made it a promising data source for intelligent systems. The first challenge of Wikipedia is to make the encyclopedia machine-processable. In this study, we address the problem of extracting relations among entities from Wikipedia's English articles, which intelligent systems can in turn use to satisfy users' information needs. Our proposed method first anchors the appearance of entities in Wikipedia articles using some heuristic rules that are supported by their encyclopedic style. Thus, it uses neither the Named Entity Recognizer (NER) nor the Coreference Resolution tool, which are sources of errors for relation extraction. It then classifies the relationships among entity pairs using SVM with features extracted from the web structure and subtrees mined from the syntactic structure of text. The innovations behind our work are the following: a) our method makes use of Wikipedia characteristics for entity allocation and entity classification, which are essential for relation extraction; b) our algorithm extracts a core tree, which accurately reflects a relationship between a given entity pair, and subsequently identifies key features with respect to the relationship from the core tree. We demonstrate the effectiveness of our approach through evaluation of manually annotated data from actual Wikipedia articles. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.||0||0|
|What have Innsbruck and Leipzig in common? Extracting semantics from wiki content||Sören Auer||Lecture Notes in Computer Science||English||2007||Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia on the basis of a wiki alone. Recently, several approaches have been proposed on how to extend wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, although unconsciously, even used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as for the title question). We analyze the quality of the extracted content, and propose strategies for quality improvements with just minor modifications of the wiki systems currently in use.||0||0|
|xanthusBase: Adapting Wikipedia principles to a model organism database||Arshinoff B.I.||Nucleic Acids Research||English||2007||xanthusBase (http://www.xanthusbase.org) is the official model organism database (MOD) for the social bacterium Myxococcus xanthus. In many respects, M. xanthus represents the pioneer model organism (MO) for studying the genetic, biochemical, and mechanistic basis of prokaryotic multicellularity, a topic that has garnered considerable attention due to the significance of biofilms in both basic and applied microbiology research. To facilitate its utility, the design of xanthusBase incorporates open-source software, leveraging the cumulative experience made available through the Generic Model Organism Database (GMOD) project, MediaWiki (http://www.mediawiki.org), and dictyBase (http://www.dictybase.org), to create a MOD that is both highly useful and easily navigable. In addition, we have incorporated a unique Wikipedia-style curation model which exploits the internet's inherent interactivity, thus enabling M. xanthus and other myxobacterial researchers to contribute directly toward the ongoing genome annotation.||0||0|
|A war of words: An explosion of encyclopedias||No author name available||Laser Focus World||English||2006||The researchers critically compared the equally venerable Encyclopedia Britannica with the online encyclopedia Wikipedia. The band of eminent researchers found errors in both publications, some major but the majority minor. Britannica is conventionally developed with contributions from widely recognized academic experts in their fields, whereas Wikipedia is developed in an online collegial manner in which anyone can write an article and any registered reader can amend it. Wikipedia has been a runaway success, with some arguably dubious contributions, while Britannica has struggled to find its role on the Internet. Wikipedia was often the more pithy of the two, but Britannica was often more literate, in a typically British way.||0||0|
|Overcoming the brittleness bottleneck using Wikipedia: Enhancing text categorization with encyclopedic knowledge||Evgeniy Gabrilovich||Proceedings of the National Conference on Artificial Intelligence||English||2006||When humans approach the task of text categorization, they interpret the specific wording of the document in the much larger context of their background knowledge and experience. On the other hand, state-of-the-art information retrieval systems are quite brittle - they traditionally represent documents as bags of words, and are restricted to learning from individual word occurrences in the (necessarily limited) training set. For instance, given the sentence "Wal-Mart supply chain goes real time", how can a text categorization system know that Wal-Mart manages its stock with RFID technology? And having read that "Ciprofloxacin belongs to the quinolones group", how on earth can a machine know that the drug mentioned is an antibiotic produced by Bayer? In this paper we present algorithms that can do just that. We propose to enrich document representation through automatic use of a vast compendium of human knowledge - an encyclopedia. We apply machine learning techniques to Wikipedia, the largest encyclopedia to date, which surpasses in scope many conventional encyclopedias and provides a cornucopia of world knowledge. Each Wikipedia article represents a concept, and documents to be categorized are represented in the rich feature space of words and relevant Wikipedia concepts. Empirical results confirm that this knowledge-intensive representation brings text categorization to a qualitatively new level of performance across a diverse collection of datasets. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.||0||1|
|Understanding user perceptions on usefulness and usability of an integrated Wiki-G-Portal||Theng Y.-L.||Lecture Notes in Computer Science||English||2006||This paper describes a pilot study on Wiki-G-Portal, a project integrating Wikipedia, an online encyclopedia, into G-Portal, a Web-based digital library of geography resources. Initial findings from the pilot study suggest positive perceptions of the usefulness and usability of Wiki-G-Portal, as well as subjects' attitude towards and intention to use it.||0||0|
|Wiki means more: Hyperreading in Wikipedia||Yuejiao Z.||Proceedings of the Seventeenth ACM Conference on Hypertext and Hypermedia, HT'06||English||2006||Based on the open-source technology of the wiki, Wikipedia has initiated a new fashion of hyperreading. Reading Wikipedia creates an experience distinct from reading a traditional encyclopedia. In an attempt to disclose one of the site's major appeals to Web users, this paper approaches the characteristics of hyperreading activities in Wikipedia from three perspectives. Discussions are made regarding reading path, user participation, and navigational apparatus in Wikipedia. Copyright 2006 ACM.||0||0|
|Wild wiki||Joyce J.||Scientific Computing and Instrumentation||English||2005||The features of Wiki Wiki software, a name given to either a hypertext document collection or the Web site holding it, which allows multiple users to build a composite base of information, are discussed. The ability of a wiki to let many people contribute to its content makes it a valuable tool for many purposes where a repository of information is required. Online uses of this software include the production of encyclopedias, dictionaries, and books; wikis have also been used successfully to document software projects on RubyForge. The markup of a wiki article is simple, but generally follows its own specialized format. Because all versions of a wiki page are generally backed up, it is easy to restore a vandalized page.||0||0|