2007

From WikiPapers

This is a list of 4 events and 478 publications from 2007.

Events

Name City Country Date
RecentChangesCamp 2007 Portland United States 2 February 2007
RoCoCoCamp 2007 Montreal Canada 18 May 2007
WikiSym 2007 Montreal Canada 21 October 2007
Wikimania 2007 Taipei Republic of China 3 August 2007


Publications

Title Author(s) Keyword(s) Published in Language Abstract R C
"More like these": Growing entity classes from seeds Sarmento L.
Maarten de Rijke
Jijkoun V.
Oliveira E.
Lexical acquisition
List expansion
International Conference on Information and Knowledge Management, Proceedings English We present a corpus-based approach to the class expansion task. For a given set of seed entities we use co-occurrence statistics taken from a text collection to define a membership function that is used to rank candidate entities for inclusion in the set. We describe an evaluation framework that uses data from Wikipedia. The performance of our class extension method improves as the size of the text collection increases. Copyright 2007 ACM. 0 0
A Bag-of-Words Based Ranking Method for the Wikipedia Question Answering Task Davide Buscaldi
Paolo Rosso
Evaluation of Multilingual and Multi-modal Information Retrieval English This paper presents a simple approach to the Wikipedia Question Answering pilot task in CLEF 2006. The approach ranks the snippets, retrieved using the Lucene search engine, by means of a similarity measure based on bags of words extracted from both the snippets and the articles in Wikipedia. Our participation was in the monolingual English and Spanish tasks. We obtained the best results in the Spanish task. 0 0
A Decentralized Wiki Engine for Collaborative Wikipedia Hosting G Urdaneta
G Pierre
M van Steen
3rd International Conference on Web Information Systems and Technology (WEBIST), March 2007 This paper presents the design of a decentralized system for hosting large-scale wiki web sites like Wikipedia, using a collaborative approach. Our design focuses on distributing the pages that compose the wiki across a network of nodes provided by individuals and organizations willing to collaborate in hosting the wiki. We present algorithms for placing the pages so that the capacity of the nodes is not exceeded and the load is balanced, and algorithms for routing client requests to the appropriate nodes. We also address fault tolerance and security issues. 0 0
A Framework for Studying the Use of Wikis in Knowledge Work Using Client-Side Access Data Uri Dekel WikiSym English While measurements of wiki usage typically focus on the active contribution of content, information on the passive use of existing content can be valuable for a range of commercial and research purposes. In particular, such data is necessary for reconstructing the context or tracing the flow of information in settings where wikis are used as collaboration platforms in knowledge work that relies on specialized tools, such as software development.

Meeting these needs requires detailed knowledge of user behavior, such as the duration for which a page was read and the sections visible at each point. This data cannot be collected by present wiki implementations and must be collected from the client-side, which presents a range of technical and privacy problems. In addition, this data must be correlated with traces of interaction with other tools.

In this paper we present an approach for solving these problems in which scripts embedded by the wiki server are executed by the client browser, and report on the user’s interaction with that document along with relevant structural information. These reports are relayed to a comprehensive framework for storing and accessing interaction and context data from the wiki and from additional tools used in knowledge work. This framework can be used to correlate these traces to obtain a complete view of the user’s work across tools, or to approximate his context at specific points in time.
0 0
A Graph-based Approach to Named Entity Categorization in Wikipedia Using Conditional Random Fields Y Watanabe
M Asahara
Y Matsumoto
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL) This paper presents a method for categorizing named entities in Wikipedia. In Wikipedia, an anchor text is glossed in a linked HTML text. We formalize named entity categorization as a task of categorizing anchor texts with the linked HTML texts that gloss a named entity. Using this representation, we introduce a graph structure in which anchor texts are regarded as nodes. In order to incorporate HTML structure on the graph, three types of cliques are defined based on the HTML tree structure. We propose a method with Conditional Random Fields (CRFs) to categorize the nodes on the graph. Since the defined graph may include cycles, the exact inference of CRFs is computationally expensive. We introduce an approximate inference method using Tree-based Reparameterization (TRP) to reduce computational cost. In experiments, our proposed model obtained significant improvements compared to baseline models that use Support Vector Machines. 0 0
A Knowledge-Based Search Engine Powered by Wikipedia David N. Milne
Ian H. Witten
David M. Nichols
Information retrieval
Query Expansion
Wikipedia
Data mining
Thesauri
CIKM '07 This paper describes a new technique for obtaining measures of semantic relatedness. Like other recent approaches, it uses Wikipedia to provide a vast amount of structured world knowledge about the terms of interest. Our system, the Wikipedia Link Vector Model or WLVM, is unique in that it does so using only the hyperlink structure of Wikipedia rather than its full textual content. To evaluate the algorithm we use a large, widely used test set of manually defined measures of semantic relatedness as our benchmark. This allows direct comparison of our system with other similar techniques. 0 1
A Thesaurus Construction Method from Large Scale Web Dictionaries Kotaro Nakayama
Takahiro Hara
Sojiro Nishio
Data mining
Association Thesaurus
Link Structure Analysis
Link Text
Synonyms
21st IEEE International Conference on Advanced Information Networking and Applications (AINA) Web-based dictionaries, such as Wikipedia, have become dramatically popular among Internet users in the past several years. An important characteristic of Web-based dictionaries is not only the huge number of articles, but also their hyperlinks. Hyperlinks carry more information than simply providing navigation between pages. In this paper, we propose an efficient method to analyze the link structure of Web-based dictionaries in order to construct an association thesaurus. We have already applied it to Wikipedia, a huge-scale Web-based dictionary with a dense link structure, as a corpus. We developed a search engine for evaluation, then conducted a number of experiments to compare our method with traditional methods such as co-occurrence analysis. 0 0
A comparison of dimensionality reduction techniques for Web structure mining Chikhi N.F.
Rothenburger B.
Aussenac-Gilles N.
Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, WI 2007 English In many domains, dimensionality reduction techniques have been shown to be very effective for elucidating the underlying semantics of data. Thus, in this paper we investigate the use of various dimensionality reduction techniques (DRTs) to extract the implicit structures hidden in web hyperlink connectivity. We apply and compare four DRTs, namely Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Random Projection (RP). Experiments conducted on three datasets allow us to assert the following: NMF outperforms PCA and ICA in terms of stability and interpretability of the discovered structures; and the well-known WebKB dataset, used in a large number of works on the analysis of hyperlink connectivity, seems not to be well suited for this task, so we suggest instead using the recent Wikipedia dataset, which is better adapted. 0 0
A comparison of methods for the automatic identification of locations in Wikipedia Davide Buscaldi
Paolo Rosso
English In this paper we compare two methods for the automatic identification of geographical articles in encyclopedic resources such as Wikipedia. The methods are a WordNet-based method that uses a set of keywords related to geographical places, and a multinomial Naïve Bayes classifier trained over a randomly selected subset of the English Wikipedia. This task may be included within the broader task of Named Entity classification, a well-known problem in the field of Natural Language Processing. The experiments were carried out considering both the full text of the articles and only the definition of the entity being described in the article. The results obtained show that the information contained in the page templates and the category labels is more useful than the text of the articles. 0 0
A comparison of optimistic approaches to collaborative editing of Wiki pages C.-L. Ignat
G. Oster
P. Molli
M. Cart
J. Ferrie
A.-M. Kermarrec
P. Sutra
M. Shapiro
L. Benmouffok
J.-M. Busca
R. Guerraoui
COLCOM English 0 0
A content-driven reputation system for the Wikipedia B. Thomas Adler
Luca de Alfaro
English We present a content-driven reputation system for Wikipedia authors. In our system, authors gain reputation when the edits they perform to Wikipedia articles are preserved by subsequent authors, and they lose reputation when their edits are rolled back or undone in short order. Thus, author reputation is computed solely on the basis of content evolution; user-to-user comments or ratings are not used. The author reputation we compute could be used to flag new contributions from low-reputation authors, or it could be used to allow only authors with high reputation to contribute to controversial or critical pages. A reputation system for the Wikipedia could also provide an incentive for high-quality contributions. We have implemented the proposed system, and we have used it to analyze the entire Italian and French Wikipedias, consisting of a total of 691,551 pages and 5,587,523 revisions. Our results show that our notion of reputation has good predictive value: changes performed by low-reputation authors have a significantly larger than average probability of having poor quality, as judged by human observers, and of being later undone, as measured by our algorithms. 0 10
A decentralized wiki engine for collaborative Wikipedia hosting Guido Urdaneta
Guillaume Pierre
Maarten van Steen
Wiki
P2P
Collaborative web hosting
Decentralized
WEBIST ’07: Proceedings of the 3rd International Conference on Web Information Systems and Technologies English This paper presents the design of a decentralized system for hosting large-scale wiki web sites like Wikipedia, using a collaborative approach. Our design focuses on distributing the pages that compose the wiki across a network of nodes provided by individuals and organizations willing to collaborate in hosting the wiki. We present algorithms for placing the pages so that the capacity of the nodes is not exceeded and the load is balanced, and algorithms for routing client requests to the appropriate nodes. We also address fault tolerance and security issues. 1 0
A framework for inter-organizational collaboration using communication and knowledge management tools Nuschke P.
Xing Jiang
Blogs
Bulletin board
Collaboration
HCI
Knowledge management
Online community
Organizations
Transcriber
Wiki
Lecture Notes in Computer Science English Organizations are often involved in joint ventures or coalitions with multiple, diverse partners. While the ability to communicate across organizational boundaries is important to their success, the organizations may have different cultures, processes, and jargon which inhibit their ability to effectively collaborate. The objective of this paper is to identify a framework that enables organizations to communicate complex knowledge across organizational boundaries. It leverages communication and knowledge management tools such as the wiki, and calls for more integration between these tools. 0 0
A framework for knowledge management in a public library - based on a case study on knowledge management in a Dutch public library Selhorst K. Communities of practice
Knowledge audits
Knowledge exchange
Knowledge management
Public libraries
Wiki
Proceedings of the European Conference on Knowledge Management, ECKM English The public library of Vlissingen (Holland) is very ambitious in providing the best possible service to its users. In order to successfully realise this goal, the library wants to make maximum use of the knowledge that resides in the heads of library workers (human capital). In the past year, however, the library had been experiencing several problems related to the exchange of knowledge between library staff members and between library workers and library clients. The solution was found in the field of knowledge management, still a fairly new discipline in the public library sector. A knowledge audit was the first step in solving these internal knowledge problems and in establishing a future knowledge management strategy. The main objective of this audit was to identify and describe the current knowledge gaps and knowledge flows within the unique context of a library with both internal and external 'knowledge clients'. The data collected during the research were both qualitative (interviews with key knowledge players in the library) and quantitative (an online survey). The audit revealed that the library has an enormously rich 'tacit' knowledge potential that until now has remained unexplored. A series of recommendations for leveraging this 'tacit' knowledge to a more operational level have been proposed and are currently being implemented. In order to make internal knowledge more visible and easier to search, an internal library wiki has been set up. To encourage knowledge sharing outside the boundaries of fixed teams, several 'communities of practice' have been made operational. Finally, After Action Review techniques have been introduced for evaluating projects and for stimulating library workers to learn from 'best' and 'worst' practices.
This paper presents the findings of our research in more detail, looks at the recommendations that are being implemented and at the same time proposes an overall approach to knowledge management for public libraries that wish to embark on a knowledge management journey in order to achieve their strategic goals in the best possible manner. 0 0
A graph-based approach to named entity categorization in Wikipedia using conditional random fields Watanabe Y.
Asahara M.
Matsumoto Y.
EMNLP-CoNLL 2007 - Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning English This paper presents a method for categorizing named entities in Wikipedia. In Wikipedia, an anchor text is glossed in a linked HTML text. We formalize named entity categorization as a task of categorizing anchor texts with the linked HTML texts that gloss a named entity. Using this representation, we introduce a graph structure in which anchor texts are regarded as nodes. In order to incorporate HTML structure on the graph, three types of cliques are defined based on the HTML tree structure. We propose a method with Conditional Random Fields (CRFs) to categorize the nodes on the graph. Since the defined graph may include cycles, the exact inference of CRFs is computationally expensive. We introduce an approximate inference method using Tree-based Reparameterization (TRP) to reduce computational cost. In experiments, our proposed model obtained significant improvements compared to baseline models that use Support Vector Machines. 0 0
A knowledge-based search engine powered by Wikipedia David N. Milne
Ian H. Witten
David M. Nichols
English This paper describes Koru, a new search interface that offers effective domain-independent knowledge-based information retrieval. Koru exhibits an understanding of the topics of both queries and documents. This allows it to (a) expand queries automatically and (b) help guide the user as they evolve their queries interactively. Its understanding is mined from the vast investment of manual effort and judgment that is Wikipedia. We show how this open, constantly evolving encyclopedia can yield inexpensive knowledge structures that are specifically tailored to expose the topics, terminology and semantics of individual document collections. We conducted a detailed user study with 12 participants and 10 topics from the 2005 TREC HARD track, and found that Koru and its underlying knowledge base offer significant advantages over traditional keyword search. It was capable of lending assistance to almost every query issued to it: making query entry more efficient, improving the relevance of the documents returned, and narrowing the gap between expert and novice seekers. 0 1
A little known fact is... Answering other questions using interest-markers Razmara M.
Kosseim L.
Lecture Notes in Computer Science English In this paper, we present an approach to answering "Other" questions using the notion of interest marking terms. "Other" questions have been introduced in the TREC-QA track to retrieve other interesting facts about a topic. To answer these types of questions, our system extracts from Wikipedia articles a list of interest-marking terms related to the topic and uses them to extract and score sentences from the document collection where the answer should be found. Sentences are then re-ranked using universal interest-markers that are not specific to the topic. The top sentences are then returned as possible answers. When using the 2004 TREC data for development and 2005 data for testing, the approach achieved an F-score of 0.265, placing it among the top systems. 0 0
A new method for identifying detected communities based on graph substructure Kameyama S.
Uchida M.
Shirayama S.
Proceedings - 2007 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2007 English Many methods have been developed that can detect community structures in complex networks. The detection methods can be classified into three groups based on their characteristic properties. In this study, the inherent features of the detection methods were used to develop a method that identifies communities extracted using a given community detection method. Initially, a common detection method is used to divide a network into communities. The communities are then identified using another detection method from a different class. In this paper, the community structures are first extracted from a network using the method proposed by Newman and Girvan. The extracted communities are then identified using the proposed detection method that is an extension of the vertex similarity method proposed by Leicht et al. The proposed method was used to identify communities in a blog network (blogosphere) and in a Wikipedia word network. 0 0
A participação do público no Wikinews e no Kuro5hin Marcelo Ruschel Träsel Journalism
Online Journalism
Grassroots
Content Analysis
Interaction
E-Compós Portuguese This paper presents the results of master's research focused on the interventions of collaborators over journalistic material published in the grassroots news sites Wikinews and Kuro5hin. A sample of ten texts was collected over seven weeks in order to form a corpus of interventions. This corpus was later submitted to a content analysis aimed at verifying whether the interventions had a predominantly pluralizing character. The results show that the interventions are indeed for the most part pluralizing, suggesting that grassroots journalism may bring important contributions to a democratic society. 2 1
A tale of information ethics and encyclopædias; Or, is Wikipedia just another internet scam? Gorman G.E. Accuracy
Encyclopedia
Internet
Regulation
Online Information Review English Purpose - This paper seeks to look at the question of accuracy of content regarding Wikipedia and other internet encyclopædias. Design/methodology/ approach - By looking at other sources, the paper considers whether the information contained within Wikipedia can be relied on to be accurate. Findings - Wikipedia poses as an encyclopædia when by no stretch of the definition can it be termed such; therefore, it should be subject to regulation. Originality/value - The paper highlights the issue that, without regulation, content cannot be relied on to be accurate. 0 2
A theoretical framework of collaborative knowledge building with wikis: a systemic and cognitive perspective Ulrike Cress
Joachim Kimmerle
CSCL English 0 0
A wiki based concept of a generic process model of IPD for university teaching in an interdisciplinary environment Von Specht E.U.
Vajna S.
Jordan A.
Generic process
Industrial design
Integrated product development
Low effort supporting methods
Methods
Tools and technology
University teaching
Wiki
Proceedings of ICED 2007, the 16th International Conference on Engineering Design English The integration of the approach of Integrated Product Development (IPD) as a graduate course of studies into the university teaching of the University of Magdeburg is portrayed. This is based on showing the limits of design methodologies and describing the necessity of IPD, accompanied by a brief excursus on its history to create the necessary basic knowledge. Afterwards the need for a process view in product development is described, taking especially into account the generic IPD process model for supporting university project work in interdisciplinary student teams. Following the development philosophy of the IPD model of Magdeburg, it forms a holistic way of proceeding that integrates and describes technical and industrial design procedures in the context of all aspects necessary for successful product development. But this generic product development process is neither rigid nor normative. It should rather be a smart form of support while conducting student development projects in the framework of university teaching. Currently, just a manual is available as a guideline for supporting the project work and for self-study. It describes in detail a process-driven procedure of IPD, covers the basics of project and process management, and additionally contains a description of a huge number of methods and tools based on the generic IPD process. However, this form of knowledge documentation and mediation can only provide a snapshot and is therefore not flexible enough for handling changing conditions and requirements in product development. That is why we decided to build our own IPD wiki to support our students more effectively. But which requirements does a wiki need to fulfil when it is used in university teaching? What needs should be taken into account, and which content shall be transported?
The second part of the paper answers these questions and describes the actual experience with the wiki system. After a successful pilot phase with students, we plan to expand the IPD-wiki for use in industry, especially focusing on the small business sector. For this purpose more research needs to be done. 0 0
A wiki that knows where it is being used: Social hazard or social service? Maria Plummer
Linda Plotnick
Roxanne S.
Jones H.Q.
Accuracy
Collaborative authoring
Context-aware
People-to-people-to-places or p3 systems
Pervasive computing
Privacy
Wiki
Association for Information Systems - 13th Americas Conference on Information Systems, AMCIS 2007: Reaching New Heights English This study assesses reactions to a wiki enhanced with context-aware features that enable users to learn about people, places, and events in their proximity. In a physically compact enclave such as the urban university in which this wiki is being implemented, context-aware applications can support a hybrid community in which individuals develop and sustain physical and virtual social ties. Participants in this study were first-time users. They were given a guided tour of the wiki and then their impressions, concerns and intention to use were elicited through a semi-structured interview. Participants were enthusiastic about the prospects of the wiki in assisting them in learning about events and interesting places on campus, and in exchanging information. However, they were concerned about issues such as privacy, accuracy, and the potential for intentional misuse of the system. Privacy concerns were based primarily on a misconception of the location-aware feature of the wiki. These findings can guide designers and implementers on the desirable and possibly undesirable features of such a system. 0 0
Accelerating networks David M. D. Smith
Jukka-Pekka Onnela
Neil F. Johnson
New Journal of Physics Evolving out-of-equilibrium networks have been under intense scrutiny recently. In many real-world settings the number of links added per new node is not constant but depends on the time at which the node is introduced in the system. This simple idea gives rise to the concept of accelerating networks, for which we review an existing definition and, after finding it somewhat constrictive, offer a new definition. The new definition provided here views network acceleration as a time-dependent property of a given system as opposed to being a property of the specific algorithm applied to grow the network. The definition also covers both unweighted and weighted networks. As time-stamped network data becomes increasingly available, the proposed measures may be easily applied to such empirical datasets. As a simple case study we apply the concepts to study the evolution of three different instances of Wikipedia, namely those in English, German, and Japanese, and find that the networks undergo different acceleration regimes in their evolution. IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. 0 0
A dynamic voting wiki model Carolynne White
Linda Plotnick
Murray Turoff
Hiltz S.R.
Communities of practice
Feedback
Leadership
Real-time
Social decision support system
Voting
Wiki
Association for Information Systems - 13th Americas Conference on Information Systems, AMCIS 2007: Reaching New Heights English Defining a problem and understanding it syntactically as well as semantically enhances the decision process, because the written agenda and solutions are understood on a token level. Consensus in groups can be challenging in present web-based environments, given the dynamics of the types of interactions and needs. Larger virtual communities are beginning to use wiki-based decision support systems for time-critical interactions where the quality of the information is high and a near real-time feedback system is necessary. Understanding of the meaning of the problem and group consensus can be improved by exploiting a voting-enhanced wiki structure implemented in select parts of the decision-making process. A decision support model integrating a wiki structure and a social decision support system (voting) is presented. Findings from a pilot study describe differences in idea generation between groups. Other issues requiring further research are identified. 0 0
Agile elicitation of semantic goals by wiki David Lambert
Stefania Galizia
John Domingue
WISE English 0 0
Aisles through the category forest: Utilising the Wikipedia Category System for Corpus Building in Machine Learning Rudiger Gleim
Alexander Mehler
Matthias Dehmer
Olga Pustylnikov
Category system
Corpus construction
Social tagging
Wikipedia
Webist 2007 - 3rd International Conference on Web Information Systems and Technologies, Proceedings English The World Wide Web is a continuous challenge to machine learning. Established approaches have to be enhanced and new methods developed in order to tackle the problem of finding and organising relevant information. It has often been argued that semantic classifications of input documents help solve this task. But while approaches to supervised text categorisation perform quite well on genres found in written text, newly evolved genres on the web are much more demanding. In order to successfully develop approaches to web mining, respective corpora are needed. However, the composition of genre- or domain-specific web corpora is still an unsolved problem. It is time consuming to build large corpora of good quality because web pages typically lack reliable meta information. Wikipedia, along with similar approaches to collaborative text production, offers a way out of this dilemma. We examine how social tagging, as supported by the MediaWiki software, can be utilised as a source for corpus building. Further, we describe a representation format for social ontologies and present the Wikipedia Category Explorer, a tool which supports categorical views to browse through Wikipedia and to construct domain-specific corpora for machine learning. 0 0
'All that Glisters is not gold' - Web 2.0 and the Librarian Anderson P. Blogs
Library 2.0
Social media
Web 2.0
Wiki
Journal of Librarianship and Information Science English Web 2.0 and social media applications such as blogs, wikis and social networking sites offer the promise of a more vibrant, social and participatory Internet. There is a growing interest within the library community in debating the potential impact that such services might have within libraries, and such debates have gathered around the moniker of 'Library 2.0'. To date, however, there has been little theoretical work, and there is a need to develop more formal definitions and frameworks. This editorial discusses the origins of the term Web 2.0, provides a structured framework for rationalizing the implications of Web 2.0 services and outlines some of the areas in which librarians are positioned to provide a unique contribution to the further development of such services. 0 0
A model for information quality change: Completed paper Besiki Stvilia Activity Theory
Information quality
Information Quality Dynamics
Proceedings of the 2007 International Conference on Information Quality, ICIQ 2007 English To manage information quality (IQ) effectively, one needs to know how IQ changes over time, what causes it to change, and whether the changes can be predicted. In this paper we analyze the structure of IQ change in Wikipedia, an open, collaborative general encyclopedia. We found several patterns in Wikipedia's IQ process trajectories and linked them to article types. Drawing on the results of our analysis, we develop a general model of IQ change that can be used for reasoning about IQ dynamics in many different settings, including traditional databases. 0 0
An API for Measuring the Relatedness of Words in Wikipedia Simone P. Ponzetto
Michael Strube
API
Relatedness
Semantic web
Wikipedia
Companion Volume to the Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp. 23-30, 2007. We present an API for computing the semantic relatedness of words in Wikipedia. 0 1
An API for measuring the relatedness of words in Wikipedia Simone Paolo Ponzetto
Michael Strube
ACL English 0 1
An EBNF grammar for Wiki Creole 1.0 Martin Junghans
Dirk Riehle
Rama Gurram
Matthias Kaiser
Mário Lopes
Umit Yalcinalp
SIGWEB Newsl. English 0 0
An XML interchange format for Wiki Creole 1.0 Martin Junghans
Dirk Riehle
Umit Yalcinalp
SIGWEB Newsl. English 0 0
An experimental CALL system enhanced with wiki Proceedings - The 7th IEEE International Conference on Advanced Learning Technologies, ICALT 2007 English 0 0
An integrated web environment for fast access and easy management of a synchrotron beam line Qian K.
Stojanoff V.
Beam time schedule
Fast proposal submission
MediaWiki
Web management
Web-based statistical evaluation tools
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment English Tired of all the time spent on the phone or sending emails to schedule beam time? Why not make your own schedule when it is convenient to you? The integrated web environment at the NIGMS East Coast Structural Biology Research Facility allows users to schedule their own beam time as if they were making travel arrangements and provides staff with a set of toolkits for management of routine tasks. These unique features are accessible through the MediaWiki-powered home pages. Here we describe the main features of this web environment that have shown to allow for an efficient and effective interaction between the users and the facility. © 2007 Elsevier B.V. All rights reserved. 0 0
An investigation of the use of a wiki to support knowledge exchange in public health R. Giordano English This paper describes the use of a wiki to foster joint learning among a group of non-profit, community-based organizations involved with improving public health in London. The goal of the wiki was to encourage the growth of a community of practice (clear patterns of mutual engagement, joint enterprise, and an emergent and shared repertoire of action) among these organizations. Participants reported that they failed to contribute to the wiki largely for economic reasons, issues of identity, and a lack of group norms. A critical factor in its failure, however, was the lack of its integration into existing work practices and project governance. 0 0
Analysis of the Wikipedia Category Graph for NLP Applications. Iryna Gurevych Torsten Zesch Natural Language Processing
Relatedness
Semantics
Wikipedia
Proceedings of the TextGraphs-2 Workshop (NAACL-HLT) In this paper, we discuss two graphs in Wikipedia: (i) the article graph, and (ii) the category graph. We perform a graph-theoretic analysis of the category graph, and show that it is a scale-free, small world graph like other well-known lexical semantic networks. We substantiate our findings by transferring semantic relatedness algorithms defined on WordNet to the Wikipedia category graph. To assess the usefulness of the category graph as an NLP resource, we analyze its coverage and the performance of the transferred semantic relatedness algorithms. 0 0
Analyzing and Accessing Wikipedia as a Lexical Semantic Resource. Torsten Zesch
Iryna Gurevych
Max Muhlhauser
Api Biannual Conference of the Society for Computational Linguistics and Language Technology pp. 213-221 We analyze Wikipedia as a lexical semantic resource and compare it with conventional resources, such as dictionaries, thesauri, semantic wordnets, etc. Different parts of Wikipedia record different aspects of these resources. We show that Wikipedia contains a vast amount of knowledge about, e.g., named entities, domain specific terms, and rare word senses. If Wikipedia is to be used as a lexical semantic resource in large-scale NLP tasks, efficient programmatic access to the knowledge therein is required. We review existing access mechanisms and show that they are limited with respect to performance and the provided access functions. Therefore, we introduce a general purpose, high performance Java-based Wikipedia API that overcomes these limitations. 0 0
Analyzing and visualizing the semantic coverage of Wikipedia and its authors Todd Holloway
Miran Bozicevic
Katy Börner
Complexity English This article presents a novel analysis and visualization of English Wikipedia data. Our specific interest is the analysis of basic statistics, the identification of the semantic structure and the age of the categories in this free online encyclopedia, and the content coverage of its highly productive authors. 0 9
AniAniWeb: A Wiki Approach to Personal Home Pages Jochen Rick WikiSym English This article reports on my dissertation research on personal home pages. It focuses on the design of AniAniWeb, a server-based system for authoring personal home pages. AniAniWeb builds on a wiki foundation to address many of the limitations of static technologies used to author personal home pages. This article motivates the technical hypotheses behind AniAniWeb and reflects on these hypotheses, based on a two year study of adopters using AniAniWeb in academia, a prominent vocational setting where personal home pages are important. In particular, I reflect on two broad categories: 1) the usefulness of wiki features (wiki authoring, wiki mark-up, and interaction / collaboration) to authoring personal home pages; 2) the other features (structure, designing looks, and access control) needed to make a wiki approach to personal home pages viable. 0 0
AniAniWeb: A wiki approach to personal home pages Jochen Rick Access control
AniAniWeb
Personal home pages
Wiki design
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English This article reports on my dissertation research on personal home pages. It focuses on the design of AniAniWeb, a server-based system for authoring personal home pages. AniAniWeb builds on a wiki foundation to address many of the limitations of static technologies used to author personal home pages. This article motivates the technical hypotheses behind AniAniWeb and reflects on these hypotheses, based on a two year study of adopters using AniAniWeb in academia, a prominent vocational setting where personal home pages are important. In particular, I reflect on two broad categories: 1) the usefulness of wiki features (wiki authoring, wiki mark-up, and interaction / collaboration) to authoring personal home pages; 2) the other features (structure, designing looks, and access control) needed to make a wiki approach to personal home pages viable. Copyright 0 0
Análise do verbete da Wikipédia sob a ótica da teoria de gênero como ação social Vanessa Wendhausen Lima Wikipedia Entry
Genre
Social Action
Portuguese This study aims to analyze the Wikipedia entry. The analysis is based upon the theory of genre as a social action as proposed by Miller (1994a) and her follower Bazerman (1994). These authors state that the notion of genre is a typified social action that functions as a reply to recurrent situations. Methodological proposals for the study of the Wikipedia entry composition process are given by Paré and Smart (1994). They believe that a genre analysis based on Miller's theory should identify which elements are observable besides textual ones and which is the relationship between them. Therefore, the analysis of the textual elements was based on the rhetorical movements, the compositional aspects of the entry, the text format, and the social roles of the writers. This study reveals that the Wikipedia entry is a digital genre resulting from a social act which involves the whole free encyclopedia. It took this form because of the recurrence and text typifying through the rules that regulate the free encyclopedia process of writing. 2 0
Applying Wikipedia's multilingual knowledge to cross-lingual question answering Ferrandez S.
Antonio Toral
Oscar Ferrandez
Antonio Ferrandez
Munoz R.
Lecture Notes in Computer Science English The application of the multilingual knowledge encoded in Wikipedia to an open-domain Cross-Lingual Question Answering system based on the Inter Lingual Index (ILI) module of EuroWordNet is proposed and evaluated. This strategy overcomes the problems due to ILI's low coverage on proper nouns (Named Entities). Moreover, as these are open class words (highly changing), using a community-based up-to-date resource avoids the tedious maintenance of hand-coded bilingual dictionaries. A study reveals the importance to translate Named Entities in CL-QA and the advantages of relying on Wikipedia over ILI for doing this. Tests on questions from the Cross-Language Evaluation Forum (CLEF) justify our approach (20% of these are correctly answered thanks to Wikipedia's Multilingual Knowledge). 0 0
Applying Wikipedia’s Multilingual Knowledge to Cross-Lingual Question Answering Sergio Ferrández
Antonio Toral
Oscar Ferrandez
Antonio Ferrandez
Rafael Muñoz
Lecture Notes in Computer Science The application of the multilingual knowledge encoded in Wikipedia to an open-domain Cross-Lingual Question Answering system based on the Inter Lingual Index (ILI) module of EuroWordNet is proposed and evaluated. This strategy overcomes the problems due to ILI's low coverage on proper nouns (Named Entities). Moreover, as these are open class words (highly changing), using a community-based up-to-date resource avoids the tedious maintenance of hand-coded bilingual dictionaries. A study reveals the importance to translate Named Entities in CL-QA and the advantages of relying on Wikipedia over ILI for doing this. Tests on questions from the Cross-Language Evaluation Forum (CLEF) justify our approach (20% of these are correctly answered thanks to Wikipedia's Multilingual Knowledge). 0 0
Applying wikis to managing knowledge - A socio-technical approach Kosonen M.
Kianto A.
Knowledge management
Knowledge sharing
Organizational culture
Social software
Wiki
Proceedings of the European Conference on Knowledge Management, ECKM English As organizations are increasingly moving towards geographically dispersed and virtual forms of collaboration, knowledge sharing through social software such as wikis, is widely acknowledged as an important area of research and practice. Wikis are systems of interlinked Web pages that allow users to easily create and edit content. They represent an open-source technology for knowledge, focusing on its incremental creation and enhancement, and on multi-user participation. However, social software remains an under-investigated issue in the literature on knowledge management, and there are no previous studies demonstrating how organizations can successfully start using it. The influence of information and communication technologies (ICTs) on knowledge sharing has been approached mostly from the individual perspective, considering the roles of ICT in either lowering or heightening the cognitive barrier to sharing. Accordingly, ICT tools are mainly designed to support the acquisition and retrieval of codified knowledge in order to improve individual knowledge bases. Less has been written on supporting informal emergent knowledge sharing within communities through novel collaboration tools. In this paper we examine how internal wikis have been successfully implemented in a case organization. We chose an information-rich case, the type of single case that provides various opportunities for learning about an emerging phenomenon. On the basis of our analysis we claim that understanding the implementation of wikis requires a socio-technical perspective focusing on the organizational context and activity system in which they are implemented rather than on their technological proficiency per se. We thereby demonstrate how implementing wikis hinges on the practical and context-dependent features of the organization. 
Furthermore, we claim that their implementation brings about change in existing social systems, and results in new kinds of social constellations, interactions and identities, which are manageable and controllable only to a limited extent. 0 0
ArchVoc - Towards an ontology for software architecture Lenin Babu T.
Seetha Ramaiah M.
Prabhakar T.V.
Rambabu D.
Proceedings - ICSE 2007 Workshops: Second Workshop on SHAring and Reusing architectural Knowledge Architecture, Rationale, and Design Intent, SHARK-ADI'07 English Knowledge management of any domain requires controlled vocabularies, taxonomies, thesauri, ontologies, concept maps and other such artifacts. This paper describes an effort to identify the major concepts in software architecture that can go into such meta knowledge. The concept terms are identified through two different techniques: (1) manually, through the back-of-the-book index of some of the major texts in Software Architecture; (2) through a semi-automatic technique by parsing the Wikipedia pages. Only generic architecture knowledge is considered. Apart from identifying the important concepts of software architecture, we could also see gaps in the software architecture content in the Wikipedia. 0 0
Assessing the value of cooperation in Wikipedia Dennis M. Wilkinson
Bernardo A. Huberman
Wikipedia
Cooperation
Since its inception six years ago, the online encyclopedia Wikipedia has accumulated 6.40 million articles and 250 million edits, contributed in a predominantly undirected and haphazard fashion by 5.77 million unvetted volunteers. Despite the apparent lack of order, the 50 million edits by 4.8 million contributors to the 1.5 million articles in the English-language Wikipedia follow certain strong overall regularities. We show that the accretion of edits to an article is described by a simple stochastic mechanism, resulting in a heavy tail of highly visible articles with a large number of edits. We also demonstrate a crucial correlation between article quality and number of edits, which validates Wikipedia as a successful collaborative effort. 0 14
Automatising the Learning of Lexical Patterns: an Application to the Enrichment of WordNet by Extracting Semantic Relationships from Wikipedia Maria Ruiz-Casado
Enrique Alfonseca and Pablo Castells
Information extraction
Lexical patterns
Ontology and thesaurus acquisition
Relation extraction
Data & Knowledge Engineering, Issue 3 (June 2007) This paper describes Koru, a new search interface that offers effective domain-independent knowledge-based information retrieval. Koru exhibits an understanding of the topics of both queries and documents. This allows it to (a) expand queries automatically and (b) help guide the user as they evolve their queries interactively. Its understanding is mined from the vast investment of manual effort and judgment that is Wikipedia. We show how this open, constantly evolving encyclopedia can yield inexpensive knowledge structures that are specifically tailored to expose the topics, terminology and semantics of individual document collections. We conducted a detailed user study with 12 participants and 10 topics from the 2005 TREC HARD track, and found that Koru and its underlying knowledge base offers significant advantages over traditional keyword search. It was capable of lending assistance to almost every query issued to it; making their entry more efficient, improving the relevance of the documents they return, and narrowing the gap between expert and novice seekers. 0 0
Automatising the learning of lexical patterns: An application to the enrichment of WordNet by extracting semantic relationships from Wikipedia Maria Ruiz-Casado
Enrique Alfonseca
Pablo Castells
Information extraction
Lexical patterns
Ontology and thesaurus acquisition
Relation extraction
Data Knowl. Eng. English 0 0
Autonomously semantifying Wikipedia Fei Wu
Daniel S. Weld
English Berners-Lee's compelling vision of a Semantic Web is hindered by a chicken-and-egg problem, which can be best solved by a bootstrapping method - creating enough structured data to motivate the development of applications. This paper argues that autonomously "Semantifying Wikipedia" is the best way to solve the problem. We choose Wikipedia as an initial data source, because it is comprehensive, not too large, high-quality, and contains enough manually-derived structure to bootstrap an autonomous, self-supervised process. We identify several types of structures which can be automatically enhanced in Wikipedia (e.g., link structure, taxonomic data, infoboxes, etc.), and we describe a prototype implementation of a self-supervised, machine learning system which realizes our vision. Preliminary experiments demonstrate the high precision of our system's extracted data - in one case equaling that of humans. 0 1
Avoiding Tragedy in the Wiki-Commons Andrew George Wikipedia
Public good
Volunteer
English For some reason, thousands of volunteers contribute to Wikipedia, with no expectation of remuneration or direct credit, with the constant risk of their work being altered. As a voluntary public good, it seems that Wikipedia ought to face a problem of non-contribution. Yet, this Article argues that like much of the Open Source Movement, Wikipedia overcomes this problem by locking-in a core group of dedicated volunteers who are motivated by a desire to join and gain status within the Wikipedia community. Yet, undesirable contribution is just as significant a risk to Wikipedia as under-contribution. Bad informational inputs, including vandalism and anti-intellectualism, put the project at risk, because Wikipedia requires a degree of credibility to maintain its lock-in effect. At the same time, Wikipedia is so dependent on the work of its core community, that governance strategies to exclude bad inputs must be delicately undertaken. Therefore, this Article argues that to maximize useful participation, Wikipedia must carefully combat harmful inputs while preserving the zeal of its core-community, as failure to do either may result in tragedy. 6 1
BLOGS, RSS, and WIKIS Thomas C. Lominac J. Comput. Sci. Coll. English 0 0
BOWiki - A collaborative annotation and ontology curation framework Michael Backhaus
Janet Kelso
Ontology curation
Semantic wiki
CEUR Workshop Proceedings English As the amount of data being generated in biology has increased, a major challenge has been how to store and represent this data in a way that makes it easily accessible to researchers from diverse domains. Understanding the relationship between genotype and phenotype is a major focus of biological research. Various approaches to providing the link between genes and their functions have been undertaken - most require significant and dedicated manual curation. Advances in web technologies make possible an alternative route for the construction of such knowledge bases - large-scale community collaboration. We describe here a system, the BOWiki, for the collaborative annotation of gene information. We argue that a semantic wiki provides the functionality required for this project since this can capitalize on the existing representations in biological ontologies. We describe our implementation and show how formal ontologies could be used to increase the usability of the software through type-checking and automatic reasoning. 0 0
Beyond Ubiquity: Co-creating Corporate Knowledge with a Wiki H. Hasan
J. A. Meloche
C. C. Pfaff
D. Willis
Knowledge management
Q methodology
Ubiquitous computing
Ubiquitous knowledge
Wiki
Proceedings - International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, UBICOMM 2007 English Despite their reputation as an evolving shared knowledge repository, Wikis are often treated with suspicion in organizations for management, social and legal reasons. Following studies of unsuccessful Wiki projects, a field study was undertaken of a corporate Wiki that has been developed to capture, and make available, organizational knowledge for a large manufacturing company as an initiative of their Knowledge Management program. A Q Methodology research approach was selected to uncover employees' subjective attitudes to the Wiki so that the firm could more fully exploit the potential of the Wiki as a ubiquitous tool for tacit knowledge management. 0 1
Blogs, Wikis and Podcasts: Social software in the library Bordeaux A.
Boyd M.
Academic libraries
Blogs
Library users
Podcasting
Social software
Wiki
Serials Librarian English Social software, particularly blogs, wikis, and podcasting, are new tools that help libraries connect with users. Abigail Bordeaux shared the practical experience of Binghamton University Libraries using a blog for news and events and a staff wiki for collaboration and information sharing. She also explored the emergence of podcasts at libraries. Libraries were encouraged to experiment with social software to engage with patrons who commonly use these tools for other purposes. © by The Haworth Press, Inc. All rights reserved. 0 0
Blogs, wikis, and discussion forums: Attributes and implications for clinical information systems Weiss J.B.
Campion Jr. T.R.
Blogs
Communication
Medical records
Social support
Wiki
Studies in Health Technology and Informatics English Informaticians increasingly view clinical information systems as asynchronous communication systems instead of data processing tools. Outside of health care, popular web technologies like blogs, wikis, and discussion forums have proven to be platforms for effective asynchronous communication. These popular technologies have implications for improving the coordination of clinical care and social support. In order to appropriately evaluate these web-based tools for use in clinical information systems, it will be essential for the informatics community to formally identify the distinguishing attributes of these communication methodologies. The authors propose seven interpersonal and informational attributes to compare and contrast the purposes of blogs, wikis, and discussion forums. This attribute-based approach to analyzing emerging web technologies will lead to a better understanding of the design choices involved in web-based information systems. Two case studies demonstrate how informatics researchers and developers can consider these attributes in the design and evaluation of clinical information systems. © 2007 The authors. All rights reserved. 0 0
Boosting inductive transfer for text classification using Wikipedia Somnath Banerjee Proceedings - 6th International Conference on Machine Learning and Applications, ICMLA 2007 English Inductive transfer is applying knowledge learned on one set of tasks to improve the performance of learning a new task. Inductive transfer is being applied in improving the generalization performance on a classification task using the models learned on some related tasks. In this paper, we show a method of making inductive transfer for text classification more effective using Wikipedia. We map the text documents of the different tasks to a feature space created using Wikipedia, thereby providing some background knowledge of the contents of the documents. It has been observed here that when the classifiers are built using the features generated from Wikipedia they become more effective in transferring knowledge. An evaluation on the daily classification task on the Reuters RCV1 corpus shows that our method can significantly improve the performance of inductive transfer. Our method was also able to successfully overcome a major obstacle observed in a recent work on a similar setting. 0 0
Bouillon: A wiki-wiki Social Web Lecture Notes in Computer Science English 0 0
Building Collaborative Capacities in Learners: The M/Cyclopedia Project, Revisited Axel Bruns
Sal Humphreys
Wiki
Tertiary education
Pedagogy
Produsage
Social constructivism
WikiSym English In this paper we trace the evolution of a project using a wiki-based learning environment in a tertiary education setting. The project has the pedagogical goal of building learners’ capacities to work effectively in the networked, collaborative, creative environments of the knowledge economy. The paper explores the four key characteristics of a ‘produsage’ environment and identifies four strategic capacities that need to be developed in learners to be effective ‘produsers’ (user/producers). A case study is presented of our experiences with the subject New Media Technologies, run at Queensland University of Technology. This progress report updates our observations made at the 2005 WikiSym conference. 0 0
Building collaborative capacities in learners: The M/cyclopedia project revisited Axel Bruns
Sal Humphreys
Pedagogy
Produsage
Social constructivism
Tertiary education
Wiki
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English In this paper we trace the evolution of a project using a wiki-based learning environment in a tertiary education setting. The project has the pedagogical goal of building learners' capacities to work effectively in the networked, collaborative, creative environments of the knowledge economy. The paper explores the four key characteristics of a 'produsage' environment and identifies four strategic capacities that need to be developed in learners to be effective 'produsers' (user-producers). A case study is presented of our experiences with the subject New Media Technologies, run at Queensland University of Technology, Brisbane, Australia. This progress report updates our observations made at the 2005 WikiSym conference. Copyright 0 0
CAWS: A wiki system to improve workspace awareness to advance effectiveness of co-authoring activities Conference on Human Factors in Computing Systems - Proceedings English 0 0
CAWS: a wiki system to improve workspace awareness to advance effectiveness of co-authoring activities Ilaria Liccardi
Hugh C. Davis
Su White
English Crucial to effective collaborative writing is knowledge of what other people are doing and have done, what meaningful changes are made to a document, who is editing each section of a document and why. This is because awareness of individual and group activities is critical to successful collaboration. This paper presents the problems that surround co-authoring activities, and the advantages of using CAWS are explained and compared with other implementations and techniques for collaborative authoring. This co-authoring wiki-based system (CAWS) aims to improve workspace awareness in order to improve users' response to the document development activity. 0 2
Can I go out and play now? Hawkins C. Electronic Device Failure Analysis English The use of online information sources such as Wikipedia to expand knowledge in any field is discussed. Wikipedia is a nonprofit, free encyclopedia that was started in 2001. It has more than two million English-language encyclopedic articles on its websites. It is an open-source medium that allows users to write or edit articles. It has been designed for web user interface and contributions. It shows instructions for writing and editing an article on websites. It provides hyperlinks that guide readers to specific information. EDFAS can also use it as a source material website, with material written and reviewed by FA experts, for disseminating information about its products. 0 0
Catalyzing chemical bonding - The WIKI way ACS Chemical Biology English 0 0
Categorizing Learning Objects Based On Wikipedia as Substitute Corpus Marek Meyer
Christoph Rensing
Ralf Steinmetz
Wikipedia
Categorization
Metadata
KNN
Classification
Substitute Corpus
Automatic Metadata Generation
First International Workshop on Learning Object Discovery & Exchange (LODE'07), September 18, 2007, Crete, Greece As metadata is often not sufficiently provided by authors of Learning Resources, automatic metadata generation methods are used to create metadata afterwards. One kind of metadata is categorization, particularly the partition of Learning Resources into distinct subject categories. A disadvantage of state-of-the-art categorization methods is that they require corpora of sample Learning Resources. Unfortunately, large corpora of well-labeled Learning Resources are rare. This paper presents a new approach for the task of subject categorization of Learning Resources. Instead of using typical Learning Resources, the free encyclopedia Wikipedia is applied as training corpus. The approach presented in this paper is to apply the k-Nearest-Neighbors method for comparing a Learning Resource to Wikipedia articles. Different parameters have been evaluated regarding their impact on the categorization performance. 0 1
Categorizing learning objects based on wikipedia as substitute corpus Meyer M.
Rensing C.
Steinmetz R.
CEUR Workshop Proceedings English As metadata is often not sufficiently provided by authors of Learning Resources, automatic metadata generation methods are used to create metadata afterwards. One kind of metadata is categorization, particularly the partition of Learning Resources into distinct subject categories. A disadvantage of state-of-the-art categorization methods is that they require corpora of sample Learning Resources. Unfortunately, large corpora of well-labeled Learning Resources are rare. This paper presents a new approach for the task of subject categorization of Learning Resources. Instead of using typical Learning Resources, the free encyclopedia Wikipedia is applied as training corpus. The approach presented in this paper is to apply the k-Nearest-Neighbors method for comparing a Learning Resource to Wikipedia articles. Different parameters have been evaluated regarding their impact on the categorization performance. 0 1
Chapter 7 Achieving a Holistic Web in the Chemistry Curriculum Rzepa H.S. Chemistry curriculum
Information-liberation
Metadata
OpenSource
Podcasts
RDF
Semantic web
Wiki
World-wide web
Annual Reports in Computational Chemistry English [No abstract available] 0 0
Clustering short texts using Wikipedia Somnath Banerjee
Krishnan Ramanathan
Ajay Gupta
English Subscribers to the popular news or blog feeds (RSS/Atom) often face the problem of information overload as these feed sources usually deliver a large number of items periodically. One solution to this problem could be clustering similar items in the feed reader to make the information more manageable for a user. Clustering items at the feed reader end is a challenging task as usually only a small part of the actual article is received through the feed. In this paper, we propose a method of improving the accuracy of clustering short texts by enriching their representation with additional features from Wikipedia. Empirical results indicate that this enriched representation of text items can substantially improve the clustering accuracy when compared to the conventional bag of words representation. 0 0
Collaborative Knowledge Management: Evaluation of Automated Link Discovery in the Wikipedia Wei Che Huang
Andrew Trotman
Shlomo Geva
English 0 0
Collaborative Learning in a Wiki Environment: Experiences from a software engineering course Shailey Minocha
Peter G. Thomas
New Review of Hypermedia and Multimedia English The post-graduate course, Software Requirements for Business Systems, in the Department of Computing of the Open University involves teaching systematic elicitation and documentation of requirements for software systems. On a software development project, team members often work remotely from one another and increasingly use wikis to collaboratively develop the requirements specification. In order to emulate requirements engineering practice, the course has been enhanced to include group collaboration using a wiki. In this paper, we describe the wiki-based collaborative activities and the evaluation of the pedagogical effectiveness of a wiki for collaborative learning. Our evaluations have confirmed that the strength of a wiki, as a collaborative authoring tool, can facilitate the learning of course concepts and students’ appreciation of the distributed nature of the RE process context. However, there is a need to support the discussion aspects of collaborative activities with more appropriate tools. We have also found that there are certain usability aspects of wikis that can mar a positive student experience. This paper will be of interest to academics aspiring to employ wikis on their courses and to practitioners who wish to realize the potential of wikis in facilitating information sharing, knowledge management, and in fostering collaboration within and between organizations. 0 0
Collaborative classification of growing collections with evolving facets Wu H.
Zubair M.
Maly K.
Collaborative classification
Faceted classification
Social classification
Tag
Wiki
ACM Conference on Hypertext and Hypermedia English There is a lack of tools for exploring large non-textual collections. One challenge is the manual effort required to add metadata to these collections. In this paper, we propose an architecture that enables users to collaboratively build a faceted classification for a large, growing collection. Besides a novel wiki-like classification interface, the proposed architecture includes automated document classification and facet schema enrichment techniques. We have implemented a prototype for the American Political History multimedia collection from usa.gov. Copyright 2007 ACM. 0 0
Collaborative knowledge at the grass-roots level: The risks and rewards of corporate Wikis Pfaff C.C.
Helen Hasan
Corporate Wiki
Intellectual Property
Knowledge management
Open source
PACIS 2007 - 11th Pacific Asia Conference on Information Systems: Managing Diversity in Digital Enterprises English The open source movement is founded on the concept of democratising knowledge to freely collaborate and exchange information at the grass-roots level. As Wikis are philosophically grounded in this movement, the use of corporate Wikis in the collaborative creation and operation of knowledge management systems holds considerable potential. However, the impact of using corporate Wikis in the business environment has uncovered some challenging issues such as licensing, accountability and liability regarding copyright, which may require a change in the way we think about intellectual property and licensing in this connected world. 0 0
Collaborative learning by modelling: Observations in an online setting Reimann P.
Thompson K.
Weinel M.
Chat
OLE
Postgraduate students
System dynamics modelling
Wiki
ASCILITE 2007 - The Australasian Society for Computers in Learning in Tertiary Education English A custom-designed combination of a chat tool and a wiki tool was used to engage postgraduate education students online in system dynamics modelling tasks. The purpose of the course was to familiarise students with core concepts of the complexity sciences, and to introduce them to modelling complex systems as a means to research processes of learning and organisational change. The rationale for the online course as well as the technology employed is described. Observations from two student teams using the Stella™ modelling software while cooperating in the online learning environment are reported, both with respect to their modelling activities and their team coordination behavior. We conclude with an identification of the main advantages of learning about a difficult subject area collaboratively and online. 0 0
Collaborative lesson-preparing environments: EduWiki designing and its applications Yiping Zhou
Chaohua Gong
Collaborative Lesson-preparing
Eduwiki
Wiki
15th International Conference on Computers in Education: Supporting Learning Flow through Integrative Technologies, ICCE 2007 English Eduwiki is a version of Wiki adapted to special educational needs that aims to support collaborative lesson preparation. The paper describes the workflow and functions developed for Eduwiki. The framework of Eduwiki and its applications are demonstrated in detail, focusing on collaborative authoring, history-version comparison, tags, modification reasons and other mechanisms. These mechanisms are effective in monitoring and recording the lesson-preparation process. Teachers' evaluations are an important reference standard for further improving the availability of Eduwiki. 0 0
Collectivism vs. individualism in a wiki world: Librarians respond to Jaron Lanier's essay "Digital Maoism: The Hazards of the New Online Collectivism" M Tumlin
SR Harris
H Buchanan
K Schmidt
K Johnson
SERIALS REVIEW Jaron Lanier's essay "Digital Maoism: The Hazards of the New Online Collectivism" is a self-described rant on the dangers of the hive mentality in suppressing individual human intelligence, as demonstrated in online resources such as Wikipedia and MySpace. He sees merit in collective decision-making and problem-solving if evaluation is uncontroversial, but argues that individuals are essential in providing judgment, taste, and user experiences in many situations. Lanier's essay appeared in the online progressive publication Edge and received responses from a variety of technologists, academics, and writers. In this "Balance Point" column, four academic librarians provide a library public services viewpoint in responding to Lanier's essay. 0 0
Colonising web sites of wiki pages with ultra lightweight web applications Rees M. Ajax
JavaScript
Single page application
Ultra-lightweight web application
Wiki
Wiki interchange format
XML
AusWeb 2007: 13th Australasian World Wide Web Conference English As an ultra lightweight web application, DotWikIE (Rees, 2006) showed that a single web page, loaded directly from the local machine's filestore, could support a wiki application running within a web browser. This allows the web page to carry data content which can be used and manipulated by a browser on any machine without requiring an Internet connection. The single web page contains both the application logic and the data repository for the wiki, is highly portable, and can easily be copied for backup and deployment. While DotWikIE is useful, a web-based wiki with the same functionality has the advantage of being accessible on any Internet-connected machine. DotWikIEWeb is the evolution of DotWikIE as an ultra lightweight web application that works either from the local filestore or from a web site. This paper presents the technological problems and discusses an implementation of DotWikIEWeb and its ability to become the single-page seed of a colony of associated wiki pages. DotWikIEWeb retains the benefits of single-page web applications while gaining the capability to operate on a web site. Because of their flexibility, wikis in general tend to become unstructured quickly as users grasp the freedom to populate and format each wiki component in an ad hoc way. This is seen as one of the main advantages of a wiki. The paper concludes by discussing some approaches by which wikis could retain a more regular structure for their content. © 2007. Michael Rees. 0 0
Community experience at OpenOffice.org Muller-Prove M. Interactions English The community perspective on open source projects, with OpenOffice.org as an example, is discussed. OpenOffice.org is the leading open source office suite, with about 85 million downloaded copies worldwide. It is available for all major platforms and has been localized for almost 100 languages. The project was initiated by Sun Microsystems by open-sourcing the StarOffice code base. Sun also contributes a team of dedicated user-experience engineers to the project to support the development process and to improve the usefulness and usability of OpenOffice.org. Wikipedia is the prominent example of collaborative systems where everyone is invited to contribute and edit articles. Open source projects can also be seen as a kind of social network, with the open source product as the connecting social object. The Web presence is a complex network of websites and databases that has a significant impact on the perceived image of the open source project. 0 0
Community tools for repurposing learning objects Chao Wang
Dickens K.
Davis H.C.
Gary Wills
Community of practice
Contextual metadata
Learning objects
Repurposing
Wiki
Lecture Notes in Computer Science English A critical success factor for the reuse of learning objects is the ease by which they may be repurposed in order to enable reusability in a different teaching context from which they were originally designed. The current generation of tools for creating, storing, describing and locating learning objects are best suited for users with technical expertise. Such tools are an obstacle to teachers who might wish to perform alterations to learning objects in order to make them suitable for their context. In this paper we describe a simple set of tools to enable practitioners to adapt the content of existing learning objects and to store and modify metadata describing the intended teaching context of these learning objects. We are deploying and evaluating these tools within the UK language teaching community. 0 0
Community, Consensus, Coercion, Control: CS*W or How Policy Mediates Mass Participation Travis Kriplean
Ivan Beschastnikh
David W. McDonald
Scott A. Golder
Wikipedia
Collaborative authoring
Community
Policy
Power
GROUP 2007 -- ACM Conference on Supporting Group Work. When large groups cooperate, issues of conflict and control surface because of differences in perspective. Managing such diverse views is a persistent problem in cooperative group work. The Wikipedian community has responded with an evolving body of policies that provide shared principles, processes, and strategies for collaboration. We employ a grounded approach to study a sample of active talk pages and examine how policies are employed as contributors work towards consensus. Although policies help build a stronger community, we find that ambiguities in policies give rise to power plays. This lens demonstrates that support for mass collaboration must take into account policy and power. 0 5
Comparing Wikipedia and German Wordnet by Evaluating Semantic Relatedness on Multiple Datasets. Torsten Zesch
Iryna Gurevych
Max Muhlhauser
Wordnet Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT) We evaluate semantic relatedness measures on different German datasets showing that their performance depends on: (i) the definition of relatedness that was underlying the construction of the evaluation dataset, and (ii) the knowledge source used for computing semantic relatedness. We analyze how the underlying knowledge source influences the performance of a measure. Finally, we investigate the combination of wordnets and Wikipedia to improve the performance of semantic relatedness measures. 0 0
Computational Trust in Web Content Quality: A Comparative Evalutation on the Wikipedia Project Pierpaolo Dondio
Stephen Barret
Computational trust
Wikipedia
Content-quality
Informatica English The problem of identifying useful and trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. It is not hard to predict that in the future the direct reliance on this material will expand and the problem of evaluating the trustworthiness of this kind of content will become crucial. The Wikipedia project represents the most successful and discussed example of such online resources. In this paper we present a method to predict the trustworthiness of Wikipedia articles based on computational trust techniques and a deep domain-specific analysis. Our assumption is that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia – i.e. content quality in a collaborative environment – mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. We present a series of experiments. The first is a case study of a specific category of articles; the second is an evaluation of 8,000 articles representing 65% of the overall Wikipedia editing activity. We report encouraging results on the automated evaluation of Wikipedia content using our domain-specific expertise method. Finally, in order to appraise the value added by using domain-specific expertise, we compare our results with those obtained with a pre-processed cluster analysis, where complex expertise is mostly replaced by training and automatic classification of common features. 0 0
Computing Semantic Relatedness using Wikipedia Link Structure David N. Milne Wikipedia
Data mining
Semantic relatedness
Proc. of NZCSRSC, 2007 This paper describes a new technique for obtaining measures of semantic relatedness. Like other recent approaches, it uses Wikipedia to provide a vast amount of structured world knowledge about the terms of interest. Our system, the Wikipedia Link Vector Model or WLVM, is unique in that it does so using only the hyperlink structure of Wikipedia rather than its full textual content. To evaluate the algorithm we use a large, widely used test set of manually defined measures of semantic relatedness as our benchmark. This allows direct comparison of our system with other similar techniques. 0 2
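The link-based idea behind WLVM can be illustrated with a toy sketch. The articles, link lists, and counts below are invented, and the log inverse-inlink weighting is one plausible reading of the approach rather than the paper's exact formula: each article becomes a vector over its outgoing links, and relatedness is the cosine of two such vectors.

```python
import math

# Toy link graph standing in for Wikipedia's hyperlink structure.
LINKS = {
    "Cat": ["Mammal", "Pet"],
    "Dog": ["Mammal", "Pet"],
    "Car": ["Engine", "Wheel"],
}
N = 1000                                    # pretend total article count
INLINKS = {"Mammal": 50, "Pet": 30, "Engine": 40, "Wheel": 20}

def link_vector(article):
    """Vector over out-links, each weighted log(N / inlinks) so that
    links to obscure articles count more than links to common ones."""
    return {t: math.log(N / INLINKS[t]) for t in LINKS[article]}

def relatedness(a, b):
    """Cosine similarity of the two articles' link vectors."""
    u, v = link_vector(a), link_vector(b)
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

print(relatedness("Cat", "Dog") > relatedness("Cat", "Car"))  # True
```

Articles that link to the same targets score high regardless of their article text, which is exactly the property that lets the method ignore textual content.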
Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis. Gabrilovich
Evgeniy
Shaul Markovitch
Semantics
Text-mining
Wikipedia
Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India, January 2007. Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users. 0 0
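The core of ESA, texts as weighted vectors of Wikipedia concepts compared by cosine, can be sketched as follows. The tiny `INDEX` is hypothetical; the real model builds word-to-concept weights (TF-IDF scores) from the full Wikipedia corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse concept vectors (dicts)."""
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

def esa_vector(text, concept_index):
    """Sum each word's concept weights into one weighted concept vector."""
    vec = {}
    for word in text.lower().split():
        for concept, w in concept_index.get(word, {}).items():
            vec[concept] = vec.get(concept, 0.0) + w
    return vec

# Hypothetical word -> {Wikipedia concept: weight} index; ESA derives
# such weights from TF-IDF over Wikipedia articles.
INDEX = {
    "bank": {"Bank (finance)": 0.9, "River bank": 0.4},
    "money": {"Bank (finance)": 0.8, "Currency": 0.9},
    "river": {"River bank": 0.9, "River": 1.0},
}

a = esa_vector("bank money", INDEX)
b = esa_vector("river bank", INDEX)
c = esa_vector("river", INDEX)
# "bank money" overlaps "river bank" through the shared concepts,
# but barely overlaps "river" alone.
print(cosine(a, b) > cosine(a, c))  # True
```

Because the dimensions are named Wikipedia articles, a relatedness score can be explained by listing the concepts that contributed most to the dot product.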
Conceptual enhancement via textual plurality: A pedagogical wiki bow towards collaborative structuration Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English 0 0
Concordance-based entity-oriented search Bautin M.
Skiena S.
Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, WI 2007 English We consider the problem of finding the relevant named entities in response to a search query over a given text corpus. Entity search can readily be used to augment conventional web search engines for a variety of applications. To assess the significance of entity search, we analyzed the AOL dataset of 36 million web search queries with respect to two different sets of entities: namely (a) 2.3 million distinct entities extracted from a news text corpus and (b) 2.9 million Wikipedia article titles. The results clearly indicate that search engines should be aware of entities: under various criteria of matching, between 18% and 39% of all web search queries can be recognized as specifically searching for entities, while 73% to 87% of all queries contain entities. Our entity search engine creates a concordance document for each entity, consisting of all the sentences in the corpus containing that entity. We then index and search these documents using open-source search software. This gives a ranked list of entities as the result of search. Visit http://www.textmap.com for a demonstration of our entity search engine over a large news corpus. We evaluate our system by comparing the results of each query to the list of entities that have the highest statistical juxtaposition scores with the queried entity. The juxtaposition score is a measure of how strongly two entities are related, in terms of a probabilistic upper bound. The results show excellent performance, particularly over well-characterized classes of entities such as people. 0 0
Concordia University at the TREC 2007 QA track Razmara M.
Fee A.
Kosseim L.
NIST Special Publication English In this paper, we describe the system we used for the TREC-2007 Question Answering Track. For factoid questions our redundancy-based approach using a modified version of ARANEA was enhanced further. Our list question answerer uses a clustering method to group the candidate answers that co-occur together. It also uses the target and question keywords as spies to pinpoint the right cluster of candidates. To answer other types of questions, our system extracts from Wikipedia articles a list of interest-marking terms and uses them to extract and score sentences from the AQUAINT-2 and BLOG document collections using various interest-marking triggers. 0 0
Connecting Wikis and natural language processing systems René Witte
Gitzinger T.
Self-aware Wiki System
Wiki/NLP integration
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English We investigate the integration of Wiki systems with automated natural language processing (NLP) techniques. The vision is that of a "self-aware" Wiki system reading, understanding, transforming, and writing its own content, as well as supporting its users in information analysis and content development. We provide a number of practical application examples, including index generation, question answering, and automatic summarization, which demonstrate the practicability and usefulness of this idea. A system architecture providing the integration is presented, as well as first results from an initial implementation based on the GATE framework for NLP and the MediaWiki system. Copyright 0 1
Constructing an authentic learning community through Wiki for advanced group collaboration and knowledge sharing Proceedings - The 7th IEEE International Conference on Advanced Learning Technologies, ICALT 2007 English 0 0
Constructing text: Wiki as a toolkit for (collaborative?) learning Andrea Forte
Amy Bruckman
Collaboration
Constructionism
Education
Knowledge building
Open content
Wiki
WikiSym English Writing a book from which others can learn is itself a powerful learning experience. Based on this proposition, we have launched Science Online, a wiki to support learning in high school science classrooms through the collaborative production of an online science resource. Our approach to designing educational uses of technology is based on an approach to education called constructionism, which advocates learning by working on personally meaningful projects. Our research examines the ways that constructionism connects to collective models of knowledge production and learning such as Knowledge Building. In this paper, we explore ways that collaboration using wiki tools fits into the constructionist approach, we examine learning goals for youth growing up in a read-write culture, and we discuss preliminary findings in an ongoing year-long study of Science Online in the classroom. Despite the radically open collaboration afforded by wiki, we observe that many factors conspired to stymie collaborative writing on the site. We expected to find cultural barriers to wiki adoption in schools. Unexpectedly, we are also finding that the design of the wiki tool itself contributed barriers to collaborative writing in the classroom. 0 2
Construction of a knowledge management framework based on Web 2.0 Liyong W.
Chengling Z.
Blogs
Knowledge management
RSS
Web 2.0
Wiki
2007 International Conference on Wireless Communications, Networking and Mobile Computing, WiCOM 2007 English ICTs have a profound impact on the mode of organizational learning and offer a number of advantages and opportunities. But they also bring about many potential problems in the field of knowledge management. Web 2.0 is a term coined by Tim O'Reilly. It redefines the interactions between the Internet and users and brings about a new Internet ecosystem. In this paper, we first introduce the main components and technologies of Web 2.0, then propose a framework that incorporates Web 2.0 technologies into the field of KM. We also propose a knowledge management service strategy based on Web 2.0. With the help of the framework and the strategy, the potential problems can be solved to a great extent. 0 0
Contributions of the Web 2.0 to collaborative work around Learning Objects Del Moral M.E.
Cernea D.A.
Martinez L.V.
Collaborative work
Folksonomy
Learning Objects
Web 2.0
Wiki
CEUR Workshop Proceedings Spanish The new framework of the Web 2.0 and the concepts associated with social software (Owen, Grant, Sayers and Facer, 2006) brought new added value to the formative practices that can be carried out in Virtual Learning Environments, allowing users to develop a great diversity of collaborative projects based on novel social tools like wikis (Baggetun, 2006) and folksonomies. This new scenario is based on the postulates of social constructivism (Duffy and Cunningham, 1996) and promotes a qualitative change that defines learning as a social process, migrating from the e-Learning to the c-Learning paradigm, and introduces alternative working forms which underline the social dimension of knowledge. It also enables the development of virtual communities and favors interactive processes and concurrent problem resolution, becoming collaborative social spaces where the use of Learning Objects could contribute to creating meaningful learning contexts. 0 0
Cooperation and Quality in Wikipedia Dennis M. Wilkinson
Bernardo A. Huberman
WikiSym English The rise of the Internet has enabled collaboration and cooperation on an unprecedentedly large scale. The online encyclopedia Wikipedia, which presently comprises 7.2 million articles created by 7.04 million distinct editors, provides a consummate example. We examined all 50 million edits made to the 1.5 million English-language Wikipedia articles and found that the high-quality articles are distinguished by a marked increase in number of edits, number of editors, and intensity of cooperative behavior, as compared to other articles of similar visibility and age. This is significant because in other domains, fruitful cooperation has proven to be difficult to sustain as the size of the collaboration increases. Furthermore, in spite of the vagaries of human behavior, we show that Wikipedia articles accrete edits according to a simple stochastic mechanism in which edits beget edits. Topics of high interest or relevance are thus naturally brought to the forefront of quality. 0 2
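The "edits beget edits" accretion mechanism described here is essentially preferential attachment, which a short simulation can illustrate. The article count, edit count, and random seed below are arbitrary choices for the sketch, not parameters taken from the paper.

```python
import random

def simulate(n_articles=100, n_edits=10_000, seed=42):
    """Assign each new edit to an article with probability proportional
    to the article's current edit count (preferential attachment)."""
    random.seed(seed)
    counts = [1] * n_articles   # each article starts with its creation edit
    for _ in range(n_edits):
        r = random.uniform(0, sum(counts))
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c
            if r <= acc:
                counts[i] += 1
                break
    return sorted(counts, reverse=True)

counts = simulate()
# A heavy tail emerges: a handful of articles absorb a large share of
# all edits, echoing the paper's observation that high-interest topics
# are naturally brought to the forefront.
print(counts[0], counts[-1])  # most- vs least-edited article
```

Running the simulation with different seeds changes which articles dominate but not the heavy-tailed shape of the distribution.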
Cooperative repositories for formal proofs a wiki-based solution Lecture Notes in Computer Science English 0 0
Cracking software reuse Spinellis D. Collaboration
Packages
Reuse
Shared libraries
Wikipedia
IEEE Software English The Unix system and its pipelines are a model of software reuse. Although many subsequent developments weren't similarly successful, by looking at Wikipedia and its MediaWiki engine, we find many levels of successful reuse. It seems that software repositories, package-management systems, shared-library technologies, and language platforms have increased reuse's return on investment. The Internet has also catalyzed software reuse by bringing both developer groups and development efforts closer to their users. 0 0
Creating a Knowledge Base from a Collaboratively Generated Encyclopedia Simone Paolo Ponzetto Proceedings of the NAACL-HLT 2007 Doctoral Consortium, pp 9-12, Rochester, NY, April 2007 We present our work on using Wikipedia as a knowledge source for Natural Language Processing. We first describe our previous work on computing semantic relatedness from Wikipedia, and its application to a machine learning based coreference resolution system. Our results suggest that Wikipedia represents a semantic resource to be treasured for NLP applications, and accordingly present the work directions to be explored in the future. 0 0
Creating and managing ontology data on the web: A semantic wiki approach Chao Wang
Lu J.
Guangquan Zhang
Xianyi Zeng
Lecture Notes in Computer Science English The creation of ontology data on web sites and proper management of them would help the growth of the semantic web. This paper proposes a semantic wiki approach to tackle this issue. Desirable functions that a semantic wiki approach should implement to offer a better solution to this issue are discussed. Along with that, some key problems such as usability, data reliability and data quality are identified and analyzed. Based on that, a system framework is presented to show how such functions are designed. These functions are further explained along with the description of our implemented prototype system. By addressing the identified key problems, our semantic wiki approach is expected to be able to create and manage web ontology data more effectively. 0 0
Creating and managing ontology data on the web: a semantic wiki approach Chao Wang
Jie Lu
Guangquan Zhang
Xianyi Zeng
WISE English 0 0
Creating, Destroying, and Restoring Value in Wikipedia Reid Priedhorsky
Jilin Chen
Shyong (Tony) K. Lam
Katherine Panciera
Loren Terveen
John Riedl
Wikipedia Department of Computer Science and Engineering University of Minnesota Wikipedia's brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that an overwhelming majority of the viewed words were written by frequent editors and that this majority is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings. 0 12
Cultural diversity and participatory evolution in IS: Global vs. local issues Deyrich M.-C.
Ess C.
Community networks
Culture
Hall
Hofstede
Participatory approaches
ACIS 2007 Proceedings - 18th Australasian Conference on Information Systems English Culture is a core issue in communication and should thus have considerable weight in IS as communication technologies. We review research documenting the importance of diverse cultural elements - including those identified by Hall and Hofstede - to IS design and usage if these are to be successful. An analysis of emerging participatory approaches facilitated by ICTs, including recent research on community networks and on how users from diverse languages and cultures participate differently in Wikipedia, further highlights specific aspects of culture and language essential to successful IS design and implementation. We argue that participatory approaches and user-centric technologies appear to play increasingly important roles in diverse cultures and societies: this suggests IS research should take advantage of both extant and emerging frameworks for analyzing culture, technology, and communication - especially if IS is to continue to play a key role in the cultural (re)evolution that ICTs facilitate. 0 0
DBpedia: A nucleus for a Web of open data Sören Auer
Christian Bizer
Georgi Kobilarov
Jens Lehmann
Richard Cyganiak
Zachary Ives
Lecture Notes in Computer Science English DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. 0 2
DBpedia: A nucleus for a web of open data Sören Auer
Christian Bizer
Jens Lehmann
Georgi Kobilarov
Richard Cyganiak
Zachary Ives
ISWC English 0 3
Democratising organisational knowledge: The potential of the corporate wiki Helen Hasan
Pfaff C.
Activity Theory
Democratisation of knowledge
Knowledge worker
Wiki
ICIS 2007 Proceedings - Twenty Eighth International Conference on Information Systems English Attempts to impose knowledge management often ignore the vast organisational resource of work-related tacit knowledge possessed by knowledge workers. Our research reveals that activities supported by social technologies such as Wikis may provide a more appropriate capability for tacit knowledge management where a network-centric focus is adopted. A corporate Wiki has the potential to engage the collective responsibilities of knowledge workers to transfer their collective experience and skills into a dynamic shared knowledge repository. However, the traditional organisational culture can be reluctant to allow this power shift that surrenders the monopolistic control of the few over the creation and management of organisational knowledge. In order to frame the theoretical perspectives of these new processes of creation, accumulation, and maintenance of tacit knowledge in organisations, this paper uses Activity Theory to analyse the Wiki as a tool that mediates employee-based knowledge management activities leading to the democratisation of organisational knowledge. 0 0
Democratizing scientific vulgarization. The balance between cooperation and conflict in french Wikipedia Nicolas Auray
Céline Poudat
Pascal Pons
Observatorio (OBS*), No. 3 (2007) The free online encyclopedia project Wikipedia has become in less than six years one of the most prominent commons-based peer production examples. The present study investigates the patterns of involvement and the patterns of cooperation within the French version of the encyclopaedia. In that respect, we consider different groups of users, highlighting the opposition between passerby contributors and core members, and we attempt to evaluate for each class of contributors the main motivations for their participation in the project. Then, we study the qualitative and quantitative patterns of co-writing and the correlation between size and quality of the production process. 0 0
Deriving a Large Scale Taxonomy from Wikipedia Simone Paolo Ponzetto
Michael Strube
AAAI'07: Proceedings of the 22nd national conference on Artificial intelligence English We take the category system in Wikipedia as a conceptual network. We label the semantic relations between categories using methods based on connectivity in the network and lexicosyntactic matching. As a result we are able to derive a large scale taxonomy containing a large amount of subsumption, i.e. isa, relations. We evaluate the quality of the created resource by comparing it with ResearchCyc, one of the largest manually annotated ontologies, as well as computing semantic similarity between words in benchmarking datasets. 2 0
Deriving a large scale taxonomy from Wikipedia Ponzetto S.P.
Michael Strube
Proceedings of the National Conference on Artificial Intelligence English We take the category system in Wikipedia as a conceptual network. We label the semantic relations between categories using methods based on connectivity in the network and lexico-syntactic matching. As a result we are able to derive a large scale taxonomy containing a large amount of subsumption, i.e. isa, relations. We evaluate the quality of the created resource by comparing it with ResearchCyc, one of the largest manually annotated ontologies, as well as computing semantic similarity between words in benchmarking datasets. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 0
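The lexico-syntactic matching described in the two entries above can be illustrated with a toy head-matching heuristic (the category names and the `head` helper below are invented for illustration, not the authors' code): a category link is labeled as isa when the lexical head of the subcategory name matches that of the supercategory.

```python
# Toy sketch of head matching for labeling Wikipedia category links as
# isa (subsumption) relations. All names here are illustrative only.

def head(category):
    """Naive lexical head: the last word before a prepositional modifier."""
    words = category.lower().split()
    for stop in ("in", "of", "by", "from"):
        if stop in words:
            words = words[:words.index(stop)]
    return words[-1] if words else ""

def is_isa(subcat, supercat):
    """Label the link subcat -> supercat as isa when the heads match."""
    return head(subcat) == head(supercat)

is_isa("Capitals in Europe", "Capitals")  # heads match ("capitals") -> isa
is_isa("Crime films", "Crime")           # heads differ ("films" vs "crime") -> not isa
```

A real system would use a proper parser for head extraction and combine this with the connectivity-based methods the abstracts mention.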
Design of a Wiki-based collaborative working strategy within the context of legal sciences European Journal of Legal Education English 0 0
Determining factors behind the PageRank log-log plot Yana Volkovich
Litvak N.
Debora Donato
PageRank
Power laws
Ranking algorithms
Stochastic equations
Web graph
Wikipedia
Lecture Notes in Computer Science English We study the relation between PageRank and other parameters of information networks such as in-degree, out-degree, and the fraction of dangling nodes. We model this relation through a stochastic equation inspired by the original definition of PageRank. Further, we use the theory of regular variation to prove that PageRank and in-degree follow power laws with the same exponent. The difference between these two power laws is in a multiplicative constant, which depends mainly on the fraction of dangling nodes, average in-degree, the power law exponent, and the damping factor. The out-degree distribution has a minor effect, which we explicitly quantify. Finally, we propose a ranking scheme which does not depend on out-degrees. 0 0
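The roles of the damping factor and of dangling nodes that this abstract discusses can be seen in a plain power-iteration PageRank (a generic textbook sketch on a made-up four-node graph, not the paper's stochastic model):

```python
# Power-iteration PageRank in which the mass of dangling nodes (no
# out-links) is redistributed uniformly. The damping factor and the
# fraction of dangling nodes are exactly the quantities the paper
# relates to the multiplicative constant of the power law. Toy graph only.

def pagerank(links, damping=0.85, iters=100):
    """links: dict node -> list of out-neighbours (empty list = dangling)."""
    nodes = list(links)
    n = len(nodes)
    rank = dict.fromkeys(nodes, 1.0 / n)
    for _ in range(iters):
        dangling = sum(rank[v] for v in nodes if not links[v])
        # teleportation term plus the uniformly spread dangling mass
        new = dict.fromkeys(nodes, (1.0 - damping) / n + damping * dangling / n)
        for v, outs in links.items():
            for w in outs:
                new[w] += damping * rank[v] / len(outs)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": []}  # "d" is dangling
pr = pagerank(graph)
```

On a real web-scale graph one would verify the paper's claim by comparing the tail exponents of the in-degree and PageRank distributions.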
Discovering unknown connections - The DBpedia relationship finder Jens Lehmann
Schüppel J.
Sören Auer
The Social Semantic Web 2007 - Proceedings of the 1st Conference on Social Semantic Web, CSSW 2007 English The Relationship Finder is a tool for exploring connections between objects in a Semantic Web knowledge base. It offers a new way to get insights about elements in an ontology, in particular for large amounts of instance data. For this reason, we applied the idea to the DBpedia data set, which contains an enormous amount of knowledge extracted from Wikipedia. We describe the workings of the Relationship Finder algorithm and present some interesting statistical discoveries about DBpedia and Wikipedia. 0 0
DistriWiki: A Distributed Peer-to-Peer Wiki Joseph C. Morris Collaborative publishing
Peer-to-peer networks
Wiki
WikiSym English 0 0
Diversity in Excellence Fostering Programs: The Case of the Informatics Olympiad Ornit Sagy
Orit Hazzan
Journal of Computers in Mathematics and Science Teaching 0 0
Do As I Do: Authorial Leadership in Wikipedia Joseph M. Reagle WikiSym English 0 0
Do Wandering Albatrosses Care about Math? John Travis Science Repudiating a decade-old study of sea birds, a new report questions a popular model of how animals--as well as fishing boats and people--search for food. 0 0
Do as I do: authorial leadership in wikipedia Reagle
Joseph M.
Wikipedia
Authorial
Benevolent dictator
Leadership
WikiSym '07: Proceedings of the 2007 international symposium on Wikis In seemingly egalitarian collaborative on-line communities, like Wikipedia, there is often a paradoxical, or perhaps merely playful, use of the title "Benevolent Dictator" for leaders. I explore discourse around the use of this title so as to address how leadership works in open content communities. I first review existing literature on "emergent leadership" and then relate excerpts from community discourse on how leadership is understood, performed, and discussed by Wikipedians. I conclude by integrating concepts from existing literature and my own findings into a theory of "authorial" leadership. 0 0
Does it matter who contributes: a study on featured articles in the German Wikipedia Klaus Stein
Claudia Hess
Wikipedia
Collaborative working
Measures of quality and reputation
Statistical analysis of Wikipedia
Wiki
ACM Conference on Hypertext and Hypermedia English The considerably high quality of Wikipedia articles is often attributed to the large number of users who contribute to Wikipedia's encyclopedia articles, who watch articles and correct errors immediately. In this paper, we are in particular interested in a certain type of Wikipedia articles, namely, the featured articles - articles marked by a community's vote as being of outstanding quality. The German Wikipedia has the nice property that it has two types of featured articles: excellent and worth reading. We explore on the German Wikipedia whether only the mere number of contributors makes the difference or whether the high quality of featured articles results from having experienced authors contributing with a reputation for high quality contributions. Our results indicate that it does matter who contributes. 0 0
Dr. Seuss's Sneetches W. Locander
D. Luechauer
Marketing Management The first Dr. Seuss book to which a sales and marketing executive might turn for business lessons in this era of diversity initiatives is The Sneetches and Other Stories. A contributor to online encyclopedia Wikipedia suggests that the story is an obvious parable for the cycle of fashion and how snobbery and insecurity drive consumerism to consumers' own detriment. Although it has a powerful message to those in marketing, the leadership lesson in this story relates to the "divisiveness" of creating divisions (formal or informal) between people. Perhaps the most deleterious Sneetch-like division experienced in organizations is the artificial distinction between the two kindred functions of marketing and sales. 0 0
Dynamic link service 2.0: using Wikipedia as a linkbase Patrick A. S. Sinclair
Kirk Martinez
Paul H. Lewis
Dynamic link service
Wikipedia
ACM Conference on Hypertext and Hypermedia English This paper describes how a Web 2.0 mashup approach, reusing technologies and services freely available on the web, has enabled the development of a dynamic link service system that uses Wikipedia as its linkbase. 0 0
ESTER: Efficient search on text, entities, and relations Holger Bast
Chitea A.
Fabian Suchanek
Ingmar Weber
Interactive
Ontology
Proactive
Semantic search
Wikipedia
Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'07 English We present ESTER, a modular and highly efficient system for combined full-text and ontology search. ESTER builds on a query engine that supports two basic operations: prefix search and join. Both of these can be implemented very efficiently with a compact index, yet in combination provide powerful querying capabilities. We show how ESTER can answer basic SPARQL graph-pattern queries on the ontology by reducing them to a small number of these two basic operations. ESTER further supports a natural blend of such semantic queries with ordinary full-text queries. Moreover, the prefix search operation allows for a fully interactive and proactive user interface, which after every keystroke suggests to the user possible semantic interpretations of his or her query, and speculatively executes the most likely of these interpretations. As a proof of concept, we applied ESTER to the English Wikipedia, which contains about 3 million documents, combined with the recent YAGO ontology, which contains about 2.5 million facts. For a variety of complex queries, ESTER achieves worst-case query processing times of a fraction of a second, on a single machine, with an index size of about 4 GB. Copyright 2007 ACM. 0 0
EXTIRP: Baseline Retrieval from Wikipedia Miro Lehtonen
Antoine Doucet
Comparative Evaluation of XML Information Retrieval Systems English The Wikipedia XML documents are considered an interesting challenge to any XML retrieval system that is capable of indexing and retrieving XML without prior knowledge of the structure. Although the structure of the Wikipedia XML documents is highly irregular and thus unpredictable, EXTIRP manages to handle all the well-formed XML documents without problems. Whether the high flexibility of EXTIRP also implies high performance concerning the quality of IR has so far been a question without definite answers. The initial results do not confirm any positive answers, but instead, they tempt us to define some requirements for the XML documents that EXTIRP is expected to index. The most interesting question stemming from our results is about the line between high-quality XML markup which aids accurate IR and noisy “XML spam” that misleads flexible XML search engines. 0 0
EXTIRP: Baseline retrieval from Wikipedia Miro Lehtonen
Antoine Doucet
Lecture Notes in Computer Science English The Wikipedia XML documents are considered an interesting challenge to any XML retrieval system that is capable of indexing and retrieving XML without prior knowledge of the structure. Although the structure of the Wikipedia XML documents is highly irregular and thus unpredictable, EXTIRP manages to handle all the well-formed XML documents without problems. Whether the high flexibility of EXTIRP also implies high performance concerning the quality of IR has so far been a question without definite answers. The initial results do not confirm any positive answers, but instead, they tempt us to define some requirements for the XML documents that EXTIRP is expected to index. The most interesting question stemming from our results is about the line between high-quality XML markup which aids accurate IR and noisy "XML spam" that misleads flexible XML search engines. 0 0
EachWiki: Suggest to be an easy-to-edit wiki interface for everyone Haisu Zhang
Linyun Fu
Haofen Wang
Haiping Zhu
Yafang Wang
Yiqin Yu
CEUR Workshop Proceedings English In this paper, we present EachWiki, an extension of Semantic MediaWiki characterized by an intelligent suggestion mechanism. It aims to facilitate wiki authoring by recommending the following elements: links, categories, and properties. We exploit the semantics of Wikipedia data and leverage the collective wisdom of web users to provide high quality annotation suggestions. The proposed mechanism not only improves the usability of Semantic MediaWiki but also speeds up its converging use of terminology. The suggestions are applied to relieve the burden of wiki authoring and attract more inexperienced contributors, thus making Semantic MediaWiki an even better Semantic Web prototype and data source. 0 0
Efficient interactive query expansion with complete Search Holger Bast
Debapriyo Majumdar
Ingmar Weber
Index building
Interactive
Query expansion
Synsets
Wikipedia
Wordnet
International Conference on Information and Knowledge Management, Proceedings English We present an efficient realization of the following interactive search engine feature: as the user is typing the query, words that are related to the last query word and that would lead to good hits are suggested, as well as selected such hits. The realization has three parts: (i) building clusters of related terms, (ii) adding this information as artificial words to the index such that (iii) the described feature reduces to an instance of prefix search and completion. An efficient solution for the latter is provided by the CompleteSearch engine, with which we have integrated the proposed feature. For building the clusters of related terms we propose a variant of latent semantic indexing that, unlike standard approaches, is completely transparent to the user. By experiments on two large test-collections, we demonstrate that the feature is provided at only a slight increase in query processing time and index size. Copyright 2007 ACM. 0 0
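The prefix-search-and-completion primitive that the feature above reduces to can be sketched with a sorted vocabulary and binary search (a minimal stand-in for the CompleteSearch index, using an invented toy vocabulary):

```python
# Prefix completion over a sorted term list: two binary searches bound
# the contiguous run of terms sharing the prefix. A real engine stores
# postings alongside the terms; this sketch returns only the terms.

import bisect

def completions(sorted_terms, prefix):
    lo = bisect.bisect_left(sorted_terms, prefix)
    # "\uffff" sorts after any realistic term character, closing the range
    hi = bisect.bisect_left(sorted_terms, prefix + "\uffff")
    return sorted_terms[lo:hi]

vocab = sorted(["wiki", "wikipedia", "wikisym", "wordnet", "web"])
completions(vocab, "wiki")  # -> ['wiki', 'wikipedia', 'wikisym']
```

Adding cluster labels of related terms as artificial words to such an index, as the abstract describes, makes query expansion itself just another prefix search.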
Efficient time-travel on versioned text collections Berberich K.
Bedathur S.
Gerhard Weikum
Datenbanksysteme in Business, Technologie und Web, BTW 2007 - 12th Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), Proceedings The availability of versioned text collections such as the Internet Archive opens up opportunities for time-aware exploration of their contents. In this paper, we propose time-travel retrieval and ranking that extends traditional keyword queries with a temporal context in which the query should be evaluated. More precisely, the query is evaluated over all states of the collection that existed during the temporal context. In order to support these queries, we make key contributions in (i) defining extensions to well-known relevance models that take into account the temporal context of the query and the version history of documents, (ii) designing an immortal index over the full versioned text collection that avoids a blowup in index size, and (iii) making the popular NRA algorithm for top-k query processing aware of the temporal context. We present preliminary experimental analysis over the English Wikipedia revision history showing that the proposed techniques are both effective and efficient. 0 0
Emergence of learning in computer-supported, large-scale collective dynamics: A research agenda Kapur M.
Hung D.
Jacobson M.
Voiklis J.
Kinzer C.K.
Victor C.D.-T.
Computer-Supported Collaborative Learning Conference, CSCL English Seen through the lens of complexity theory, past CSCL research may largely be characterized as small-scale (i.e., small-group) collective dynamics. While this research tradition is substantive and meaningful in its own right, we propose a line of inquiry that seeks to understand computer-supported, large-scale collective dynamics: how large groups of interacting people leverage technology to create emergent organizations (knowledge, structures, norms, values, etc.) at the collective level that are not reducible to any individual, e.g., Wikipedia, online communities etc. How does learning emerge in such large-scale collectives? Understanding the interactional dynamics of large-scale collectives is a critical and an open research question especially in an increasingly participatory, inter-connected, media-convergent culture of today. Recent CSCL research has alluded to this; we, however, develop the case further in terms of what it means for how one conceives learning, as well as methodologies for seeking understandings of how learning emerges in these large-scale networks. In the final analysis, we leverage complexity theory to advance computational agent-based models (ABMs) as part of an integrated, iteratively-validated phenomenological-ABM inquiry cycle to understand emergent phenomenon from the "bottom up". 0 0
Enabling Customer-Centricity Using Wikis and the Wiki Way Christian Wagner
Ann Majchrzak
Journal of Management Information Systems English Customer-centric business makes the needs and resources of individual customers the starting point for planning new products and services or improving existing ones. While customer-centricity has received recent attention in the marketing literature, technologies to enable customer-centricity have been largely ignored in research and theory development. In this paper, we describe one enabling technology—wikis. Wiki is a Web-based collaboration technology designed to allow anyone to update any information posted to a wiki-based Web site. As such, wikis can be used to enable customers to not only access but also change the organization's Web presence, creating previously unheard of opportunities for joint content development and "peer production" of Web content. At the same time, such openness may make the organization vulnerable to Web site defacing, destruction of intellectual property, and general chaos. In this zone of tension—between opportunity and possible failure—an increasing number of organizations are experimenting with the use of wikis and the wiki way to engage customers. Three cases of organizations using wikis to foster customer-centricity are described, with each case representing an ever-increasing level of customer engagement. An examination of the three cases reveals six characteristics that affect customer engagement—community custodianship, goal alignment among contributors, value-adding processes, emerging layers of participation, critical mass of management and monitoring activity, and technologies in which features are matched to assumptions about how the community collaborates. Parallels between our findings and those evolving in studies of the open source software movement are drawn. 0 1
End of paper: Electronic Book Technologies White K.
Townsend S.
Blogs
E-book
E-literature
Electronic texts
Games
Hypertext
Internet
Networked book
Wiki
Collection Management English Kim White and Sarah Townsend created the End [of] Paper blog while preparing for their panel on "Electronic Book Technologies." In their blog they discussed innovations in creative-critical works; reference works; mass market publications; electronic textbooks; ephemera; and collaborative constructions. They also discussed issues of preservation, standardization, and literacy. Fittingly, their paper was presented in blog format. Exemplary texts from each category were presented. Links to many of these texts as well as the text of the presentation itself can be found on the blog website "End [of] Paper" www.endofpaper.blogspot.com. To accompany your web journey, we are including a bibliography of related electronic resources. © Copyright (c) by The Haworth Press, Inc. All rights reserved. 0 0
Engaging the YouTube Google-eyed generation: Strategies for using web 2.0 in teaching and learning Duffy P. Blogs
E-Learning
Web 2.0
Wiki
YouTube
ECEL 2007: 6th European Conference on e-Learning English YouTube, Podcasting, Blogs, Wikis and RSS are buzz words currently associated with the term Web 2.0 and represent a shifting pedagogical paradigm for the use of a new set of tools within education. The implication here is a possible shift from the basic archetypical vehicles used for (e)learning today (lecture notes, printed material, PowerPoint, websites, animation) towards a ubiquitous user-centric, user-content generated and user-guided experience. It is not sufficient to use online learning and teaching technologies simply for the delivery of content to students. A new "Learning Ecology" is present where these Web 2.0 technologies can be explored for collaborative and (co)creative purposes as well as for the critical assessment, evaluation and personalization of information. Web 2.0 technologies provide educators with many possibilities for engaging students in desirable practices such as collaborative content creation, peer assessment and motivation of students through innovative use of media. These can be used in the development of authentic learning tasks and enhance the learning experience. However in order for a new learning tool, be it print, multimedia, blog, podcast or video, to be adopted, educators must be able to conceptualize the possibilities for use within a concrete framework. This paper outlines some possible strategies for educators to incorporate the use of some of these Web 2.0 technologies into the student learning experience. 0 0
Enhancing relation extraction by eliciting selectional constraint features from Wikipedia Gang W.
Huajie Z.
Haofen W.
Yong Y.
Feature generation
Relation extraction
Selectional constraints
Lecture Notes in Computer Science English Selectional Constraints are usually checked for detecting semantic relations. Previous work usually defined the constraints manually based on handcrafted concept taxonomy, which is time-consuming and impractical for large scale relation extraction. Further, the determination of entity type (e.g. NER) based on the taxonomy cannot achieve sufficiently high accuracy. In this paper, we propose a novel approach to extracting relation instances using the features elicited from Wikipedia, a free online encyclopedia. The features are represented as selectional constraints and further employed to enhance the extraction of relations. We conduct case studies on the validation of the extracted instances for two common relations hasArtist(album, artist) and hasDirector(film, director). Substantially high extraction precision (around 0.95) and validation accuracy (near 0.90) are obtained. 0 0
Enhancing single-document summarization by combining RankNet and third-party sources Svore K.M.
Vanderwende L.
Burges C.J.C.
EMNLP-CoNLL 2007 - Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning English We present a new approach to automatic summarization based on neural nets, called NetSum. We extract a set of features from each sentence that helps identify its importance in the document. We apply novel features based on news search query logs and Wikipedia entities. Using the RankNet learning algorithm, we train a pair-based sentence ranker to score every sentence in the document and identify the most important sentences. We apply our system to documents gathered from CNN.com, where each document includes highlights and an article. Our system significantly outperforms the standard baseline in the ROUGE-1 measure on over 70% of our document set. 0 0
Enhancing traditional media services utilising lessons learnt from successful social media applications - Case studies and framework Back A.
Vainikainen S.
MySpace
Social media
Wikipedia
YouTube
Openness in Digital Publishing: Awareness, Discovery and Access - Proceedings of the 11th International Conference on Electronic Publishing, ELPUB 2007 English The paper presents a framework for describing electronic media services. The framework was created by utilising earlier models and case studies of successful social media applications. Wikipedia, YouTube and MySpace were analysed because they are among the most popular sites in the world and they highlight different aspects of social media applications. The proposed model consists of two main parts: Concept and system, and Content and user. Both of them were further divided into four subgroups. With the help of a radar view, various applications can be described and compared and their further development opportunities identified. A prototype application, StorySlotMachine, is used as a case example, where the framework is used. 0 0
Evaluating structured information retrieval and multimedia retrieval using PF/Tijah Westerveld T.
Henning Rode
Van Os R.
Djoerd Hiemstra
Ramirez G.
Mihajlovic V.
De Vries A.P.
Lecture Notes in Computer Science English We used a flexible XML retrieval system for evaluating structured document retrieval and multimedia retrieval tasks in the context of the INEX 2006 benchmarks. We investigated the differences between article and element retrieval for Wikipedia data as well as the influence of an element's context on its ranking. We found that article retrieval performed well on many tasks and that pinpointing the relevant passages inside an article may hurt more than it helps. We found that for finding images in isolation the associated text is a very good descriptor in the Wikipedia collection, but we were not very successful at identifying relevant multimedia fragments consisting of a combination of text and images. 0 0
Evaluating the Comprehensiveness of Wikipedia: The Case of Biochemistry Brendan Luyt
Wee Kwek
Ju Sim
Peng York
Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers English In recent years, the world of encyclopedia publishing has been challenged as new collaborative models of online information gathering and sharing have developed. Most notable of these is Wikipedia. Although Wikipedia has a core group of devotees, it has also attracted critical comment and concern, most notably in regard to its quality. In this article we compare the scope of Wikipedia and Encyclopedia Britannica in the subject of biochemistry using a popular first year undergraduate textbook as a benchmark for concepts that should appear in both works, if they are to be considered comprehensive in scope. 0 0
Evaluating the comprehensiveness of wikipedia: The case of biochemistry Brendan Luyt
Kwek W.T.
Sim J.W.
Peng York
Encyclopedia britannica
Evaluation
Reference sources
Wikipedia
Lecture Notes in Computer Science English [No abstract available] 0 0
Evaluating the effectiveness of TaiwanBaseballWiki Lin S.-C.
Lin Y.-S.
TaiwanBaseballWiki
Web log analysis
Wiki
Journal of Educational Media and Library Science English; Chinese This research study aims to evaluate the effectiveness of TaiwanBaseballWiki, established in 2005. First, the researchers collected data and analyzed the web logs from April 14, 2005 to August 19, 2007. A number of statistical measurements were used to analyze the data. The results show that 1. The number of page views has grown by almost 15% per month on average; 74% of page views came from the 20% most-browsed pages; 2. The number of registered users has grown by nearly 33% every month on average. It is also found that only 163 users have edited more than 10 times and accounted for 91% of all the editing. Among them, the most active 1.9% of the users did 80.8% of all the editing; 3. The pages have grown by almost 6.06% per month on average, and only 8.75% of baseball pages had been viewed over 10 times on average. Moreover, the research also adopted the web questionnaire method to understand the community users from a different perspective. The result shows that 1. The community users were mostly fans aged 24.8 on average and were young students. This result was the same as what we had previously predicted; 2. 64% of respondents were unregistered users, and among them 40% stayed unregistered because they only needed a quick search for some information; 3. 34% of them have edited website pages, and among them, 41% are active participants who have edited more than 1000 times and 45% of them started editing after using the website for one week. Finally, the suggestions for the website, as the results of the evaluation, are to promote baseball history pages, promote low-viewing pages, design a user-friendly interface, and design new website styles suitable for young users. 0 0
Everyone's a Superhero: A Cultural Theory of "Mary Sue" Fan Fiction as Fair Use Anupam Chander
Madhavi Sunder
California Law Review Fan fiction spans all genres of popular culture, from anime to literature. In every fan literature, there is the Mary Sue. According to Wikipedia, a "Mary Sue" is a fictional character who is portrayed in an idealized way and lacks noteworthy flaws, and appears in the form of a new character beamed into the story or a marginal character brought out from the shadows. "Mary Sue" is often a pejorative expression used to deride fan fiction perceived as narcissistic. In this essay Mary Sue is rehabilitated as a figure of subaltern critique and empowerment. 0 0
Evolution of Mexico and Other Single-Party States George W. Grayson
Joseph L. Klesner
Steven T. Wuhs
Francisco E. González
International Studies Review 0 0
Experimental attempts of using the Wiki-based knowledge sharing system at Syowa Station by 47th JARE Antarctic Record English 0 0
Exploit Semantic Information for Category Annotation Recommendation in Wikipedia Yang Wang
Haofen Wang
Haiping Zhu
Yong Yu
Natural Language Processing and Information Systems English Compared with plain-text resources, the ones in “semi-semantic” web sites, such as Wikipedia, contain high-level semantic information which will benefit various automatic annotation tasks on them. In this paper, we propose a “collaborative annotating” approach to automatically recommend categories for a Wikipedia article by reusing category annotations from its most similar articles and ranking these annotations by their confidence. In this approach, four typical semantic features in Wikipedia, namely incoming link, outgoing link, section heading and template item, are investigated and exploited as the representation of articles to feed the similarity calculation. The experiment results have not only proven that these semantic features improve the performance of category annotating, in comparison to the plain-text feature, but also demonstrated the strength of our approach in discovering missing annotations and proper-level ones for Wikipedia articles. 0 0
Exploit semantic information for category annotation recommendation in Wikipedia Yafang Wang
Haofen Wang
Haiping Zhu
Yiqin Yu
Collaborative annotating
Semantic features
Vector space model
Wikipedia category
Lecture Notes in Computer Science English Compared with plain-text resources, the ones in "semi-semantic" web sites, such as Wikipedia, contain high-level semantic information which will benefit various automatic annotation tasks on them. In this paper, we propose a "collaborative annotating" approach to automatically recommend categories for a Wikipedia article by reusing category annotations from its most similar articles and ranking these annotations by their confidence. In this approach, four typical semantic features in Wikipedia, namely incoming link, outgoing link, section heading and template item, are investigated and exploited as the representation of articles to feed the similarity calculation. The experiment results have not only proven that these semantic features improve the performance of category annotating, in comparison to the plain-text feature, but also demonstrated the strength of our approach in discovering missing annotations and proper-level ones for Wikipedia articles. 0 0
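The similarity-then-vote idea in the two entries above can be sketched in a few lines (the articles, features, and `recommend` helper below are invented toy data; the real system builds its vectors from incoming/outgoing links, section headings, and template items):

```python
# Toy sketch of collaborative category annotation: represent articles as
# feature bags, find the most similar annotated article by cosine
# similarity, and recommend its categories. Illustrative data only.

from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# annotated articles: feature bag -> existing category annotations
articles = {
    "Python (language)": (Counter(["programming", "software"]), ["Programming languages"]),
    "Java (language)": (Counter(["programming", "software", "sun"]), ["Programming languages"]),
    "Paris": (Counter(["france", "city"]), ["Capitals in Europe"]),
}

def recommend(features, k=1):
    """Recommend the categories of the k most similar annotated articles."""
    ranked = sorted(articles.items(),
                    key=lambda item: cosine(features, item[1][0]),
                    reverse=True)
    cats = []
    for _, (_, annotations) in ranked[:k]:
        cats.extend(annotations)
    return cats

recommend(Counter(["programming", "software"]))  # -> ['Programming languages']
```

Ranking the pooled annotations by an aggregate confidence score, as the abstracts describe, would replace the simple top-k vote here.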
Exploiting Syntactic and Semantic Information for Relation Extraction from Wikipedia D. P. T. Nguyen
Y. Matsuo
M. Ishizuka
Knowledge-extraction wikipedia IJCAI Workshop on Text-Mining & Link-Analysis (TextLink 2007), 2007. The exponential growth of Wikipedia has recently attracted the attention of a large number of researchers and practitioners. However, one of the current challenges for Wikipedia is to make the encyclopedia processable by machines. In this paper, we deal with the problem of extracting relations between entities from Wikipedia's English articles, which can straightforwardly be transformed into Semantic Web metadata. We propose a novel method to exploit syntactic and semantic information for relation extraction. We mine frequent subsequences from the path between an entity pair in the syntactic and semantic structure in order to explore key patterns reflecting the relationship between the pair. In addition, our method can utilize the nature of Wikipedia to automatically obtain training data. The preliminary results of our experiments strongly support our hypothesis that analyzing language at a higher level is better for relation extraction on Wikipedia, and show that our method is promising for text understanding. 0 0
Exploiting Wikipedia as External Knowledge for Named Entity Recognition Junichi Kazama
Kentaro Torisawa
English 0 0
Exploiting web 2.0 for all knowledge-based information retrieval Milne D.N. Data mining
Knowledge-based information retrieval
Query expansion
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English This paper describes ongoing research into obtaining and using knowledge bases to assist information retrieval. These structures are prohibitively expensive to obtain manually, yet automatic approaches have been researched for decades with limited success. This research investigates a potential shortcut: a way to provide knowledge bases automatically, without expecting computers to replace expert human indexers. Instead we aim to replace the professionals with thousands or even millions of amateurs: with the growing community of contributors who form the core of Web 2.0. Specifically we focus on Wikipedia, which represents a rich tapestry of topics and semantics and a huge investment of human effort and judgment. We show how this can be directly exploited to provide manually-defined yet inexpensive knowledge-bases that are specifically tailored to expose the topics, terminology and semantics of individual document collections. We are also concerned with how best to make these structures available to users, and aim to produce a complete knowledge-based retrieval system - both the knowledge base and the tools to apply it - that can be evaluated by how well it assists real users in performing realistic and practical information retrieval tasks. To this end we have developed Koru, a new search engine that offers concrete evidence of the effectiveness of our Web 2.0 based techniques for assisting information retrieval. 0 0
Exploiting Wikipedia as external knowledge for named entity recognition Jun'ichi Kazama
Kentaro Torisawa
EMNLP-CoNLL 2007 - Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning English We explore the use of Wikipedia as external knowledge to improve named entity recognition (NER). Our method retrieves the corresponding Wikipedia entry for each candidate word sequence and extracts a category label from the first sentence of the entry, which can be thought of as a definition part. These category labels are used as features in a CRF-based NE tagger. We demonstrate using the CoNLL 2003 dataset that the Wikipedia category labels extracted by such a simple method actually improve the accuracy of NER. 0 0
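The category-label extraction step described above (taking a label from the definition-style first sentence of a Wikipedia entry) can be approximated with a simple heuristic. The regex below is an illustrative guess at such a rule, not the paper's actual extraction method:

```python
import re

def category_label(first_sentence):
    """Extract a coarse category label from a definition-style first
    sentence, e.g. 'Mozart was an Austrian composer.' -> 'composer'.
    Illustrative heuristic: take the last token of the noun phrase
    following a copula ('is/was a/an/the ...')."""
    m = re.search(r"\b(?:is|was|are|were)\s+(?:an?|the)\s+(.+)", first_sentence)
    if not m:
        return None
    # cut the phrase at the first clause/sentence boundary
    phrase = re.split(r"[,.;]", m.group(1))[0]
    tokens = phrase.split()
    return tokens[-1].lower() if tokens else None
```

In the paper, a label extracted this way would be fed to the CRF tagger as an additional feature per candidate word sequence.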
Exploring Wikipedia and Query Log's Ability for Text Feature Representation Bing Li
Qing-Cai Chen
Daniel S. Yeung
Wing W. Y. Ng
Xiao-Long Wang
Machine Learning and Cybernetics, 2007 International Conference on The rapid growth of internet technology requires better management of web page contents. Much text mining research has been conducted, such as text categorization, information retrieval and text clustering. When machine learning methods or statistical models are applied to such a large scale of data, the first problem to solve is how to represent a text document in a form that computers can handle. Traditionally, single words are employed as features in the Vector Space Model, making up the feature space for all text documents. This single-word-based representation assumes word independence and does not consider relations between words, which may cause information loss. This paper proposes Wiki-Query segmented features for text classification, in the hope of making better use of the text information. The experimental results show that a much better F1 value is achieved than with the classical single-word-based text representation. This means that Wikipedia- and query-segmented features can better represent a text document. 0 0
Exploring wikipedia and query log's ability for text feature representation Li B.
Chen Q.-C.
Yeung D.S.
Ng W.W.Y.
Wang X.-L.
Query-log
Text feature representation
Wikipedia (Wiki)
Word-based model
Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, ICMLC 2007 English The rapid growth of internet technology requires better management of web page contents. Much text mining research has been conducted, such as text categorization, information retrieval and text clustering. When machine learning methods or statistical models are applied to such a large scale of data, the first problem to solve is how to represent a text document in a form that computers can handle. Traditionally, single words are employed as features in the Vector Space Model, making up the feature space for all text documents. This single-word-based representation assumes word independence and does not consider relations between words, which may cause information loss. This paper proposes Wiki-Query segmented features for text classification, in the hope of making better use of the text information. The experimental results show that a much better F1 value is achieved than with the classical single-word-based text representation. This means that Wikipedia- and query-segmented features can better represent a text document. 0 0
Extracting Named Entities and Relating Them over Time Based on Wikipedia A Bhole
B Fortuna
M Grobelnik
D Mladenic
Text mining
Document categorization
Information extraction
Informatica, 2007 This paper presents an approach to mining information relating people, places, organizations and events extracted from Wikipedia and linking them on a time scale. The approach consists of two phases: (1) identifying relevant articles and categorizing them as containing people, places or organizations; (2) generating a timeline by linking named entities and extracting events and their time frames. We illustrate the proposed approach on 1.7 million Wikipedia articles. 0 0
Facilitating exploratory conversations: Here and now Ann Majchrzak
Christian Wagner
Sengupta K.
Zmud R.
Online learning
Wiki
ICIS 2007 Proceedings - Twenty Eighth International Conference on Information Systems English The format of academic conferences has generally remained unchanged for decades. It has on the whole been taken for granted despite major advances in communication technologies. The panel's objective is to learn if and how computer-mediated conversations increase the audience's participation level and capability to offer, discuss, and refine exploratory comments that a speaker's paper might stimulate. To this end, we propose an experiential exercise in which the audience will use an internet-based wiki to support exploratory conversations while listening to Bob Zmud's lecture about 'Overcoming Cognitive Boundaries in Knowledge Sharing' and then discussing it. To manage the complexity and risk of the experiment, we will offer access to the wiki to the FIRST 20 registrants. (If interested, please email Dov.Teeni@case.edu.) Therefore, while the panel will be open to all ICIS conference participants, 20 of the participants will be engaged in the exploration via the internet and other participants are invited to participate orally in the face-to-face discussions. 0 0
Fact Discovery in Wikipedia Sisay F. Adafre
V. Jijkoun
Maarten de Rijke
English 0 0
Fact discovery in Wikipedia Adafre S.F.
Jijkoun V.
Maarten de Rijke
Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, WI 2007 English We address the task of extracting focused salient information items, relevant and important for a given topic, from a large encyclopedic resource. Specifically, for a given topic (a Wikipedia article) we identify snippets from other articles in Wikipedia that contain important information for the topic of the original article, without duplicates. We compare several methods for addressing the task, and find that a mixture of content-based, link-based, and layout-based features outperforms other methods, especially in combination with the use of so-called reference corpora that capture the key properties of entities of a common type. 0 0
Favorite Reference Books. Janyne Ste Marie Key Words The article highlights several medical references used in the discussion of indexing and suitable for the medical profession in the U.S. These include Taber's Cyclopedic Medical Dictionary, Dorland's Illustrated Medical Dictionary, Stedman's Medical Dictionary, Wikipedia, PubMed, Google Inc., Sigma catalogs, Merck manuals, MediLexicon, ChemNetBase, Toxnet, and the SciMed indexing Web site. The Web sites provided offer various information beyond the medical field and are accessible and easy to use. 0 0
Finding Related Pages Using Green Measures: An Illustration with Wikipedia Yann Ollivier
Pierre Senellart
PageRank
Markov chain
Green measure
Wikipedia
Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI 2007) We introduce a new method for finding nodes semantically related to a given node in a hyperlinked graph: the Green method, based on a classical Markov chain tool. It is generic, adjustment-free and easy to implement. We test it in the case of the hyperlink structure of the English version of Wikipedia, the on-line encyclopedia. We present an extensive comparative study of the performance of our method versus several other classical methods in the case of Wikipedia. The Green method is found to have both the best average results and the best robustness. 0 0
Finding experts using Wikipedia Gianluca Demartini CEUR Workshop Proceedings English When we want to find experts on the Web, we might want to search where the knowledge is created by the users. One such knowledge repository is Wikipedia. People's expertise is described in Wikipedia pages, and Wikipedia users can also be considered experts on the topics they produce content on. In this paper we propose algorithms to find experts in Wikipedia. The two different approaches are finding experts in the Wikipedia content or among the Wikipedia users. We also use semantics from WordNet and Yago in order to disambiguate expertise topics and to improve the retrieval effectiveness. In the end, we show how our methodology can be implemented in a system in order to improve expert retrieval effectiveness. 0 0
Finding related pages using Green measures: an illustration with Wikipedia Yann Ollivier
Pierre Senellart
AAAI English 0 2
Finding related pages using green measures: an illustration with Wikipedia Yann Ollivier
Pierre Senellart
Proceedings of the National Conference on Artificial Intelligence English We introduce a new method for finding nodes semantically related to a given node in a hyperlinked graph: the Green method, based on a classical Markov chain tool. It is generic, adjustment-free and easy to implement. We test it in the case of the hyperlink structure of the English version of Wikipedia, the on-line encyclopedia. We present an extensive comparative study of the performance of our method versus several other classical methods in the case of Wikipedia. The Green method is found to have both the best average results and the best robustness. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 2
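The Green measure underlying this method can be written as G = Σ_{t≥0} (P^t − 1π) for a row-stochastic transition matrix P with stationary distribution π; nodes related to i are those a walk from i visits more often than a walk started at stationarity would. The code below is a truncated-sum sketch of that quantity, not the authors' implementation, and the paper's exact construction may differ:

```python
import numpy as np

def green_measure(P, n_steps=50):
    """Approximate G = sum over t of (P^t - 1*pi) for an ergodic
    Markov chain with row-stochastic transition matrix P."""
    n = P.shape[0]
    # stationary distribution via power iteration
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):
        pi = pi @ P
    Pt = np.eye(n)
    G = np.zeros((n, n))
    for _ in range(n_steps):
        G += Pt - pi  # broadcasting subtracts pi from every row
        Pt = Pt @ P
    return G

def related(G, i):
    """Nodes ranked by G[i][j], i.e. by excess visit frequency of a
    walk from i relative to the stationary walk, excluding i itself."""
    return [int(j) for j in np.argsort(-G[i]) if j != i]
```

The truncation works because the non-unit eigenvalues of P drive the terms to zero geometrically; the paper also discusses degree-corrected variants of the ranking.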
Finding the right tool for the community: Bringing a Wiki-type editor to the world of reusable learning objects Proceedings - The 7th IEEE International Conference on Advanced Learning Technologies, ICALT 2007 English 0 0
Finding your way with CampusWiki: A location-aware wiki Schuler R.P.
Nathaniel Laws
Sameer Bajaj
Grandhi S.A.
Quentin Jones
Collaborative authoring
Location-awareness
Wiki
Conference on Human Factors in Computing Systems - Proceedings English Wikis provide a simple and unique approach to collaborative authoring, allowing any member of the community to contribute new, or change existing information. However, Wikis are typically disconnected from the physical context of users who are utilizing or creating content, resulting in suboptimal support for geographic communities. In addition, geographic communities might find the highly skewed generation of content by a few individuals problematic. Here we present research into addressing these challenges through location-awareness and lightweight user content rating mechanisms. We describe one such location-aware Wiki, CampusWiki and initial results from a field study demonstrating the value of location-linked content and the rating approach. We conclude with a discussion of design implications. 0 0
Finding your way with CampusWiki: a location-aware wiki Richard P. Schuler
Nathaniel Laws
Sameer Bajaj
Sukeshini A. Grandhi
Quentin Jones
Wiki
Collaborative authoring
Location-awareness
CHI EA English 0 0
Fine-grain security for military-civilian-coalition operations through virtual private workspace technology Maule R.W.
Gallup S.
COI
Collaboration
Community of interest
Military
Security
Wiki
Workspace
ICIW 2007: 2nd International Conference on i-Warfare and Security English Next-generation service-oriented architectures provide a means for unprecedented levels of collaboration. Security built upon LDAPv3, when coupled with virtual private database technology, can enable secure operations within a domain while retaining need-to-know thresholds with fine-grain security. This paper discusses such a technology in use for NAVNETWARCOM Trident Warrior experimentation. A simulated scenario is discussed in which 12 officers involved in an MHQ with MOC in MDA scenario prototyped an operational application of a comprehensive suite of XML web services that provided a personalised portal for each user, email, chat, presence, instant messenger, web conference, threaded discussions, and secured virtual workspaces with libraries, discussion areas, and task management. The focus was on methods to secure communications across military, civilian and coalition operations, preliminary to more extensive testing to occur in Trident Warrior 07. Along with an introduction to the technology, the study results are presented, addressing methodology and protocols for highly collaborative sessions with varying levels of security in highly dynamic scenarios. 0 0
Fire Next Time: Or Revisioning Higher Education in the Context of Digital Social Creativity Reijo Kupiainen
Juha Suoranta
Tere Vaden
E-Learning This article presents an idea of "digital social creativity" as part of social media and examines an approach emphasising openness, experimentation and collaborative learning in the world of information and communication technologies. Wikipedia and similar digital tools provide both challenges to and possibilities for building learning sites in higher education and other forms of education and socialisation that recognise various forms of information and knowledge creation. The dialogical nature of knowledge and the emphasis on social interaction create a tremendous opportunity for education but at the same time form new hegemonic battlegrounds in terms of various uses of social media. 0 0
Freebase: A shared database of structured general human knowledge Bollacker K.
Cook R.
Tufts P.
Proceedings of the National Conference on Artificial Intelligence English Freebase is a practical, scalable, graph-shaped database of structured general human knowledge, inspired by Semantic Web research and collaborative data communities such as the Wikipedia. Freebase allows public read and write access through an HTTP-based graph-query API for research, the creation and maintenance of structured data, and application building. Access is free and all data in Freebase has a very open (e.g. Creative Commons, GFDL) license. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 0
From zero to hero - Is the mobile phone a viable learning tool for Africa? Ford M.
Batchelor J.
Africa
Audio-wikipedia
Computing device
Developing world
ICT
Information society
Learning scenarios
Mobile learning
Mobile phones
School
Search term
SMS
South Africa
Speech synthesizer
Technology platform
Text
Wikipedia
IMSCI 2007 - International Multi-Conference on Society, Cybernetics and Informatics, Proceedings English In many countries mobile phones are being banned from schools amidst growing concerns regarding their inappropriate use during school hours. However, the mobile phone is the de-facto most important networked knowledge exchange technology used in Africa and the most powerful universally-accessible computing device in the hands of Africans. How do we change the perception of the mobile phone as a disruptive influence in schools to one where it can be used to pragmatically support the learning process? MobilED (Mobile EDucation) is a 3-year international collaborative project aimed at creating meaningful learning environments using mobile phone technologies and services. The MobilED project was initiated in South Africa and the first two pilots consisted of exploratory research into the use of mobile phones in an advantaged private school and in a poor government school in Tshwane, South Africa. This paper examines the viability of the mobile phone as a learning tool in schools in Africa by using the MobilED project as a case study. It discusses the current anti-mobile phone situation in many schools in South Africa and suggests possible strategies to harness the potential of the mobile phone in practical ways as a pedagogically-appropriate learning tool in schools in Africa. 0 0
Fusing Visual and Textual Retrieval Techniques to Effectively Search Large Collections of Wikipedia Images C. Lau
D. Tjondronegoro
J. Zhang
S. Geva
Y. Liu
Comparative Evaluation of XML Information Retrieval Systems English This paper presents an experimental study that examines the performance of various combination techniques for content-based image retrieval using a fusion of visual and textual search results. The evaluation is comprehensively benchmarked using more than 160,000 samples from the INEX-MM2006 image dataset and the corresponding XML documents. For visual search, we have successfully combined Hough transform, Object's color histogram, and Texture (H.O.T). For comparison purposes, we used the provided UvA features. Based on the evaluation, our submissions show that the UvA+Text combination performs most effectively, but it is closely followed by our H.O.T (visual-only) feature. Moreover, H.O.T+Text performance is still better than UvA (visual) only. These findings show that the combination of effective text and visual search results can improve the overall performance of CBIR in Wikipedia collections, which contain a heterogeneous (i.e. wide) range of genres and topics. 0 0
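Late fusion of textual and visual result lists, as evaluated above, is commonly done by normalising each score list and taking a weighted sum. The sketch below is a generic min-max late-fusion scheme; the weight alpha and the normalisation are assumptions, and the paper's exact combination method is not reproduced here:

```python
def fuse_scores(text_scores, visual_scores, alpha=0.6):
    """Combine per-document text and visual retrieval scores:
    min-max normalise each score dict, then take a weighted sum.
    Returns documents sorted by fused score, best first."""
    def norm(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}
    t, v = norm(text_scores), norm(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: -kv[1])
```

A document missing from one modality simply contributes zero from that side, which is one common (but not the only) way to handle partial overlap between result lists.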
Generating Educational Tourism Narratives from Wikipedia Brent Hecht
Nicole Starosielski
Drew Dara-Abrams
Narrative theory
Data mining
Educational tourism
Association for the Advancement of Artificial Intelligence Fall Symposium on Intelligent Narrative Technologies (AAAI-INT) We present a narrative theory-based approach to data mining that generates cohesive stories from a Wikipedia corpus. This approach is based on a data mining-friendly view of narrative derived from narratology, and uses a prototype mining algorithm that implements this view. Our initial test case and focus is that of field-based educational tour narrative generation, for which we have successfully implemented a proof-of-concept system called Minotour. This system operates on a client-server model, in which the server mines a Wikipedia database dump to generate narratives between any two spatial features that have associated Wikipedia articles. The server then delivers those narratives to mobile device clients. 0 0
Generating educational tourism narratives from wikipedia Brent Hecht
Starosielski N.
Dara-Abrams D.
AAAI Fall Symposium - Technical Report English We present a narrative theory-based approach to data mining that generates cohesive stories from a Wikipedia corpus. This approach is based on a data mining-friendly view of narrative derived from narratology, and uses a prototype mining algorithm that implements this view. Our initial test case and focus is that of field-based educational tour narrative generation, for which we have successfully implemented a proof-of-concept system called Minotour. This system operates on a client-server model, in which the server mines a Wikipedia database dump to generate narratives between any two spatial features that have associated Wikipedia articles. The server then delivers those narratives to mobile device clients. 0 0
Genome re-annotation: a wiki solution? Steven L. Salzberg English The annotation of most genomes becomes outdated over time, owing in part to our ever-improving knowledge of genomes and in part to improvements in bioinformatics software. Unfortunately, annotation is rarely if ever updated and resources to support routine reannotation are scarce. Wiki software, which would allow many scientists to edit each genome's annotation, offers one possible solution. 0 1
Geographic co-occurrence as a tool for GIR Overell S.E.
Stefan Ruger
Algorithms International Conference on Information and Knowledge Management, Proceedings English In this paper we describe the development of a geographic co-occurrence model and how it can be applied to geographic information retrieval. The model consists of mining co-occurrences of placenames from Wikipedia, and then mapping these placenames to locations in the Getty Thesaurus of Geographical Names. We begin by quantifying the accuracy of our model and compute theoretical bounds for the accuracy achievable when applied to placename disambiguation in free text. We conclude with a discussion of the improvement such a model could provide for placename disambiguation and geographic relevance ranking over traditional methods. 0 0
Geographic co-occurrence as a tool for GIR Simon E. Overell
Stefan Ruger
Wikipedia
Disambiguation
Geographic information retrieval
4th ACM workshop on Geographical Information Retrieval. Lisbon, Portugal. In this paper we describe the development of a geographic co-occurrence model and how it can be applied to geographic information retrieval. The model consists of mining co-occurrences of placenames from Wikipedia, and then mapping these placenames to locations in the Getty Thesaurus of Geographical Names. We begin by quantifying the accuracy of our model and compute theoretical bounds for the accuracy achievable when applied to placename disambiguation in free text. We conclude with a discussion of the improvement such a model could provide for placename disambiguation and geographic relevance ranking over traditional methods. 0 0
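The disambiguation application of the co-occurrence model can be sketched as: for an ambiguous placename, pick the candidate location that co-occurs most often with the other placenames mentioned in the document. All identifiers below are illustrative; the paper's mined model and its mapping to the Getty Thesaurus of Geographical Names are not reproduced:

```python
def disambiguate(placename, context, cooc, candidates):
    """Resolve an ambiguous placename to one candidate location.
    `cooc` maps (location_id, placename) pairs to co-occurrence counts
    mined from a corpus such as Wikipedia; `context` holds the other
    placenames found in the same document."""
    def score(loc):
        return sum(cooc.get((loc, other), 0) for other in context)
    return max(candidates.get(placename, []), key=score, default=None)
```

With counts mined at scale, the same scores can also feed geographic relevance ranking rather than a hard disambiguation decision.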
Harvesting Wiki Consensus: Using Wikipedia Entries as Vocabulary for Knowledge Management Martin Hepp
Katharina Siorpaes
Daniel Bachlechner
Knowledge management
Ontology
Semantic knowledge management
URIs
Wikipedia
Wiki
IEEE Internet Computing English Vocabularies that provide unique identifiers for conceptual elements of a domain can improve precision and recall in knowledge-management applications. Although creating and maintaining such vocabularies is generally hard, wiki users easily manage to develop comprehensive, informal definitions of terms, each one identified by a URI. Here, the authors show that the URIs of Wikipedia entries are reliable identifiers for conceptual entities. They also demonstrate how Wikipedia entries can be used for annotating Web resources and knowledge assets and give precise estimates of the amount of Wikipedia URIs in terms of the popular Proton ontology's top-level concepts. 0 0
He says, she says: conflict and coordination in Wikipedia Aniket Kittur
Bongwon Suh
Bryan A. Pendleton
Ed H. Chi
English Wikipedia, a wiki-based encyclopedia, has become one of the most successful experiments in collaborative knowledge building on the Internet. As Wikipedia continues to grow, the potential for conflict and the need for coordination increase as well. This article examines the growth of such non-direct work and describes the development of tools to characterize conflict and coordination costs in Wikipedia. The results may inform the design of new collaborative knowledge systems. 0 20
How a personalized geowiki can help bicyclists share information more effectively Reid Priedhorsky
Jordan B.
Loren Terveen
Geography
Geowiki
Personalization
Wiki
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English The bicycling community is focused around a real-world activity - navigating a bicycle - which requires planning within a complex and ever-changing space. While all the knowledge needed to find good routes exists, it is highly distributed. We show, using the results of surveys and interviews, that cyclists need a comprehensive, up-to-date, and personalized information resource. We introduce the personalized geowiki, a new type of wiki which meets these requirements, and we formalize the notion of geowiki. Finally, we state some general prerequisites for wiki contribution and show that they are met by cyclists. 0 0
IBM HRL at INEX 06 Mass Y. Lecture Notes in Computer Science English In previous INEX years we presented an XML component ranking algorithm that was based on separating nested XML elements into different indices. This worked fine for the IEEE collection, which has a small number of potential component types that can be returned as query results. However, such an assumption doesn't scale to this year's Wikipedia collection, where there is a large set of potential component types that can be returned. We show a new version of the component ranking algorithm that does not assume any knowledge of the set of component types. We then show some preliminary work we did to exploit the connectivity of the Wikipedia collection to improve ranking. 0 0
Identity: How to name it, how to find it Dichev C.
Dicheva D.
Jochen Fischer
E-learning
Information retrieval
Semantic web
Subject identity
Topic maps
CEUR Workshop Proceedings English The main objective of this work is to exploit the relationship between the information findability problem and a subject-based organization of information. Identification of a subject is involved when one wants to say something about that subject or when he or she tries to comprehend what was said by others about it. An example of this type of duality can be seen in the information world where content creators and content consumers need to communicate. In this paper we discuss the concept of subject identity in learning content authoring, where we view a topic map as supporting the communication between a content author and learners. In this context we address both sides of the dual system and propose some solutions intended to assist both content creators and consumers in dealing with problems typical for e-learning repositories. Concerning the learners who need to identify the subject they are looking information about, we suggest that a set of subjects related to it can be interpreted as a weak form of its identity. This can be used for finding a starting point for content exploration and we propose an algorithm for this task. As to the content authors, they need to use agreed-upon names and possibly subject identifiers to identify the subjects they are talking about. In this relation we suggest using Wikipedia articles as a source for both consensual naming and subject identifiers. We claim that Wikipedia can play a role of a shared context between topic map authors and users and propose an approach for extracting consensual information from Wikipedia. The proposed ideas are implemented in the Topic Maps for e-Learning tool (TM4L). 0 0
If I were "You": How Academics Can Stop Worrying and Learn to Love "the Encyclopedia that Anyone Can Edit" Daniel Paul O'Donnell
1526-1867 "Electronic Medievalia" column in the Saints and Sanctity issue. Sections include: Time Magazine and the Participatory Web, Academic Resistance, Why the Participatory Web Works, Why Don't We Like It, Why We Can't Do Anything About It, and A New Model of Scholarship: The Wikipedia as Community Service 0 0
Impact of digital information resources in the toxicology literature Robinson L. Communication technologies
Information transfer
Sciences
Worldwide web
Aslib Proceedings: New Information Perspectives English Purpose - The purpose of the study reported here was to assess the degree to which new forms of web-based information and communication resources impact on the formal toxicology literature, and the extent of any change between 2000 and 2005. Design/methodology/approach - The paper takes the form of an empirical examination of the full content of four toxicology journals for the year 2000 and for the year 2005, with analysis of the results, comparison with similar studies in other subject areas, and with a small survey of the information behaviour of practising toxicologists. Findings - Scholarly communication in toxicology has been relatively little affected by new forms of information resource (weblogs, wikis, discussion lists, etc.). Citations in journal articles are still largely to "traditional" resources, though a significant increase in the proportion of web-based material being cited in the toxicology literature has occurred between 2000 and 2005, from a mean of 3 per cent to a mean of 19 per cent. Research limitations/implications - The empirical research is limited to an examination of four journals in two samples of one year each. Originality/value - This is the only recent study of the impact of new ICTs on toxicology communication. It adds to the literature on the citation of digital resources in scholarly publications. 0 0
Improving access to and use of digital resources in a self directed learning context Terry Judd
Gregor Kennedy
Self-directed learning
Social bookmarking
Usage monitoring
ASCILITE 2007 - The Australasian Society for Computers in Learning in Tertiary Education English This paper presents the background to and progress of a project investigating the use of courseware and other digital resources by undergraduate medical students in a self-directed learning environment (a shared open-access computing space) within a problem-based curriculum. The investigation draws on three parallel streams of data collection: automated usage monitoring, surveys and focus groups. Over 60,000 individual computer sessions and more than 500 surveys are currently being analysed. Preliminary analysis reveals that only a small percentage of the available courseware resources are regularly used, and that the level of usage appears to be highly dependent on the level of promotion and support provided by teaching staff. Analysis of Internet usage data reveals that medical students rely heavily on Google and Wikipedia to locate and access self-directed learning resources and that they are relatively unsophisticated in their use of search tools. The results of the investigation are informing the design and development of an innovative software support tool that aims to improve students' awareness of and access to a wide range of digital resources. 0 0
Improving flickr discovery through Wikipedias Gobbo F. Flickr
Folksonomies
Serendipity
Wikipedia
CEUR Workshop Proceedings English This paper explores how to discover unexpected information in existing folksonomies (serendipity) using extensive multilingual open source repositories as the underlying knowledge base, overcoming linguistic barriers at the same time. A web application called Flickrpedia is given as a practical example, using Flickr as the folksonomy and diverse natural language Wikipedias as the knowledge base. 0 0
Improving text classification by using encyclopedia knowledge Pu Wang
Jian Hu
Zeng H.-J.
Long Chen
Zheng Chen
Proceedings - IEEE International Conference on Data Mining, ICDM English The exponential growth of text documents available on the Internet has created an urgent need for accurate, fast, and general purpose text classification algorithms. However, the "bag of words" representation used for these classification methods is often unsatisfactory as it ignores relationships between important terms that do not co-occur literally. In order to deal with this problem, we integrate background knowledge - in our application: Wikipedia - into the process of classifying text documents. The experimental evaluation on Reuters newsfeeds and several other corpora shows that our classification results with encyclopedia knowledge are much better than the baseline "bag of words" methods. 0 0
Improving the quality of collaboration requirements for information management through social networks analysis Sofia Pereira C.
Soares Antonio
Analysis and specification of collaborative systems
Content management systems
Information management
Social network analysis
Wiki
International Journal of Information Management English The right choice of the method of organizational analysis to use is a key factor in the process of requirements analysis and specification of an information system. Although a high number of approaches to organizational analysis exist, the choice of the most appropriate option for each concrete case will influence the quality of the results obtained in the analysis of requirements and the consequent specification. This paper presents a new way of performing organizational analysis to improve the quality of the requirements of systems that support information management and where collaboration is an important aspect. This is achieved through the application of the social network analysis approach, applied to refine, classify and prioritize the requirements for collaboration and information management in an organization. The paper begins by briefly analysing content management systems and wiki systems as IT platforms for collaboration and information management. After the method has been described, a practical case of application of the SNetCol method to an R&D institution is presented. The paper finishes by presenting the results of the evaluation of the two particular technological options considered for satisfying the specified requirements. © 2006 Elsevier Ltd. All rights reserved. 0 0
Improving weak ad-hoc queries using Wikipedia as external corpus Yinghao Li
Wing
Kei
Fu
English In an ad-hoc retrieval task, the query is usually short and the user expects to find the relevant documents in the first several result pages. We explored the possibilities of using Wikipedia's articles as an external corpus to expand ad-hoc queries. Results show promising improvements on measures that emphasize weak queries. 0 0
In-house use of Web 2.0: Enterprise 2.0 Kakizawa Y. Blogs
Enterprise 2.0
RSS
Social Network Service (SNS)
Web 2.0
Wiki
NEC Technical Journal English The concept of Enterprise 2.0, which uses Web 2.0 technology for corporate affairs, is expanding. The concept of Enterprise 2.0 is implemented by combining technologies for blogging, SNS, Wiki and RSS as well as open-source software. Enterprise 2.0 may be delivered to the customer as a service as well as a system. 0 0
InOrder: Enhancing Google via stigmergic query refinement Camp G.
Ulieru M.
Google
Information scent
Interactive search guidance
Memetics
Query formulation
Relevance feedback
Semantic self-organization
Stigmergic collaboration
Visual search WIKI
Computer Systems Science and Engineering English InOrder is a query refinement tool that works on top of Google and helps individual users to collaboratively participate in the best Web query formulations. The incremental refinement works via an indirect communication process facilitated by a visual interface which adapts significantly to reflect user contributions. The interface visually guides users in an implicit manner via a 'what you click is what you mean' approach which assists semantic visualization of past interactions by highlighting relevant search terms in brighter colors. Users simply click on anything that seems relevant while being 'attracted' to the terms already clicked by other users. This elicits conceptual refinements and reduces the rating effort required by many collaborative systems. InOrder functions as a 'visual search WIKI', which represents search intent rather than a search of formal articles. Since it takes less effort to click than to type, the system increases search usability by reducing the interactive effort required to discover documents when search goals are unclear. 0 0
Incremental text structuring with online hierarchical ranking Chen E.
Snyder B.
Regina Barzilay
EMNLP-CoNLL 2007 - Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning English Many emerging applications require documents to be repeatedly updated. Such documents include newsfeeds, webpages, and shared community resources such as Wikipedia. In this paper we address the task of inserting new information into existing texts. In particular, we wish to determine the best location in a text for a given piece of new information. For this process to succeed, the insertion algorithm should be informed by the existing document structure. Lengthy real-world texts are often hierarchically organized into chapters, sections, and paragraphs. We present an online ranking model which exploits this hierarchical structure - representationally in its features and algorithmically in its learning procedure. When tested on a corpus of Wikipedia articles, our hierarchically informed model predicts the correct insertion paragraph more accurately than baseline methods. 0 0
Independent, synchronous and asynchronous an analysis of approaches to online concept formation Bower M. Concept formation
Groupwork
Online learning
Pedagogy
Technology enhanced learning
Virtual classroom
Wiki
ITiCSE 2007: 12th Annual Conference on Innovation and Technology in Computer Science Education - Inclusive Education in Computer Science English This paper compares and contrasts three different approaches to pre-class concept formation in an online computing course. In the initial third of the semester students made individual responses to sets of weekly pre-class tutorial-style questions. In the following four weeks a virtual classroom was used to facilitate the synchronous construction of group responses to the same type of activities. In the final third of the semester a wiki was used to provide an asynchronous means of composing group responses to the pre-class tutorial questions. The different patterns of student contribution and interaction that resulted from each mode are described. Implications for concept formation specifically and learning generally are discussed. Copyright 2007 ACM. 0 0
Innovating Collaborative Content Creation: The Role of Altruism and Wiki Technology Christian Wagner
Pattarawan Prasarnphanich
HICSS English Wikipedia demonstrates the feasibility and success of an innovative form of content creation, namely openly shared, collaborative writing. This research sought to understand the success of Wikipedia as a collaborative model, considering both technology and participant motivations. The research finds that while participants have both individualistic and collaborative motives, collaborative (altruistic) motives dominate. The collaboration model differs from that of open source software development, which is less inclusive with respect to participation, and more "selfish" with respect to contributor motives. The success of the Wikipedia model appears to be related to wiki technology and the "wiki way" of collaboration. 0 2
Innovation in agricultural digital library services based on Web 2.0 Baoji W.
Jinnuo Z.
Ruiqing X.
Qingsheng L.
Qingshui L.
Agricultural digital library
Blogs
Information service
RSS
Social bookmark
Web 2.0
Wiki
New Zealand Journal of Agricultural Research English After several years of development, agricultural digital libraries in China have made great achievements in the acquisition and accessibility of documentation and information. However, many of their services focus on the indexing and transfer of information, based on a service concept as an agricultural information provider. As internet technology evolves from Web 1.0 to Web 2.0, the manner in which information is produced, organised, and disseminated has changed. User behaviour and their modes of using information are also changing. In this paper, we briefly introduce the principle and the operating model of Web 2.0 and go on to analyse the relationship between Web 2.0 and digital library technologies. After analysing some limitations with existing library services, we suggest that Web 2.0 be used in agricultural digital libraries to improve services, focusing on the user and on how to meet their needs. 0 0
Integrity in open collaborative authoring systems Jensen C.D. IFIP International Federation for Information Processing English Open collaborative authoring systems have become increasingly popular within the past decade. The benefits of such systems are best demonstrated by the Wiki and some of the tremendously popular applications built on Wiki technology, in particular Wikipedia, a free encyclopaedia collaboratively edited by Internet users with a minimum of administration. One of the most serious problems that has emerged in open collaborative authoring systems relates to the quality, especially the completeness and correctness, of information. Inaccuracies in Wikipedia have been rumoured to cause students to fail courses, innocent people have been associated with the killing of John F. Kennedy, etc. Improving the correctness, completeness and integrity of information in collaboratively authored documents is therefore of vital importance to the continued success of such systems. In this paper we propose an integrity mechanism for open collaborative authoring systems based on a combination of classic integrity mechanisms from computer security and reputation systems. While the mechanism provides a reputation-based assessment of the trustworthiness of the information in a document, the primary purpose is to prevent untrustworthy authors from compromising the integrity of the document. 0 0
Intelligent Web Services Selection based on AHP and Wiki Chen Wu
Elizabeth Chang
English Web Service selection is an essential element in Service-Oriented Computing. How to wisely select appropriate Web services for the benefit of service consumers is a key issue in service discovery. In this paper, we approach QoS-based service selection using a decision making model – the Analytic Hierarchy Process (AHP). In our solution, both subjective and objective criteria are supported by the AHP engine in a context-specific manner. We also provide a flexible Wiki platform to collaboratively form the initial QoS model within a service community. The software prototype is evaluated with respect to system scalability. 0 0
Interlingual aspects of wikipedia's quality Hammwohner R. Indexing
Information quality
Knowledge Organization
Wikipedia
Proceedings of the 2007 International Conference on Information Quality, ICIQ 2007 English This paper presents interim results of an ongoing project on quality issues concerning Wikipedia. One focus of research is the relation of language and quality measurement. The other one is the use of interlingual relations for quality assessment and improvement. The study is based on mono- and multilingual samples of featured and non-featured Wikipedia articles in English, French, German, and Italian that are evaluated automatically. 0 1
Internet and Other Electronic Resources for Materials Education 2007 No author name available TMS Annual Meeting English The proceedings contain 1 paper. The topics discussed include: Wikipedia in materials education. 0 0
Investigating recognition-based performance in an open content community: A social capital perspective Chitu Okoli
Wonseok Oh
Information and Management As the open source movement grows, it becomes important to understand the dynamics that affect the motivation of participants who contribute their time freely to such projects. One important motivation that has been identified is the desire for formal recognition in the open source community. We investigated the impact of social capital in participants' social networks on their recognition-based performance; i.e., the formal status they are accorded in the community. We used a sample of 465 active participants in the Wikipedia open content encyclopedia community to investigate the effects of two types of social capital and found that network closure, measured by direct and indirect ties, had a significant positive effect on increasing participants' recognition-based performance. Structural holes had mixed effects on participants' status, but were generally a source of social capital. © 2007 Elsevier B.V. All rights reserved. 0 3
Is this the party to whom I am speaking? Sullivan F. Computing in Science and Engineering English Francis Sullivan has shared his views regarding the evils of Internet technologies that make it easy to send out masses of unwanted emails, and regarding the information presented on Wikipedia. Despite the fact that entities like Wikipedia are prone to error, they offer a better chance of getting the information right because they are constantly checked and updated. Sullivan also discussed the different responses from different fields to Wikipedia. He revealed that articles on physics range from good to excellent, while articles on literature are more varied and more contentious. 0 0
It's a wiki wiki world Medical Reference Services Quarterly English 0 0
It's time to use a wiki as part of your web site Computers in Libraries English 0 0
Keep your eyes on the enterprise: Emails, wikis, blogs, and corporate risk Martin N. EContent English The importance of email management, which is used for informing employees about corporate news, is discussed. International Data Corporation has estimated that 60% of email information belongs to sales, proposals, marketing plans, contracts, customer profiles, and personnel files. The email-management solution for Microsoft indexes and classifies older and stored knowledge, which is very useful to the company. Corporations that depend on email and instant messaging as their primary communication and collaboration tools need to manage them to avoid the risk of losing essential data or precious assets, such as audio or visual media. Email contains trade secrets or product development strategies, which are sensitive and need to be properly managed. Emails and instant messaging have also become electronic evidence and have helped many companies defend themselves against legal actions. 0 0
Key biology databases go wiki Jim Giles English 0 0
KnowWE - Community-based knowledge capture with knowledge wikis Joachim Baumeister
Jochen Reutelshoefer
Frank Puppe
Documentation
Management
K-CAP'07: Proceedings of the Fourth International Conference on Knowledge Capture English This paper presents a collaborative knowledge engineering approach based on the widespread wiki technique. The interface of a standard wiki system is extended to allow for the capture, the maintenance and the use of knowledge systems. 0 0
KnowWE: community-based knowledge capture with knowledge wikis Joachim Baumeister
Jochen Reutelshoefer
Frank Puppe
Collaboration
Distributed knowledge enigneering
Knowledge systems
K-CAP English 0 0
Knowledge Derived from Wikipedia for Computing Semantic Relatedness Simone P. Ponzetto
Michael Strube
Knowledge
Knowledge extraction
Semantic relatedness
Semantic web
Wikipedia
Journal of Artificial Intelligence Research, 30: 181--212, 2007. Wikipedia provides a semantic network for computing semantic relatedness in a more structured fashion than a search engine and with more coverage than WordNet. We present experiments on using Wikipedia for computing semantic relatedness and compare it to WordNet on various benchmarking datasets. Existing relatedness measures perform better using Wikipedia than a baseline given by Google counts, and we show that Wikipedia outperforms WordNet on some datasets. We also address the question whether and how Wikipedia can be integrated into NLP applications as a knowledge base. Including Wikipedia improves the performance of a machine learning based coreference resolution system, indicating that it represents a valuable resource for NLP applications. Finally, we show that our method can be easily used for languages other than English by computing semantic relatedness for a German dataset. 0 3
Knowledge capture and dissemination using a collaborative 'wiki' environment Transactions of the American Nuclear Society English 0 0
Knowledge capturing tools for domain experts: Exploiting named entity recognition and n-ary relation discovery for knowledge capturing in e-science Brocker L.
Rossler M.
Wagner A.
Named entity recognition
Relation discovery
Semantic networks
Wiki systems
CEUR Workshop Proceedings English The success of the Semantic Web depends on the availability of content marked up using its description languages. Although the idea has been around for nearly a decade, the amount of Semantic Web content available is still fairly small. This is despite the existence of many digital archives containing lots of high quality collections which would, appropriately marked up, greatly enhance the reach of the Semantic Web. The archives themselves would benefit as well, by improved opportunities for semantic search, navigation and interconnection with other archives. The main challenge lies in the fact that ontology creation at the moment is a very detailed and complicated process. It mostly requires the service of an ontology engineer, who designs the ontology in accordance with domain experts. The software tools available, be it from the text engineering or the ontology creation disciplines, reflect this: they are built for engineers, not for domain experts. In order to really tap the potential of the digital collections, tools are needed that support the domain experts in marking up the content they understand better than anyone else. This paper presents an integrated approach to knowledge capturing and subsequent ontology creation, called WIKINGER, that aims at empowering domain experts to prepare their content for inclusion into the Semantic Web. This is done by largely automating the process through the use of named entity recognition and relation discovery. 0 0
Knowledge contribution in Wikipedia Seah Ru Hong National University of Singapore 0 0
Knowledge derived from wikipedia for computing semantic relatedness Simone Paolo Ponzetto
Michael Strube
J. Artif. Int. Res. English 0 3
Knowledge management in a Wiki platform via microformats Proceedings of the Twentieth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2007 English 0 0
Knowledge management technology and increased narrative synthesis can enhance critical decision making for space exploration Newman J.S.
Vecellio D.
Hawley J.D.
European Space Agency, (Special Publication) ESA SP English Wiki technology implementation within critical aerospace deliberative and decision-making processes is proposed as a way to enhance safety and mission success for future space exploration endeavors. The paper discusses how Wiki technology can leverage a broader corporate knowledge-base in making critical decisions while maintaining traditional hierarchal organizational accountability. Specific examples are provided for risk management, program management, safety review, and environmental management critical processes. 0 0
Koala: Capture, share, automate, personalize business processes on the web Little G.
Lau T.A.
Cypher A.
Lin J.
Haber E.M.
Kandogan E.
Automation
End-user programming
Programming by demonstration
Wiki
Conference on Human Factors in Computing Systems - Proceedings English We present Koala, a system that enables users to capture, share, automate, and personalize business processes on the web. Koala is a collaborative programming-by-demonstration system that records, edits, and plays back user interactions as pseudo-natural language scripts that are both human- and machine-interpretable. Unlike previous programming by demonstration systems, Koala leverages sloppy programming that interprets pseudo-natural language instructions (as opposed to formal syntactic statements) in the context of a given web page's elements and actions. Koala scripts are automatically stored in the Koalescence wiki, where a community of users can share, run, and collaboratively develop their "how-to" knowledge. Koala also takes advantage of corporate and personal data stores to automatically generalize and instantiate user-specific data, so that scripts created by one user are automatically personalized for others. Our initial experiences suggest that Koala is surprisingly effective at interpreting instructions originally written for people. Copyright 2007 ACM. 0 0
Korean-Chinese person name translation for cross language information retrieval Wang Y.-C.
Lee Y.-H.
Lin C.-C.
Tsai R.T.-H.
Hsu W.-L.
Korean-Chinese cross language information retrieval
Person name translation
PACLIC 21 - The 21st Pacific Asia Conference on Language, Information and Computation, Proceedings English Named entity translation plays an important role in many applications, such as information retrieval and machine translation. In this paper, we focus on translating person names, the most common type of name entity in Korean-Chinese cross language information retrieval (KCIR). Unlike other languages, Chinese uses characters (ideographs), which makes person name translation difficult because one syllable may map to several Chinese characters. We propose an effective hybrid person name translation method to improve the performance of KCIR. First, we use Wikipedia as a translation tool based on the inter-language links between the Korean edition and the Chinese or English editions. Second, we adopt the Naver people search engine to find the query name's Chinese or English translation. Third, we extract Korean-English transliteration pairs from Google snippets, and then search for the English-Chinese transliteration in the database of Taiwan's Central News Agency or in Google. The performance of KCIR using our method is over five times better than that of a dictionary-based system. The mean average precision is 0.3490 and the average recall is 0.7534. The method can deal with Chinese, Japanese, Korean, as well as non-CJK person name translation from Korean to Chinese. Hence, it substantially improves the performance of KCIR. 0 0
Large-Scale Named Entity Disambiguation Based on Wikipedia Data Silviu Cucerzan English 0 0
Learning Experience of Student Journalists: Utilizing Collaborative Writing Medium Wikis Will Wai-kit Ma
Allan Hoi-kau Yuen
News Writing Processes
Revision
Social Interaction
Wiki
English 0 0
Learning experience of student journalists: Utilizing collaborative writing medium wikis Ma W.W.-K.
Yuen A.H.K.
News Writing Processes
Revision
Social Interaction
Wiki
15th International Conference on Computers in Education: Supporting Learning Flow through Integrative Technologies, ICCE 2007 English In this paper, we explore the processes and effects of Wikis as a learning medium to journalistic writing. In a field study, undergraduate journalistic students are exposed to a Student-written Wiki to jointly compose news reporting online. A group of student journalists were then asked to complete a post survey to comment and reflect their learning experience of news reporting within the Wiki environment. Analysis of student journalists' responses to open-ended questions revealed revision as the core processing capability of Wiki. The motivational factors to revision include accuracy (fact checking), story enrichment, and personal interest toward the news topic. On the other hand, learners are also affected by the social interactions among the community users within Wiki. The results are important to provide practical guidance to the implementation of Wikis. 0 0
Learning for information extraction: From named entity recognition and disambiguation to relation extraction R. Bunescu The University of Texas at Austin English Information Extraction, the task of locating textual mentions of specific types of entities and their relationships, aims at representing the information contained in text documents in a structured format that is more amenable to applications in data mining, question answering, or the semantic web. The goal of our research is to design information extraction models that obtain improved performance by exploiting types of evidence that have not been explored in previous approaches. Since designing an extraction system through introspection by a domain expert is a laborious and time consuming process, the focus of this thesis will be on methods that automatically induce an extraction model by training on a dataset of manually labeled examples. Named Entity Recognition is an information extraction task that is concerned with finding textual mentions of entities that belong to a predefined set of categories. We approach this task as a phrase classification problem, in which candidate phrases from the same document are collectively classified. Global correlations between candidate entities are captured in a model built using the expressive framework of Relational Markov Networks. Additionally, we propose a novel tractable approach to phrase classification for named entity recognition based on a special Junction Tree representation. Classifying entity mentions into a predefined set of categories achieves only a partial disambiguation of the names. This is further refined in the task of Named Entity Disambiguation, where names need to be linked to their actual denotations. In our research, we use Wikipedia as a repository of named entities and propose a ranking approach to disambiguation that exploits learned correlations between words from the name context and categories from the Wikipedia taxonomy. 
Relation Extraction refers to finding relevant relationships between entities mentioned in text documents. Our approaches to this information extraction task differ in the type and the amount of supervision required. We first propose two relation extraction methods that are trained on documents in which sentences are manually annotated for the required relationships. In the first method, the extraction patterns correspond to sequences of words and word classes anchored at two entity names occurring in the same sentence. These are used as implicit features in a generalized subsequence kernel, with weights computed through training of Support Vector Machines. In the second approach, the implicit extraction features are focused on the shortest path between the two entities in the word-word dependency graph of the sentence. Finally, in a significant departure from previous learning approaches to relation extraction, we propose reducing the amount of required supervision to only a handful of pairs of entities known to exhibit or not exhibit the desired relationship. Each pair is associated with a bag of sentences extracted automatically from a very large corpus. We extend the subsequence kernel to handle this weaker form of supervision, and describe a method for weighting features in order to focus on those correlated with the target relation rather than with the individual entities. The resulting Multiple Instance Learning approach offers a competitive alternative to previous relation extraction methods, at a significantly reduced cost in human supervision. 0 0
Learning to Rank Definitions to Generate Quizzes for Interactive Information Presentation Ryuichiro Higashinaka
Kohji Dohsaka
Hideki Isozaki
English 0 0
Legal pathways for cross-border research: Building a legal platform for biomedical academia Bovenberg J.A. European Journal of Human Genetics English A proposal for the development of a dynamic, online, grass roots WIKI + legal platform for sharing, discussing, validating and issuing authoritative and reliable legal forms and standards to aid the (European) biomedical research community in navigating the legal pathways that govern cross-border, multi-jurisdictional (EU) research (legal platform). 0 0
Ler, escrever, editar comentar, voltar... Os desafios do letramento digital na web 2.0 Carlos Frederico de Brito d’Andréa Digital literacy
Web 2.0
Internet
Língua Escrita Portuguese The internet connection, as a basic characteristic of the "Network Society", has resulted in new challenges for digital literacy. The fast popularization of the Internet was impacted, since 2005, by a new generation of web sites, called Web 2.0, in which everyone can participate directly in the process of elaborating, publishing, and editing content. For digital literacy, this concept demands new skills, considering that the whole management process is carried out by the users. In this paper, we present two important Web 2.0 sites: YouTube, popular for sharing videos, and Wikipedia, the encyclopedia that anyone can edit. In conclusion, the skills expected for full participation of "readers" in collaborative projects are discussed. 4 0
Let's Get it Right: Prismatic Habit and Other Fusses. John S. White Rocks & Minerals The article focuses on the common usage of "prismatic" to describe crystals, the color photographs and captions in a calendar, and Wikipedia's information on quartz. The habitual use of "prismatic" to describe crystals is misleading, except when used for crystal cleavage. Criticism of the color photographs and wrong captions of quartz and minerals in "Rocks and Crystals 2007" is detailed. The article states that Wikipedia's information on quartz is inadequate, disorganized, and questionable. 0 0
Liberating Epistemology: Wikipedia and the Social Construction of Knowledge Rubén Rosario Rodríguez Religious Studies and Theology, , No 2 (2007) This investigation contends that postfoundationalist models of rationality provide a constructive alternative to the positivist models of scientific rationality that once dominated academic discourse and still shape popular views on science and religion. Wikipedia, a free online encyclopedia, has evolved organically into a cross-cultural, cross-contextual, interdisciplinary conversation that can help liberate epistemology—especially theological epistemology—from the stranglehold of Enlightenment foundationalism. U.S. Latino/a theology provides an alternative to the dominant epistemological perspective within academic theology that is in many ways analogous to the organic, conversational epistemology embodied by the Wikipedia online community. Accordingly, this investigation argues that the work of human liberation is better served by liberating epistemology from the more authoritarian aspects of the Enlightenment scientific tradition—especially popular positivist conceptions of rationality. 0 0
Librarians on the verge of an epistemological breakdown C. Gunnels Community \& Junior College Libraries During the enlightenment of eighteenth-century France, the encyclopedists created a systematic compilation of all human knowledge in order to dispel current disinformation imposed by kings and clergy. The resultant Encyclopedie has been considered the turning point of the enlightenment, where knowledge became power and the power was made accessible to the people. This article explores the digital phenomenon of Web 2.0 and questions whether we are experiencing another epistemological shift similar to the Encyclopedie. It then discusses teaching information literacy and gives practical ways for community college librarians to incorporate Wikipedia, Google, and other digital sources into their instruction to teach research skills and critical thinking. 0 0
Library 2.0 and User-Generated Content: What can the users do for us? Patrick Danowski World Library and Information Congress: 73rd IFLA General Conference and Council English Library 2.0 and user-generated content are two closely connected terms. In the presentation, I will briefly define both. Two example projects where user-generated content and libraries interact will be presented. The first is the cooperation between Wikipedia and the Personennamendatei, the German cooperative name authority file. The second is Wikisource, where users provide transcribed source material. Another important area of user-generated content is social tagging, where users index different resources. If users will do so much in the future, is there still a place for librarians? In the future, users and librarians become partners, and the library provides the platform: the Library 2.0. 0 0
Library 2.0: An overview Connor E. Blikis
Blogs
Folksonomy
Librarian 2.0
Library 2.0
Library as Place
Mashups
Medical Librarian 2.0
Medical Library 2.0
Peer production
Podcasting
Read/write Web
Semantic web
Syndication
Tag clouds
Tag
User generated content
Web 2.0
Web 3.0
Wiki
Medical Reference Services Quarterly English Web 2.0 technologies focus on peer production, posting, subscribing, and tagging content; building and inhabiting social networks; and combining existing and emerging applications in new and creative ways to impart meaning, improve search precision, and better represent data. The Web 2.0 extended to libraries has been called Library 2.0, which has ramifications for how librarians, including those who work in medical settings, will interact and relate to persons who were born digital, especially related to teaching and learning, and planning future library services and facilities. 0 0
Library association 2.0 "will that be a name badge or a wiki?" Searcher: Magazine for Database Professionals English 0 0
Link-based vs. content-based retrieval for question answering using Wikipedia Adafre S.F.
Jijkoun V.
Maarten de Rijke
Lecture Notes in Computer Science English We describe our participation in the WiQA 2006 pilot on question answering using Wikipedia, with a focus on comparing link-based vs. content-based retrieval. Our system currently works for Dutch and English. 0 0
Linking Educational Materials to Encyclopedic Knowledge Andras Csomai
Rada Mihalcea
English 0 0
Locapedias: Generación de contenido local de manera colaborativa Alfredo Romeo Molina Wikipedia
Locapedia
Future
Web 2.0
The power of many
Cordobapedia
Madripedia
Encyclopedia
Locapedias
IX Jornadas de Gestión de la Información Spanish Wikipedia has become the biggest encyclopedia ever made in the world. With more than four million articles written in hundreds of languages, Wikipedia is nowadays one of the five best-known sites on the Internet. In 2004, following the Wikipedia model, Alfredo Romeo proposed the launching of "locapedias", based on the same voluntary and collaborative contributor model, with the aim of creating the biggest knowledge centre ever written about a local area. In 2005 the first locapedia, Cordobapedia, was founded. About two years later, 12 locapedias can be found in Spain, with about 20,000 local articles written collaboratively. As time goes by, locapedias will probably come to represent for cities and regions what Wikipedia has achieved globally: the largest reference web site for local knowledge in any city with a locapedia. For locapedias, having public institutions such as libraries or local archives head these projects could provide the endorsement needed to consolidate a movement of free local knowledge creation, a reference for the society towards which we are inexorably moving: the knowledge society. 1 1
MSG-052 knowledge network for federation architecture and design Ohlund G.
Lofstrand B.
Hassaine F.
Architecture
Design
HLA
Knowledge Network
NMSG
Wiki
Fall Simulation Interoperability Workshop 2007 English Development of distributed simulations is a complex process requiring extensive experience, in-depth knowledge and a certain skills set for the Architecture, Design, development and systems integration required for a federation to meet its operational, functional and technical requirements. Federation architecture and design is the blueprint that forms the basis for federation-wide agreements on how to conceive and build a federation. Architecture and design issues are continuously being addressed during federation development. Knowledge of "good design" is gained through hands-on experience, trial-and-error and experimentation. This kind of knowledge however, is seldom reused and rarely shared in an effective way. This paper presents an ongoing effort conducted by MSG-052 "Knowledge Network for Federation Architecture and Design" within the NATO Research and Technology Organisation (NATO/RTO) Modelling and Simulation group (NMSG). The main objective of MSG-052 is to initiate a "Knowledge Network" to promote development and sharing of information and knowledge about common federation architecture and design issues among NATO/PfP (Partnership for Peace) countries. By Knowledge Network, we envision a combination of a Community of Practice (CoP), various organisations and Knowledge Bases. A CoP, consisting of federation development experts from the NATO/PfP nations, will foster the development of state-of-the-art federation architecture and design solutions, and provide a Knowledge Base for the Modelling and Simulation (M&S) community as a whole. As part of the work, existing structures and tools for knowledge capture, management and utilization will be explored, refined and used when appropriate; for instance the work previously done under MSG-027 PATHFINDER Integration Environment provides lessons learned that could benefit this group. 
The paper will explore the concept of a Community of Practice and reveal the ideas and findings within the MSG-052 Management Group concerning ways of establishing and managing a Federation Architecture and Design CoP. It will also offer several views on the concept of operations for a collaborative effort, combining voluntary contributions as well as assigned tasks. Amongst the preliminary findings was the notion of a Wiki-based Collaborative Environment in which a large portion of our work is conducted and which also represents our current Knowledge Base. Finally, we present some of our main challenges and vision for future work. 0 0
Maintaining a federated search service: Issues and solutions Rainwater J. Blogs
Federated search
Internet search engines
Metasearch
MySQL
Web tools
Wiki
Internet Reference Services Quarterly English A federated search service does not stand still. Software changes on a fairly predictable schedule, but content is constantly in flux as vendors make changes in their products and platforms and libraries and librarians make changes in their selection of products and vendors. It is important to have a plan for distribution of maintenance responsibilities and a workflow that integrates the maintenance of the federated search tool into existing routines. The extent to which these routines can be automated is a focus of this article. © 2007 by The Haworth Press, Inc. All rights reserved. 0 0
Markups for knowledge wikis Joachim Baumeister
Jochen Reutelshoefer
Frank Puppe
CEUR Workshop Proceedings English Knowledge wikis extend normal wikis by the representation of explicit knowledge. In contrast to semantic wikis, the defined knowledge is mainly used for knowledge-intensive tasks like collaborative recommendation and classification. In this paper, we introduce a prototype implementation of a knowledge wiki, and we discuss appropriate markups for problem-solving knowledge to be used in knowledge wikis. The main aspect of the proposed markups is their simplicity and compactness, since these markups are intended for use by ordinary wiki users. Case studies report on practical experiences we have made with the development of various knowledge wikis. 0 0
Mass spectrometry and Web 2.0 Murray K.K. Blogs
Internet
Podcast
Web
Wiki
Journal of Mass Spectrometry English The term Web 2.0 is a convenient shorthand for a new era in the Internet in which users themselves are both generating and modifying existing web content. Several types of tools can be used. With social bookmarking, users assign a keyword to a web resource and the collection of the keyword 'tags' from multiple users form the classification of these resources. Blogs are a form of diary or news report published on the web in reverse chronological order and are a popular form of information sharing. A wiki is a website that can be edited using a web browser and can be used for collaborative creation of information on the site. This article is a tutorial that describes how these new ways of creating, modifying, and sharing information on the Web are being used for on-line mass spectrometry resources. Copyright 0 0
Measuring article quality in Wikipedia: models and evaluation Meiqun Hu
Ee P. Lim
Aixin Sun
Hady W. Lauw
Ba Q. Vuong
English 0 7
Measuring article quality in wikipedia: Models and evaluation Hu M.
Lim E.-P.
Aixin Sun
Lauw H.W.
Vuong B.-Q.
Article quality
Authority
Collaborative authoring
Peer review
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English Wikipedia has grown to be the world's largest and busiest free encyclopedia, in which articles are collaboratively written and maintained by volunteers online. Despite its success as a means of knowledge sharing and collaboration, the public has never stopped criticizing the quality of Wikipedia articles edited by non-experts and inexperienced contributors. In this paper, we investigate the problem of assessing the quality of articles in collaborative authoring of Wikipedia. We propose three article quality measurement models that make use of the interaction data between articles and their contributors derived from the article edit history. Our basic model is designed based on the mutual dependency between article quality and author authority. The PeerReview model introduces the review behavior into measuring article quality. Finally, our ProbReview models extend PeerReview with partial reviewership of contributors as they edit various portions of the articles. We conduct experiments on a set of well-labeled Wikipedia articles to evaluate the effectiveness of our quality measurement models in resembling human judgement. Copyright 2007 ACM. 0 7
MediaWiki open-source software as infrastructure for electronic resources outreach Jackson M.
Blackburn J.D.
McDonald R.H.
Academic libraries
Digital libraries
Electronic resources management
Information literacy
Subject guides
Web 2.0
Wiki
Reference Librarian English This article describes the bundling of MediaWiki into the electronic resource access strategy to enable custom content that supports online training and course-based information literacy objectives. © 2007 by The Haworth Press, Inc. All rights reserved. 0 0
Medical Librarian 2.0 Connor E. Ajax
API
Blogs
Bookmarking
Folksonomy
Librarian 2.0
Library 2.0
Mashup
Meme digger
Meme tracker
Metadata
Peer production
Podcasting
RDF
RSS
Semantic web
Social
Social networking
Syndication
Tag
Web 2.0
Wiki
Medical Reference Services Quarterly English Web 2.0 refers to an emerging social environment that uses various tools to create, aggregate, and share dynamic content in ways that are more creative and interactive than transactions previously conducted on the Internet. The extension of this social environment to libraries, sometimes called Library 2.0, has profound implications for how librarians will work, collaborate, and deliver content. Medical librarians can connect with present and future generations of users by learning more about the social dynamics of Web 2.0's vast ecosystem, and incorporating some of its interactive tools and technologies (tagging, peer production, and syndication) into routine library practice. © 2007 by The Haworth Press, Inc. All rights reserved. 0 0
Methoden zur sprachübergreifenden Plagiaterkennung Maik Anderka University of Paderborn German 0 0
Micro-blog: Map-casting from mobile phones to virtual sensor maps Gaonkar S.
Choudhury R.R.
Human participation
Mobile phones
Sensor networks
SenSys'07 - Proceedings of the 5th ACM Conference on Embedded Networked Sensor Systems English The synergy of phone sensors (microphone, camera, GPS, etc.), wireless capability, and ever-increasing device density can lead to novel people-centric applications. Unlike traditional sensor networks, the next generation networks may be participatory, interactive, and on the scale of human users. Millions of global data points can be organized on a visual platform, queried, and answered in sophisticated ways through human participation. Recent years have witnessed the isolated impacts of distributed knowledge sharing (Wikipedia), social networks, sensor networks, and mobile communication. We believe that significantly more impact is latent in their convergence, which can be drawn out through innovations in applications. This demonstration, called Micro-Blog, is a first step towards this goal. 0 0
Midweight collaborative remembering: Wikis in the workplace White K.F.
Lutters W.G.
Computer supported collaborative work
CSCW knowledge management
Organizational memory
Wiki
Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology, CHIMIT '07 English This paper presents preliminary findings from a series of semi-structured telephone interviews regarding the use of wikis in the workplace. At both technical and non-technical organizations issues included article creation, management support, critical mass, and trust. Copyright 2007 ACM. 0 0
Midweight collaborative remembering: wikis in the workplace Kevin F. White
Wayne G. Lutters
CSCW knowledge management
Computer supported collaborative work
Organizational memory
Wiki
CHIMIT English 0 0
Mining Wikipedia and Relating Named Entities over Time Abhijit Bhole
Blaz Fortuna
Marko Grobelnik
Dunja Mladeni
English 0 0
Mobile software for gathering and managing geo-referenced information from crisis areas Gens L.
Alves H.
Paredes H.
Martins P.
Fonseca B.
Bariso E.
Ramondt L.
Mor Y.
Morgado L.
Crisis areas
Georeferenced data
Map
Mobile devices
Rural development
Wiki
Proceedings of the 2nd International Conference on Internet Technologies and Applications, ITA 07 English Gathering field data from crisis areas requires overcoming a major technical hurdle: infrastructures and terminals for Internet access are non-existent or unreliable. Teams of professionals acting in the field lack a cost-effective, convenient way to provide their data to central management headquarters and the worldwide public in a timely manner. Cell-phone software can be leveraged to bridge this gap. Using cell-phone applications, people living or working in low-tech areas can provide their knowledge worldwide, taking full advantage of georeferenced functionalities. In this paper we present the prototype of a platform based on a wiki+map server backbone for direct information entry, query, and editing, by people in the field using cell-phones, with strong version management. This georeferenced information can then be visualized centrally for tackling development/crisis issues, such as drought problems, outbreaks of diseases, etc. - giving NGOs and governments a better framework upon which to act. 0 0
MobileMaps@sg - Mappedia version 1.1 Gan D.D.
Chia L.-T.
Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007 English Technology has always been moving. Throughout the decades, improvements in various technological areas have brought a greater sense of convenience to ordinary people, whether by cutting down the time spent on day-to-day activities or by providing privileged services. One of the technological areas that has been moving very rapidly is mobile computing. The common mobile device now has mobility, provides entertainment via multimedia, connects to the Internet and is powered by intelligent and powerful chips. This paper describes an idea that is currently in the works: an integration of a recent technology that has netizens talking all over the world, Google Maps, which provides street and satellite images via the Internet, with the user-contributed content model of Wikipedia, the biggest free-content encyclopedia on the Internet. We describe how such a technology can be integrated with the idea of free-form editing into one application on a small mobile device. The new features provided by this application will support the development of multimedia applications and computing. 0 0
Mobiled - Mobile technology access for Africa Botha A.
Ford M.
Aucamp F.
Sutinen E.
Audio-Wikipedia
Learning scenarios
Mobile learning
Mobile phones
Mobiled
School
Search term
SMS
Speech synthesizer
Texting
IADIS International Conference on Cognition and Exploratory Learning in Digital Age, CELDA 2007 English MobilED is an international collaborative project using mobile technology to facilitate and support teaching and learning through the creation and support of learning environments using mobile technology. The platform enables mobile phones that have text messaging ("texting") capabilities to access Wikipedia through a directed search request from the user. The server responds to the user-initiated request with a return call, where the requested information is then presented as a navigable audio article read using a speech synthesiser. The user is able to contribute information to the article, thus becoming a participant in the information society. This paper reports on the completion of Phase one of the initiative, utilising, in particular, the mobile phone and a prototype MobilED technology platform that supplies a mobile audio interface. We reflect and present our findings on the initial pilots in this phase. 0 0
Museen und Wikipedia Thomas Tunsch Wikipedia
Dokumentation
Zusammenarbeit
Gemeinschaft
Wissenschaft
Verknüpfungen
EVA 2007 Berlin German In order to define the possible advantages of utilization and cooperation, both the museum world and the Wikipedia world can be considered communities dedicated to the expansion of knowledge. Museums collect objects, provide documentation and produce knowledge about those objects and the fields of science and scholarship they represent. Wikipedia collects data and pieces of information, provides articles, and at the same time offers insight into the process of how knowledge grows. The following areas in particular demonstrate important connections:

methods (discussion, conventions, manuals, standards)
practical experience (authors, stable knowledge/process)
content (metadata, SWD, PND, templates, structure, quality management, languages)
contributors and users (museum staff, visitors, public)

As a possible alternative or extension of using Wikipedia the project “MuseumsWiki” shall be demonstrated.
1 0
Museum Documentation and Wikipedia.de: Possibilities, opportunities and advantages for scholars and museums Thomas Tunsch Wikipedia
Documentation
Collaborative
Community
Scholars
Interconnections
J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics English The importance of Wikipedia for the documentation and promotion of museum holdings is gaining acceptance, and the number of references to articles is growing. However, the museum world still pays little attention to the Wikipedia project as a collaborative community with intentions, structures, and special features. Although these observations are based on museums in Germany and focus on the German Wikipedia, they are just as important and applicable to other museums and other editions of Wikipedia. Universities and libraries have already taken advantage of Wikipedia and have established functional links. Since the mission of museums is closely related to that of universities and libraries, the value of Wikipedia for museum professionals is worthy of consideration. This paper provides the complete study to serve as reference for the selected topics to be discussed in the professional forum. 0 0
NLPX at INEX 2006 Woodley A.
Shlomo Geva
Lecture Notes in Computer Science English XML information retrieval (XML-IR) systems aim to better fulfil users' information needs than traditional IR systems by returning results lower than the document level. In order to use XML-IR systems users must encapsulate their structural and content information needs in a structured query. Historically, these structured queries have been formatted using formal languages such as NEXI. Unfortunately, formal query languages are very complex and too difficult to be used by experienced - let alone casual - users and are too closely bound to the underlying physical structure of the collection. INEX's NLP task investigates the potential of using natural language to specify structured queries. QUT has participated in the NLP task with our system NLPX since its inception. Here, we discuss the changes we've made to NLPX since last year, including our efforts to port NLPX to Wikipedia. Second, we present the results from the 2006 INEX track where NLPX was the best performing participant in the Thorough and Focused tasks. 0 0
NSDL MatDL: Adding context to bridge materials e-research and e-education Bartolo L.
Lowe C.
Krafft D.
Tandy R.
Materials science
Plug-in
Wiki
Lecture Notes in Computer Science English The National Science Digital Library (NSDL) Materials Digital Library Pathway (MatDL) has implemented an information infrastructure to disseminate government-funded research results and to provide content as well as services to support the integration of research and education in materials. This poster describes how we are integrating a digital repository into open-source collaborative tools, such as wikis, to support users in materials research and education as well as interactions between the two areas. A search results plug-in for MediaWiki has been developed to display relevant search results from the MatDL repository in the Soft Matter Wiki established and developed by MatDL and its partners. Collaborative work with the NSDL Core Integration team at Cornell University is also in progress to enable information transfer in the opposite direction, from a wiki to a repository. 0 0
Natural Language Processing and Information Systems - 12th International Conference on Applications of Natural Language to Information Systems, NLDB 2007, Proceedings No author name available Lecture Notes in Computer Science English The proceedings contain 42 papers. The topics discussed include: an alternative approach to tagging; an efficient denotational semantics for natural language database queries; developing methods and heuristics with low time complexities for filtering spam messages; exploit semantic information for category annotation recommendation in wikipedia; a lightweight approach to semantic annotation of research papers; a new text clustering method using hidden markov model; identifying event sequences using hidden markov model; selecting labels for news document clusters; generating ontologies via language components and ontology reuse; experiences using the researchcyc upper level ontology; ontological text mining of software documents; treatment of passive voice and conjunctions in use case documents; natural language processing and the conceptual model self-organizing map; and automatic issue extraction from a focused dialogue. 0 0
Natural Resource Management on the Other Side of the World: The Nagorno Karabakh Republic Steven H. Sharrow Rangelands 0 0
New method using Wikis and forums to evaluate individual contributions in cooperative work while promoting experiential learning: results from preliminary experience Xavier de Pedro Puente Tikiwiki CMS/Groupware
Action log
Assessment
Computer supported cooperative learning (CSCL)
Experiential-reflective learning
Individual contributions
Knowledge building
WikiSym English 0 4