2009


This is a list of 3 events held and 975 publications from 2009.

Events

Name City Country Date
RecentChangesCamp 2009 Portland United States 20 February 2009
WikiSym 2009 Orlando United States 25 October 2009
Wikimania 2009 Buenos Aires Argentina 26 August 2009


Publications

Title Author(s) Keyword(s) Published in Language Abstract R C
"All You Can Eat" Ontology-Building: Feeding Wikipedia to Cyc Samuel Sarjant
Catherine Legg
Michael Robinson
Olena Medelyan
Cyc
Wikipedia
Ontology
Web mining
WI-IAT English In order to achieve genuine web intelligence, building some kind of large general machine-readable conceptual scheme (i.e. ontology) seems inescapable. Yet the past 20 years have shown that manual ontology-building is not practicable. The recent explosion of free user-supplied knowledge on the Web has led to great strides in automatic ontology-building, but quality-control is still a major issue. Ideally one should automatically build onto an already intelligent base. We suggest that the long-running Cyc project is able to assist here. We describe methods used to add 35K new concepts mined from Wikipedia to collections in ResearchCyc entirely automatically. Evaluation with 22 human subjects shows high precision both for the new concepts’ categorization, and their assignment as individuals or collections. Most importantly we show how Cyc itself can be leveraged for ontological quality control by ‘feeding’ it assertions one by one, enabling it to reject those that contradict its other knowledge. 0 0
"All you can eat" ontology-building: Feeding wikipedia to Cyc Samuel Sarjant
Cathy Legg
Michael Robinson
Olena Medelyan
Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2009 English In order to achieve genuine web intelligence, building some kind of large general machine-readable conceptual scheme (i.e. ontology) seems inescapable. Yet the past 20 years have shown that manual ontology-building is not practicable. The recent explosion of free user-supplied knowledge on the Web has led to great strides in automatic ontology-building, but quality-control is still a major issue. Ideally one should automatically build onto an already intelligent base. We suggest that the long-running Cyc project is able to assist here. We describe methods used to add 35K new concepts mined from Wikipedia to collections in ResearchCyc entirely automatically. Evaluation with 22 human subjects shows high precision both for the new concepts' categorization, and their assignment as individuals or collections. Most importantly we show how Cyc itself can be leveraged for ontological quality control by 'feeding' it assertions one by one, enabling it to reject those that contradict its other knowledge. 0 0
"Edit this page": The socio-technological infrastructure of a wikipedia article Slattery S. Activity theory
Infrastructure
Wiki
Wikipedia
SIGDOC'09 - Proceedings of the 27th ACM International Conference on Design of Communication English Networked environments, such as wikis, are commonly used to support work, including the collaborative authoring of information and "fact-building." In networked environments, the activity of fact-building is mediated not only by the technological features of the interface, but also by the social conventions of the community it supports. This paper examines the social and technological features of a Wikipedia article in order to understand how these features help mediate the activity of fact-building and highlights the need for communication designers to consider the goals and needs of the communities for which they design. 0 1
"Language Is the Skin of My Thought": Integrating Wikipedia and AI to Support a Guillotine Player Pasquale Lops
Pierpaolo Basile
Marco Gemmis
Giovanni Semeraro
Lecture Notes in Computer Science English This paper describes OTTHO (On the Tip of my THOught), a system designed for solving a language game, called Guillotine, which demands knowledge covering a broad range of topics, such as movies, politics, literature, history, proverbs, and popular culture. The rule of the game is simple: the player observes five words, generally unrelated to each other, and in one minute she has to provide a sixth word, semantically connected to the others. The system exploits several knowledge sources, such as a dictionary, a set of proverbs, and Wikipedia to realize a knowledge infusion process. The paper describes the process of modeling these sources and the reasoning mechanism to find the solution of the game. The main motivation for designing an artificial player for Guillotine is the challenge of providing the machine with the cultural and linguistic background knowledge which makes it similar to a human being, with the ability of interpreting natural language documents and reasoning on their content. Experiments carried out showed promising results. Our feeling is that the presented approach has a great potential for other more practical applications besides solving a language game. 0 0
"edit this page": the socio-technological infrastructure of a wikipedia article Shaun P. Slattery Activity theory
Infrastructure
Wiki
Wikipedia
SIGDOC English 0 1
2009 5th International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2009 No author name available 2009 5th International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2009 English The proceedings contain 68 papers. The topics discussed include: multi-user multi-account interaction in groupware supporting single-display collaboration; supporting collaborative work through flexible process execution; dynamic data services: data access for collaborative networks in a multi-agent systems architecture; integrating external user profiles in collaboration applications; a collaborative framework for enforcing server commitments, and for regulating server interactive behavior in SOA-based systems; CASTLE: a social framework for collaborative anti-phishing databases; VisGBT: visually analyzing evolving datasets for adaptive learning; an IT appliance for remote collaborative review of mechanisms of injury to children in motor vehicle crashes; user contribution and trust in Wikipedia; and a new perspective on experimental analysis of N-tier systems: evaluating database scalability, multi-bottlenecks, and economical operation. 0 0
2LIP: Filling the gap between the current and the three-dimensional web Jacek Jankowski
Stefan Decker
2LIP
3D hypermedia
3D Web
Copernicus
Design
Transparency
Wiki
Proceedings of Web3D 2009: The 14th International Conference on Web3D Technology English In this article we present a novel approach, the 2-Layer Interface Paradigm (2LIP), for designing simple yet interactive 3D web applications, an attempt to marry advantages of 3D experience with the advantages of the narrative structure of hypertext. The hypertext information, together with graphics, and multimedia, is presented semi-transparently on the foreground layer. It overlays the 3D representation of the information displayed in the background of the interface. Hyperlinks are used for navigation in the 3D scenes (in both layers). We introduce a reference implementation of 2LIP: Copernicus - The Virtual 3D Encyclopedia, which can become a model for building 3D Wikipedia. Based on the evaluation of Copernicus we show that designing web interfaces according to 2LIP provides users with a better experience during browsing the Web, has a positive effect on the visual and associative memory, improves spatial cognition of presented information, and increases overall user's satisfaction without harming the interaction. 0 0
3DWiki: The 3D wiki engine Jacek Jankowski
Marek Jozwowicz
Yolanda Cobos
Bill McDaniel
Stefan Decker
2LIP
3D hypermedia
3D web
3D wiki
WikiSym English We demonstrate one of the potential paths of the evolution of wiki engines towards Web 3.0. We introduce 3dWiki - the 3D wiki engine, which was built according to 2-Layer Interface Paradigm (2LIP). It was developed for use by Copernicus, our vision of a 3D encyclopedia. In the demonstration: • We give an overview of 2-Layer Interface Paradigm, an attempt to marry advantages of 3D experience with the advantages of narrative structure of hypertext. • We describe step by step how to create an article for Copernicus: from creating models for the 3D background, through authoring the content, creating the c-links, to publishing the result in our encyclopedia. • We show how to use a physics engine in our wiki. 0 0
A 'uses and gratifications' approach to understanding the role of wiki technology in enhancing teaching and learning outcomes Zhongqi Guo
YanChun Zhang
Stevens K.J.
Constructivist learning
Motivation
Technology-mediated learning (TML)
Uses and gratifications approach (U&G)
Wiki technology
17th European Conference on Information Systems, ECIS 2009 English The use of the Wikis in both post-graduate and undergraduate teaching is rapidly increasing in popularity. Much of the research into the use of this technology has focused on the practical aspects of how the technology can be used and is yet to address why it is used, or in what way it enhances teaching and learning outcomes. A comparison of the key characteristics of the constructivist learning approach and Wikis suggests that Wikis could provide considerable support of this approach, however research into the motivations for using the technology is required so that good teaching practices may be applied to the use of Wikis when utilized in the higher education context. This study articulates a research design grounded in the Technology Mediated Learning (TML) paradigm that could be used to explore teachers and students' motivations for using Wiki technology to enhance teaching and learning outcomes. Using the 'Uses and Gratification' approach, a popular technique used for understanding user motivation in technology adoption, a two-stage research design is set out. Finally, the paper concludes with a discussion of the implications for both information systems researchers and higher education. 0 0
A Brief Review of Studies of Wikipedia in Peer-Reviewed Journals Chitu Okoli Third International Conference on Digital Society English Since its establishment in 2001, Wikipedia, "the free encyclopedia that anyone can edit" has become a cultural icon of the unlimited possibilities of the World Wide Web. Thus, it has become a serious subject of scholarly study to objectively and rigorously understand it as a phenomenon. This paper reviews studies of Wikipedia that have been published in peer-reviewed journals. Among the wealth of studies reviewed, major sub-streams of research covered include: how and why Wikipedia works; assessments of the reliability of its content; using it as a data source for various studies; and applications of Wikipedia in different domains of endeavour. 25 3
A Persian Web Page Classifier Applying a Combination of Content-Based and Context-Based Features M. Farhoodi
A. Yari
M. Mahmoudi
International Journal of Information Studies There are many automatic classification methods and algorithms that have been proposed for content-based or context-based features of web pages. In this paper we analyze these features and try to exploit a combination of features to improve categorization accuracy of Persian web page classification. In this work we have suggested a linear combination of different features and adjusting the optimum weighting during application. To show the outcome of this approach, we have conducted various experiments on a dataset consisting of all pages belonging to Persian Wikipedia in the field of computer. These experiments demonstrate the usefulness of using content-based and context-based web page features in a linear weighted combination. 0 0
A Practical Model for Conceptual Comparison Using a Wiki David Webster
Jie Xu
Darren Mundy
Paul Warren
Conceptual comparison
Conceptual context
Contextual relations
Semantic relevance
ICALT English 0 0
A Semantic Wiki Based Light-Weight Web Application Model Jie Bao
Li Ding
Rui Huang
Paul R. Smart
Dave Braines
Gareth Jones
ASWC English 0 0
A aprovação de sentidos enunciados na "Wikipédia: a enciclopédia livre" [The approval of meanings enunciated in "Wikipedia: the free encyclopedia"] Paulo Henrique Souto Maior Serrano Anais do SILEL Portuguese 2 0
A coauthoring method of keyword dictionaries for knowledge combination on corporate discussion web sites Takao S.
Iijima T.
Sakurai A.
Bulletin board system
Coauthoring
Collective knowledge
Wiki
Lecture Notes in Computer Science English This paper states the issues faced, and the role played by keyword dictionaries with regard to discussion based web sites which aim to achieve a 'collective knowledge' through the voluntary participation of corporate employees, and proposes a corrective strategy. A keyword dictionary is valuable in that it helps to integrate fragmented accumulated knowledge with generalized knowledge. However, this necessitates a method that allows for coauthoring, and Wiki, BBS and other existing tools are still insufficient in this respect. As well as offering a method for expanding BBS, this paper shows a method for assessing use within an actual corporation. 0 0
A collaborative environment for the design of accessible educational objects Boccacci P.
Ribaudo M.
Mesiti M.
Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009 English This paper describes VisualPedia, an extension of MediaWiki, which has been developed to promote collaboration among school teachers while producing class material. The user profile of MediaWiki is expanded to allow a personalized content delivery that is tailored to different users, including users with disabilities. 0 0
A community-curated consensual annotation that is continuously updated: the Bacillus subtilis centred wiki SubtiWiki Lope A. Flórez
Sebastian F. Roppel
Arne G. Schmeisky
Christoph R. Lammers
Jörg Stülke
Database : the journal of biological databases and curation English Bacillus subtilis is the model organism for Gram-positive bacteria, with a large amount of publications on all aspects of its biology. To facilitate genome annotation and the collection of comprehensive information on B. subtilis, we created SubtiWiki as a community-oriented annotation tool for information retrieval and continuous maintenance. The wiki is focused on the needs and requirements of scientists doing experimental work. This has implications for the design of the interface and for the layout of the individual pages. The pages can be accessed primarily by the gene designations. All pages have a similar flexible structure and provide links to related gene pages in SubtiWiki or to information in the World Wide Web. Each page gives comprehensive information on the gene, the encoded protein or RNA as well as information related to the current investigation of the gene/protein. The wiki has been seeded with information from key publications and from the most relevant general and B. subtilis-specific databases. We think that SubtiWiki might serve as an example for other scientific wikis that are devoted to the genes and proteins of one organism.Database URL: The wiki can be accessed at http://subtiwiki.uni-goettingen.de/ 0 0
A comparison of privacy issues in collaborative workspaces and social networks Martin Pekarek
Stefanie Potzsch
Identity in the Information Society 0 0
A composite calculation for author activity in Wikis: Accuracy needed Claudia Muller-Birn
Janette Lehmann
Sabina Jeschke
Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2009 English Researchers of computer science and social science are increasingly interested in the Social Web and its applications. To improve existing infrastructures, to evaluate the success of available services, and to build new virtual communities and their applications, an understanding of dynamics and evolution of inherent social and informational structures is essential. One key question is how communities which exist in these applications are structured in terms of author contributions. Are there similar contribution patterns in different applications? For example, does the so called onion model revealed from open source software communities apply to Social Web applications as well? In this study, author contributions in the open content project Wikipedia are investigated. Previous studies to evaluate author contributions mainly concentrate on editing activities. Extending this approach, the added significant content and investigation of which author groups contribute the majority of content in terms of activity and significance are considered. Furthermore, the social information space is described by a dynamic collaboration network and the topic coverage of authors is analyzed. In contrast to existing approaches, the position of an author in a social network is incorporated. Finally, a new composite calculation to evaluate author contributions in Wikis is proposed. The action, the content contribution, and the connectedness of an author are integrated into one equation in order to evaluate author activity. 0 0
A conceptual and operational definition of 'social role' in online community Gleave E.
Welser H.T.
Lento T.M.
Smith M.A.
Proceedings of the 42nd Annual Hawaii International Conference on System Sciences, HICSS English Both online and off, people frequently perform particular social roles. These roles organize behavior and give structure to positions in local networks. As more of social life becomes embedded in online systems, the concept of social role becomes increasingly valuable as a tool for simplifying patterns of action, recognizing distinct user types, and cultivating and managing communities. This paper standardizes the usage of the term 'social role' in online community as a combination of social psychological, social structural, and behavioral attributes. Beyond the conceptual definition, we describe measurement and analysis strategies for identifying social roles in online community. We demonstrate this process in two domains, Usenet and Wikipedia, identifying key social roles in each domain. We conclude with directions for future research, with a particular focus on the analysis of communities as role ecologies. 0 0
A cooperation support method between discussion space and activity space in collaborative learning and its experimental evaluation Tilwaldi D.
Kaneko S.
Hosomura T.
Dasai T.
Mitsui H.
Koizumi H.
Chat
Cooperative learning
Netmeeting
Tag
Wiki
IEEJ Transactions on Electronics, Information and Systems Japanese; English This paper describes a prototype and its experimental evaluation of the chat system that offers cooperation support between discussion space and activity space in collaborative learning. In collaborative learning in the proposed system, students are divided into groups, carry out discussion on a study theme by chats, and create on-line reports in cooperative manner. The proposed cooperation support method aims at improving the level of cooperation among students and the effectiveness of the study by making group members grasp other member's study situation mutually through cooperation support in group member's utterance and report creation. We use Wiki as a tool for collaborative work in this research. Cooperation support displays the Wiki's updating time and contents on the chat system with activity cooperation support that offers a space for remote collaborative learning and allows a student to know about other students' condition. In addition, the number of chat utterances was displayed, and other students' condition is easily grasped. 0 0
A corporate semantic wiki for scientific workflows Paschke A.
Teymourian K.
Heese R.
Luczak-Rosch M.
Business process management
E-science infrastructure
Scientific workflows
Semantic web
Semantic wiki
Virtual e- laboratories
Web 3.0
Proceedings of I-KNOW 2009 - 9th International Conference on Knowledge Management and Knowledge Technologies and Proceedings of I-SEMANTICS 2009 - 5th International Conference on Semantic Systems English State-of-the-art business process management tools provide only limited support for weakly-structured scientific workflow processes which involve knowledge intensive human interactions and are subject to frequent changes and agile compensations and exceptions. In order to address these shortcomings we propose a novel combination of a BPM system with a Corporate Semantic Web wiki. The user-friendliness of the latter as regards multi-site content generation and the power of semantic technologies w.r.t. organizing and retrieving organizational knowledge, business rules and business vocabularies are likely to complement one another, leading to a new generation of collaborative Web 3.0 BPM tools. 0 0
A jury of your peers: Quality, experience and ownership in Wikipedia Aaron Halfaker
Aniket Kittur
Robert E. Kraut
John Riedl
Experience
Ownership
Peer
Peer review
Quality
Wikipedia
WikiWork
WikiSym English Wikipedia is a highly successful example of what mass collaboration in an informal peer review system can accomplish. In this paper, we examine the role that the quality of the contributions, the experience of the contributors and the ownership of the content play in the decisions over which contributions become part of Wikipedia and which ones are rejected by the community. We introduce and justify a versatile metric for automatically measuring the quality of a contribution. We find little evidence that experience helps contributors avoid rejection. In fact, as they gain experience, contributors are even more likely to have their work rejected. We also find strong evidence of ownership behaviors in practice despite the fact that ownership of content is discouraged within Wikipedia. 0 6
A knowledge workbench for software development Panagiotou D.
Mentzas G.
Knowledge workbench
Semantic annotation
Semantic wiki
Software development
Proceedings of I-KNOW 2009 - 9th International Conference on Knowledge Management and Knowledge Technologies and Proceedings of I-SEMANTICS 2009 - 5th International Conference on Semantic Systems English Modern software development is highly knowledge intensive; it requires that software developers create and share new knowledge during their daily work. However, current software development environments are "syntactic", i.e. they do not facilitate understanding the semantics of software artefacts and hence cannot fully support the knowledge-driven activities of developers. In this paper we present KnowBench, a knowledge workbench environment which focuses on the software development domain and strives to address these problems. KnowBench aims at providing software developers such a tool to ease their daily work and facilitate the articulation and visualization of software artefacts, concept-based source code documentation and related problem solving. Building a knowledge base with software artefacts by using the KnowBench system can then be exploited by semantic search engines or P2P metadata infrastructures in order to foster the dissemination of software development knowledge and facilitate cooperation among software developers. 0 0
A large margin approach to anaphora resolution for neuroscience knowledge discovery Burak Ozyurt I. Proceedings of the 22nd International Florida Artificial Intelligence Research Society Conference, FLAIRS-22 English A discriminative large margin classifier based approach to anaphora resolution for neuroscience abstracts is presented. The system employs both syntactic and semantic features. A support vector machine based word sense disambiguation method combining evidence from three methods, that use WordNet and Wikipedia, is also introduced and used for semantic features. The support vector machine anaphora resolution classifier with probabilistic outputs achieved almost four-fold improvement in accuracy over the baseline method. 0 0
A longitudinal model of perspective making and perspective taking within fluid online collectives Kane G.C.
Johnson J.
Ann Majchrzak
Chenisern L.
Online collectives
Online community
Perspective making
Perspective taking
Theory-building
Wikipedia
ICIS 2009 Proceedings - Thirtieth International Conference on Information Systems English Although considerable research has investigated perspective making and perspective taking processes in existing communities of practice, little research has explored how these processes are manifest in fluid online collectives. Fluid collectives do not share common emotional bonds, shared languages, mental models, or clearly defined boundaries that are common in communities of practices and that aid in the perspective development process. This paper conducts a retrospective case study of a revelatory online collective - the autism article on Wikipedia - to explore how the collective develops a perspective over time with a fluid group of diverse participants surrounding a highly contentious issue. We find that the collective develops a perspective over time through three archetypical challenges - chaotic perspective taking, perspective shaping, and perspective defending. Using this data, we develop a longitudinal model of perspective development. The theoretical implications are discussed and a set of propositions are developed for testing in more generalized settings. 0 0
A mash-up authoring tool for e-learning based on pedagogical templates Capuano N.
Pierri A.
Colace F.
Gaeta M.
Mangione G.R.
Learning design
Mashup
Web 2.0
1st ACM International Workshop on Multimedia Technologies for Distance Learning, MTDL 2009, Co-located with the 2009 ACM International Conference on Multimedia, MM'09 English The purpose of this paper is twofold. On the one hand it aims at presenting the "pedagogical template" methodology for the definition of didactic activities, through the aggregation of atomic learning entities on the basis of pre-defined schemas. On the other hand it proposes a Web-based authoring tool to build learning resources applying a defined methodology. The authoring tool is inspired by mashing-up principles and allows the combination of local learning entities with learning entities coming from external sources belonging to Web 2.0 like Wikipedia, Flickr, YouTube and SlideShare. Eventually, the results of a small-scale experimentation, inside a University course, purposed both to define a pedagogical template for "virtual scientific experiments" and to build and deploy learning resources applying such template are presented. 0 0
A method of building Chinese field association knowledge from Wikipedia Lei Wang
Susumu Yata
Atlam E.-S.
Masao Fuketa
Kazuhiro Morita
Bando H.
Aoe J.-I.
Chinese documents
Feature fields
Field association terms
Field recognition
Wikipedia
2009 International Conference on Natural Language Processing and Knowledge Engineering, NLP-KE 2009 English Field Association (FA) terms form a limited set of discriminating terms that give us the knowledge to identify document fields. The primary goal of this research is to make a system that can imitate the process whereby humans recognize the fields by looking at a few Chinese FA terms in a document. This paper proposes a new approach to build a Chinese FA terms dictionary automatically from Wikipedia. 104,532 FA terms are added in the dictionary. The resulting FA terms by using this dictionary are applied to recognize the fields of 5,841 documents. The average accuracy in the experiment is 92.04%. The results show that the presented method is effective in building FA terms from Wikipedia automatically. 0 0
A new approach for semantic web service discovery and propagation based on agents Neiat A.G.
Shavalady S.H.
Mohsenzadeh M.
Rahmani A.M.
Multi agent system
Semantic web services
User ontology
Web service discovery
Proceedings of the 5th International Conference on Networking and Services, ICNS 2009 English for Web based systems integration become a time challenge. To improve the automation of Web services interoperation, a lot of technologies are recommended, such as semantic Web services and agents. In this paper an approach for semantic Web service discovery and propagation based on semantic Web services and FIPA multi agents is proposed. A broker allowing to expose semantic interoperability between semantic Web service provider and agent by translating WSDL to DF description for semantic Web services and vice versa is proposed. We describe how the proposed architecture analyzes the request and after being analyzed, matches or publishes the request. The ontology management in the broker creates the user ontology and merges it with general ontology (i.e. WordNet, Yago, Wikipedia ...). We also describe the recommender which analyzes the created WSDL based on the functional and non-functional requirements and then recommends it to Web service provider to increase their retrieval probability in the related queries. 0 0
A new financial investment management method based on knowledge management Yu Q. Financial Investment
Knowledge management
Pervasive Computing
RSS Feeding
Web services
Wiki
ISCID 2009 - 2009 International Symposium on Computational Intelligence and Design English There are many methodologies and theories developed for financial investment analysis. Nevertheless, financial analysts tend to adopt their proprietary models and systems to carry out financial investment analysis in practice. To advance both theories and practices in the financial investment domain, a knowledge management (KM) service is highly desirable to enable analysts, academics, and public investors to share their investment knowledge. This paper illustrates the design and development of a wiki-based investment knowledge management service which supports moderated sharing of structured and unstructured investment knowledge to facilitate investment decision making for both financial analysts and the general public. Our initial usability study shows that the proposed wiki-based investment knowledge management service is promising. 0 0
A new multiple kernel approach for visual concept learning Jiang Yang
Yanyan Li
Tian Y.
Duan L.
Gao W.
Multiple Kernel Learning
Support Vector Machine
Visual Concept Learning
Lecture Notes in Computer Science English In this paper, we present a novel multiple kernel method to learn the optimal classification function for visual concept. Although many carefully designed kernels have been proposed in the literature to measure the visual similarity, few works have been done on how these kernels really affect the learning performance. We propose a Per-Sample Based Multiple Kernel Learning method (PS-MKL) to investigate the discriminative power of each training sample in different basic kernel spaces. The optimal, sample-specific kernel is learned as a linear combination of a set of basic kernels, which leads to a convex optimization problem with a unique global optimum. As illustrated in the experiments on the Caltech 101 and the Wikipedia MM dataset, the proposed PS-MKL outperforms the traditional Multiple Kernel Learning methods (MKL) and achieves comparable results with the state-of-the-art methods of learning visual concepts. 0 0
A new perspective on semantics of data provenance Ram S.
Liu J.
CEUR Workshop Proceedings English Data Provenance refers to the "origin", "lineage", and "source" of data. In this work, we examine provenance from a semantics perspective and present the W7 model, an ontological model of data provenance. In the W7 model, provenance is conceptualized as a combination of seven interconnected elements including "what", "when", "where", "how", "who", "which" and "why". Each of these components may be used to track events that affect data during its lifetime. The W7 model is general and extensible enough to capture provenance semantics for data in different domains. Using the example of the Wikipedia, we illustrate how the W7 model can capture domain or application specific provenance. 0 0
A nomadic wiki for Mobile Ad Hoc Networks Hoa Ha Duong
Isabelle Demeure
CTS English 0 0
A quantitative approach to the use of the Wikipedia Antonio J. Reinoso
Felipe Ortega
Jesús M. González-Barahona
Gregorio Robles
English This paper presents a quantitative study of the use of the Wikipedia system by its users (both readers and editors), with special focus on the identification of time and kind-of-use patterns, characterization of traffic and workload, and comparative analysis of different language editions. The basis of the study is the filtering and analysis of a large sample of the requests directed to the Wikimedia systems for six weeks, each in a month from November 2007 to April 2008. In particular, we have considered the twenty most frequently visited language editions of the Wikipedia, identifying for each access to any of them the corresponding namespace (sets of resources with uniform semantics), resource name (article names, for example) and action (editions, submissions, history reviews, save operations, etc.). The results found include the identification of weekly and daily patterns, and several correlations between several actions on the articles. In summary, the study shows an overall picture of how the most visited language editions of the Wikipedia are being accessed by their users. 0 0
A query construction service for large-scale web search engines Papadakis I.
Stefanidakis M.
Stamou S.
Andreou I.
Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009 English Despite their wide usage, large-scale search engines are not always effective in tracing the best possible information for the user needs. There are times when web searchers spend too much time searching over a large-scale search engine. When (if) they eventually succeed in getting back the anticipated results, they often realize that their successful queries are significantly different from their initial one. In this paper, we introduce a query construction service for assisting web information seekers specify precise and unambiguous queries over large-scale search engines. The proposed service leverages the collective knowledge encapsulated mainly in the Wikipedia corpus and provides an intuitive GUI via which web users can determine the semantic orientation of their searches before these are executed by the desired engine. 0 0
A semantic MediaWiki-empowered terminology registry Qing Zou
Wei Fan
Knowledge organization systems
Metadata registry
Semantic wiki
Terminology registry
DCMI English 0 0
A semantic layer on semi-structured data sources for intuitive chatbots Augello A.
Vassallo G.
Gaglio S.
Pilato G.
Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems, CISIS 2009 English The main limits of chatbot technology are related to the building of their knowledge representation and to their rigid information retrieval and dialogue capabilities, usually based on simple "pattern matching rules". The analysis of distributional properties of words in a texts corpus allows the creation of semantic spaces where represent and compare natural language elements. This space can be interpreted as a "conceptual" space where the axes represent the latent primitive concepts of the analyzed corpus. The presented work aims at exploiting the properties of a data-driven semantic/conceptual space built using semistructured data sources freely available on the web, like Wikipedia. This coding is equivalent to adding, into the Wikipedia graph, a conceptual similarity relationship layer. The chatbot can exploit this layer in order to simulate an "intuitive" behavior, attempting to retrieve semantic relations between Wikipedia resources also through associative sub-symbolic paths. 0 0
A semantic wiki on cooperation in public administration in Europe Krabina B. Best practice documentation
Intercommunal cooperation
Public administration
Semantic wiki
Web 2.0
IMSCI 2009 - 3rd International Multi-Conference on Society, Cybernetics and Informatics, Proceedings English Authorities cooperate in various ways. The Web portal www.verwaltungskooperation.eu aims to share knowledge on collaboration projects. A semantic wiki approach was used to facilitate best practice documentation with Semantic Web and Web 2.0 technology. 0 0
A semantic wiki within Moodle for Greek medical education Bratsas C.
Kapsas G.
Konstantinidis S.
Koutsouridis G.
Bamidis P.D.
Proceedings - IEEE Symposium on Computer-Based Medical Systems English Medical education requires a learning environment that enables medical students to acquire knowledge in a "hands on" and organized way. This, in turn, requires that content can be accessed, evaluated, organized and reused with ease by the students. Social Software (i.e. Weblogs, Wikis, ePortfolios, Instant Messaging) and Semantic Web technology could play an important role in such learning environments. Where Social Software gives users freedom to choose their own processes and supports the collaboration of people anytime, anywhere, Semantic Web technology gives the possibility to structure information for easy retrieval, reuse, and exchange between different systems and tools. In this article a very specific technology that combines Social Software and the Semantic Web, that is, Semantic Wikis, is presented, together with their possible role in medical education. Moreover, the first Medical Semantic Wiki in the Greek language and its use in medical education are illustrated. 0 0
A social web approach to managing information and knowledge in the AEC industry Jinghua Zhang
El-Diraby T.E.
AEC industry
Construction
Knowledge management
Publish/subscribe
Social Web
Web 2.0
Wiki
Proceedings - International Conference on Management and Service Science, MASS 2009 English As Social Web applications have begun to dominate the Internet and changed the way in which people communicate, there is a need to study the application of Social Web concepts in managing information and knowledge. This paper discusses the potential for using four types of social interaction (social tagging, wiki, blogging, and social networking) in the AEC industry. 0 0
A spatial hypertext wiki for architectural knowledge management Proceedings - International Conference on Software Engineering English 0 0
A stochastic model for assessing bush fire attack on the buildings in bush fire prone areas Tan Z.
Midgley S.
Bush fire attack assessment
Deterministic model
Monte Carlo simulation
Radiant heat flux
Stochastic model
18th World IMACS Congress and MODSIM09 International Congress on Modelling and Simulation: Interfacing Modelling and Simulation with Mathematical and Computational Sciences, Proceedings English Bush fires are a major natural and socio-economic hazard in Australia. Under extreme fire weather conditions, bush fires spread very rapidly and are difficult to contain by firefighting services. When spreading on the rural/urban interface, they can cause significant damage to buildings or structures. The well known examples of such disastrous bush fire events include the bush fires which occurred in Tasmania in 1967, Victoria and South Australia in 1983, New South Wales in 1994, Canberra in 2003 and Victoria in 2009. The number of houses lost as a result of these fires is over 1300, 1500, 200, 500 and 2000 respectively (Leonard & MacArthur, 1999; Ellis et al., 2003; Blanchi & Leonard, 2005; Wikipedia, 2009). To minimize the risk of building loss from such devastating bushfires, many bushfire protection measures have been developed and implemented within each State in Australia. One of the most effective and commonly used measures is the application of construction and design standards to developments in bushfire prone areas. However, the appropriate application of this protection measure requires the use of a bushfire attack assessment model to determine the level of bushfire attack to which a development might be exposed based on the site specific variables associated with weather, fuel and topography. At present, almost all the existing bushfire attack assessment models available for use are the so-called deterministic models (Ellis, 2000; Tan et al., 2005; SA, 2009), which are based on radiant heat flux modelling. The principles of these models are the same, i.e. taking deterministic values for all the input variables and producing the deterministic output of radiant heat flux. In situations where there exists a significant level of uncertainty with the inputs required by these models, it may be difficult to choose the appropriate values for them and therefore the risk level associated with the output on which a decision is made is usually unknown. This means that the safety levels of the decisions based on the deterministic models' outputs may be either more than adequate, due to the use of conservatively high values, or inadequate due to the use of the conservatively low values for the inputs with uncertainties. In view of the above, a stochastic bushfire attack assessment model has been proposed by the Authors. The principle of the proposed model is that the model's output i.e. radiant heat flux is calculated repetitively with the randomly sampled values for the inputs with uncertainties using Monte Carlo sampling. The model output is not a single radiant heat flux but a radiant heat flux probability distribution reflecting the uncertainties with the model inputs. Based on the radiant heat flux probability distribution, the radiant heat flux for a given percentile or safety level and the corresponding standard construction requirements can then be determined. Therefore a risk based decision in relation to the application of appropriate standard construction requirements to a development in bushfire prone areas could be made.
The implementation of the proposed model makes use of a commercial software product called @Risk, which involves a number of steps like developing @Risk spreadsheet model, analyzing the model with Monte Carlo simulation, determining radiant heat flux for a given percentile or safety level and determining the level of bush fire attack and the associated standard construction requirements as per AS 3959 (SA, 2009). The use of the model has been demonstrated by an application example. As demonstrated in the example, the major advantage of the proposed model over the existing deterministic models is that the construction standard determined by this model for a given development could be based on a known minimum safety level. This approach provides construction standards for the proposed development which are likely to be more cost effective whilst providing for pre-defined safety levels. 0 0
A study on Linking Wikipedia categories to WordNet synsets using text similarity Antonio Toral
Oscar Ferrandez
Eneko Agirre
Munoz R.
International Conference Recent Advances in Natural Language Processing, RANLP English This paper studies the application of text similarity methods to disambiguate ambiguous links between WordNet nouns and Wikipedia categories. The methods range from word overlap between glosses, random projections, WordNet-based similarity, and a full-fledged textual entailment system. Both unsupervised and supervised combinations have been tried. The gold standard with disambiguated links is publicly available. The results range from 64.7% for the first sense heuristic, 68% for an unsupervised combination, and up to 77.74% for a supervised combination. 0 0
A study on the semantic relatedness of query and document terms in information retrieval Muller C.
Iryna Gurevych
EMNLP 2009 - Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: A Meeting of SIGDAT, a Special Interest Group of ACL, Held in Conjunction with ACL-IJCNLP 2009 English The use of lexical semantic knowledge in information retrieval has been a field of active study for a long time. Collaborative knowledge bases like Wikipedia and Wiktionary, which have been applied in computational methods only recently, offer new possibilities to enhance information retrieval. In order to find the most beneficial way to employ these resources, we analyze the lexical semantic relations that hold among query and document terms and compare how these relations are represented by a measure for semantic relatedness. We explore the potential of different indicators of document relevance that are based on semantic relatedness and compare the characteristics and performance of the knowledge bases Wikipedia, Wiktionary and WordNet. 0 0
A web 2.0 and open source approach for management and sharing of multimedia data-case of the Tzu chi foundation Chen J.-H.
Yang H.-H.
Open source
Web 2.0
Web Service
Wiki
Lecture Notes in Computer Science English The Tzu-Chi Foundation is one of the largest philanthropy foundations in Taiwan, with millions of members spread around the world. The search and sharing of vast member-generated information which could be in audio, video, photographs and various text formats, has long been a complex issue. Recently this foundation conducted an experimental project attempting to tackle this issue with web 2.0 approaches. A web-based prototype integrated from open source web album and wiki platform was developed and given a trial run. This paper will discuss the experience and implication of this experimental project in the online community and managerial context. 0 0
A web recommender system based on dynamic sampling of user information access behaviors Jilin Chen
Shtykh R.Y.
Jin Q.
Data mining
Dynamic sampling
Gradual adaption
Information recommendation
Wikipedia
Proceedings - IEEE 9th International Conference on Computer and Information Technology, CIT 2009 English In this study, we propose a Gradual Adaption Model for a Web recommender system. This model is used to track users' focus of interests and its transition by analyzing their information access behaviors, and recommend appropriate information. A set of concept classes are extracted from Wikipedia. The pages accessed by users are classified by the concept classes, and grouped into three terms of short, medium and long periods, and two categories of remarkable and exceptional for each concept class, which are used to describe users' focus of interests, and to establish reuse probability of each concept class in each term for each user by Full Bayesian Estimation as well. According to the reuse probability and period, the information that a user is likely to be interested in is recommended. In this paper, we propose a new approach by which short and medium periods are determined based on dynamic sampling of user information access behaviors. We further present experimental simulation results, and show the validity and effectiveness of the proposed system. 0 0
A wicked encyclopaedia. Des Spence BMJ: British Medical Journal :b3814 The author reflects on the use of Wikipedia, a free online encyclopedia by doctors and patients. He states that Wikipedia is the common source of doctors and patients when searching for medical topics. According to research, half of doctors have used Wikipedia and the site is increasingly becoming the standard medical textbook. The author also mentions a debate on whether there should be a specific medical Wiki. 0 0
A wiki as intranet: A critical analysis using the DeLone and McLean model Online Information Review English 0 0
A wiki-based approach to enterprise architecture documentation and analysis Buckl S.
Florian Matthes
Christian Neubert
Schweda C.M.
Enterprise architecture
Information modeling
Wiki
17th European Conference on Information Systems, ECIS 2009 English Enterprise architecture (EA) management is a challenging task that modern enterprises have to face. This task is often addressed via organization-specific methodologies, which are implemented or derived from a respective EA management tool, or are at least partially aligned and supported by such tools. Nevertheless, especially when starting an EA management endeavor, the documentation of the EA is often not likely to satisfy the level of formalization, which is needed to employ an EA management tool. This paper addresses the issue of starting EA management, more precisely EA documentation and analysis, by utilizing a wiki-based approach. From there, we discuss which functions commonly implemented in wiki-systems could be used in this context, which augmentations and extensions would be needed, and which potential impediments exist. 0 0
A wiki: One tool for communication, collaboration, and collection of documentation Plummer S.M.
Fox L.J.
Collaboration
Confluence
Documentation
DokuWiki
SUNY Geneseo
Wiki
SIGUCCS'09 - Proceedings of the 2009 ACM SIGUCCS Fall Conference English Wikis are a popular tool on college campuses for documentation and collaboration. Despite the widespread use of wikis, many users still do not understand what a wiki is, how it is used, or how to apply it to their needs. The myriad uses for wikis and figuring out how to use them can be overwhelming. At SUNY Geneseo, we use two different wikis to address two different technical requirements. One wiki is for internal documentation used by Computing & Information Technology (CIT). The other wiki is a college-wide wiki system where any department or individual can have a wiki for whatever subject they wish. The college-wide wiki system was installed two years ago and has many uses such as documentation, collaborative writing, committee work, communication, and education. We have addressed many problems along the way including administration, policies, technical issues, education, adoption, and advocacy. Much of what we have overcome can help other IT departments looking to install a wiki or to get their existing wiki to be more widely accepted and adopted. We will also show some of the unique and interesting ways our wiki is being used. 0 0
AAAI 2008 workshop reports Anand S.S.
Bunescu R.
Carvalho V.
Chomicki J.
Conitzer V.
Cox M.T.
Dignum V.
Dodds Z.
Mark Dredze
Furcy D.
Evgeniy Gabrilovich
Goker M.H.
Guesgen H.
Hirsh H.
Jannach D.
Junker U.
Ketter W.
Kobsa A.
Koenig S.
Lau T.
Lewis L.
Matson E.
Metzler T.
Rada Mihalcea
Mobasher B.
Joelle Pineau
Poupart P.
Raja A.
Ruml W.
Sadeh N.
Guy Shani
Shapiro D.
Smith T.
Taylor M.E.
Wagstaff K.
Walsh W.
Zhou R.
AI Magazine English AAAI was pleased to present the AAAI-08 Workshop Program, held Sunday and Monday, July 13-14, in Chicago, Illinois, USA. The program included the following 15 workshops: Advancements in POMDP Solvers; AI Education Workshop Colloquium; Coordination, Organizations, Institutions, and Norms in Agent Systems; Enhanced Messaging; Human Implications of Human-Robot Interaction; Intelligent Techniques for Web Personalization and Recommender Systems; Metareasoning: Thinking about Thinking; Multidisciplinary Workshop on Advances in Preference Handling; Search in Artificial Intelligence and Robotics; Spatial and Temporal Reasoning; Trading Agent Design and Analysis; Transfer Learning for Complex Tasks; What Went Wrong and Why: Lessons from AI Research and Applications; and Wikipedia and Artificial Intelligence: An Evolving Synergy. 0 0
Accessing the inaccessible: Creating a digital cartobibliography of embedded maps Haggit C. Cartobibliographic tool
Cartobibliographies
Cartoko
Collaborative indexes
Indexing
Map indexes
Online databases
Wiki
Journal of Map and Geography Libraries English Maps within printed books have long been both useful for research and often difficult to access. Numerous indexes have been published in the past 150 years to resolve this problem, though these efforts have necessarily been limited in scope. A selection of these works is reviewed, and an online and collaborative cartobibliography aimed at providing a method to index embedded maps using wiki software is discussed. 0 0
Accurate semantic class classifier for coreference resolution Huang Z.
Zeng G.
Xu W.
Celikyilmaz A.
EMNLP 2009 - Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: A Meeting of SIGDAT, a Special Interest Group of ACL, Held in Conjunction with ACL-IJCNLP 2009 English There have been considerable attempts to incorporate semantic knowledge into coreference resolution systems: different knowledge sources such as WordNet and Wikipedia have been used to boost the performance. In this paper, we propose new ways to extract WordNet feature. This feature, along with other features such as named entity feature, can be used to build an accurate semantic class (SC) classifier. In addition, we analyze the SC classification errors and propose to use relaxed SC agreement features. The proposed accurate SC classifier and the relaxation of SC agreement features on ACE2 coreference evaluation can boost our baseline system by 10.4% and 9.7% using MUC score and anaphor accuracy respectively. 0 0
AceWiki: A natural and expressive semantic wiki Kuhn T. Attempto Controlled English (ACE)
Controlled natural language
Experiment
Ontology
Semantic web
Semantic wiki
Usability test
CEUR Workshop Proceedings English We present AceWiki, a prototype of a new kind of semantic wiki using the controlled natural language Attempto Controlled English (ACE) for representing its content. ACE is a subset of English with a restricted grammar and a formal semantics. The use of ACE has two important advantages over existing semantic wikis. First, we can improve the usability and achieve a shallow learning curve. Second, ACE is more expressive than the formal languages of existing semantic wikis. Our evaluation shows that people who are not familiar with the formal foundations of the Semantic Web are able to deal with AceWiki after a very short learning phase and without the help of an expert. 0 0
Active learning in computer science courses in higher EDUCATION Serbec I.N.
Strnad M.
Rugelj J.
Collaborative learning
Constructivist learning theory
Peer-assessment
Wiki
IADIS International Conference on Cognition and Exploratory Learning in Digital Age, CELDA 2009 English Innovative learning activities, based on constructivism, were applied in the courses for students of Computer science at the Faculty of Education. We observed students' learning behaviour as well as their actions, preferences, and learning patterns in different stages of learning process, supported by the e-learning environment. Students engaged in all these activities had an opportunity to develop competences for team work and collaborative learning. Active and collaborative forms of learning were used to facilitate higher order thinking skills and to develop assessment skills. We used Bloom's Digital Taxonomy to analyse the usage of digital tools which facilitate different phases of learning. Active and collaborative forms of learning, such as mini-performances supported by workshop, autonomous learning supported by video-content with interactive questions and answers, collaborative editing of wikis with peer assessment, pair programming, explorative learning, discovery learning, reflections, self-reflections, and creation of exercises for knowledge assessment are used to facilitate higher order thinking skills. 0 0
Active learning: Engaging students in the classroom using mobile phones Ayu M.A.
Taylor K.
Mantoro T.
Active learning
Audience response system
Technology in education
Votapedia
2009 IEEE Symposium on Industrial Electronics and Applications, ISIEA 2009 - Proceedings English Audience Response Systems (ARS) are used to achieve active learning in lectures and large group environments by facilitating interaction between the presenter and the audience. However, their use is discouraged by the requirement for specialist infrastructure in the lecture theatre and management of the expensive clickers they use. We improve the ARS by removing the need for specialist infrastructure, by using mobile phones instead of clickers, and by providing a web-based interface in the familiar Wikipedia style. Responders usually vote by dialing, and this has been configured to be cost-free in most cases. The desirability of this approach is shown by the use the demonstration system has had, with 21,000 voters voting 92,000 times in 14,000 surveys to date. 0 0
Adapting language modeling methods for expert search to rank wikipedia entities Jian Jiang
Lu W.
Rong X.
Gao Y.
Entity ranking
Entity retrieval
Expert search
Language model
Lecture Notes in Computer Science English In this paper, we propose two methods to adapt language modeling methods for expert search to the INEX entity ranking task. In our experiments, we notice that language modeling methods for expert search, if directly applied to the INEX entity ranking task, cannot effectively distinguish entity types. Thus, our proposed methods aim at resolving this problem. First, we propose a method to take into account the INEX category query field. Second, we use an interpolation of two language models to rank entities, which can solely work on the text query. Our experiments indicate that both methods can effectively adapt language modeling methods for expert search to the INEX entity ranking task. 0 0
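The interpolation idea in the entry above is easy to make concrete. Below is a minimal Python sketch, not the authors' system: it ranks entities by mixing the query likelihood under a language model built from the entity's own text with one built from the text of its category. The smoothing scheme, the mixing weight lam, and all function names are illustrative assumptions.

```python
# Sketch of ranking entities with an interpolation of two unigram language
# models (entity text vs. category text); all names and weights are
# illustrative, not taken from the paper.
from collections import Counter

def unigram_lm(text):
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return {w: c / total for w, c in counts.items()}

def query_likelihood(query, lm, collection_lm, mu=0.5):
    # Jelinek-Mercer smoothing against a background collection model.
    score = 1.0
    for w in query.lower().split():
        score *= (1 - mu) * lm.get(w, 0.0) + mu * collection_lm.get(w, 1e-9)
    return score

def rank_entities(query, entities, collection_lm, lam=0.7):
    """entities: list of (name, entity_text, category_text) triples."""
    scored = []
    for name, etext, ctext in entities:
        e_lm = unigram_lm(etext)
        c_lm = unigram_lm(ctext)
        # Interpolate the entity-text model with the category-text model.
        s = lam * query_likelihood(query, e_lm, collection_lm) \
            + (1 - lam) * query_likelihood(query, c_lm, collection_lm)
        scored.append((s, name))
    return sorted(scored, reverse=True)
```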
Addressing gaps in knowledge while reading Chris Jordan
Carolyn Watters
Journal of the American Society for Information Science and Technology Reading is a common everyday activity for most of us. In this article, we examine the potential for using Wikipedia to fill in the gaps in one's own knowledge that may be encountered while reading. If gaps are encountered frequently while reading, then this may detract from the reader's final understanding of the given document. Our goal is to increase access to explanatory text for readers by retrieving a single Wikipedia article that is related to a text passage that has been highlighted. This approach differs from traditional search methods where the users formulate search queries and review lists of possibly relevant results. This explicit search activity can be disruptive to reading. Our approach is to minimize the user interaction involved in finding related information by removing explicit query formulation and providing a single relevant result. To evaluate the feasibility of this approach, we first examined the effectiveness of three contextual algorithms for retrieval. To evaluate the effectiveness for readers, we then developed a functional prototype that uses the text of the abstract being read as context and retrieves a single relevant Wikipedia article in response to a passage the user has highlighted. We conducted a small user study where participants were allowed to use the prototype while reading abstracts. The results from this initial study indicate that users found the prototype easy to use and that using the prototype significantly improved their stated understanding and confidence in that understanding of the academic abstracts they read. 0 0
Adessowiki on-line collaborative scientific programming platform Lotufo R.A.
Machado R.C.
Korbes A.
Ramos R.G.
Collaborative programming
Software engineering
Wiki
WikiSym English Adessowiki (http://www.adessowiki.org) is a collaborative environment for development, documentation, teaching and knowledge repository of scientific computing algorithms. The system is composed of a collection of collaborative web pages in the form of a wiki. The articles of this wiki can embed programming code that will be executed on the server when the page is rendered, incorporating the results as figures, texts and tables on the document. The execution of code at the server allows hardware and software centralization and access through a web browser. This combination of a collaborative wiki environment, central server and execution of code at rendering time enables a host of possible applications, for example: a teaching environment, where students submit their reports and exercises on Adessowiki without needing to install special software; authoring of texts, papers and scientific computing books, where figures are generated in a reproducible way by programs written by the authors; comparison of solutions and benchmarking of algorithms given that all the programs are executed under the same configuration; creation of an encyclopedia of algorithms and executable source code. Adessowiki is an environment that carries simultaneously documentation, programming code and results of its execution without any software configuration such as compilers, libraries and special tools at the client side. 0 1
Adoption factors of online knowledge sharing service in the era of web 2.0 Jiang Yang
Shim J.P.
Collectivism
Individualism
Subjective norms
WebQual
15th Americas Conference on Information Systems 2009, AMCIS 2009 English While the topic of online knowledge sharing services based on Web 2.0 has received considerable attention, virtually all the studies dealing with online knowledge sharing services have neglected or given cursory attention to users' perceptions regarding the usage of those services and the corresponding level of interaction. This study focuses on users' different attitudes and expectations toward the domestic online knowledge sharing service represented by Korea's 'Jisik iN' (translation: Knowledge iN) of Naver and a foreign counterpart represented by Wikipedia, both often presented as models of Web 2.0 applications. In Korea, the popularity gap between Jisik iN and Wikipedia suggests the need to understand which factors are most important in fostering user engagement and satisfaction with online knowledge sharing services. This study presents an integrated model based on the constructs of WebQual, subjective norms, and cultural dimensions. 0 0
Adoption of web based collaboration tools in the enterprise: Challenges and opportunities Onyechi G.C.
Abeysinghe G.
Blogs
Collaboration
Social media
Social networking
Wiki
Proceedings of the 2009 International Conference on the Current Trends in Information Technology, CTIT 2009 English Many organisations nowadays are constantly seeking ways to improve their competitive edge and remain profitable. Organisations use new technology as a strategic tool that helps create new ways of satisfying customer needs and working practices. Developments in internet technologies have led to a growing dependence on web-based technologies and, more recently, collaboration software platforms. In spite of the vast amount of literature describing the benefits organisations may reap through the use of these technologies, there is also skepticism regarding the adoption of these tools. This paper takes a critical look at the adoption of collaboration tools in the enterprise, focusing on social media and especially on the reasons for skepticism in adopting these tools. Through surveys carried out amongst users of social media and case studies, the research looks at the value-adding capabilities of social media in business, the challenges and opportunities, and adoption issues. 0 0
Amplifying Community Content Creation Using Mixed-Initiative Information Extraction Raphael Hoffmann
Saleema Amershi
Kayur Patel
Fei Wu
James Fogarty
Daniel S. Weld
English 0 0
Amplifying community content creation with mixed-initiative information extraction Raphael Hoffmann
Saleema Amershi
Kayur Patel
Fei Wu
James Fogarty
Weld D.S.
Community content creation
Information extraction
Mixed-initiative interfaces
Conference on Human Factors in Computing Systems - Proceedings English Although existing work has explored both information extraction and community content creation, most research has focused on them in isolation. In contrast, we see the greatest leverage in the synergistic pairing of these methods as two interlocking feedback cycles. This paper explores the potential synergy promised if these cycles can be made to accelerate each other by exploiting the same edits to advance both community content creation and learning-based information extraction. We examine our proposed synergy in the context of Wikipedia infoboxes and the Kylin information extraction system. After developing and refining a set of interfaces to present the verification of Kylin extractions as a non-primary task in the context of Wikipedia articles, we develop an innovative use of Web search advertising services to study people engaged in some other primary task. We demonstrate our proposed synergy by analyzing our deployment from two complementary perspectives: (1) we show we accelerate community content creation by using Kylin's information extraction to significantly increase the likelihood that a person visiting a Wikipedia article as a part of some other primary task will spontaneously choose to help improve the article's infobox, and (2) we show we accelerate information extraction by using contributions collected from people interacting with our designs to significantly improve Kylin's extraction performance. 0 0
An Enhanced Wiki for Requirements Engineering David de Almeida Ferreira
Alberto Manuel Rodrigues da Silva
Software Requirements
Software Engineering Tools and Methods
SEAA English 0 1
An Exploration on On-line Mass Collaboration: focusing on its motivation structure. Jae Kyung Ha
Yong Kim-Hak
International Journal of Social Sciences The Internet has become an indispensable part of our lives. Witnessing recent web-based mass collaboration, e.g. Wikipedia, people are questioning whether the Internet has made fundamental changes to society or whether it is merely a hyperbolic fad. It has long been assumed that collective action for a certain goal yields the problem of free-riding, due to its non-exclusive and non-rival characteristics. Then, thanks to recent technological advances, the on-line space experienced the following changes that enabled it to produce public goods: 1) a decrease in the cost of production or coordination, 2) externality from the networked structure, and 3) a production function which integrates both self-interest and altruism. However, this research doubts the homogeneity of on-line mass collaboration and argues that a more sophisticated and systematic approach is required. The alternative that we suggest is to connect the characteristics of the goal to the motivation. Despite various approaches, previous literature fails to recognize that motivation can be structurally restricted by the characteristics of the goal. First, we draw a typology of on-line mass collaboration with 'the extent of expected beneficiary' and 'the existence of externality', and then we examine each combination of motivations using Benkler's framework. Finally, we explore and connect this typology with its possible dominant participating motivation. 0 0
An agent-based semantic web service discovery framework Neiat A.G.
Mohsenzadeh M.
Forsati R.
Rahmani A.M.
Multi agent system
Semantic Web services
User ontology
Web service discovery
Proceedings - 2009 International Conference on Computer Modeling and Simulation, ICCMS 2009 English Web services have changed the Web from a database of static documents to a service provider. To improve the automation of Web service interoperation, various technologies have been proposed, such as semantic Web services and agents. In this paper we propose a framework for semantic Web service discovery based on semantic Web services and FIPA multi agents. The framework provides a broker that enables semantic interoperability between semantic Web service providers and agents by translating WSDL to DF descriptions for semantic Web services and DF descriptions to WSDL for FIPA multi agents. We describe how the proposed architecture analyzes requests and matches search queries. The ontology management in the broker creates the user ontology and merges it with a general ontology (e.g., WordNet, Yago, Wikipedia). We also describe the recommendation component that recommends the WSDL to Web service providers to increase their retrieval probability in related queries. 0 0
An analysis of computer mediated communication patterns Kanbe M.
Yamamoto S.
Computer mediated communication
Knowledge transformation
SNS
Wiki
International Journal of Knowledge, Culture and Change Management English Recently, Computer Mediated Communication (CMC) tools such as Social Network Services (SNS) and wikis have become popular among enterprises. However, it is not clear how to select a suitable tool for each enterprise communication situation. We assume that each CMC tool has its own suitable pattern of enterprise communication. In this paper, patterns of CMC are identified through analysis of real enterprise communication cases, using the knowledge transformation modes proposed by the authors. Analyses of enterprise SNS and wiki usage show that these communication patterns include discussion and publication types. In the case of SNS, we extracted the discussion pattern of CMC from a Q&A scenario. In the case of the wiki, we extracted the publication pattern of CMC in the software development process. In the course of the analysis, we identified the patterns of CMC by using the distribution of the five knowledge transformation modes. As a result, we demonstrated the effectiveness of knowledge transformation modes for analyzing patterns of CMC. Future work includes showing the possibility of effective recombination of CMC tools based on these communication patterns. 0 0
An architecture to support intelligent user interfaces for Wikis by means of Natural Language Processing Johannes Hoffart
Torsten Zesch
Iryna Gurevych
Wiki
Content organization
Natural Language Processing
User interaction
WikiSym English 0 0
An augmented reality tourist guide on your mobile devices El Choubassi M.
Nestares O.
Wu Y.
Kozintsev I.
Haussecker H.
Geotagging
Image matching
Location and orientation sensors
Mobile augmented reality
Optical flow
SIFT
SURF
Lecture Notes in Computer Science English We present an augmented reality tourist guide on mobile devices. Many of the latest mobile devices contain cameras as well as location, orientation and motion sensors. We demonstrate how these devices can be used to bring tourism information to users in a much more immersive manner than traditional text or maps. Our system uses a combination of camera, location and orientation sensors to augment the live camera view on a device with the available information about the objects in the view. The augmenting information is obtained by matching a camera image to images in a database on a server that have geotags in the vicinity of the user location. We use a subset of geotagged English Wikipedia pages as the main source of images and augmenting text information. At the time of publication our database contained 50K pages with more than 150K images linked to them. A combination of motion estimation algorithms and orientation sensors is used to track objects of interest in the live camera view and place augmented information on top of them. 0 0
An automatic web site menu structure evaluation Takeuchi H. IEEE International Conference on Fuzzy Systems English The purpose of this paper is to propose a method for automatically evaluating Web site menu structures. The evaluation system requires content data and a menu structure with link names. This approach consists of three stages. First, the system classifies the content data into appropriate links. Second, the system identifies the usability problems for all content data. Third, the system calculates an index that indicates the average predicted number of mouse clicks for the menu structure. As applications, a link name selection problem and a link structure evaluation problem are discussed. This system was also applied to real data, such as Encarta's and Wikipedia's menus. The results confirmed the usefulness of the system. 0 0
An axiomatic approach for result diversification Gollapudi S.
Sharma A.
Approximation algorithms
Axiomatic framework
Diversification
Facility dispersion
Search engine
Wikipedia
WWW'09 - Proceedings of the 18th International World Wide Web Conference English Understanding user intent is key to designing an effective ranking system in a search engine. In the absence of any explicit knowledge of user intent, search engines want to diversify results to improve user satisfaction. In such a setting, the probability ranking principle-based approach of presenting the most relevant results on top can be sub-optimal, and hence the search engine would like to trade-off relevance for diversity in the results. In analogy to prior work on ranking and clustering systems, we use the axiomatic approach to characterize and design diversification systems. We develop a set of natural axioms that a diversification system is expected to satisfy, and show that no diversification function can satisfy all the axioms simultaneously. We illustrate the use of the axiomatic framework by providing three example diversification objectives that satisfy different subsets of the axioms. We also uncover a rich link to the facility dispersion problem that results in algorithms for a number of diversification objectives. Finally, we propose an evaluation methodology to characterize the objectives and the underlying axioms. We conduct a large scale evaluation of our objectives based on two data sets: a data set derived from the Wikipedia disambiguation pages and a product database. 0 0
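As a rough illustration of the relevance-diversity trade-off the entry above formalizes, here is a minimal greedy sketch in the max-sum dispersion spirit: each step picks the candidate with the best weighted sum of relevance and accumulated dissimilarity to the already selected results. The objective form and the trade_off weight are illustrative assumptions, not the paper's axioms or exact objectives.

```python
# Greedy diversification sketch (max-sum dispersion flavour); all weights
# and names are illustrative, not the paper's.
def diversify(candidates, relevance, distance, k, trade_off=0.5):
    """candidates: list of doc ids; relevance: dict id -> score;
    distance: f(a, b) -> dissimilarity in [0, 1]."""
    selected = []
    remaining = set(candidates)
    while remaining and len(selected) < k:
        def marginal(d):
            rel = relevance[d]
            # reward dissimilarity to everything chosen so far
            div = sum(distance(d, s) for s in selected)
            return trade_off * rel + (1 - trade_off) * div
        best = max(remaining, key=marginal)
        selected.append(best)
        remaining.remove(best)
    return selected
```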
An e-learning framework for assessment (FREMA) Wills G.B.
Bailey C.P.
Davis H.C.
Gilbert L.
Yvonne Howard
Jeyes S.
Millard D.E.
Price J.
Sclater N.
Sherratt R.
Tulloch I.
Young R.
Assessment
Community
Domain
E-framework
E-learning
Reference model
Semantic wiki
Assessment and Evaluation in Higher Education English This article reports on the e-Framework Reference Model for Assessment (FREMA) project that aimed at creating a reference model for the assessment domain: a guide to what resources (standards, projects, people, organisations, software, services and use cases) exist for the domain, aimed at helping strategists understand the state of e-learning assessment, and helping developers to place their work in context and thus the community to build coherent systems. This article describes the rationale and method of developing the FREMA model and how it may be used. We delivered FREMA via a heavily interlinked website. Because the resulting network of resources was so complex, we required a method of providing users with a structured navigational method that helped them explore and identify resources useful to them. This led us to look at how overviews of e-learning domains have been handled previously, and to work towards our own concept maps that plotted the topology of the domain. FREMA represents an evolving view of the domain and therefore we developed the website into a Semantic Wiki, thereby allowing the assessment community to record their own projects and services and thus to grow the reference model over time. 0 0
An effective method for keeping design artifacts up-to-date Ben-Chaim Y.
Farchi E.
Raz O.
Proceedings - International Conference on Software Engineering English A major problem in the software development process is that design documents are rarely kept up-to-date with the implementation, and thus become irrelevant for extracting test plans or reviews. Furthermore, design documents tend to become very long and often impossible to review and comprehend. This paper describes an experimental method conducted in a development group at IBM. The group uses a Wikipedia-like process to maintain design documents, while taking measures to keep them up-to-date and in use, and thus relevant. The method uses a wiki enhanced with hierarchical glossaries of terms to maintain design artifacts. Initial results indicate that these enhancements are successful and assist in the creation of more effective design documents. We maintained a large portion of the group's design documents in use and relevant over a period of three months. Additionally, by archiving artifacts that were not in use, we were able to validate that they were no longer relevant. 0 0
An empirical study of knowledge collaboration networks in virtual community: Based on wiki 2009 International Conference on Management Science and Engineering - 16th Annual Conference Proceedings, ICMSE 2009 English 0 1
An empirical study on criteria for assessing information quality in corporate wikis Friberg T.
Reinhardt W.
Corporate Wikis
Empirical Study
Information quality
Information Quality Criteria
Proceedings of the 2009 International Conference on Information Quality, ICIQ 2009 English Wikis are gaining more and more attention as a tool for corporate knowledge management. The usage of corporate wikis differs from that of public wikis like Wikipedia, as there are hardly any wiki wars or copyright issues. Nevertheless, the quality of the available articles is of high importance in corporate wikis as well as in public ones. This paper presents the results of an empirical study on criteria for assessing the information quality of articles in corporate wikis. To this end, existing approaches for assessing information quality are evaluated and a wiki-specific set of criteria is defined. This set of criteria was examined in a study with participants from 21 different German companies that use wikis as an essential part of their knowledge management toolbox. Furthermore, this paper discusses various ways of rating information quality automatically and manually, and the technical implementation of such an IQ profile for wikis. 0 0
An empirical study on media characteristics and knowledge sharing in web 2.0 environment Kim E.
Jinhyun Ahn
Lee D.
Blogs
Channel expansion theory
Enterprise 2.0
Expectation confirmation theory
Knowledge sharing
Web 2.0
Wiki
15th Americas Conference on Information Systems 2009, AMCIS 2009 English The success of Enterprise 2.0 KMS (knowledge management systems) depends on users' continuous participation in the process of knowledge sharing. This study attempts to identify the determinants of the user's intention to continue knowledge sharing, based on Expectation Confirmation Theory and Channel Expansion Theory. We also consider communication process modes (i.e., blog and wiki) as a moderator for perceived channel richness. The results of our analysis show the positive effects of all the predictors. With regard to the moderating effects of communication process modes, the effect of experiences shared with group members is greater in the channel that supports conveyance, and the effects of experiences with the channel and the group task are greater in the channel that supports convergence. 0 0
An empirical study on the use of web 2.0 by Greek adult instructors in educational procedures Vrettaros J.
Tagoulis A.
Giannopoulou N.
Drigas A.
Blogs
E-learning
Empirical study
Facebook
Social software
Web 2.0
Wiki
Youtube
Communications in Computer and Information Science English This paper presents an empirical study and its results. The empirical study was designed around a pilot training program conducted to determine whether, and to what extent, Greek educators can learn to use and even adopt web 2.0 tools and services in the educational process, whether the learning is distance learning, blended learning, or takes place in the traditional classroom. 0 0
An extensible semantic wiki architecture Jochen Reutelshoefer
Haupt F.
Lemmerich F.
Joachim Baumeister
CEUR Workshop Proceedings English Wikis are prominent for successfully supporting the quick and simple creation, sharing and management of content on the web. Semantic wikis improve on this with semantically enriched content. Currently, notable advances in different fields of semantic technology, such as (paraconsistent) reasoning, expressive knowledge (e.g., rules), and ontology learning, can be observed. By making use of these technologies, semantic wikis should not only allow for the agile change of their content but also the fast and easy integration of emerging semantic technologies into the system. Following this idea, the paper introduces an extensible semantic wiki architecture. 0 0
An interactive semantic knowledge base unifying Wikipedia and HowNet Hongzhi Guo
Qingcai Chen
Lei Cui
Xiaolong Wang
HowNet
Fusion
Semantic analysis
Semantic knowledge base
Wikipedia
ICICS English 0 0
An interactive semantic knowledge base unifying wikipedia and HowNet Hongzhi Guo
Qingcai Chen
Lei Cui
Xiaolong Wang
Fusion
HowNet
Semantic analysis
Semantic knowledge base
Wikipedia
ICICS 2009 - Conference Proceedings of the 7th International Conference on Information, Communications and Signal Processing English We present an interactive, exoteric semantic knowledge base that integrates HowNet and the online encyclopedia Wikipedia. The semantic knowledge base mainly builds on items, categories, attributes and the relations between them. In the construction process, a mapping relationship is established from HowNet and Wikipedia to the new knowledge base. Unlike other online encyclopedias or knowledge dictionaries, the categories in the semantic knowledge base are semantically tagged, which makes them well suited to semantic analysis and semantic computing. Currently the knowledge base contains more than 200,000 items and 1,000 categories, and both are still increasing every day. 0 0
An investigation into contribution I-intention and we-intention in open web-based encyclopedia: Roles of joint commitment and mutual agreement Shen A.X.L.
Lee M.K.O.
Cheung C.M.K.
Hejie Chen
I-intention
Joint commitment
Knowledge contribution
Mutual agreement
Social cognitive theory
We-intention
Wiki community
ICIS 2009 Proceedings - Thirtieth International Conference on Information Systems English In the current study, knowledge contribution in an open web-based encyclopedia is conceptualized as a group-referent intentional social action, and we-intention, which reflects one's perception of the group acting as a unit, is employed. The motivation of this study is thus to better understand the antecedents and consequences of contribution I-intention and we-intention in an open web-based encyclopedia. A research model was developed and empirically examined with 202 knowledge contributors in two of the most famous wiki communities in Mainland China. The results demonstrated that personal outcome expectations exert significant effects on both intentions. Joint commitment, mutual agreement and community-related outcome expectations are significantly related to we-intention to contribute, but not to I-intention. In addition, we-intention has a statistically significant positive effect on contribution behavior. However, I-intention negatively relates to contribution behavior. We believe this study will serve as a starting point for furthering our limited understanding of intentional social action in knowledge management research. 0 0
An ontology-based approach for key phrase extraction Nguyen C.Q.
Phan T.T.
ACL-IJCNLP 2009 - Joint Conf. of the 47th Annual Meeting of the Association for Computational Linguistics and 4th Int. Joint Conf. on Natural Language Processing of the AFNLP, Proceedings of the Conf. English Automatic key phrase extraction is fundamental to the success of many recent digital library applications and semantic information retrieval techniques, and it is a difficult and essential problem in Vietnamese natural language processing (NLP). In this work, we propose a novel method for key phrase extraction from Vietnamese text that exploits the Vietnamese Wikipedia as an ontology and exploits specific characteristics of the Vietnamese language for the key phrase selection stage. We also explore NLP techniques that we propose for the analysis of Vietnamese texts, focusing on the advanced candidate phrase recognition phase as well as part-of-speech (POS) tagging. Finally, we review the results of several experiments that have examined the impact of the strategies chosen for Vietnamese key phrase extraction. 0 0
An unsupervised model of exploiting the web to answer definitional questions Wu Y.
Kashioka H.
Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2009 English In order to build accurate target profiles, most definition question answering (QA) systems rely on various external resources, such as WordNet, Wikipedia, Biography.com, etc. However, these external resources are not always available or helpful when answering definition questions. In contrast, this paper proposes an unsupervised classification model, called the U-Model, which can liberate definitional QA systems from heavy dependence on a variety of external resources by applying sentence expansion (SE) and an SVM classifier. Experimental results from testing on English TREC test sets reveal that the proposed U-Model not only significantly outperforms the baseline system but also requires no specific external resources. 0 0
Analysing Wikipedia and gold-standard corpora for NER training Joel Nothman
Tara Murphy
James R. Curran
EACL English 0 0
Analysing wikipedia and gold-standard corpora for NER training Joel Nothman
Tara Murphy
Curran J.R.
EACL 2009 - 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings English Named entity recognition (NER) for English typically involves one of three gold standards: MUC, CoNLL, or BBN, all created by costly manual annotation. Recent work has used Wikipedia to automatically create a massive corpus of named entity annotated text. We present the first comprehensive cross-corpus evaluation of NER. We identify the causes of poor cross-corpus performance and demonstrate ways of making them more compatible. Using our process, we develop a Wikipedia corpus which outperforms gold standard corpora on cross-corpus evaluation by up to 11%. 0 0
Analysis of community structure in Wikipedia Dmitry Lizorkin
Olena Medelyan
Maria Grineva
English We present the results of a community detection analysis of the Wikipedia graph. Distinct communities in Wikipedia contain semantically closely related articles. The central topic of a community can be identified using PageRank. Extracted communities can be organized hierarchically, similarly to the manually created Wikipedia category structure. 0 0
Analysis of community structure in Wikipedia (poster) Dmitry Lizorkin
Olena Medelyan
Maria Grineva
Community detection
Graph analysis
Wikipedia
WWW'09 - Proceedings of the 18th International World Wide Web Conference English We present the results of a community detection analysis of the Wikipedia graph. Distinct communities in Wikipedia contain semantically closely related articles. The central topic of a community can be identified using PageRank. Extracted communities can be organized hierarchically, similarly to the manually created Wikipedia category structure. 0 0
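The two analysis steps named in the two entries above (community detection over the article link graph, then PageRank to pick each community's central topic) can be sketched with off-the-shelf tools. The following uses networkx rather than the authors' own pipeline, and the toy graph is made up.

```python
# Sketch: detect communities in a link graph and pick each community's
# central article by PageRank. networkx stands in for the authors' method.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def central_topics(G):
    pr = nx.pagerank(G)  # global PageRank over the directed link graph
    comms = greedy_modularity_communities(G.to_undirected())
    # the highest-PageRank article in each community is its central topic
    return [max(c, key=pr.get) for c in comms]

# Toy directed "article link graph" for illustration only.
G = nx.DiGraph([("Cat", "Felidae"), ("Dog", "Canidae"),
                ("Felidae", "Carnivora"), ("Canidae", "Carnivora")])
print(central_topics(G))
```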
Analysis of tag-based recommendation performance for a semantic wiki Durao F.
Peter Dolog
Adaptation
Performance
Recommendation
Tags
Wiki
CEUR Workshop Proceedings English Recommendations play a very important role in revealing related topics addressed in a wiki beyond the currently viewed page. In this paper, we extend KiWi, a semantic wiki, with three different recommendation approaches. The first approach is implemented as traditional tag-based retrieval, the second takes into account external factors such as tag popularity, tag representativeness and the affinity between user and tag, and the third recommends pages grouped by tag. The experiment evaluates the wiki's performance in different scenarios regarding the number of pages, tags and users. The results provide insights for efficient widget allocation and performance management. 0 0
Analyzing collaborative learning activities in wikis using social network analysis Saskia Janina Kepp
Heidemarie Schorr
CSCL
Collaboration analysis
Social network analysis
Wiki
CHI EA English 0 0
Annotating wikipedia articles with semantic tags for structured retrieval Saravadee Sae Tan
Tang Enya Kong
Gian Chand Sodhy
Semantic markup
Structured retrieval
Wikipedia
SWSM English 0 0
Answering table augmentation queries from unstructured lists on the web Gupta R.
Sarawagi S.
Proceedings of the VLDB Endowment English We present the design of a system for assembling a table from a few example rows by harnessing the huge corpus of information-rich but unstructured lists on the web. We developed a fully unsupervised end-to-end approach which, given the sample query rows, (a) retrieves HTML lists relevant to the query from a pre-indexed crawl of web lists, (b) segments the list records and maps the segments to the query schema using a statistical model, (c) consolidates the results from multiple lists into a unified merged table, and (d) presents to the user the consolidated records ranked by their estimated membership in the target relation. The key challenges in this task include construction of new rows from very few examples, and an abundance of noisy and irrelevant lists that swamp the consolidation and ranking of rows. We propose modifications to statistical record segmentation models, and present novel consolidation and ranking techniques that can process input tables of arbitrary schema without requiring any human supervision. Experiments with Wikipedia target tables and 16 million unstructured lists show that even with just three sample rows, our system is very effective at recreating Wikipedia tables, with a mean runtime of around 20s. 0 0
Análisis de las contribuciones a un wiki para la evaluación web de competencias Juan Manuel Dodero-Beardo
Gregorio Rodríguez-Gómez
María S. Ibarra-Sáiz
Wiki
Competence assessment
E-assessment
Spanish The use of wikis and shared-authoring web applications is the most recent expression of a form of collaborative learning in which a group of students is assigned the production of a document through which they learn to analyse a problem and share their analyses with their classmates. This work starts from the hypothesis that, in the process of producing such a document, a series of competences among the participants becomes evident. To assess these competences, a graphical analysis method is described which, by analysing a set of proposed patterns of individual and group contribution to the wiki, aims to facilitate the detection of certain generic competences. 4 2
Application of Wiki technology in literature retrieval of digital library 2009 Joint Conferences on Pervasive Computing, JCPC 2009 English 0 0
Are web-based informational queries changing? Chadwyn Tann
Mark Sanderson
Journal of the American Society for Information Science and Technology This brief communication describes the results of a questionnaire examining certain aspects of the Web-based information seeking practices of university students. The results are contrasted with past work showing that queries to Web search engines can be assigned to one of a series of categories: navigational, informational, and transactional. The survey results suggest that a large group of queries, which in the past would have been classified as informational, have become at least partially navigational. We contend that this change has occurred because of the rise of large Web sites holding particular types of information, such as Wikipedia and the Internet Movie Database. 0 0
Are wikipedia resources useful for discovering answers to list questions within web snippets? Alejandro Figueroa Distinct answers
List questions
Question answering
Web mining
Lecture Notes in Business Information Processing English This paper presents LiSnQA, a list question answering system that extracts answers to list queries from the short descriptions of web sites returned by search engines, called web snippets. LiSnQA mines Wikipedia resources in order to obtain valuable information that assists in the extraction of these answers. The interesting facet of LiSnQA is that, in contrast to current systems, it does not draw on Wikipedia's lists, but on its redirections, categories, sandboxes, and first definition sentences. Results show that these resources strengthen the answering process. 0 0
Arguably the Greatest: Sport Fans and Communities at Work on Wikipedia Meghan M Ferriter Sociology of Sport Journal This article explores the socially constructed space of Wikipedia and how the process and structure of Wikipedia enable it both to act as a vehicle for communication between sport fans and to subtly augment existing public narratives about sport. As users create article narratives, they educate fellow fans in relevant social and sport meanings. Through a discourse analysis of article content and the discussion pages of ten sample athletes, this study analyzes two aspects of Wikipedia for sports fans: the application of statistical information and the connecting of athletes with other sports figures and organizations. These pages of retired celebrity athletes provide a means for exploring the multidirectional production processes used by the sport fan community to celebrate recorded events of sporting history in clearly delineated and verifiable ways, thus maintaining the sport fan community's social values. 0 0
Argument based machine learning from examples and text Mozina M.
Claudio Giuliano
Bratko I.
Proceedings - 2009 1st Asian Conference on Intelligent Information and Database Systems, ACIIDS 2009 English We introduce a novel approach to cross-media learning based on argument based machine learning (ABML). ABML is a recent method that combines argumentation and machine learning from examples, and its main idea is to use arguments for some of the learning examples. Arguments are usually provided by a domain expert. In this paper, we present an alternative approach, where arguments used in ABML are automatically extracted from text with a technique for relation extraction. We demonstrate and evaluate the approach through a case study of learning to classify animals by using arguments automatically extracted from Wikipedia. 0 0
Arguments extracted from text in argument based machine learning: A case study Mozina M.
Claudio Giuliano
Bratko I.
CEUR Workshop Proceedings English We introduce a novel approach to cross-media learning based on argument based machine learning (ABML). ABML is a recent method that combines argumentation and machine learning from examples, and its main idea is to provide an expert's arguments for some of the learning examples. In this paper, we present an alternative approach, where the arguments used in ABML are automatically extracted from text with a technique for relation extraction. We demonstrate and evaluate the approach through a case study of learning to classify animals by using arguments extracted from Wikipedia. 0 0
Art history: a guide to basic research resources Ching-Jung Chen
Collection Building The purpose of this paper is to present basic resources and practical strategies for undergraduate art history research. The paper is based on the author's experience as both an art librarian and instructor for a core requirement art history course. The plan detailed in this paper covers every step of the research process, from exploring the topic to citing the sources. The resources listed, which include subscription databases as well as public Web sites, are deliberately limited to a manageable number. Additional topics include defining the scope of inquiry and making appropriate use of Internet resources such as Wikipedia. The paper provides the academic librarian with clear guidance on basic research resources in art history. 0 0
Articles as assignments - Modalities and experiences of wikipedia use in university courses Klaus Wannemacher Constructivist Learning
Information Competency
Media Literacy
Teaching Strategies
Web 2.0
Web-Based Learning
Wikipedia
Lecture Notes in Computer Science English In spite of perceived quality deficits, Wikipedia is a popular information resource among students. Instructors increasingly take advantage of this positive student attitude by actively integrating Wikipedia as a learning tool into university courses. The contribution raises the question of whether Wikipedia assignments in university courses are suited to making the complex research, editing and bibliographic processes through which scholarship is produced transparent to students, and to effectively improving their research and writing skills. 0 0
Aspects to motivate users of a design engineering wiki to share their knowledge Proceedings of World Academy of Science, Engineering and Technology English 0 0
Assessing - learning - improving, an integrated approach for self assessment and process improvement systems Malzahn D. Proceedings of the 4th International Conference on Systems, ICONS 2009 English Delivering successful projects and systems in a sustainable way is increasingly the focus of systems and software development organizations. New approaches in the fields of assessment and the application of standards have led to an increase in assessment and self-assessment systems. But these systems are only the first step on a long road. If the assessment system itself is not supported by a learning and improvement approach, the organization will have a system to identify its status but no support for improvement. This gap can be closed by an approach combining assessment tools, wiki-based knowledge platforms and self-learning expert systems (based on ontologies and semantic wikis). The result is a system environment which provides status assessment, learning and continuous improvement services based on different standards and approaches. This approach is already being implemented for the field of project management. In this article we explain the basics and show the application of a combined system. 0 0
Assessing the quality of Wikipedia articles with lifecycle based metrics Thomas Wöhner
Ralf Peters
WikiSym English The main feature of the free online encyclopedia Wikipedia is the wiki tool, which allows viewers to edit articles directly in the web browser. A weakness of this openness is that, for example, manipulation and vandalism cannot be ruled out, so the quality of any given Wikipedia article is not guaranteed. Hence, automatic quality assessment has become a highly active research field. In this paper we offer new metrics for efficient quality measurement. The metrics are based on the lifecycles of low and high quality articles, i.e., the changes in their persistent and transient contributions throughout their entire life spans. 0 3
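A much-simplified sketch of the lifecycle idea in the entry above: for each revision, split the words it introduced into those that persist into the latest revision and those that turn out to be transient. The word-set approximation below stands in for proper revision diffing and is an illustrative assumption, not the paper's metrics.

```python
# Simplified persistent-vs-transient contribution sketch; a real system
# would diff revisions rather than compare word sets.
def contribution_lifecycle(revisions):
    """revisions: list of revision texts, oldest first."""
    final_words = set(revisions[-1].split())
    stats = []
    seen = set()
    for text in revisions:
        words = set(text.split())
        added = words - seen              # words first introduced here
        persistent = added & final_words  # still present in the last revision
        transient = added - final_words   # later removed
        stats.append((len(persistent), len(transient)))
        seen |= words
    return stats
```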
Aufbau eines wissenschaftlichen Textcorpus auf der Basis der Daten der englischsprachigen Wikipedia Markus Fuchs University of Regensburg German With the growth in popularity over the last eight years, Wikipedia has become a very promising resource in academic studies. Some of its properties make it attractive for a wide range of research fields (information retrieval, information extraction, natural language processing, ...), e.g. free availability and up to date content. However, efficient and structured access to this information is not easy, as most of Wikipedia's contents are encoded in its own markup language (wikitext). And, unfortunately, there is no formal definition of wikitext, which makes parsing very difficult and burdensome. In this thesis, we present a system that lets the researcher automatically build a richly annotated corpus containing the information most commonly used in research projects. To this end, we built our own wikitext parser based on the original converter used by Wikipedia itself to convert wikitext into HTML. The system stores all data in a relational database, which allows for efficient access and extensive retrieval functionality. 0 0
Augmented social cognition: Using social web technology to enhance the ability of groups to remember, think, and reason Chi E.H. Augmented social cognition
Characterization
Computer-Supported Cooperative Work
Delicious
HCI
Modeling
Overview
Research methods
Social system
Social tagging
Social web
Summary
Wikipedia
SIGMOD-PODS'09 - Proceedings of the International Conference on Management of Data and 28th Symposium on Principles of Database Systems English We are experiencing a new Social Web, where people share, communicate, commiserate, and conflict with each other. As evidenced by systems like Wikipedia, twitter, and delicious.com, these environments are turning people into social information foragers and sharers. Groups interact to resolve conflicts and jointly make sense of topic areas from "Obama vs. Clinton" to "Islam." PARC's Augmented Social Cognition researchers -- who come from cognitive psychology, computer science, HCI, CSCW, and other disciplines -- focus on understanding how to "enhance a group of people's ability to remember, think, and reason". Through Social Web systems like social bookmarking sites, blogs, Wikis, and more, we can finally study, in detail, these types of enhancements on a very large scale. Here we summarize recent work and early findings such as: (1) how conflict and coordination have played out in Wikipedia, and how social transparency might affect reader trust; (2) how decreasing interaction costs might change participation in social tagging systems; and (3) how computation can help organize user-generated content and metadata. 0 0
Augmenting Wiki system for collaborative EFL reading by digital pen annotations Chang C.-K. Annotation
Collaborative learning
Digital pen
English as foreign language
Wiki
Proceedings - 2009 International Symposium on Ubiquitous Virtual Reality, ISUVR 2009 English Wikis are very useful for collaborative learning because of their sharing and flexible nature. Many learning activities can use a wiki to facilitate the process, such as online glossaries, project reports, and dictionaries. Some EFL (English as a Foreign Language) instructors have paid attention to the popularity of wikis. Although wikis are very simple and intuitive for users with information literacy, they require a computing environment for each learner to edit Web pages. Generally, an instructor can only conduct a wiki-based learning activity in a computer classroom. Although mobile learning devices (such as PDAs) for every learner can provide a ubiquitous computing environment for a wiki-based learning activity, this paper suggests an inexpensive alternative that integrates a digital pen with a wiki. Consequently, a learner can annotate an EFL reading in his/her mother tongue with the digital pen. After everyone finishes reading, all annotations can be collected into a wiki system for instruction. Thus, an augmented wiki structure is constructed. Finally, learners' satisfaction with annotating in the prototype system is reported in this paper. 0 0
Auto-organização e processos editoriais na Wikipedia: uma análise à luz de Michel Debrun Carlos Frederico de Brito d’Andréa Anais Hipertexto 2009 Portuguese 8 0
Automated seeding of specialised wiki knowledgebases with BioKb Jonathan R. Manning
Ann Hedley
John J. Mullins
Donald R. Dunbar
English BACKGROUND: Wiki technology has become a ubiquitous mechanism for dissemination of information, and places strong emphasis on collaboration. We aimed to leverage wiki technology to allow small groups of researchers to collaborate around a specific domain, for example a biological pathway. Automatically gathered seed data could be modified by the group and enriched with domain specific information. RESULTS: We describe a software system, BioKb, implemented as a plugin for the TWiki engine, and designed to facilitate construction of a field-specific wiki containing collaborative and automatically generated content. Features of this system include: query of publicly available resources such as KEGG, iHOP and MeSH, to generate 'seed' content for topics; simple definition of structure for topics of different types via an administration page; and interactive incorporation of relevant PubMed references. An exemplar is shown for the use of this system, in the creation of the RAASWiki knowledgebase on the renin-angiotensin-aldosterone system (RAAS). RAASWiki has been seeded with data by use of BioKb, and will be the subject of ongoing development into an extensive knowledgebase on the RAAS. CONCLUSION: The BioKb system is available from http://www.bioinf.mvm.ed.ac.uk/twiki/bin/view/TWiki/BioKbPlugin as a plugin for the TWiki engine. 0 0
Automatic Content-based Categorization of Wikipedia Articles Zeno Gantner
Lars Schmidt-Thieme
English 0 0
Automatic acquisition of attributes for ontology construction Gaoying Cui
Lu Q.
Li W.
Yirong Chen
Attribute acquisition
Ontology construction
Wikipedia as resource source
Lecture Notes in Computer Science English An ontology can be seen as an organized structure of concepts according to their relations. A concept is associated with a set of attributes that themselves are also concepts in the ontology. Consequently, ontology construction is the acquisition of concepts and their associated attributes through relations. Manual ontology construction is time-consuming and difficult to maintain. Corpus-based ontology construction methods must be able to distinguish concepts themselves from concept instances. In this paper, a novel and simple method is proposed for automatically identifying concept attributes through the use of Wikipedia as the corpus. The built-in Template:Infobox in Wiki is used to acquire concept attributes and identify semantic types of the attributes. Two simple induction rules are applied to improve the performance. Experimental results show precisions of 92.5% for attribute acquisition and 80% for attribute type identification. This is a very promising result for automatic ontology construction. 0 0
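The entry above uses Template:Infobox markup as the source of concept attributes. A minimal sketch of that harvesting step follows, using the third-party mwparserfromhell wikitext parser on a made-up snippet; the paper's own pipeline and induction rules are not reproduced here.

```python
# Sketch: pull candidate attribute/value pairs out of an infobox template.
# The sample wikitext is invented for illustration.
import mwparserfromhell

wikitext = """{{Infobox settlement
| name       = Springfield
| population = 30720
| country    = United States
}}"""

code = mwparserfromhell.parse(wikitext)
for template in code.filter_templates():
    if template.name.strip().lower().startswith("infobox"):
        # each named parameter of the infobox is a candidate attribute
        attrs = {str(p.name).strip(): str(p.value).strip()
                 for p in template.params if str(p.name).strip()}
        print(attrs)
```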
Automatic generation of topic pages using query-based aspect models Balasubramanian N.
Silviu Cucerzan
Aspect models
Query logs
Topic pages
International Conference on Information and Knowledge Management, Proceedings English We investigate the automatic generation of topic pages as an alternative to the current Web search paradigm. We describe a general framework, which combines query log analysis to build aspect models, sentence selection methods for identifying relevant and non-redundant Web sentences, and a technique for sentence ordering. We evaluate our approach on biographical topics both automatically and manually, by using Wikipedia as reference. 0 0
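A minimal sketch of the relevant-and-non-redundant sentence selection step mentioned above: greedily take the highest-scoring sentences for an aspect while rejecting ones that overlap too heavily with sentences already chosen. The overlap test and the 0.5 threshold are illustrative assumptions, not the paper's method.

```python
# Greedy sentence selection sketch: relevance first, with a simple
# word-overlap redundancy filter; names and threshold are illustrative.
def select_sentences(scored_sentences, k, max_overlap=0.5):
    """scored_sentences: list of (aspect_relevance, sentence) pairs."""
    chosen = []
    for score, sent in sorted(scored_sentences, reverse=True):
        words = set(sent.lower().split())
        redundant = any(
            len(words & set(c.lower().split())) / max(len(words), 1) > max_overlap
            for c in chosen)
        if not redundant:
            chosen.append(sent)
        if len(chosen) == k:
            break
    return chosen
```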
Automatic link detection: A sequence labeling approach Gardner J.J.
Xiong L.
Data mining
Semantic web
Sequence labeling
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English The popularity of Wikipedia and other online knowledge bases has recently produced an interest in the machine learning community in the problem of automatic linking. Automatic hyperlinking can be viewed as two subproblems: link detection, which determines the source of a link, and link disambiguation, which determines the destination of a link. Wikipedia is a rich corpus with hyperlink data provided by authors. It is possible to use this data to train classifiers to mimic the authors in some capacity. In this paper, we introduce automatic link detection as a sequence labeling problem. Conditional random fields (CRFs) are a probabilistic framework for labeling sequential data. We show that training a CRF with different types of features from the Wikipedia dataset can be used to automatically detect links with almost perfect precision and high recall. 0 0
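The sequence-labeling formulation above can be sketched with an off-the-shelf CRF package. The toy example below uses the third-party sklearn-crfsuite library and BIO-style B-LINK/I-LINK/O labels; the feature set is a deliberately simple placeholder, not the paper's features.

```python
# Sketch of link detection as BIO sequence labeling with a CRF.
import sklearn_crfsuite

def token_features(tokens, i):
    w = tokens[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),  # capitalization is a plausible link cue
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# One toy training sentence; B-LINK/I-LINK mark tokens inside a wiki link.
tokens = ["He", "visited", "New", "York", "yesterday"]
X = [[token_features(tokens, i) for i in range(len(tokens))]]
y = [["O", "O", "B-LINK", "I-LINK", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```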
Automatic multilingual lexicon generation using wikipedia as a resource Shahid A.R.
Kazakov D.
Data mining
Multilingual lexicons
Natural Language Processing
Web crawler
Web mining
Wikipedia
ICAART 2009 - Proceedings of the 1st International Conference on Agents and Artificial Intelligence English This paper proposes a method for creating a multilingual dictionary by taking the titles of Wikipedia pages in English and then finding the titles of the corresponding articles in other languages. The creation of such multilingual dictionaries has become possible as a result of the exponential increase in the amount of multilingual information on the web. Wikipedia is a prime example of such a multilingual source of information on any conceivable topic, edited by its readers. Here, a web crawler has been used to traverse Wikipedia following the links on a given page. The crawler extracts the title along with the titles of the corresponding pages in the other targeted languages. The result is a set of words and phrases that are translations of each other. For efficiency, the URLs are organized using hash tables. A lexicon has been constructed which contains 7-tuples corresponding to 7 different languages, namely: English, German, French, Polish, Bulgarian, Greek and Chinese. 0 0
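Today the interlanguage-link data the crawler above collected can be fetched directly from the standard MediaWiki query API instead of scraping pages. A minimal sketch (the language subset and helper name are illustrative):

```python
# Sketch: build one translation tuple from Wikipedia interlanguage links
# via the standard MediaWiki API (action=query, prop=langlinks).
import requests

def langlinks(title, langs=("de", "fr", "pl")):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": title, "prop": "langlinks",
                "lllimit": "500", "format": "json"},
        timeout=10).json()
    page = next(iter(resp["query"]["pages"].values()))
    row = {"en": title}
    for ll in page.get("langlinks", []):  # each entry: {"lang": ..., "*": ...}
        if ll["lang"] in langs:
            row[ll["lang"]] = ll["*"]
    return row

print(langlinks("Dog"))  # e.g. {'en': 'Dog', 'de': 'Haushund', ...}
```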
Automatic population and updating of a semantic wiki-based configuration management database Frank Kleiner
Andreas Abecker
Liu N.
INFORMATIK 2009 - Im Focus das Leben, Beitrage der 39. Jahrestagung der Gesellschaft fur Informatik e.V. (GI) English This paper describes our work on designing and implementing a component for automatically integrating and updating information about configuration items into a Semantic Wiki-based configuration management database. The presented solution uses technology for information gathering which is built-in or available for most current mainstream operating systems. By using Semantic Wiki technology, e.g., semantic queries and inference, the handling of configuration-management information is simplified and more powerful analyses are possible. 0 0
Automatic quality assessment of content created collaboratively by web communities: a case study of Wikipedia Daniel H. Dalip
Marcos A. Gonçalves
Marco Cristo
Pável Calado
Machine learning
Quality assessment
SVM
Wikipedia
English The old dream of a universal repository containing all human knowledge and culture is becoming possible through the Internet and the Web. Moreover, this is happening with the direct, collaborative participation of people. Wikipedia is a great example. It is an enormous repository of information with free access and editing, created by the community in a collaborative manner. However, this large amount of information, made available democratically and virtually without any control, raises questions about its relative quality. In this work we explore a significant number of quality indicators, some of them proposed by us and used here for the first time, and study their capability to assess the quality of Wikipedia articles. Furthermore, we explore machine learning techniques to combine these quality indicators into one single assessment judgment. Through experiments, we show that the most important quality indicators are the easiest ones to extract, namely, textual features related to length, structure and style. We were also able to determine which indicators did not contribute significantly to the quality assessment. These were, coincidentally, the most complex features, such as those based on link analysis. Finally, we compare our combination method with a state-of-the-art solution and show significant improvements in terms of effective quality prediction. 0 3
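A minimal sketch of the combination step the study describes: represent each article by a handful of easily extracted length/structure/style indicators and learn a single quality judgment with an SVM (here scikit-learn's SVR). The feature set and the toy data are illustrative assumptions, not the paper's indicators or results.

```python
# Sketch: combine simple textual quality indicators with an SVM regressor.
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [char_count, section_count, images, refs, avg_sentence_len]
X = [[12000, 8, 3, 25, 21.0],
     [800, 1, 0, 0, 34.5],
     [45000, 14, 10, 120, 19.2]]
y = [0.7, 0.1, 0.95]  # hypothetical human quality judgments on a 0-1 scale

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X, y)
print(model.predict([[5000, 4, 1, 10, 23.0]]))
```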
Automatically generating Wikipedia articles: A structure-aware approach Christina Sauper
Regina Barzilay
ACL-IJCNLP 2009 - Joint Conf. of the 47th Annual Meeting of the Association for Computational Linguistics and 4th Int. Joint Conf. on Natural Language Processing of the AFNLP, Proceedings of the Conf. English In this paper, we investigate an approach for creating a comprehensive textual overview of a subject composed of information drawn from the Internet. We use the high-level structure of human-authored texts to automatically induce a domain-specific template for the topic structure of a new overview. The algorithmic innovation of our work is a method to learn topic-specific extractors for content selection jointly for the entire template. We augment the standard perceptron algorithm with a global integer linear programming formulation to optimize both local fit of information into each topic and global coherence across the entire overview. The results of our evaluation confirm the benefits of incorporating structural information into the content selection process. 0 0
Automatically generating Wikipedia articles: a structure-aware approach Christina Sauper
Regina Barzilay
ACL English 0 0
BOWiki: an ontology-based wiki for annotation of data and integration of knowledge in biology Robert Hoehndorf
Joshua Bacher
Michael Backhaus
Sergio Gregorio
Frank Loebe
Kay Prüfer
Alexandr Uciteli
Johann Visagie
Heinrich Herre
Janet Kelso
English MOTIVATION: Ontology development and the annotation of biological data using ontologies are time-consuming exercises that currently require input from expert curators. Open, collaborative platforms for biological data annotation enable the wider scientific community to become involved in developing and maintaining such resources. However, this openness raises concerns regarding the quality and correctness of the information added to these knowledge bases. The combination of a collaborative web-based platform with logic-based approaches and Semantic Web technology can be used to address some of these challenges and concerns. RESULTS: We have developed the BOWiki, a web-based system that includes a biological core ontology. The core ontology provides background knowledge about biological types and relations. Against this background, an automated reasoner assesses the consistency of new information added to the knowledge base. The system provides a platform for research communities to integrate information and annotate data collaboratively. AVAILABILITY: The BOWiki and supplementary material is available at http://www.bowiki.net/. The source code is available under the GNU GPL from http://onto.eva.mpg.de/trac/BoWiki. 0 0
Bilingual co-training for monolingual hyponymy-relation acquisition Oh J.-H.
Kiyotaka Uchimoto
Kentaro Torisawa
ACL-IJCNLP 2009 - Joint Conf. of the 47th Annual Meeting of the Association for Computational Linguistics and 4th Int. Joint Conf. on Natural Language Processing of the AFNLP, Proceedings of the Conf. English This paper proposes a novel framework called bilingual co-training for a large-scale, accurate acquisition method for monolingual semantic knowledge. In this framework, we combine the independent processes of monolingual semantic-knowledge acquisition for two languages using bilingual resources to boost performance. We apply this framework to large-scale hyponymy-relation acquisition from Wikipedia. Experimental results show that our approach improved the F-measure by 3.6-10.3%. We also show that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance. 0 0
Binrank: Scaling dynamic authority-based search using materialized subgraphs Heasoo Hwang
Andrey Balmin
Berthold Reinwald
Erik Nijkamp
Proceedings - International Conference on Data Engineering English Dynamic authority-based keyword search algorithms, such as ObjectRank and personalized PageRank, leverage semantic link information to provide high quality, high recall search in databases and on the Web. Conceptually, these algorithms require a query-time PageRank-style iterative computation over the full graph. This computation is too expensive for large graphs, and not feasible at query time. Alternatively, building an index of pre-computed results for some or all keywords involves very expensive preprocessing. We introduce BinRank, a system that approximates ObjectRank results by utilizing a hybrid approach inspired by materialized views in traditional query processing. We materialize a number of relatively small subsets of the data graph in such a way that any keyword query can be answered by running ObjectRank on only one of the sub-graphs. BinRank generates the sub-graphs by partitioning all the terms in the corpus based on their co-occurrence, executing ObjectRank for each partition using the terms to generate a set of random walk starting points, and keeping only those objects that receive nonnegligible scores. The intuition is that a sub-graph that contains all objects and links relevant to a set of related terms should have all the information needed to rank objects with respect to one of these terms. We demonstrate that BinRank can achieve sub-second query execution time on the English Wikipedia dataset, while producing high quality search results that closely approximate the results of ObjectRank on the original graph. The Wikipedia link graph contains about 10^8 edges, which is at least two orders of magnitude larger than what prior state of the art dynamic authority-based search systems have been able to demonstrate. Our experimental evaluation investigates the trade-off between query execution time, quality of the results, and storage requirements of BinRank. 0 0
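A toy sketch of BinRank's central trick, under the assumption that networkx's personalized PageRank can stand in for ObjectRank: run the random walk from term-specific start nodes, keep only nodes with non-negligible scores as the materialized subgraph, and answer the query on that subgraph. The graph and the term-to-node mapping are invented.

```python
# Hedged sketch: materialize a term-specific subgraph, then rank on it only.
import networkx as nx

G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "A")])
term_nodes = {"wiki": ["A", "C"]}          # random-walk start points per term

def binrank_like(term, keep=0.01):
    starts = term_nodes[term]
    personalization = {n: (1.0 if n in starts else 0.0) for n in G}
    scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
    # keep only nodes with non-negligible score -> the materialized subgraph
    sub = G.subgraph(n for n, s in scores.items() if s > keep)
    return nx.pagerank(sub, personalization={n: personalization[n] for n in sub})

print(binrank_like("wiki"))
```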
Bipartite networks of Wikipedia's articles and authors: A meso-level approach Rut Jesus
Martin Schwartz
Simon Lehmann
Bicliques
Collaboration
Meso-level
Wikipedia
WikiSym English This exploratory study investigates the bipartite network of articles linked by common editors in Wikipedia, 'The Free Encyclopedia that Anyone Can Edit'. We use the articles in the categories (to depth three) of Physics and Philosophy and extract and focus on significant editors (at least 7 or 10 edits per article). We construct a bipartite network, and from it, overlapping cliques of densely connected articles and editors. We cluster these densely connected cliques into larger modules to study examples of larger groups that display how volunteer editors flock around articles driven by interest, real-world controversies, or the result of coordination in WikiProjects. Our results confirm that topics aggregate editors; and show that highly coordinated efforts result in dense clusters. Copyright 2009 ACM. 0 1
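The construction step can be sketched as follows, with invented edit counts: editors above the significance threshold form a bipartite editor-article graph, which is then projected onto articles sharing editors (a step toward the paper's biclique analysis).

```python
# Sketch of the bipartite construction and article projection (toy data).
import networkx as nx
from networkx.algorithms import bipartite

edits = [("alice", "Physics", 12), ("alice", "Quantum", 9),
         ("bob", "Physics", 8), ("bob", "Ethics", 15), ("carol", "Ethics", 2)]

B = nx.Graph()
for editor, article, n in edits:
    if n >= 7:                       # significance threshold from the paper
        B.add_node(editor, bipartite=0)
        B.add_node(article, bipartite=1)
        B.add_edge(editor, article)

articles = {n for n, d in B.nodes(data=True) if d["bipartite"] == 1}
projection = bipartite.weighted_projected_graph(B, articles)
print(list(projection.edges(data=True)))   # articles linked by common editors
```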
Biwiki-Using a business intelligence wiki to form a virtual community of practice for Portuguese master's students Neto M.
Correia A.M.
Business intelligence
Communities of practice
Knowledge creation
Knowledge sharing
Knowledge transfer management
Wiki
Proceedings of the European Conference on Knowledge Management, ECKM English Web 2.0 software in general and wikis in particular have been receiving growing attention as they constitute new and powerful tools, capable of supporting information sharing, creation of knowledge and a wide range of collaborative processes and learning activities. This paper introduces briefly some of the new opportunities made possible by Web 2.0 or the social Internet, focusing on those offered by the use of wikis as learning spaces. A wiki allows documents to be created, edited and shared on a group basis; it has a very easy and efficient markup language, using a simple Web browser. One of the most important characteristics of wiki technology is the ease with which pages are created and edited. The facility for wiki content to be edited by its users means that its pages and structure form a dynamic entity, in permanent evolution, where users can insert new ideas, supplement previously existing information and correct errors and typos in a document at any time, up to the agreed final version. This paper explores wikis as a collaborative learning and knowledge-building space and its potential for supporting Virtual Communities of Practice (VCoPs). In the academic years (2007/8 and 2008/9), students of the Business Intelligence module at the Master's programme of studies on Knowledge Management and Business Intelligence at Instituto Superior de Estatística e Gestão de Informação of the Universidade Nova de Lisboa, Portugal, have been actively involved in the creation of BIWiki-a wiki for Business Intelligence in the Portuguese language. Based on usage patterns and feedback from students participating in this experience, some conclusions are drawn regarding the potential of this technology to support the emergence of VCoPs; some provisional suggestions will be made regarding the use of wikis to support information sharing, knowledge creation and transfer and collaborative learning in Higher Education. 0 0
BorderFlow: A local graph clustering algorithm for natural language processing Ngomo A.-C.N.
Schumacher F.
Lecture Notes in Computer Science English In this paper, we introduce BorderFlow, a novel local graph clustering algorithm, and its application to natural language processing problems. For this purpose, we first present a formal description of the algorithm. Then, we use BorderFlow to cluster large graphs and to extract concepts from word similarity graphs. The clustering of large graphs is carried out on graphs extracted from the Wikipedia Category Graph. The subsequent low-bias extraction of concepts is carried out on two data sets consisting of noisy and clean data. We show that BorderFlow efficiently computes clusters of high quality and purity. Therefore, BorderFlow can be integrated in several other natural language processing applications. 0 0
Brain awareness week and beyond: Encouraging the next generation McNerney C.D.
Chang E.-J.
Spitzer N.C.
Brain awareness week (BAW)
International brain bee
NERVE
Neuroscience core concepts
Neuroscientist-teacher-partner program
Public education
Science olympiad
Wikipedia
Journal of Undergraduate Neuroscience Education English The field of neuroscience is generating increased public appetite for information about exciting brain research and discoveries. As stewards of the discipline, together with FUN and others, the Society for Neuroscience (SfN) embraces public outreach and education as essential to its mission of promoting understanding of the brain and nervous system. The Society looks to its members, particularly the younger generation of neuroscientists, to inspire, inform and engage citizens of all ages, and most importantly our youth, in this important endeavor. Here we review SfN programs and resources that support public outreach efforts to inform, educate and tell the story of neuroscience. We describe the important role the Brain Awareness campaign has played in achieving this goal and highlight opportunities for FUN members and students to contribute to this growing effort. We discuss specific programs that provide additional opportunities for neuroscientists to get involved with K-12 teachers and students in ways that inspire youth to pursue further studies and possible careers in science. We draw attention to SfN resources that support outreach to broader audiences. Through ongoing partnerships such as that between SfN and FUN, the neuroscience community is well positioned to pursue novel approaches and resources, including harnessing the power of the Internet. These efforts will increase science literacy among our citizens and garner more robust support for scientific research. 0 0
Brede wiki: Neuroscience data structured in a wiki Nielsen F.A. CEUR Workshop Proceedings English Set up in January 2009, the Brede Wiki contains data from neuroscience, particularly from published, peer-reviewed neuroimaging papers. Data is stored in simple MediaWiki templates and can automatically be extracted and represented in an SQL format. Off-wiki Web scripts can use the SQL database, so items in the wiki can be queried efficiently, e.g., to find brain activations close to a given coordinate. Template content is non-nested and without wiki markup, making extraction simple and complete. 0 2
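A minimal sketch of the template-to-SQL extraction this abstract describes; the template name, field names and table schema are invented examples, not the Brede Wiki's actual schema.

```python
# Sketch: parse flat (non-nested) MediaWiki template fields into SQLite.
import re
import sqlite3

wikitext = "{{Talairach coordinate | x = 42 | y = -58 | z = 10 }}"

fields = dict(
    (k.strip(), v.strip())
    for k, v in re.findall(r"\|\s*([^=|}]+)=([^|}]+)", wikitext)
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coordinate (x REAL, y REAL, z REAL)")
conn.execute("INSERT INTO coordinate VALUES (?, ?, ?)",
             (fields["x"], fields["y"], fields["z"]))
print(conn.execute("SELECT * FROM coordinate").fetchone())  # (42.0, -58.0, 10.0)
```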
Bridging the gap online web development platforms enable all reference staff to work on subject guides Buczynski J.A. LibGuides
Reference service
Web 2.0
Web design
Wiki
Internet Reference Services Quarterly English Information technology has disrupted the careers of many professionals over the past decade by changing both what work is performed and how it is performed. Technical skill set gaps among reference staff are a serious problem in libraries today. The library subject or topic research pathfinder continues to play a large supporting role in a library's reference service, yet few reference departments have a staff compliment in which each and every team member has the same Web development skill sets. Research guides are no longer static webpages with organized lists. Online Web development services like editme.com and LibGuides enable all staff to equally participate in developing webpages to support reference service, without a steep technical learning curve. 0 0
Bringing science and engineering to the classroom using mobile computing and modern cyberinfrastructure Bugallo M.F.
Marx M.
Bynum D.
Takai H.
Hover J.
MARIACHI
Mobile technology
Multidisciplinary education
TabletPC
Ultra-high energy cosmic rays
Wiki
CSEDU 2009 - Proceedings of the 1st International Conference on Computer Supported Education English This paper reports on the creative educational and research program of MARIACHI (Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization) at Stony Brook University, a unique endeavor that detects and studies atmospheric phenomena (lighting, meteors, or cosmic rays) by using a novel detection technique based on radar-like technology and traditional scintillator ground detectors. During the past and current academic year, our program has been effectively modernized and streamlined in both research and educational aspects with the implementation of mobile technologies by the use of TabletPCs and wireless data collection systems as well as emerging cyberinfrastructure based on dynamic services as wiki, blog, and Internet-based video conferencing. 0 0
Building a Networked Environment in Wikis: The Evolving Phases of Collaborative Learning in a Wikibook Project Hong Lin
Kathleen Kelsey
Journal of Educational Computing Research English Wikis, when used as an open editing tool, can have profound and subtle effects on students' collaborative learning process. Hailed as a collaborative learning and writing tool, many questions remain regarding the pedagogical impacts of using wikis in the classroom. Do students feel comfortable editing each other's wiki articles? Do students learn collaboratively and construct knowledge for the community? What challenges did they experience in a networked environment? This study addressed these questions using qualitative methods, including multiple semi-structured interviews and student reflective journals, for analysis. The findings challenge idealistic hypotheses that wiki work, without careful design and implementation, is naturally beneficial. It was also found that collaborative writing and learning were the exception rather than the norm among participants in the early stages of wiki work. It is recommended that instructors provide highly supportive learning experiences to teach students how to use wikis and how to work collaboratively when implementing wikis to maximize the benefits of this emerging tool. 16 0
Building a semantic virtual museum: From wiki to semantic wiki using named entity recognition Alain Plantec
Vincent Ribaud
Vasudeva Varma
Information extraction
Ontology
Semantic wiki
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA English In this paper, we describe an approach for creating semantic wiki pages from regular wiki pages, in the domain of scientific museums, using information extraction methods in general and named entity recognition in particular. We make use of a domain specific ontology called CIDOC-CRM as a base structure for representing and processing knowledge. We have described the major components of the proposed approach and a three-step process involving named entity recognition, identifying domain classes using the ontology and establishing the properties for the entities in order to generate semantic wiki pages. Our initial evaluation of the prototype shows promising results in terms of enhanced efficiency and time and cost benefits. 0 0
Building a text classifier by a keyword and Wikipedia knowledge Qiang Qiu
YanChun Zhang
Junping Zhu
Qu W.
Keyword
Text classification
Unlabeled document
Wikipedia
Lecture Notes in Computer Science English Traditional approaches for building text classifiers usually require a lot of labeled documents, which are expensive to obtain. In this paper, we propose a new text classification approach based on a keyword and Wikipedia knowledge, so as to avoid labeling documents manually. Firstly, we retrieve a set of related documents about the keyword from Wikipedia. Then, with the help of the related Wikipedia pages, more positive documents are extracted from the unlabeled documents. Finally, we train a text classifier with these positive documents and unlabeled documents. The experimental results on the 20 Newsgroups dataset show that the proposed approach performs very competitively compared with NB-SVM, a PU learner, and NB, a supervised learner. 0 0
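A hedged sketch of this three-stage pipeline with toy stand-in documents: Wikipedia text for the keyword serves as seed positives, unlabeled documents similar to the seeds are promoted to positives, and a Naive Bayes classifier is trained on the result. The similarity threshold is an arbitrary illustration.

```python
# Sketch: keyword seed from Wikipedia -> pseudo-positives -> Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.naive_bayes import MultinomialNB

wiki_seed = ["baseball is a bat and ball game played between two teams"]
unlabeled = ["a baseball game between two teams ended after nine innings",
             "the parliament passed the new budget law today"]

vec = TfidfVectorizer().fit(wiki_seed + unlabeled)
sims = cosine_similarity(vec.transform(unlabeled), vec.transform(wiki_seed))

# unlabeled docs close to the seed become positives, the rest stay negative
labels = [1 if s.max() > 0.05 else 0 for s in sims]
clf = MultinomialNB().fit(vec.transform(unlabeled + wiki_seed),
                          labels + [1] * len(wiki_seed))
print(clf.predict(vec.transform(["the teams played another baseball game"])))
```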
Building knowledge base for Vietnamese information retrieval Nguyen T.C.
Le H.M.
Phan T.T.
Knowledge base
SVM
Term extraction
Topic classification
VnKB
IiWAS2009 - The 11th International Conference on Information Integration and Web-based Applications and Services English At present, the Vietnamese knowledge base (vnKB) is one of the most important focuses of Vietnamese researchers because of its applications in wide-ranging areas such as Information Retrieval (IR) and Machine Translation (MT). There have been several separate projects developing the vnKB in various domains. Training the vnKB is the most difficult part because of the quantity and quality of training data, and the lack of an available Vietnamese corpus of acceptable quality. This paper introduces an approach which first extracts semantic information from the Vietnamese Wikipedia (vnWK), then trains the proposed vnKB by applying the support vector machine (SVM) technique. The experimentation with the proposed approach shows that it is a potential solution because of its good results, and proves that it can provide further benefits when applied to our Vietnamese Semantic Information Retrieval system. 0 0
CMIC@INEX 2008: Link-the-wiki track Lecture Notes in Computer Science English 0 0
CSIR at INEX 2008 link-the-wiki track Lecture Notes in Computer Science English 0 0
CalSWIM: A wiki-based data sharing platform Yasser Ganjisaffar
Sara Javanmardi
Grant S.
Lopes C.V.
Data-sharing
Knowledge management
Wiki
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering English Organizations increasingly create massive internal digital data repositories and are looking for technical advances in managing, exchanging and integrating explicit knowledge. While most of the enabling technologies for knowledge management have been around for several years, the ability to cost-effectively share, integrate and analyze data within a cohesive infrastructure eluded organizations until the advent of Web 2.0 applications. In this paper, we discuss our investigations into using a wiki as a web-based interactive knowledge management system, which is integrated with features for easy data access, data integration and analysis. Using the enhanced wiki, it is possible to make organizational knowledge sustainable, expandable, outreaching and continually up-to-date. The wiki is currently in use as the California Sustainable Watershed Information Manager. We evaluate our work according to the requirements of knowledge management systems. The result shows that our solution satisfies more requirements compared to other tools. 0 0
Campus career collaboration: Do the research. Land the job Dugan M.
Bergstrom G.
Doan T.
Campus collaborations
Career centers
Information literacy
Strategic alignment
Wiki
College and Undergraduate Libraries English Purdue University's Management and Economics Library (MEL) recognized a unique opportunity to build a strong collaboration among campus units in the career development area, enhancing students' interviewing abilities and employment opportunities. The products and services presented in this article focus on MEL's unique campuswide collaboration with twelve campus libraries and thus far eight career units in the creation and implementation of a Career Wiki. The Career Wiki, both a library product and service, offers electronic access to career resources, and it effectively increases use of all library collections and information resources, thereby adding to the services each career unit offers. 0 0
Capturing knowledge during a dynamically evolving R&D project: A particular application of wiki software International Journal of Knowledge, Culture and Change Management English 0 0
China physiome project: A comprehensive framework for anatomical and physiological databases from the China digital human and the visible rat Han D.
Qiaoling Liu
Luo Q.
China Digital Human
Database
Markup language
Physiome Project
Visible Rat phantom
Wiki
Proceedings of the IEEE English The study of the connection between biological structure and function, as well as between anatomical data and mechanical or physiological models, has been of increasing significance with the rapid advancement of experimental and computational physiology. The China Physiome Project (CPP) is dedicated to optimizing the exploration of these connections through the standardization and integration of structural datasets, and their cryosectional-image derivatives, using various standards, collaboration mechanisms, and online services. The CPP framework incorporates three-dimensional anatomical models of human and rat anatomy, finite-element models of the whole-body human skeleton, and multiparticle radiological dosimetry data for both the human and rat computational phantoms. The ontology of CPP was defined using MeSH, with all of its standardized model descriptions implemented in M3L, a multiscale modeling language based on XML. Services provided on the wiki concept include collaborative research, model version control, data sharing, and online analysis of M3L documents. As a sample case, a multiscale model for human heart modeling, in which familial hypertrophic cardiomyopathy was studied according to structure-function relations from the genetic level to the organ level, is integrated into the framework to demonstrate the functionality of multiscale physiological modeling based on CPP. 0 0
Classifying Tags using Open Content Resources Simon Overell
Börkur Sigurbjörnsson
Roelof van Zwol
English 0 0
Classifying web pages by using knowledge bases for entity retrieval Kiritani Y.
Ma Q.
Masatoshi Yoshikawa
Lecture Notes in Computer Science English In this paper, we propose a novel method to classify Web pages by using knowledge bases for entity search, which is a kind of typical Web search for information related to a person, location or organization. First, we map a Web page to entities according to the similarities between the page and the entities. Various methods for computing such similarity are applied. For example, we can compute the similarity between a given page and a Wikipedia article describing a certain entity. The frequency of an entity appearing in the page is another factor used in computing the similarity. Second, we construct a directed acyclic graph, named PEC graph, based on the relations among Web pages, entities, and categories, by referring to YAGO, a knowledge base built on Wikipedia and WordNet. Finally, by analyzing the PEC graph, we classify Web pages into categories. The results of some preliminary experiments validate the methods proposed in this paper. 0 0
Clustering Documents Using a Wikipedia-Based Concept Representation Anna Huang
David N. Milne
Eibe Frank
Ian H. Witten
English This paper shows how Wikipedia and the semantic knowledge it contains can be exploited for document clustering. We first create a concept-based document representation by mapping the terms and phrases within documents to their corresponding articles (or concepts) in Wikipedia. We also developed a similarity measure that evaluates the semantic relatedness between concept sets for two documents. We test the concept-based representation and the similarity measure on two standard text document datasets. Empirical results show that although further optimizations could be performed, our approach already improves upon related techniques. 0 0
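The representation step can be illustrated with a tiny invented anchor dictionary: surface phrases are mapped to Wikipedia concepts, and documents are compared by the cosine of their concept vectors. This is a sketch of the general idea, not the authors' exact relatedness measure.

```python
# Sketch: phrase -> Wikipedia concept mapping, then cosine over concepts.
from collections import Counter
import math

anchors = {"big apple": "New York City", "nyc": "New York City",
           "jaguar": "Jaguar (animal)"}          # invented anchor dictionary

def concept_vector(text):
    text = text.lower()
    return Counter(c for phrase, c in anchors.items() if phrase in text)

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

d1 = concept_vector("I moved to the Big Apple last year")
d2 = concept_vector("NYC rents keep rising")
print(cosine(d1, d2))   # 1.0: different words, same concept
```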
Clustering XML documents using frequent subtrees Kutty S.
Thanh Tran
Nayak R.
Yanyan Li
Clustering
Frequent mining
Frequent subtrees
INEX
Structure and content
Wikipedia
XML document mining
Lecture Notes in Computer Science English This paper presents an experimental study conducted over the INEX 2008 Document Mining Challenge corpus using both the structure and the content of XML documents for clustering them. The concise common substructures known as the closed frequent subtrees are generated using the structural information of the XML documents. The closed frequent subtrees are then used to extract the constrained content from the documents. A matrix containing the term distribution of the documents in the dataset is developed using the extracted constrained content. The k-way clustering algorithm is applied to the matrix to obtain the required clusters. In spite of the large number of documents in the INEX 2008 Wikipedia dataset, the proposed frequent subtree-based clustering approach was successful in clustering the documents. This approach significantly reduces the dimensionality of the terms used for clustering without much loss in accuracy. 0 0
Clustering hyperlinks for topic extraction: An exploratory analysis Villarreal S.E.G.
Elizalde L.M.
Viveros A.C.
Graph local clustering
K-means
Principal direction divisive partitioning
Topic detection
Wikipedia
8th Mexican International Conference on Artificial Intelligence - Proceedings of the Special Session, MICAI 2009 English In a Web of increasing size and complexity, a key issue is automatic document organization, which includes topic extraction in collections. Since we consider topics as document clusters with semantic properties, we are concerned with exploring suitable clustering techniques for their identification in hyperlinked environments (where we only regard structural information). For this purpose, three algorithms (PDDP, k-means, and graph local clustering) were executed over a document subset of an increasingly popular corpus: Wikipedia. Results were evaluated with unsupervised metrics (cosine similarity, semantic relatedness, Jaccard index) and suggest that promising results can be produced for this particular domain. 0 0
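One of the cited unsupervised metrics is easy to make concrete; the sketch below computes the Jaccard index between the link sets of two articles, with invented link data.

```python
# Sketch: Jaccard index between two articles' out-link sets (toy data).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

links = {"Quark": {"Physics", "Particle", "Gluon"},
         "Gluon": {"Physics", "Particle", "Quark"}}
print(jaccard(links["Quark"], links["Gluon"]))  # 2 shared / 4 total = 0.5
```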
Co-Ordinating Peace Research and Education in Australia: A Report on the Canberra Forum of 2 May, 2008. James Page International Review of Education / Internationale Zeitschrift für Erziehungswissenschaft, 02/03/2011 Information about several papers discussed during the Australian university teachers forum on peace and conflict studies in Canberra, Australian Capital Territory on May 2, 2008 is presented. The forum highlights the discussion on how to better organize and co-ordinate university-level peace education in Australia. It further features the issue concerning peace education through Wikipedia networking and innovative teaching methods. 0 0
Codifying collaborative knowledge: Using Wikipedia as a basis for automated ontology learning Guo T.
Schwartz D.G.
Burstein F.
Linger H.
Collaboration
Ontology
Social knowledge
Web 2.0
Wikipedia
Knowledge Management Research and Practice English In the context of knowledge management, ontology construction can be considered as a part of capturing the body of knowledge of a particular problem domain. Traditionally, ontology construction assumes a tedious codification of the domain experts' knowledge. In this paper, we describe a new approach to ontology engineering that has the potential of bridging the dichotomy between codification and collaboration by turning to Web 2.0 technology. We propose to shift the primary source of ontology knowledge from the expert to socially emergent bodies of knowledge such as Wikipedia. Using Wikipedia as an example, we demonstrate how core terms and relationships of a domain ontology can be distilled from this socially constructed source. As an illustration, we describe how our approach achieved over 90% conceptual coverage compared with gold-standard hand-crafted ontologies, such as Cyc. What emerges is not a folksonomy, but rather a formal ontology that has nonetheless found its roots in social knowledge. 0 0
Codifying collaborative knowledge: using Wikipedia as a basis for automated ontology learning Tao Guo
D.G. Schwartz
F. Burstein
H. Linger
Knowledge Management Research & Practice In the context of knowledge management, ontology construction can be considered as a part of capturing the body of knowledge of a particular problem domain. Traditionally, ontology construction assumes a tedious codification of the domain experts' knowledge. In this paper, we describe a new approach to ontology engineering that has the potential of bridging the dichotomy between codification and collaboration by turning to Web 2.0 technology. We propose to shift the primary source of ontology knowledge from the expert to socially emergent bodies of knowledge such as Wikipedia. Using Wikipedia as an example, we demonstrate how core terms and relationships of a domain ontology can be distilled from this socially constructed source. As an illustration, we describe how our approach achieved over 90% conceptual coverage compared with gold-standard hand-crafted ontologies, such as Cyc. What emerges is not a folksonomy, but rather a formal ontology that has nonetheless found its roots in social knowledge. 0 0
Colaboração, edição, transparência: desafios e possibilidades de uma wikificação do jornalismo Carlos Frederico de Brito d’Andréa Metamorfoses jornalísticas 2: a reconfiguração da forma Portuguese 0 5
Collaboration, editing, transparency: challenges and possibilities of a wikification of journalism Carlos Frederico de Brito d’Andréa Wiki-journalism
Wikipedia
Editing
Collaborative journalism
Brazilian Journalism Research English This article discusses the possibilities and challenges of the incorporation of wikis in journalistic editorial offices, especially in the processes involving the drafting and editing of texts. The reflection takes into consideration a context marked by 1) continuous fragmented publication of information; 2) simplification and horizontal organization of journalistic routines, which directly impacts the figure of the editor (mainly in the media on the web); and 3) increasing public participation in the development of the news, with the mediation of professionals. “Wiki-journalism” practices on the Wikinews and Wikipedia sites and initiatives originating with print publications are presented and discussed, and based on them we distinguish two models, which differ in the degree of control of the process by the journalists. 9 0
Collaborative authoring of biomedical terminologies using a semantic Wiki. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium English 0 0
Collaborative ontology construction using template-based wiki for semantic web applications Lim S.-K.
Ko I.-Y.
Collaborative ontology construction
Semantic wiki
Template
Proceedings - 2009 International Conference on Computer Engineering and Technology, ICCET 2009 English Collaborative ontology construction and management have become an important issue for allowing domain experts to build the domain knowledge that is needed for Semantic Web applications. However, it is normally a difficult task for domain experts to create an ontology-based model and to produce knowledge elements based on the model. In this paper, we propose a Wiki-based environment where domain experts can easily and collaboratively organize domain knowledge. In this approach, templates can be defined and associated with an ontology to enable users to arrange knowledge components in Wiki pages and to store them based on an ontology-based model. We have developed and tested a Template-based Semantic Wiki for u-health applications. 0 0
Collaborative summarization: When collaborative filtering meets document summarization Qu Y.
Qingcai Chen
Collaborative filtering
Personalized summarization
Single document summarization
Tag recommendation
PACLIC 23 - Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation English We propose a new way of generating personalized single document summaries by combining two complementary methods: collaborative filtering for tag recommendation and graph-based affinity propagation. The proposed method, named Collaborative Summarization, consists of two steps iteratively repeated until convergence. In the first step, the possible tags of one user on a new document are predicted using collaborative filtering, which is based on the tagging histories of all users. The predicted tags of the new document are supposed to represent both the key idea of the document itself and the specific content of interest to that user. In the second step, the predicted tags are used to guide a graph-based affinity propagation algorithm that generates the personalized summarization. The generated summary is in turn used to fine-tune the prediction of tags in the first step. The most intriguing advantage of collaborative summarization is that it harvests human intelligence, in the form of existing tag annotations of webpages such as delicious.com bookmark tags, to tackle a complex NLP task which is very difficult for artificial intelligence alone. An experiment on summarization of Wikipedia documents based on delicious.com bookmark tags shows the potential of this method. 0 0
Collaborative web-publishing with a semantic wiki Studies in Computational Intelligence English 0 0
Collaborative wiki tagging Studies in Computational Intelligence English 0 0
Collective annotation of Wikipedia entities in web text Sayali Kulkarni
Amit Singh
Ganesh Ramakrishnan
Soumen Chakrabarti
English To take the first step beyond keyword-based search toward entity-based search, suitable token spans ("spots") on documents must be identified as references to real-world entities from an entity catalog. Several systems have been proposed to link spots on Web pages to entities in Wikipedia. They are largely based on local compatibility between the text around the spot and textual metadata associated with the entity. Two recent systems exploit inter-label dependencies, but in limited ways. We propose a general collective disambiguation approach. Our premise is that coherent documents refer to entities from one or a few related topics or domains. We give formulations for the trade-off between local spot-to-entity compatibility and measures of global coherence between entities. Optimizing the overall entity assignment is NP-hard. We investigate practical solutions based on local hill-climbing, rounding integer linear programs, and pre-clustering entities followed by local optimization within clusters. In experiments involving over a hundred manually-annotated Web pages and tens of thousands of spots, our approaches significantly outperform recently-proposed algorithms. 0 0
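A compact sketch of the collective objective and the hill-climbing heuristic, with invented compatibility and coherence scores: each spot is assigned the entity that maximizes local compatibility plus pairwise coherence, iterating until no single reassignment improves the total.

```python
# Sketch: collective spot-entity assignment via local hill-climbing.
import itertools

spots = ["jaguar", "safari"]
candidates = {"jaguar": ["Jaguar (animal)", "Jaguar Cars"],
              "safari": ["Safari (trip)", "Safari (browser)"]}
local = {("jaguar", "Jaguar (animal)"): 0.6, ("jaguar", "Jaguar Cars"): 0.5,
         ("safari", "Safari (trip)"): 0.5, ("safari", "Safari (browser)"): 0.4}
coherence = {frozenset(["Jaguar (animal)", "Safari (trip)"]): 0.9}

def score(assign):
    s = sum(local[(spot, e)] for spot, e in assign.items())
    for e1, e2 in itertools.combinations(assign.values(), 2):
        s += coherence.get(frozenset([e1, e2]), 0.0)  # global coherence term
    return s

assign = {s: candidates[s][-1] for s in spots}   # arbitrary start
improved = True
while improved:                                  # local hill-climbing
    improved = False
    for s in spots:
        best = max(candidates[s], key=lambda e: score({**assign, s: e}))
        if best != assign[s]:
            assign[s], improved = best, True
print(assign)   # {'jaguar': 'Jaguar (animal)', 'safari': 'Safari (trip)'}
```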
Collective intelligence approach for formulating a BOK of social informatics, an interdisciplinary field of study Yoshifumi Masunaga
Yoshiyuki Shoji
Kazunari Ito
Body of knowledge
BOK
Collaborative document
Collective intelligence
Semantic MediaWiki
Social informatics
Wiki
WikiSym English This presentation shows a collective intelligence approach for formulating a body of knowledge (BOK) of social informatics (SI), a relatively new interdisciplinary field of study, by implementing a BOK constructor based on Semantic MediaWiki. 0 0
Collective intelligence system engineering Ioanna Lykourentzou
Vergados D.J.
Vassili Loumos
Collective intelligence
System engineering
Proceedings of the International Conference on Management of Emergent Digital EcoSystems, MEDES '09 English Collective intelligence (CI) is an emerging research field which aims at combining human and machine intelligence, to improve community processes usually performed by large groups. CI systems may be collaborative, like Wikipedia, or competitive, like a number of recently established problem-solving companies that attempt to find solutions to difficult R&D or marketing problems drawing on the competition among web users. The benefits that CI systems bring to user communities, combined with the fact that they share a number of basic common characteristics, open up the prospect for the design of a general methodology that will allow the efficient development and evaluation of CI. In the present work, an attempt is made to establish the analytical foundations and main challenges for the design and construction of a generic collective intelligence system. First, collective intelligence systems are categorized into active and passive and specific examples of each category are provided. Then, the basic modeling framework of CI systems is described. This includes concepts such as the set of possible user actions, the CI system state and the individual and community objectives. Additional functions, which estimate the expected user actions, the future state of the system, as well as the level of objective fulfillment, are also established. In addition, certain key issues that need to be considered prior to system launch are also described. The proposed framework is expected to promote efficient CI design, so that the benefit gained by the community and the individuals through the use of CI systems will be maximized. Copyright 2009 ACM. 0 0
Coloring RDF triples to capture provenance Flouris G.
Fundulaki I.
Pediaditis P.
Theoharis Y.
Christophides V.
Lecture Notes in Computer Science English Recently, the W3C Linking Open Data effort has boosted the publication and inter-linkage of large amounts of RDF datasets on the Semantic Web. Various ontologies and knowledge bases with millions of RDF triples from Wikipedia and other sources, mostly in e-science, have been created and are publicly available. Recording provenance information of RDF triples aggregated from different heterogeneous sources is crucial in order to effectively support trust mechanisms, digital rights and privacy policies. Managing provenance becomes even more important when we consider not only explicitly stated but also implicit triples (through RDFS inference rules) in conjunction with declarative languages for querying and updating RDF graphs. In this paper we rely on colored RDF triples represented as quadruples to capture and manipulate explicit provenance information. 0 0
Combining unstructured, fully structured and semi-structured information in semantic wikis Sint R.
Sebastian Schaffert
Stroka S.
Ferstl R.
CEUR Workshop Proceedings English The growing impact of Semantic Wikis underscores the importance of finding a strategy to store textual articles, semantic metadata and management data. Due to their different characteristics, each data type requires a specialized storage system, as inappropriate storage reduces performance, robustness, flexibility and scalability. Hence, it is important to identify a sophisticated strategy for storing and synchronizing different types of data structures in a way that provides the best mix of the previously mentioned properties. In this paper we compare fully structured, semi-structured and unstructured data and present their typical applications. Moreover, we discuss how all data structures can be combined and stored for one application, and consider three synchronization design alternatives to keep the distributed data storages consistent. Furthermore, we present the semantic wiki KiWi, which uses an RDF triplestore in combination with a relational database as the basis for the persistence of data, and discuss its concrete implementation and design decisions. 0 0
Comment: The wiki way in a hurry - The ICIS anecdote MIS Quarterly: Management Information Systems English 0 0
Comment: the wiki way in a hurry--the ICIS anecdote Dov Teeni MIS Q. English 0 0
Comment: where is the theory in wikis? Ann Majchrzak MIS Q. English 0 0
Communication process and collaborative work in web 2.0 environment Kim E.
Jinhyun Ahn
Lee D.
Blogs
Channel expansion theory
Collaboration
Communication process
Wiki
ACM International Conference Proceeding Series English Because a higher level of media richness improves the performance of collaborative work such as knowledge sharing, efforts to raise media richness are encouraged. The Channel Expansion Theory argues that individuals' perceptions of media richness vary according to each individual's knowledge base built from prior experiences related to the communication situation. This study explored the channel expansion effects in the new CMC environment, Web 2.0. In particular, we considered communication process modes (i.e., conveyance and convergence) as a factor moderating the effects. The research model was verified by an experiment with student subjects. 0 0
Community-legitimated e-testing: A basis for a novel, self-organized and sustainable (e)learning culture? Nestle F.
Nestle N.
E-Learning
E-Testing
Educational standards
Evaluation
Open content
Wikipedia
CSEDU 2009 - Proceedings of the 1st International Conference on Computer Supported Education English Based on the assumption that educational standards can be operationally defined by pools of specific testing items, properties of such item pools are discussed. The main suggestion of the paper is that pools of testing items defining a standard should be freely accessible on the Internet, that they provide immediate feedback in the form of scores, and that certified results should be equivalent to results of classroom work. For the development of the item pools, web-2.0-type methods can be much more effective than closed expert groups and item evaluation by statistical methods. Finally, the consequences of such transparent community-legitimated standards for the future role of teachers and future forms of learning environments are discussed. 0 0
Compact full-text indexing of versioned document collections He J.
Yan H.
Suel T.
Inverted index
Inverted index compression
Search engine
Versioned documents
Web archives
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English We study the problem of creating highly compressed full-text index structures for versioned document collections, that is, collections that contain multiple versions of each document. Important examples of such collections are Wikipedia or the web page archive maintained by the Internet Archive. A straightforward indexing approach would simply treat each document version as a separate document, such that index size scales linearly with the number of versions. However, several authors have recently studied approaches that exploit the significant similarities between different versions of the same document to obtain much smaller index sizes. In this paper, we propose new techniques for organizing and compressing inverted index structures for such collections. We also perform a detailed experimental comparison of new techniques and the existing techniques in the literature. Our results on an archive of the English version of Wikipedia, and on a subset of the Internet Archive collection, show significant benefits over previous approaches. Copyright 2009 ACM. 0 0
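The version-redundancy idea can be sketched (in a much simpler form than the paper's actual index organizations) by storing one posting per term and document plus a bitmask of the versions containing the term, with toy documents.

```python
# Sketch: one posting per (term, document) plus a version bitmask,
# instead of one posting per (term, version).
from collections import defaultdict

versions = {"doc1": ["wiki article about cats",
                     "wiki article about cats and dogs",
                     "article about dogs"]}

index = defaultdict(dict)                 # term -> {doc: version bitmask}
for doc, vs in versions.items():
    for i, text in enumerate(vs):
        for term in set(text.split()):
            index[term][doc] = index[term].get(doc, 0) | (1 << i)

# 'cats' occurs in versions 0 and 1 -> bitmask 0b11
print(bin(index["cats"]["doc1"]))
```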
Comparative analysis of clicks and judgments ir evaluation Jaap Kamps
Marijn Koolen
Andrew Trotman
Transaction log analysis
Information retrieval
Wikipedia
Proceedings of Workshop on Web Search Click Data, WSCD'09 English Queries and click-through data taken from search engine transaction logs is an attractive alternative to traditional test collections, due to its volume and the direct relation to end-user querying. The overall aim of this paper is to answer the question: How does click-through data differ from explicit human relevance judgments in information retrieval evaluation? We compare a traditional test collection with manual judgments to transaction-log-based test collections, by using queries as topics and subsequent clicks as pseudo-relevance judgments for the clicked results. Specifically, we investigate the following two research questions: Firstly, are there significant differences between clicks and relevance judgments? Earlier research suggests that although clicks and explicit judgments show reasonable agreement, clicks are different from static absolute relevance judgments. Secondly, are there significant differences between system rankings based on clicks and based on relevance judgments? This is an open question, but earlier research suggests that comparative evaluation in terms of system ranking is remarkably robust. Copyright 2009 ACM. 0 0
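The second research question suggests a simple worked example: compare the system ranking induced by click-based scores with the one induced by manual judgments, using Kendall's tau. The per-system MAP values below are invented.

```python
# Sketch: rank correlation between click-based and judgment-based evaluation.
from scipy.stats import kendalltau

systems = ["bm25", "lm", "pagerank", "boolean"]
map_by_judgments = [0.31, 0.28, 0.25, 0.10]   # MAP under manual judgments
map_by_clicks    = [0.29, 0.30, 0.22, 0.08]   # MAP under click pseudo-judgments

tau, p = kendalltau(map_by_judgments, map_by_clicks)
print(f"Kendall tau = {tau:.2f} (p = {p:.2f})")  # high tau = robust ranking
```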
Comparing and merging versioned wiki pages Lecture Notes in Business Information Processing English 0 0
Comparing featured article groups and revision patterns correlations in Wikipedia G. Poderi First Monday Collaboratively written by thousands of people, Wikipedia produces entries which are consistent with criteria agreed by Wikipedians and of high quality. This article focuses on Wikipedia's featured articles and shows that not every contribution can be considered as being of equal quality. Two groups of articles are analysed by focusing on the edits distribution and the main editors' contribution. The research shows how these aspects of the revision patterns can change dependent upon the category to which the articles belong. 0 2
Comparison of middle school, high school and community college students' wiki activity in Globaloria-West Virginia (pilot year-two) Rebecca Reynolds
Caperton I.H.
Computer-supported collaborative learning
Constructionism
Digital literacy
Game design
Globaloria
Serious games
Social media
Web 2.0
Wiki
WikiSym English Constructionist-learning researchers have long emphasized the epistemological value of programming games for learning and cognition. This study reports student experiences in a program of game design and Web 2.0 learning offered to disadvantaged West Virginia middle, high school and community college students. Specifically, the poster presents findings on the extent of student use of the Wiki for project management, teamwork and self-presentation of game design attributes, comparing results across 13 school pilot locations. Also presented are students' self-reported recommendations for possible improvements to the wiki. Results indicate that some locations were more active in their wiki use; the poster addresses location-specific implementation context factors that may have played a role in the variant results. 0 0
Completing Wikipedia's hyperlink structure through dimensionality reduction Robert West
Doina Precup
Joelle Pineau
Data mining
Graph mining
Link mining
Principal component analysis
Wikipedia
International Conference on Information and Knowledge Management, Proceedings English Wikipedia is the largest monolithic repository of human knowledge. In addition to its sheer size, it represents a new encyclopedic paradigm by interconnecting articles through hyperlinks. However, since these links are created by human authors, links one would expect to see are often missing. The goal of this work is to detect such gaps automatically. In this paper, we propose a novel method for augmenting the structure of hyperlinked document collections such as Wikipedia. It does not require the extraction of any manually defined features from the article to be augmented. Instead, it is based on principal component analysis, a well-founded mathematical generalization technique, and predicts new links purely based on the statistical structure of the graph formed by the existing links. Our method does not rely on the textual content of articles; we are exploiting only hyperlinks. A user evaluation of our technique shows that it improves the quality of top link suggestions over the state of the art and that the best predicted links are significantly more valuable than the 'average' link already present in Wikipedia. Beyond link prediction, our algorithm can potentially be used to point out topics an article fails to cover and to cluster articles semantically. Copyright 2009 ACM. 0 0
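A small sketch of the spirit of the method, with a toy adjacency matrix: a low-rank (here SVD-based) reconstruction of the link graph scores article pairs, and high-scoring absent links become suggestions. The paper's exact PCA formulation may differ.

```python
# Sketch: low-rank reconstruction of the link matrix as a link predictor.
import numpy as np

A = np.array([[0, 1, 1, 0],     # rows/cols = articles, 1 = existing link
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 0, 0]], dtype=float)

U, s, Vt = np.linalg.svd(A)
k = 2                            # number of components kept
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

missing = [(i, j, A_hat[i, j]) for i in range(4) for j in range(4)
           if i != j and A[i, j] == 0]
missing.sort(key=lambda t: -t[2])
print(missing[0])                # most plausible absent link
```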
Comprehensive query-dependent fusion using regression-on-folksonomies: A case study of multimodal music search Bin Zhang
Xiang Q.
Lu H.
Shen J.
Yafang Wang
Folksonomy
Multimodal search
Music
Query-dependent fusion
MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums English The combination of heterogeneous knowledge sources has been widely regarded as an effective approach to boost retrieval accuracy in many information retrieval domains. While various technologies have been recently developed for information retrieval, multimodal music search has not kept pace with the enormous growth of data on the Internet. In this paper, we study the problem of integrating multiple online information sources to conduct effective query dependent fusion (QDF) of multiple search experts for music retrieval. We have developed a novel framework to construct a knowledge space of users' information need from online folksonomy data. With this innovation, a large number of comprehensive queries can be automatically constructed to train a better generalized QDF system against unseen user queries. In addition, our framework models QDF problem by regression of the optimal combination strategy on a query. Distinguished from the previous approaches, the regression model of QDF (RQDF) offers superior modeling capability with less constraints and more efficient computation. To validate our approach, a large scale test collection has been collected from different online sources, such as Last.fm, Wikipedia, and YouTube. All test data will be released to the public for better research synergy in multimodal music search. Our performance study indicates that the accuracy, efficiency, and robustness of the multimodal music search can be improved significantly by the proposed folksonomy-RQDF approach. In addition, since no human involvement is required to collect training examples, our approach offers great feasibility and practicality in system development. Copyright 2009 ACM. 0 0
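The regression-on-queries idea can be sketched with invented training pairs: a regression model maps query features to per-expert mixing weights, which then fuse the experts' scores. The features, weights and expert names are illustrative, not the paper's RQDF model.

```python
# Sketch: regression from query features to fusion weights (toy data).
import numpy as np
from sklearn.linear_model import LinearRegression

# training data: query features -> optimal weights for (text, audio, tags)
query_feats = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
best_weights = np.array([[0.7, 0.1, 0.2], [0.1, 0.6, 0.3], [0.4, 0.3, 0.3]])

rqdf = LinearRegression().fit(query_feats, best_weights)

def fuse(query_feat, expert_scores):
    w = rqdf.predict([query_feat])[0]
    return expert_scores @ w        # weighted sum of the experts' scores

scores = np.array([[0.9, 0.2, 0.4],   # one row of expert scores per document
                   [0.3, 0.8, 0.5]])
print(fuse([0.8, 0.2], scores))
```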
Computational challenges in e-commerce Feigenbaum J.
Parkes D.C.
Pennock D.M.
Communications of the ACM English This article surveys significant challenges that need to be resolved to improve computation in Internet-based commerce, or e-commerce. Resource allocation, knowledge integration, peer production and interaction, and security and privacy issues are significant challenges that need special attention. Allocating resources involves participants declaring their perceived value for the resources, while the market computes the best allocation and the price each participant must pay. Knowledge integration can be defined as the aggregation of information from diverse and frequently self-interested sources. Peer production and interaction involves large-scale collaboration on information artifacts such as Wikipedia and Linux. Service providers need to take effective measures to provide security and privacy for digital content distribution. 0 0
Computing semantic relatedness using structured information of Wikipedia Wang
Rui-Qin
Fan Kong-Sheng
Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science) A novel semantic relatedness measurement technique based on the link information of Wikipedia was presented. Compared with the WordNet repository, Wikipedia has a wider range, more comprehensive knowledge and a faster update speed, which makes it an ideal resource for semantic processing. Unlike other Wikipedia-based semantic relatedness computing approaches, the new technique uses only Wikipedia's link structure rather than its full text content, which avoids burdensome text processing. During the computation of relatedness, the positive effects of incoming links and outgoing links were taken into account, while a link-number adjustment factor was introduced to eliminate bias. Using several widely used test sets of manually defined measures of semantic relatedness as benchmarks, the proposed method substantially improved the correlation of the computed relatedness scores with human judgments, compared with previous WordNet-based methods and other Wikipedia-based methods. 0 0
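As a hedged illustration of link-based relatedness, the sketch below uses the well-known Milne-Witten in-link measure, not necessarily this paper's exact formula with its adjustment factor; the in-link sets and the collection size are invented.

```python
# Sketch: relatedness from the overlap of in-link sets (Milne-Witten style).
import math

N = 3_000_000                     # assumed number of Wikipedia articles
inlinks = {"Cat": {"Pet", "Mammal", "Felidae"},
           "Dog": {"Pet", "Mammal", "Canidae"}}

def relatedness(a, b):
    A, B = inlinks[a], inlinks[b]
    common = len(A & B)
    if common == 0:
        return 0.0
    dist = (math.log(max(len(A), len(B))) - math.log(common)) / \
           (math.log(N) - math.log(min(len(A), len(B))))
    return 1.0 - dist

print(relatedness("Cat", "Dog"))  # close to 1.0: heavily shared in-links
```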
Concept vector extraction from Wikipedia category network Masumi Shirakawa
Kotaro Nakayama
Takahiro Hara
Shojiro Nishio
Wikipedia
Categorization
Concept vector
Web mining
ICUIMC English 0 0
Conceptual image retrieval over a large scale database Adrian Popescu
Le Borgne H.
Moellic P.-A.
Image retrieval
Large-scale database
Query reformulation
Lecture Notes in Computer Science English Image retrieval in large-scale databases is currently based on a textual string-matching procedure. However, this approach requires an accurate annotation of images, which is not the case on the Web. To tackle this issue, we propose a reformulation method that reduces the influence of noisy image annotations. We extract a ranked list of related concepts for terms in the query from WordNet and Wikipedia, and use them to expand the initial query. Then some visual concepts are used to re-rank the results for queries containing, explicitly or implicitly, visual cues. First evaluations on a diversified corpus of 150,000 images were convincing, since the proposed system was ranked 4th and 2nd at the WikipediaMM task of the ImageCLEF 2008 campaign [1]. 0 0
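The reformulation step can be sketched with NLTK's WordNet interface (the wordnet corpus must be downloaded once via nltk.download); the Wikipedia side of the expansion and the concept ranking are omitted, and the per-term limit is an arbitrary assumption.

```python
# Sketch: expand query terms with related WordNet concepts.
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def expand_query(query, per_term=3):
    expanded = set(query.split())
    for term in query.split():
        for syn in wordnet.synsets(term)[:per_term]:
            expanded.update(l.replace("_", " ") for l in syn.lemma_names())
    return expanded

print(expand_query("tiger beach"))  # adds e.g. 'Panthera tigris'
```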
Confessions of a Librarian or: How I Learned to Stop Worrying and Love Google. Claire B. Gunnels
Amy Sisson
Community & Junior College Libraries Have you ever stopped to think about life before Google? We will make the argument that Google is the first manifestation of Web 2.0, of the power and promise of social networking and the ubiquitous wiki. We will discuss the positive influence of Google and how Google and other social networking tools afford librarians leading-edge technologies and new opportunities to teach information literacy. Finally, we will include a top seven list of googlesque tools that no librarian should be without. 0 0
Confessions of a librarian or: How i learned to stop worrying and love Google Gunnels C.B.
Sisson A.
Critical thinking
Google
Information literacy
Social networking
Wikipedia
Community and Junior College Libraries English Have you ever stopped to think about life before Google? We will make the argument that Google is the first manifestation of Web 2.0, of the power and promise of social networking and the ubiquitous wiki. We will discuss the positive influence of Google and how Google and other social networking tools afford librarians leading-edge technologies and new opportunities to teach information literacy. Finally, we will include a top seven list of googlesque tools that no librarian should be without. 0 0
Conflict and consensus in the Chinese version of Wikipedia Liao Han-Teng
IEEE Technology and Society Magazine It is not easy to initiate a new language version of Wikipedia. Although anyone can propose a new language version without financial cost, certain Wikipedia policies for establishing a new language version must be followed. Once approved and created, the new language version needs tools to facilitate writing and reading in the new language. Even if a team tackles these technical and linguistic issues, a nascent community has to then develop its own editorial and administrative policies and guidelines, sometimes by translating and ratifying the policies in another language version (usually English). Given that Wikipedia does not impose a universal set of editorial and administrative policies and guidelines, the cultural and political nature of such communities remains open-ended. 0 1
Conflicts in collaboration: a study of tensions in the process of collective writing on Web 2.0 Aline de Campos Federal University of Rio Grande do Sul - UFRGS Portuguese Starting from collaboration as a process of collective intelligence (LÉVY, 2003) and the wisdom of crowds (SUROWIECKI, 2006), this work studies conflict as an important factor in these collective processes. Imbalances are part of human history; however, they are recurrently seen as negative and as annihilating relations, which leaves aside their potential to drive beneficial reconfigurations of the processes in which they operate. An interesting practice for studying conflicts in collaboration is online collective writing. Independence of space and time, and the multiplicity of voices that can focus on a textual project, open space for negotiations, debates and tensions of various kinds. In addition, there is sometimes an overly optimistic view that overlooks issues of structure, dynamics and behavior in the production of meaning, and the conflict that may arise from them. In this sense, the question is: what is the influence of conflicts on the process of online collective writing? This project, through theoretical and empirical research, seeks to answer this question from a communicational perspective, which takes into account relationships and interactions beyond the harmony permanently ascribed to these processes by various areas of knowledge. For the empirical verification, two collaborative projects of textual production are examined: Wikipedia, the free encyclopedia, widely used and popular, and Co-dex, the social dictionary, a project of the Laboratory of Computer-mediated Interaction of the Federal University of Rio Grande do Sul, an environment created for concepts, reviews and biographies in the area of communication and information science. In both, systematic observations were made of the tensions arising from interaction and collective production so that, together with the theoretical contribution developed, the guiding question of this work could be investigated. It is concluded that the conflicts that permeate collaborative processes of meaning production relate to aspects of a textual and relational order, and that these aspects are interrelated and influence each other. It is believed that the tensions are strongly relevant to the development of relations between collaborators and the building of content, introducing the imbalances needed for a "majorant reequilibration" (Piaget, 1977). 0 0
Connections between the lines: Augmenting social networks with text Jian Chang
Boyd-Graber J.
Blei D.M.
Graphical models
Social network learning
Statistical topic models
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining English Network data is ubiquitous, encoding collections of relationships between entities such as people, places, genes, or corporations. While many resources for networks of interesting entities are emerging, most of these can only annotate connections in a limited fashion. Although relationships between entities are rich, it is impractical to manually devise complete characterizations of these relationships for every pair of entities on large, real-world corpora. In this paper we present a novel probabilistic topic model to analyze text corpora and infer descriptions of its entities and of relationships between those entities. We develop variational methods for performing approximate inference on our model and demonstrate that our model can be practically deployed on large corpora such as Wikipedia. We show qualitatively and quantitatively that our model can construct and annotate graphs of relationships and make useful predictions. Copyright 2009 ACM. 0 0
Consensus Choice for Reconciling Social Collaborations on Semantic Wikis Jason J. Jung
Ngoc Thanh Nguyen
Conflict resolution
Consensus theory
Semantic wiki
ICCCI English 0 0
Construction of disambiguated Folksonomy ontologies using Wikipedia Noriko Tomuro
Andriy Shepitsen
People's Web English 0 0
Content hole search in community-type content Akiyo Nadamoto
Eiji Aramaki
Takeshi Abekawa
Yohei Murakami
Blogs
Community
Content hole search
SNS
WWW'09 - Proceedings of the 18th International World Wide Web Conference English In community-type content such as blogs and SNSs, we call the user's unawareness of information a "content hole" and the search for this information a "content hole search." A content hole search differs from similarity searching and has a variety of types. In this paper, we propose different types of content holes and define each type. We also propose an analysis of dialogue related to community-type content and introduce content hole search by using Wikipedia as an example. Copyright is held by the author/owner(s). 0 0
Content hole search in community-type content using Wikipedia Akiyo Nadamoto
Eiji Aramaki
Takeshi Abekawa
Yohei Murakami
SNS
Blogs
Community
Content hole search
IiWAS English 0 0
Content quality assessment related frameworks for social media Chai K.
Vidyasagar Potdar
Dillon T.
Content quality assessment
Discussion forums
Social media
Weblogs
Wiki
Lecture Notes in Computer Science English The assessment of content quality (CQ) in social media adds a layer of complexity over traditional information quality assessment frameworks. Challenges arise in accurately evaluating the quality of content that has been created by users from different backgrounds, for different domains and consumed by users with different requirements. This paper presents a comprehensive review of 19 existing CQ assessment related frameworks for social media in addition to proposing directions for framework improvements. 0 0
Context based wikipedia linking Michael Granitzer
Seifert C.
Zechner M.
Context Exploitation
INEX
Link-the-Wiki
Lecture Notes in Computer Science English Automatically linking Wikipedia pages can be done either content based, by exploiting word similarities, or structure based, by exploiting characteristics of the link graph. Our approach focuses on a content based strategy by detecting Wikipedia titles as link candidates and selecting the most relevant ones as links. The relevance calculation is based on the context, i.e. the surrounding text of a link candidate. Our goal was to evaluate the influence of the link-context on selecting relevant links and on determining a link's best-entry-point. Results show that a whole Wikipedia page provides the best context for resolving links, and that straightforward inverse document frequency based scoring of anchor texts achieves around 4% less Mean Average Precision on the provided data set. 0 0
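An illustrative aside on the content-based strategy this abstract describes: link candidates are Wikipedia titles detected in the text, each scored by the inverse document frequency of its anchor text. The following minimal Python sketch is ours, not the authors' code, and the title list and document frequencies are toy placeholders rather than the INEX data.

    import math

    def idf(term, doc_freq, n_docs):
        # Rare anchor texts get higher scores than common ones.
        return math.log(n_docs / (1 + doc_freq.get(term, 0)))

    def link_candidates(page_text, titles):
        # A candidate is any known Wikipedia title occurring verbatim in the page.
        text = page_text.lower()
        return [t for t in titles if t.lower() in text]

    def rank_links(page_text, titles, doc_freq, n_docs, top_k=5):
        scored = [(idf(t.lower(), doc_freq, n_docs), t)
                  for t in link_candidates(page_text, titles)]
        return [t for _, t in sorted(scored, reverse=True)[:top_k]]

    # Toy stand-ins for Wikipedia title and frequency statistics.
    titles = ["Information retrieval", "Precision", "Wikipedia"]
    doc_freq = {"information retrieval": 120, "precision": 5400, "wikipedia": 9800}
    print(rank_links("Precision in information retrieval ...", titles, doc_freq, 100000))

Scoring candidates against the whole page, rather than a narrow window, mirrors the abstract's finding that the full Wikipedia page provides the best context.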
Contextual retrieval of single Wikipedia articles to support the reading of academic abstracts Christopher Jordan Dalhousie University (Canada) English Google style search engines are currently some of the most popular tools that people use when they are looking for information. There are a variety of reasons that people can have for conducting a search, although these reasons can generally be distilled down to users being engaged in a task and developing an information need that impedes them from completing that task at a level which is satisfactory to them. The Google style search engine, however, is not always the most appropriate tool for every user task. In this thesis, our approach to search differs from the traditional search engine as we focus on providing support to users who are reading academic abstracts. When people do not understand a passage in the abstract they are reading, they often look for more detailed information or a definition. Presenting them with a list of possibly relevant search results, as a Google style search would, may not immediately meet this information need. In the case of reading, it is logical to hypothesize that users would prefer to receive a single document containing the information that they need. Developed in this thesis are retrieval algorithms that use the abstract being read along with the passage that the user is interested in to retrieve a single highly related article from Wikipedia. The top performing algorithm from the experiments conducted in this thesis is able to retrieve an appropriate article 77% of the time. This algorithm was deployed in a prototype reading support tool, LiteraryMark, in order to investigate the usefulness of such a tool. The results from the user experiment conducted in this thesis indicate that LiteraryMark is able to significantly improve the understanding and confidence levels of people reading abstracts. 0 0
Contrails of Learning: Using New Technologies for Vertical Knowledge-building Anson C.M.
Miller-Cochran S.K.
Active learning
Adventurous scholarship
Constructivism
Graduate education
The Responsive Ph.D.
Wiki
Computers and Composition English Higher education is still dominated by objectivist models of learning involving experts who convey information to novices. Educational research has shown that this model is less effective than more active, constructivist approaches that help learners to build new knowledge on existing knowledge. Although to a lesser extent, the objectivist model is perpetuated in graduate education, a context where students are, ironically, assumed to be working alongside their mentors and becoming part of the culture of research in their fields. Using a recent report issued by the Woodrow Wilson National Fellowship Foundation, The Responsive Ph.D.: Innovations in Doctoral Education (2005), we argue that emerging technologies can help to create constructivist learning environments that challenge students to participate more actively in their own education. As illustration, we consider a graduate seminar on educational technologies that uses a wiki not only to engage students in knowledge-building but to link subsequent sections of the course into an ongoing, purposeful activity that functions both within and beyond the classroom. We explore some of the challenges we faced in getting students to take control of the wiki and overcome their existing assumptions about power and authority in graduate education. © 2008 Elsevier Inc. All rights reserved. 0 1
Coordinating Curriculum Implementation Using Wiki-supported Graph Visualization Sonja Kabicher
Renate Motschnig-Pitrik
ICALT English 0 0
Coordinating tasks on the commons: Designing for personal goals, expertise and serendipity Krieger M.
Stark E.
Klemmer S.R.
Crowdsourcing
Task management
Social software
Wikipedia
Conference on Human Factors in Computing Systems - Proceedings English How is work created, assigned, and completed on large-scale, crowd-powered systems like Wikipedia? And what design principles might enable these federated online systems to be more effective? This paper reports on a qualitative study of work and task practices on Wikipedia. Despite the availability of tag-based community-wide task assignment mechanisms, informants reported that self-directed goals, within-topic expertise, and fortuitous discovery are more frequently used than community-tagged tasks. We examine how Wikipedia editors organize their actions and the actions of other participants, and what implications this has for understanding, and building tools for, crowd-powered systems, or any web site where the main force of production comes from a crowd of online participants. From these observations and insights, we developed WikiTasks, a tool that integrates with Wikipedia and supports both grassroots creation of site-wide tasks and self-selection of personal tasks, accepted from this larger pool of community tasks. Copyright 2009 ACM. 0 0
Coordination in collective intelligence: The role of team structure and task interdependence Aniket Kittur
Bongshin Lee
Kraut R.E.
Collective intelligence
Coordination
Social collaboration
Social computing
Wiki
Wikipedia
Conference on Human Factors in Computing Systems - Proceedings English The success of Wikipedia has demonstrated the power of peer production in knowledge building. However, unlike many other examples of collective intelligence, tasks in Wikipedia can be deeply interdependent and may incur high coordination costs among editors. Increasing the number of editors increases the resources available to the system, but it also raises the costs of coordination. This suggests that the dependencies of tasks in Wikipedia may determine whether they benefit from increasing the number of editors involved. Specifically, we hypothesize that adding editors may benefit low-coordination tasks but have negative consequences for tasks requiring a high degree of coordination. Furthermore, concentrating the work to reduce coordination dependencies should enable more efficient work by many editors. Analyses of both article ratings and article review comments provide support for both hypotheses. These results suggest ways to better harness the efforts of many editors in social collaborative systems involving high coordination tasks. Copyright 2009 ACM. 0 0
Corpus Exploitation from Wikipedia for Ontology Construction Gaoying Cui
Qin Lu
Wenjie Li
Yirong Chen
English 0 0
Cosmos: A wiki data management system Wu Q.
Calton Pu
Danesh Irani
Version control systems
Wiki
WikiSym English Wiki applications are becoming increasingly important for knowledge sharing between large numbers of users. To protect against vandalism and recover from damaging edits, wiki applications need to maintain revision histories of all documents. Due to the large amounts of data and traffic, a wiki application needs to store the data economically on disk and process them efficiently. Current wiki data management systems make a trade-off between storage requirements and access time for document update and retrieval. We introduce a new data management system, Cosmos, to balance this trade-off. 0 0
Crawling English-Japanese person-name transliterations from the Web Sato S. Automatic lexicon compilation
Mining transliteration pairs
Person name
WWW'09 - Proceedings of the 18th International World Wide Web Conference English Automatic compilation of lexicons is a dream of lexicon compilers as well as lexicon users. This paper proposes a system that crawls English-Japanese person-name transliterations from the Web, which works as a back-end collector for automatic compilation of a bilingual person-name lexicon. Our crawler collected 561K transliterations in five months. From them, an English-Japanese person-name lexicon with 406K entries has been compiled by automatic post-processing. This lexicon is much larger than other similar resources, including the English-Japanese lexicon of HeiNER obtained from Wikipedia. Copyright is held by the author/owner(s). 0 0
Creating "the Wikipedia of pros and cons" Brooks Lindsay Wiki
Debate
Deliberation
Dialogue
Encyclopedia
Politics
Pros and cons
WikiSym English 0 0
Creating User Profiles Using Wikipedia Krishnan Ramanathan
Komal Kapoor
DMOZ
Evaluation
Hierarchy
Personalization
User modeling
User profiles
Wikipedia
ER English 0 0
Creating community through the Use of a class wiki Johnson K.A.
Jamie Bartolino
Classroom
Community building
Wiki
Lecture Notes in Computer Science English This study examines the use of a class wiki in a course offered to incoming freshmen at a college in central Pennsylvania. The wiki was used to supplement instruction in a classroom-based course. The study shows that the wiki was helpful in building community among incoming students, and also helped them to grow academically in the course. The class wiki also helped foster positive feelings toward the course as well as students' first semester at the college. 0 0
Creating dynamic wiki pages with section-tagging D. Helic
A. Us Saeed
C. Trattner
Austria-forum
Section tagging
Wiki systems
Amount of information
New approaches
Online encyclopedia
Social bookmarking
Tag clouds
Tagging systems
CEUR Workshop Proceedings English Authoring and editing processes in wiki systems are often tedious. The sheer amount of information makes it difficult for authors to organize the related information in a way that is easily accessible and retrievable for future reference. Social bookmarking systems provide possibilities to tag and organize related resources that can later be retrieved by navigating in so-called tag clouds. Usually, tagging systems do not offer a possibility to tag sections of resources but only a resource as a whole. However, authors of new wiki pages are typically interested only in certain parts of other wiki pages that are related to their current editing process. This paper describes a new approach applied in a wiki-based online encyclopedia that allows authors to tag interesting sections of wiki pages. The tags are then used to dynamically create new wiki pages out of tagged sections for further editing. 0 0
Creating user profiles using Wikipedia Komal Kapoor
Krishnan Ramanathan
Wikipedia
User profiles
User modeling
DMOZ
The 28th international conference on conceptual modeling (ER 2009), Gramado Brazil, Springer LNCS 5829 Creating user profiles is an important step in personalization. Many methods for user profile creation have been developed to date using different representations such as term vectors and concepts from an ontology like DMOZ. In this paper we propose and evaluate different methods for creating user profiles using Wikipedia as the representation. The key idea in our approach is to map documents to Wikipedia concepts at different levels of resolution: words, key phrases, sentences, paragraphs, the document summary and the entire document itself. We suggest a method for evaluating recall by pooling the relevant results from the different methods and evaluate our results for both precision and recall. We also suggest a novel method for profile evaluation by assessing the recall over a known ontological profile drawn from DMOZ. 0 0
Cross-cultural collaboration Wiki: evolving knowledge about international teamwork Nicole Schadewitz
Norhayati Zakaria
Cross-cultural collaboration
Design patterns
Wiki
IWIC English 0 0
Cross-cultural collaboration wiki - Evolving knowledge about international teamwork Proceedings of the 2009 ACM SIGCHI International Workshop on Intercultural Collaboration, IWIC'09 English 0 0
Cross-lingual Alignment and Completion of Wikipedia Templates Gosse Bouma
Sergio Duarte
Zahurul Islam
English 0 0
Cross-lingual Dutch to english alignment using EuroWordNet and Dutch Wikipedia Gosse Bouma CEUR Workshop Proceedings English This paper describes a system for linking the thesaurus of the Netherlands Institute for Sound and Vision to English WordNet and dbpedia. We used EuroWordNet, a multilingual wordnet, and Dutch Wikipedia as intermediaries for the two alignments. EuroWordNet covers most of the subject terms in the thesaurus, but the organization of the cross-lingual links makes selection of the most appropriate English target term almost impossible. Using page titles, redirects, disambiguation pages, and anchor text harvested from Dutch Wikipedia gives reasonable performance on subject terms and geographical terms. Many person and organization names in the thesaurus could not be located in (Dutch or English) Wikipedia. 0 0
Cross-lingual semantic relatedness using encyclopedic knowledge Hassan S.
Rada Mihalcea
EMNLP 2009 - Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: A Meeting of SIGDAT, a Special Interest Group of ACL, Held in Conjunction with ACL-IJCNLP 2009 English In this paper, we address the task of crosslingual semantic relatedness. We introduce a method that relies on the information extracted from Wikipedia, by exploiting the interlanguage links available between Wikipedia versions in multiple languages. Through experiments performed on several language pairs, we show that the method performs well, with a performance comparable to monolingual measures of relatedness. 0 0
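The method above relies on interlanguage links between Wikipedia versions. As a purely illustrative aid, here is a minimal Python sketch of the underlying idea; the concept vectors and the link table are invented toy data, and cosine similarity stands in for whatever relatedness measure the paper actually uses.

    import math

    # Toy concept vectors: word -> {Wikipedia concept: weight}, one map per language.
    en_vectors = {"moon": {"Moon": 0.9, "Apollo_program": 0.4}}
    es_vectors = {"luna": {"Luna": 0.8, "Programa_Apolo": 0.5}}

    # Interlanguage links: Spanish concept -> English concept (toy data).
    es_to_en = {"Luna": "Moon", "Programa_Apolo": "Apollo_program"}

    def cross_lingual_relatedness(en_word, es_word):
        u = en_vectors[en_word]
        # Project the Spanish vector into the English concept space via the links.
        v = {es_to_en[c]: w for c, w in es_vectors[es_word].items() if c in es_to_en}
        dot = sum(u.get(c, 0.0) * w for c, w in v.items())
        norm = (math.sqrt(sum(x * x for x in u.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0

    print(cross_lingual_relatedness("moon", "luna"))  # close to 1.0 for related words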
Crossing textual and visual content in different application scenarios Julien Ah-Pine
Marco Bressan
Stephane Clinchant
Gabriela Csurka
Yves Hoppenot
Jean-Michel Renders
Multimedia Tools and Applications This paper deals with multimedia information access. We propose two new approaches for hybrid text-image information processing that can be straightforwardly generalized to the more general multimodal scenario. Both approaches fall in the trans-media pseudo-relevance feedback category. Our first method proposes using a mixture model of the aggregate components, considering them as a single relevance concept. In our second approach, we define trans-media similarities as an aggregation of monomodal similarities between the elements of the aggregate and the new multimodal object. We also introduce the monomodal similarity measures for text and images that serve as basic components for both proposed trans-media similarities. We show how one can frame a large variety of problems in order to address them with the proposed techniques: image annotation or captioning, text illustration, and multimedia retrieval and clustering. Finally, we present how these methods can be integrated in two applications: a travel blog assistant system and a tool for browsing Wikipedia taking into account the multimedia nature of its content. © 2008 Springer Science+Business Media, LLC. 0 1
Crosslanguage Retrieval Based on Wikipedia Statistics Andreas Juffinger
Roman Kern
Michael Granitzer
Lecture Notes in Computer Science English In this paper we present the methodology, implementations and evaluation results of the crosslanguage retrieval system we have developed for the Robust WSD Task at CLEF 2008. Our system is based on query preprocessing for translation and homogenisation of queries. The presented preprocessing of queries includes two stages: firstly, a query translation step based on term statistics of co-occurring articles in Wikipedia; secondly, different disjunct query composition techniques to search in the CLEF corpus. We apply the same preprocessing steps for the monolingual as well as the crosslingual task, thereby treating these tasks fairly and in a similar way. The evaluation revealed that the similar processing comes at nearly no cost for monolingual retrieval but enables us to do crosslanguage retrieval, and also allows a feasible comparison of our system performance on these two tasks. 0 0
Crosslanguage retrieval based on Wikipedia statistics Andreas Juffinger
Roman Kern
Michael Granitzer
CLEF English 0 0
Customer information management based on semantic web in E-commerce Ji Z. Artificial psychology
E-business
Semantic wiki
User information management
Web personalization
FBIE 2009 - 2009 International Conference on Future BioMedical Information Engineering English This paper addresses the demands of customer information management, puts forward an information management model oriented to customer demand, and provides a formalized description of the model. On the basis of the model, we design a semantic-web-overlay-network-based information management framework for e-commerce. The framework provides a flat information sharing and management environment by linking customers' information demands with the information architecture. It provides the capability of information resource releasing, discovery and location through a combined bottom-up and top-down information organizing mode. 0 0
Customer knowledge and service development, the Web 2.0 role in co-production Boselli R.
Cesarini M.
Mezzanzanica M.
Service Development Process
Service Interaction Patterns
Services Science
Web 2.0 tools
World Academy of Science, Engineering and Technology English The paper is concerned with relationships between SSME and ICTs and focuses on the role of Web 2.0 tools in the service development process. The research presented aims at exploring how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation and the collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of such analysis is the identification and description of a pattern characterising specific services in which users collaborate by means of web tools with value co-producers during the service process. The pattern of collaborative co-production concerning several categories of services including knowledge based services is then discussed. 0 0
Customized edit interfaces for wikis via semantic annotations Angelo Di Iorio
Duca S.
Alberto Musetti
Righini S.
Rossi D.
Fabio Vitali
Editor
MediaWiki
Metadata
Semantics
Template
Web 2.0
CEUR Workshop Proceedings English Authoring support for semantic annotations represents the wiki way of the Semantic Web, ultimately leading to the wiki version of the Semantic Web's eternal dilemma: why should authors correctly annotate their content? The obvious solution is to make the ratio between the effort needed and the advantages acquired as small as possible. At least two specificities set wikis apart from other Web-accessible content in this respect: social aspects (wikis are often the expression of a community) and technical issues (wikis are edited "on-line"). Being related to a community, wikis are intrinsically associated with the model of knowledge of that community, making the relation between wiki content and ontologies the result of a natural process. Being edited on-line, wikis can benefit from a synergy of Web technologies that support the whole information sharing process, from authoring to delivery. In this paper we present an approach to reduce the authoring effort by providing ontology-based tools to integrate models of knowledge with authoring-support technologies, using a functional approach to content fragment creation that plays nicely with the "wiki way" of managing information. 0 0
Cyber engineering co-intelligence digital ecosystem: The GOFASS methodology Leong P.
Siak C.B.
Miao C.
Collaborative
Collective intelligence
Intelligent agent interaction
Service oriented
2009 3rd IEEE International Conference on Digital Ecosystems and Technologies, DEST '09 English Co-intelligence, also known as collective or collaborative intelligence, is the harnessing of human knowledge and intelligence that allows groups of people to act together in ways that seem to be intelligent. Co-intelligence Internet applications such as Wikipedia are the first steps toward developing digital ecosystems that support collective intelligence. Peer-to-peer (P2P) systems are well suited to co-intelligence digital ecosystems because they allow each service client machine to act also as a service provider without any central hub in the network of cooperative relationships. However, dealing with server farms, clusters and meshes of wireless edge devices will be the norm in the next generation of computing, yet most present P2P systems have been designed with a fixed, wired infrastructure in mind. This paper proposes a methodology for cyber engineering intelligent-agent-mediated co-intelligence digital ecosystems. Our methodology caters for co-intelligence digital ecosystems with wireless edge devices working with service-oriented information servers. 0 0
Cybersuicide and the adolescent population: challenges of the future? Ria Birbal
Hari D Maharajh
Risa Birbal
Maria Clapperton
Johnathan Jarvis
Anushka Ragoonath
Kali Uppalapati
International Journal of Adolescent Medicine and Health Cybersuicide is a term used in reference to suicide and its ideations on the Internet. Cybersuicide is associated with websites that lure vulnerable members of society and empower them with various methods and approaches to deliberate self-harm. Ease of accessibility to the Internet and the rate at which information is dispersed contribute to the promotion of 'offing' one's self which is particularly appealing to adolescents. This study aims to explore this phenomenon, which seems to be spreading across generations, cultures, and races. Information and articles regarding Internet suicide and other terminology, as well as sub-classifications concerning this new form of suicide, were reviewed. Through search engines such as Google, Yahoo and Wikipedia, we investigated the differentiations between 'web cam' suicide, 'net suicide packs', sites that merely offer advice on how to commit suicide and sites that are essential in providing the means of performing the act. Additionally, materials published in scientific journals and data published by the Public Health Services, Centers for Disease Control, and materials from private media agencies were reviewed. Resources were also sourced from The Faculty of Medical Sciences Library, UWI, at Mt. Hope. Cybersuicide is a worldwide problem among adolescents and a challenge of the future. 0 0
DBpedia - A crystallization point for the Web of Data Christian Bizer
Jens Lehmann
Georgi Kobilarov
Sören Auer
Christian Becker
Richard Cyganiak
Sebastian Hellmann
Journal of Web Semantics The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of Data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia. © 2009 Elsevier B.V. All rights reserved. 0 0
DBpedia Live Extraction Sebastian Hellmann
Claus Stadler
Jens Lehmann
Sören Auer
English 0 0
DBpedia live extraction Sebastian Hellmann
Claus Stadler
Jens Lehmann
Sören Auer
Lecture Notes in Computer Science English The DBpedia project extracts information from Wikipedia, interlinks it with other knowledge bases, and makes this data available as RDF. So far the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the heavy-weight extraction process has been a drawback. It requires manual effort to produce a new release and the extracted information is not up-to-date. We extended DBpedia with a live extraction framework, which is capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates. This allows direct modifications of the knowledge base and closer interaction of users with DBpedia. We also show how the Wikipedia community itself is now able to take part in the DBpedia ontology engineering process and that an interactive roundtrip engineering between Wikipedia and DBpedia is made possible. 0 0
DBpedia – A Crystallization Point for the Web of Data Christian Bizer
Jens Lehmann
Georgi Kobilarov
Sören Auer
Christian Becker
Richard Cyganiak
Sebastian Hellmann
Web of data
Linked data
Knowledge Extraction
Wikipedia
RDF
Journal of Web Semantics: Science, Services and Agents on the World Wide Web English The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia. 0 0
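As both DBpedia abstracts note, every DBpedia identifier can be dereferenced over the Web into an RDF description of the entity. A minimal sketch of doing so in Python with the rdflib library follows; the library choice and the Berlin example are ours, for illustration only.

    from rdflib import Graph

    # Dereference a DBpedia resource; the /data/ URLs serve RDF/XML directly.
    g = Graph()
    g.parse("http://dbpedia.org/data/Berlin.rdf", format="xml")

    # Print a handful of the triples that describe the entity.
    for subj, pred, obj in list(g)[:10]:
        print(subj, pred, obj)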
Data center hosting services governance portal and Google map-based collaborations Yih J.-S.
Liu Y.-H.
Business Support System
Cloud Computing
Data Center
Google Maps
Group Wisdom
Hosting Services
On Boarding
Operations Support System
Service Delivery Process
Web. 2.0
Wiki
Lecture Notes in Computer Science English In the IT services business, a multi-year enterprise application hosting contract often carries a price tag that is an order of magnitude larger than that of the solution development. For hosting services providers to compete over the revenue stream, the ability to provide rapid application deployment is a critical consideration on top of the price differences. In fact, a data center is tested repeatedly in its responsiveness, as application hosting requires iterations of deployment adjustments due to business condition, IT optimization, security, and compliance reasons. In this paper, we report an enterprise application deployment governance portal, which coordinates service delivery roles, integrates system management tools, and above all keeps the clients involved or at least informed. In the data center operations such as: early engagement, requirement modeling, solution deployment designs, service delivery, steady state management, and close out; this paper illustrates how the Google Map technology can be used in representing both the target deployment architecture and delivery process. The Google map model can then be used in delivery process execution and collaborations. The resulting governance portal has been fully implemented and is in active use for the data center business transformation in IBM. 0 0
Decentralization in Wikipedia Governance Andrea Forte
Vanesa Larco
Amy Bruckman
English How does "self-governance" happen in Wikipedia? Through in-depth interviews with 20 individuals who have held a variety of responsibilities in the English-language Wikipedia, we obtained rich descriptions of how various forces produce and regulate social structures on the site. Although Wikipedia is sometimes portrayed as lacking oversight, our analysis describes Wikipedia as an organization with highly refined policies, norms, and a technological architecture that supports organizational ideals of consensus building and discussion. We describe how governance on the site is becoming increasingly decentralized as the community grows and how this is predicted by theories of commons-based governance developed in offline contexts. We also briefly examine local governance structures called WikiProjects through the example of WikiProject Military History, one of the oldest and most prolific projects on the site. 0 2
Deep Thought: web based system for managing and presentation of research and student projects Gregar T.
Pospisilova R.
Pitner T.
Agile programming
Bug-tracking
Extreme programming
Metadata
Ontology
Plug-in
Portal
Project management system
Python
Semantics
Subversion
Tag
Trac
Visualisation
Web 2.0
Wiki
XML
CSEDU 2009 - Proceedings of the 1st International Conference on Computer Supported Education English Plenty of projects are carried out each day at academic venues - small in-term student projects without any real-world application, bachelor and diploma theses, and large interdisciplinary or internationally supported projects. Each of them has its own set of requirements on how to manage it. The aim of our paper is to describe these requirements and to show how we tried to satisfy them. As a result of further analysis we designed and implemented the system Deep Thought (under development since autumn 2007), which unites the management of distinct categories of projects in one portal. The system is based on open-source technology, it is modular, and hence it is capable of integrating heterogeneous tools such as version control systems, wikis, and project presentation and management. This paper also introduces the aims of future development of the system, such as interoperability with other management systems or better connection with lecture content and the teaching process. 0 0
Defining a universal actor content-element model for exploring social and information networks considering the temporal dynamic Muller C.
Benedikt Meuthrath
Sabina Jeschke
Proceedings of the 2009 International Conference on Advances in Social Network Analysis and Mining, ASONAM 2009 English The emergence of the Social Web offers new opportunities for scientists to explore open virtual communities. Various approaches have appeared in terms of statistical evaluation, descriptive studies and network analyses, which pursue an enhanced understanding of existing mechanisms developing from the interplay of technical and social infrastructures. Unfortunately, at the moment, all these approaches are separate and no integrated approach exists. This gap is filled by our proposal of a concept which is composed of a universal description model, temporal network definitions, and a measurement system. The approach addresses the necessary interpretation of Social Web communities as dynamic systems. In addition to the explicated models, a software tool is briefly introduced employing the specified models. Furthermore, a scenario is used where an extract from the Wikipedia database shows the practical application of the software. 0 0
Delft University at the TREC 2009 entity track: Ranking Wikipedia entities Pavel Serdyukov
Arjen De Vries
NIST Special Publication English This paper describes the details of our participation in the Entity track of TREC 2009. 0 0
Demo: Historyviz - Visualizing events and relations extracted from wikipedia Sipos R.
Abhijit Bhole
Blaz Fortuna
Marko Grobelnik
Mladenic D.
Lecture Notes in Computer Science English HistoryViz provides a new perspective on a certain kind of textual data, in particular the data available in Wikipedia, where different entities are described and put in historical perspective. Instead of browsing through pages each describing a certain topic, we can look at the relations between entities and events connected with the selected entities. The presented solution implemented in HistoryViz provides the user with a graphical interface allowing them to view events concerning the selected person on a timeline and to view relations to other entities as a graph that can be dynamically expanded. 0 0
Deploying PHP applications on IBM DB2 in the cloud: MediaWiki as a case study Leons Petrazickis CASCON English 0 0
Deriving semantic sessions from semantic clusters Safarkhani B.
Talabeigi M.
Mohsenzadeh M.
Meybodi M.R.
Semantic cluster
Semantic sub-session
Semantic vectors
Wikipedia
Proceedings - 2009 International Conference on Information Management and Engineering, ICIME 2009 English An important phase in any web personalization system is transaction identification. Recently a number of studies have been conducted to incorporate the semantics of a web site into the representation of transactions. Building a hierarchy of concepts manually is time consuming and expensive. In this paper we intend to address these shortcomings. Our contribution is a mechanism to automatically improve the representation of the user in the website using a comprehensive lexical semantic resource and semantic clusters. We utilize Wikipedia, the largest encyclopedia to date, as a rich lexical resource to enhance the automatic construction of vector model representations of user sessions. We cluster web pages based on their content with hierarchical unsupervised fuzzy clustering algorithms, which are effective methods for exploring the structure of complex real data where grouping of overlapping and vague elements is necessary. Entries in web server logs are used to identify users and visit sessions, while web pages or resources in the site are clustered based on their content and their semantics. These clusters of web documents are used to scrutinize the discovered web sessions in order to identify what we call sub-sessions. Each sub-session has a consistent goal. This process improves the derivation of semantic sessions from web site user page views. Our experiments show that the proposed system significantly improves the quality of the web personalization process. 0 0
Description of Some Spontaneus Species and the Possibilities of Use Them in the Rocky Gardens B. Erzsebet
C. Maria
Z. Dumitru
D. Adelina
Z. Adrian
S. Georgeta
B. Mihai
Journal of Plant Development 0 0
Design Alternatives for a MediaWiki to Support Collaborative Writing in Higher Education Classes Sumonta Kasemvilas
Lorne Olfman
Awareness
Collaborative authoring
Constructivist learning
Design science research
Talk page
Evaluation
MediaWiki
Project management
Web 2.0
Issues in Informing Science and Information Technology English Constructivist learning mechanisms such as collaborative writing have emerged as a result of the development of Web 2.0 technologies. We define the term mandatory collaborative writing to describe a writing activity where the group has a firm deadline. Our study focuses on how a wiki can fully support mandatory group writing. The motivation of this design science research study emerges from a graduate Knowledge Management class assignment to write a wiki book. The project outcome shows that the wiki instance used for the project, MediaWiki, could better facilitate the process with a set of extensions that support discussion, evaluation, and project management. We outline designs for these mechanisms: 1) a discussion mechanism that changes the way users discuss content on a wiki page and increases group awareness; 2) an evaluation mechanism that provides a tool for the instructor to monitor and assess students’ performance; and 3) a project management tool that increases awareness of the status of each component of the writing project and provides an overall summary of the project. A demonstration of the principles to a focus group provided a basic proof of the validity of these mechanisms. 16 1
Design patterns in microtechnology Albers A.
Deigendesch T.
Turki T.
Design patterns
Knowledge management
Microtechnology
Pattern languages
Wiki
DS 58-5: Proceedings of ICED 09, the 17th International Conference on Engineering Design English System design in microtechnology requires in-depth knowledge about the manufacturing processes. The known models of micro-specific design comprise the integration of production as a main aspect. Design rules are an established means of support for design constraints from production. However, there is design and production knowledge that cannot be formulated by design rules. The authors propose the application of design patterns - a very successful approach in software engineering - in microtechnology for representing knowledge about successfully realized design solutions. Basically, a pattern describes the given context, in which the pattern is supposed to be applied, the frequently occurring design problem and the corresponding abstract solution. The solution part is generic and abstract to prevent the prescription of a concrete solution. A method for pattern derivation by correlating function and shape is proposed. The micro-specific patterns are represented in a wiki-system. 0 0
Designing wikis for collaborative learning and knowledge-building in higher education Swapna Kumar CSCL English 0 0
Detector y corrector automático de ediciones maliciosas en Wikipedia Emilio J. Rodríguez-Posada Wikipedia
Vandalism
Pattern
Spanish The project develops AVBOT (an acronym for Anti-Vandalism BOT), a program that automatically detects and corrects malicious edits in the Spanish-language Wikipedia. It is written in Python and uses the pywikipediabot and python-irclib libraries. 0 0
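The abstract states only that AVBOT is written in Python on top of pywikipediabot and python-irclib. As a purely hypothetical illustration of regex-based edit screening, the patterns and penalty scores below are invented and are not AVBOT's actual rules.

    import re

    # Hypothetical vandalism patterns with penalty scores (illustrative only).
    PATTERNS = [
        (re.compile(r"(.)\1{9,}"), 3),       # long runs of one repeated character
        (re.compile(r"\b[A-Z]{15,}\b"), 2),  # shouting in all caps
        (re.compile(r"https?://\S+"), 1),    # bare external links in added text
    ]

    def vandalism_score(added_text):
        # Sum the penalties of every matching pattern; a high score flags the edit.
        return sum(score for rx, score in PATTERNS if rx.search(added_text))

    print(vandalism_score("AAAAAAAAAAAA visit http://spam.example"))  # 3 + 1 = 4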
Developing consistent and modular software models with ontologies Robert Hoehndorf
Ngomo A.-C.N.
Heinrich Herre
Formal ontology
Ontology-driven design
Software engineering
Proceedings of 8th International Conference on New Trends in Software Methodologies, Tools and Techniques, SoMeT 09 English The development and verification of software models that are applicable across multiple domains remains a difficult problem. We propose a novel approach to model-driven software development based on ontologies and Semantic Web technology. Our approach uses three ontologies to define software models: a task ontology, a domain ontology and a top-level ontology. The task ontology serves as the conceptual model for the software, the domain ontology provides domainspecific knowledge and the top-level ontology integrates the task and domain ontologies. Our method allows the verification of these models both for consistency and ontological adequacy. This verification can be performed both at development and runtime. Domain ontologies are replaceable modules, which enables the comparison and application of the models built using our method across multiple domains. We demonstrate the viability of our approach through the design and implementation of a semantic wiki and a social tagging system, and compare it with model-driven software development to illustrate its benefits. 0 0
Developing semantic web applications with the OntoWiki framework Studies in Computational Intelligence English 0 0
Dialogue through wikis: A pilot exploration of dialogic public relations and wiki websites C. A Hickerson
S. R Thompson
PRism Online PR Journal 0 0
Did you put it on the wiki?: information sharing through wikis in interdisciplinary design collaboration Ammy J. Phuwanartnurak Information sharing
Interdisciplinary design
User study
Wiki
Wiki log
SIGDOC'09 - Proceedings of the 27th ACM International Conference on Design of Communication English Interdisciplinary design is challenging, in large measure, because of the difficulty in communicating and coordinating across disciplines. Team members from different disciplines may view and solve the same problem from different perspectives, with their own unique method and language, which may create barriers to information sharing. Wikis, in particular, have gained popularity as a collaborative tool and have been claimed to support collaboration and information sharing. Despite the increasing use of wikis in design projects, there has been little research attention to how wikis are actually used by design teams. This paper describes a field study of two interdisciplinary design teams, seeking to discover how wikis support information sharing in software development projects. The study provides empirical evidence on the use of wikis in interdisciplinary design work, which will be used to develop guidelines on the effective use of wikis to support interdisciplinary design collaboration. 0 0
Digital Literacies. A Tale of Two Tasks: Editing in the Era of Digital Literacies Kelly Chandler-Olcott
Journal of Adolescent & Adult Literacy This article argues that editing in the era of digital literacies is a complex, collaborative endeavor that requires a sophisticated awareness of audience and purpose and a knowledge of multiple conventions for conveying meaning and ensuring accuracy. It compares group editing of an article about the New York Yankees baseball team on Wikipedia, the popular online encyclopedia, to the decontextualized proofreading task required of seventh graders on a state-level examination. It concludes that literacy instruction in schools needs to prepare students for the multiple dimensions of editing in both print and online environments, which means teaching them to negotiate meanings with others, not merely to correct surface-feature errors. (Contains 1 figure.) 0 0
Directions for exploiting asymmetries in multilingual Wikipedia Elena Filatova CLIAWS3 English 0 0
Discovering influential nodes for SIS models in social networks Saito K.
Kimura M.
Motoda H.
Lecture Notes in Computer Science English We address the problem of efficiently discovering the influential nodes in a social network under the susceptible/infected/susceptible (SIS) model, a diffusion model where nodes are allowed to be activated multiple times. The computational complexity drastically increases because of this multiple activation property. We solve this problem by constructing a layered graph from the original social network with each layer added on top as the time proceeds, and applying the bond percolation with pruning and burnout strategies. We experimentally demonstrate that the proposed method gives much better solutions than the conventional methods that are solely based on the notion of centrality for social network analysis using two large-scale real-world networks (a blog network and a wikipedia network). We further show that the computational complexity of the proposed method is much smaller than the conventional naive probabilistic simulation method by a theoretical analysis and confirm this by experimentation. The properties of the influential nodes discovered are substantially different from those identified by the centrality-based heuristic methods. 0 0
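A rough sketch of the layered-graph construction the abstract describes: each node is copied once per time step, an edge (v, u) becomes bonds (v, t) -> (u, t + 1), and each bond survives a percolation sample with the infection probability. Influence is then estimated by counting nodes reachable from the seed. This toy version (ours, using networkx) omits the pruning and burnout strategies the paper adds.

    import random
    import networkx as nx

    def layered_percolation_sample(g, horizon, p, seed_node):
        # Build one percolated layered graph over `horizon` time steps.
        layered = nx.DiGraph()
        for t in range(horizon):
            for v, u in g.edges():
                if random.random() < p:
                    layered.add_edge((v, t), (u, t + 1))
        if (seed_node, 0) not in layered:
            return 1  # the seed activated nobody else in this sample
        # Count distinct nodes ever activated from the seed.
        reached = {v for v, t in nx.descendants(layered, (seed_node, 0))}
        return len(reached | {seed_node})

    g = nx.karate_club_graph().to_directed()
    samples = [layered_percolation_sample(g, horizon=5, p=0.2, seed_node=0)
               for _ in range(100)]
    print(sum(samples) / len(samples))  # Monte Carlo influence estimate for node 0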
Do we mean the same? Disambiguation of extracted keyword queries for database search Demidova E.
Oelze I.
Fankhauser P.
Extracted keyword queries
Keyword disambiguation
KEYS 2009 - Proceedings of the 1st International Workshop on Keyword Search on Structured Data English Users often try to accumulate information on a topic of interest from multiple information sources. In this case a user's informational need might be expressed in terms of an available relevant document, e.g. a web-page or an e-mail attachment, rather than a query. Database search engines are mostly adapted to the queries manually created by the users. In case a user's informational need is expressed in terms of a document, we need algorithms that map keyword queries automatically extracted from this document to the database content. In this paper we analyze the impact of selected document and database statistics on the effectiveness of keyword disambiguation for manually created as well as automatically extracted keyword queries. Our evaluation is performed using a set of user queries from the AOL query log and a set of queries automatically extracted from Wikipedia articles both executed against the Internet Movie Database (IMDB). Our experimental results show that (1) knowledge of the document context is crucial in order to extract meaningful keyword queries; (2) statistics which enable effective disambiguation of user queries are not sufficient to achieve the same quality for the automatically extracted requests. 0 0
Do wiki-pages have parents? An article-level inquiry into Wikipedia's inequalities Nagaraj A.
Amitava Dutta
Priya Seetharaman
Rahul Roy
Corporate wikis
Inequality
Knowledge management
Parenting
Wikipedia
19th Workshop on Information Technologies and Systems, WITS 2009 English We hypothesize that articles on Wikipedia have "parents" who contribute a significant portion of their edits. We establish a notion of inequality based on the Gini coefficient for articles on Wikipedia and find support for the existence of this phenomenon of parenting. We base our study on data collected from the Tagalog and Croatian Wikipedias. Ultimately we claim that our research has significant implications for policy both for corporate wikis and for Wikipedia. We state these implications and also suggest directions for future research. 0 0
Document re-ranking via Wikipedia articles for definition/biography type questions Liu M.
Fang F.
Ji D.
Chinese IR4QA
Clustering analysis
Document re-ranking
Wikipedia
PACLIC 23 - Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation English In this paper, we propose a document re-ranking approach based on the Wikipedia articles related to the specific questions, which re-orders the initially retrieved documents in order to improve the precision of the top retrieved documents in a Chinese information retrieval for question answering (IR4QA) system where the questions are of definition or biography type. On one hand, we compute the similarity between each document in the initially retrieved results and the related Wikipedia article. On the other hand, we perform clustering analysis of the documents based on the K-Means clustering method and compute the similarity between each cluster centroid and the Wikipedia article. Then we integrate the two kinds of similarity with the initial ranking score as the final similarity value and re-rank the documents in descending order by this measure. Experiment results demonstrate that this approach can improve the precision of the top relevant documents effectively. 0 0
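A minimal sketch of the re-ranking step described above: score each retrieved document by cosine similarity to the question's related Wikipedia article and blend that with the initial retrieval score. The TF-IDF representation and the mixing weight alpha are simplifying assumptions of ours; the paper additionally folds in similarity between K-Means cluster centroids and the Wikipedia article.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rerank(docs, initial_scores, wiki_article, alpha=0.5):
        # Vectorize the Wikipedia article together with the retrieved documents.
        tfidf = TfidfVectorizer().fit_transform([wiki_article] + docs)
        sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
        combined = [alpha * s + (1 - alpha) * w
                    for s, w in zip(initial_scores, sims)]
        # Highest combined score first.
        return sorted(zip(combined, docs), reverse=True)

    docs = ["Turing was a pioneer of computer science.",
            "The weather today is sunny."]
    print(rerank(docs, [0.4, 0.6], "Alan Turing was an English mathematician ..."))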
Documenting models and their relations with semantic wikis Michael Fellmann
Oliver Thomas
Thorsten Dollmann
AAAI Spring Symposium - Technical Report English The unified documentation of models and their relations, which vary greatly in regard to the description view, the abstraction level, the language used and the purpose for which they have been built, is challenging. The knowledge about the relations which exist between models created with a diverse set of tools is usually not captured in a systematic way and hence cannot be searched and reused across different projects and stakeholders. Therefore we suggest semantic wikis for the collection of this knowledge. 0 0
Domain Knowledge Wiki for Eliciting Requirements Takanori Ugai
Kouji Aoyama
MARK English 0 0
Domain independent semantic representation of multimedia presentations Angela Fogarolli
Marco Ronchetti
International Conference on Intelligent Networking and Collaborative Systems, INCoS 2009 English This paper describes a domain independent approach for semantically annotating and representing multimedia presentations. It uses a combination of techniques to automatically discover the content of the media and, though supervised or unsupervised methods, it can generate a RDF description out of it. The domain independence is achieved using Wikipedia as a source of knowledge instead of domain Ontologies. The described approach can be relevant for understanding multimedia content which can be used in Information Retrieval, categorization and summarization. 0 0
Domain specific ontology on computer science Salahli M.A.
Gasimzade T.M.
Guliyev A.I.
Ontology
Semantic relatedness
Wikipedia
Wordnet
ICSCCW 2009 - 5th International Conference on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control English In this paper we introduce the application system based on the domain specific ontology. Some design problems of the ontology are discussed. The ontology is based on the WordNet's database and consists of Turkish and English terms on computer science and informatics. Second we present the method for determining a set of words, which are related to a given concept and computing the degree of semantic relatedness between them. The presented method has been used for semantic searching process, which is carried out by our application. 0 0
DuPont Scientist Accused of Stealing Company's Trade Secrets Robert F. Service Science 0 0
DynaTable: A wiki extension for structured data WikiSym English 0 1
Dynamic collaboration: A personal reflection Aron D. Dynamic collaboration
Gartner
Innocentive
Collaboration
Topcoder
Wikipedia
Zopa
Journal of Information Technology English This paper explores the nature of, and possibilities arising from, dynamic collaboration, where large numbers of people can collaborate on an evolving set of initiatives, without prior knowledge of each other. It references early examples of dynamic collaboration including Topcoder, Innocentive, Zopa, and Wikipedia. It then speculates about the future of dynamic collaboration. © 2009 JIT Palgrave Macmillan. All rights reserved. 0 0
Dynamic collaboration: a personal reflection D. Aron Journal of Information Technology This paper explores the nature of, and possibilities arising from, dynamic collaboration, where large numbers of people can collaborate on an evolving set of initiatives, without prior knowledge of each other. It references early examples of dynamic collaboration including Topcoder, Innocentive, Zopa, and Wikipedia. It then speculates about the future of dynamic collaboration. 0 0
Dynamic policy based model for trust based access control in P2P applications Chatterjee M.
Sivakumar G.
Menezes B.
IEEE International Conference on Communications English Dynamic self-organizing groups like Wikipedia and F/OSS have special security requirements not addressed by typical access control mechanisms. An example is the ability to collaboratively modify access control policies based on the evolution of the group and trust and behavior levels. In this paper we propose a new framework for dynamic multi-level access control policies based on trust and reputation. The framework has interesting features wherein the group can switch between policies over time, influenced by the system's state or environment. Based on the behavior and trust level of peers in the group and the current group composition, it is possible for peers to collaboratively modify policies such as join, update and job allocation. We have modeled the framework using the declarative language Prolog. We also performed some simulations to illustrate the features of our framework. 0 0
E-Learning in the Philippines: Trends, Directions, and Challenges Melinda M. Dela Pena-Bandalaria
International Journal on E-Learning 0 0
ENGAGING WITH THE WORLD: STUDENTS OF COMPARATIVE LAW WRITE FOR WIKIPEDIA. Normann Witzleb Legal Education Review (Monash University Law Research Series 18, 2009) Improving students' computer literacy, instilling a critical approach to Internet resources and preparing them for collaborative work are important educational aims today. This article examines how a writing exercise in the style of a Wikipedia article can be used to develop these skills. Students in an elective unit in Comparative Law were asked to create, and review, a Wikipedia entry on an issue, concept or scholar in this field. This article describes the rationale for adopting this writing task, how it was integrated into the teaching and assessment structure of the unit, and how students responded to the exercise. In addition to critically evaluating the potential of this novel teaching tool, the article aims to provide some practical guidance on when Wikipedia assignments might be usefully employed. 0 0
ESSE: Exploring mood on the web Sood S.O.
Vasserman L.
AAAI Fall Symposium - Technical Report English Future machines will connect with users on an emotional level in addition to performing complex computations (Norman 2004). In this article, we present a system that adds an emotional dimension to an activity that Internet users engage in frequently: search. ESSE, which stands for Emotional State Search Engine, is a web search engine that goes beyond facilitating a user's exploration of the web by topic, as search engines such as Google or Yahoo! afford. Rather, it enables the user to browse their topically relevant search results by mood, providing the user with a unique perspective on the topic at hand. Consider a user wishing to read opinions about the new president of the United States. Typing "President Obama" into a Google search box will return (among other results) a few recent news stories about Obama, the White House's website, as well as a Wikipedia article about him. Typing "President Obama" into a Google Blog Search box will bring the user a bit closer to their goal in that all of the results are indeed blogs (typically opinions) about Obama. However, where blog search engines fall short is in providing users with a way to navigate and digest the vastness of the blogosphere, the incredible number of results for the query "President Obama" (approximately 17335307 as of 2/24/09) (Google Blog Search 2009). ESSE provides another dimension by which users can take in the vastness of the web or the blogosphere. This article outlines the contributions of ESSE including a new approach to mood classification. Copyright © 2009, Association for the Advancement of Artificial Intelligence (www.aaai.org). 0 0