Distributed computer systems
Distributed computer systems is included as a keyword or extra keyword in 0 datasets, 0 tools, and 77 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Shades: Expediting Kademlia's lookup process||Einziger G.
|Lecture Notes in Computer Science||English||2014||Kademlia is considered to be one of the most effective key-based routing protocols. It is nowadays implemented in many file-sharing peer-to-peer networks such as BitTorrent, KAD, and Gnutella. This paper introduces Shades, a combined routing/caching scheme that significantly shortens the average lookup process in Kademlia and improves its load handling. The paper also includes an extensive performance study demonstrating the benefits of Shades and compares it to other suggested alternatives using both synthetic workloads and traces from YouTube and Wikipedia.||0||0|
|SkWiki: A multimedia sketching system for collaborative creativity||Zhao Z.
|Conference on Human Factors in Computing Systems - Proceedings||English||2014||We present skWiki, a web application framework for collaborative creativity in digital multimedia projects, including text, hand-drawn sketches, and photographs. skWiki overcomes common drawbacks of existing wiki software by providing a rich viewer/editor architecture for all media types that is integrated into the web browser itself, thus avoiding dependence on client-side editors. Instead of files, skWiki uses the concept of paths as trajectories of persistent state over time. This model has intrinsic support for collaborative editing, including cloning, branching, and merging paths edited by multiple contributors. We demonstrate skWiki's utility using a qualitative, sketching-based user study.||0||0|
|Collaborative development of data curation profiles on a wiki platform: Experience from free and open source software projects and communities||Sowe S.K.
|Proceedings of the 9th International Symposium on Open Collaboration, WikiSym + OpenSym 2013||English||2013||Wiki technologies have proven to be versatile and successful in aiding collaborative authoring of web content. A multitude of users can collaboratively add, edit, and revise wiki pages on the fly, with ease. This functionality makes wikis ideal platforms to support research communities in curating data. However, without appropriate customization and a model to support collaborative editing of pages, wikis will fall short of providing the functionalities needed to support collaborative work. In this paper, we present the architecture and design of a wiki platform, as well as a model that allows scientific communities, especially disaster-response scientists, to collaboratively edit and append data to their wiki pages. Our experience in the implementation of the platform on MediaWiki demonstrates how wiki technologies can be used to support data curation, and how the dynamics of the FLOSS development process and its user and developer communities are increasingly informing our understanding of supporting collaboration and coordination on wikis. Copyright 2010 ACM.||0||0|
|Extracting knowledge from Wikipedia articles through distributed semantic analysis||Hieu N.T.
Di Francesco M.
|ACM International Conference Proceeding Series||English||2013||Computing semantic word similarity and relatedness requires access to vast amounts of semantic space for effective analysis. As a consequence, it is time-consuming to extract useful information from a large amount of data on a single workstation. In this paper, we propose a system, called Distributed Semantic Analysis (DSA), that integrates a distributed approach with semantic analysis. DSA builds a list of concept vectors associated with each word by exploiting the knowledge provided by Wikipedia articles. Based on such lists, DSA calculates the degree of semantic relatedness between two words through the cosine measure. The proposed solution is built on top of the Hadoop MapReduce framework and the Mahout machine learning library. Experimental results show two major improvements over the state of the art, with particular reference to the Explicit Semantic Analysis method. First, our distributed approach significantly reduces the computation time to build the concept vectors, thus enabling the use of larger inputs, which is the basis for more accurate results. Second, DSA obtains a very high correlation of computed relatedness with reference benchmarks derived from human judgements. Moreover, its accuracy is higher than solutions reported in the literature over multiple benchmarks.||0||0|
|Lifecycle-based evolution of features in collaborative open production communities: The case of wikipedia||Ziaie P.
|ECIS 2013 - Proceedings of the 21st European Conference on Information Systems||English||2013||In the last decade, collaborative open production communities have provided an effective platform for geographically dispersed users to collaborate and generate content in a well-structured and consistent form. Wikipedia is a prominent example in this area. What is of great importance in production communities is the prioritization and evolution of features with regards to the community lifecycle. Users are the cornerstone of such communities and their needs and attitudes constantly change as communities grow. The increasing amount and versatility of content and users requires modifications in areas ranging from user roles and access levels to content quality standards and community policies and goals. In this paper, we draw on two pertinent theories in terms of the lifecycle of online communities and open collaborative communities in particular by focusing on the case of Wikipedia. We conceptualize three general stages (Rising, Organizing, and Stabilizing) within the lifecycle of collaborative open production communities. The salient factors, features and focus of attention in each stage are provided and the chronology of features is visualized. These findings, if properly generalized, can help designers of other types of open production communities effectively allocate their resources and introduce new features based on the needs of both community and users.||0||0|
|Open2Edit: A peer-to-peer platform for collaboration||Zeilemaker N.
|2013 IFIP Networking Conference, IFIP Networking 2013||English||2013||Peer-to-peer systems owe much of their success to user contributed resources like storage space and bandwidth. At the same time, popular collaborative systems like Wikipedia or StackExchange are built around user-contributed knowledge, judgement, and expertise. In this paper, we combine peer-to-peer and collaborative systems to create Open2Edit, a peer-to-peer platform for collaboration.||0||0|
|Searching for Translated Plagiarism with the Help of Desktop Grids||Pataki M.
|Journal of Grid Computing||English||2013||Translated or cross-lingual plagiarism is defined as the translation of someone else's work or words without marking it as such or without giving credit to the original author. The existence of cross-lingual plagiarism is not new, but only in recent years, due to the rapid development of natural language processing, have the first algorithms appeared that tackle the difficult task of detecting it. Most of these algorithms utilize machine translation to compare texts written in different languages. We propose a different method, which can effectively detect translations between language pairs where machine translation still produces low-quality results. Our new algorithm presented in this paper is based on information retrieval (IR) and a dictionary-based similarity metric. The preprocessing of the candidate documents for the IR is computationally intensive, but easily parallelizable. We propose a desktop Grid solution for this task. As the application is time-sensitive and the desktop Grid peers are unreliable, a resubmission mechanism is used which ensures that all jobs of a batch finish within a reasonable time period without dramatically increasing the load on the whole system. © 2012 Springer Science+Business Media B.V.||0||0|
|The ReqWiki approach for collaborative software requirements engineering with integrated text analysis support||Bahar Sateli
|Proceedings - International Computer Software and Applications Conference||English||2013||The requirements engineering phase within a software project is a heavily knowledge-driven, collaborative process that typically involves the analysis and creation of a large number of textual artifacts. We know that requirements engineering has a large impact on the success of a project, yet sophisticated tool support, especially for small to mid-size enterprises, is still lacking. We present ReqWiki, a novel open source web-based approach based on a semantic wiki that includes natural language processing (NLP) assistants, which work collaboratively with humans on the requirements specification documents. We evaluated ReqWiki with a number of software engineers to investigate the impact of our novel semantic support on software requirements engineering. Our user studies prove that (i) software engineers unfamiliar with NLP can easily leverage these assistants and (ii) semantic assistants can help to significantly improve the quality of requirements specifications.||0||0|
|Three years of teaching using collaborative tools patterns and lessons learned||Trentini A.||CSEDU 2013 - Proceedings of the 5th International Conference on Computer Supported Education||English||2013||The author has taught computer science (Programming 101 and Operating Systems 101) for about fifteen years. He introduced the use of a student-collaborated wiki website for his courses about ten years ago. Then, three years ago, he also began extensively using a collaborative editor (Gobby) in the classroom, to let students actively participate during lessons. This paper describes the author's course "workflow", summarizes the functionalities of the tools (wiki and collaborative editor), collects some context patterns, and tries to draw a few conclusions (lessons learned) about the course methodology.||0||0|
|Distributed and collaborative requirements elicitation based on social intelligence||Wen B.
|Proceedings - 9th Web Information Systems and Applications Conference, WISA 2012||English||2012||Requirements are the formal expression of users' needs, and requirements elicitation is the activity focused on collecting them. Traditional acquisition methods, such as interviews, observation, and prototyping, are unsuited to service-oriented software development, which features distributed stakeholders, collective intelligence, and behavioral emergence. In this paper, a collaborative requirements elicitation approach based on social intelligence for networked software is put forward, and the requirements-semantics concept is defined as the formal requirements description generated by collective participation. Furthermore, semantic wiki technology is chosen as the requirements authoring platform to suit the distributed and collaborative setting. Facing the wide-area distributed Internet, the approach combines Web 2.0 and the semantic web to revise the experts' requirements-semantics model through social classification. At the same time, instantiation of the requirements model is completed with semantic tagging and validation. Apart from the traditional documentary specification, requirements-semantics artifacts are exported from the elicitation process to the subsequent software production process, i.e. services aggregation and services resource customization. Experiment and prototype have proved the feasibility and effectiveness of the proposed approach.||0||0|
|Enterprise wiki application scenarios and their relation to user motivation||Lin D.
|Proceedings of the European Conference on Knowledge Management, ECKM||English||2012||A wide range of companies is already using social software. Enterprise wikis are the most frequently used type. The best-known example of a wiki is the online encyclopedia Wikipedia. Many companies use Wikipedia as an example and thus establish an internal wiki encyclopedia. But there are also other purposes for which enterprise wikis can be used. For example, some companies use the wiki as a news portal, discussion platform or as a project management tool. In research there is still no adequate systematization of the possible uses of enterprise wikis. Therefore, the aim of this paper is to examine the different application scenarios of enterprise wikis and to explore application scenario-related motivational factors. To examine the application scenarios of enterprise wikis, a qualitative-oriented case study on the internal wiki of the company T-Systems Multimedia Solutions GmbH was conducted. The Atlassian Confluence based wiki called 'TeamWeb' was introduced in 2008 and has gradually established itself as a global Intranet. A peculiarity of this wiki is that it is not strictly specified how the staff has to work with it. In this way, over time many different application scenarios have been established. There are now about 56,000 wiki pages in TeamWeb. These wiki pages were analyzed and categorized into application scenarios. As a result, a classification is proposed which differentiates and characterizes four archetypical application scenarios of enterprise wikis: 'presentation & communication', 'encyclopaedia', 'project organization' and 'collaborative design'. Subsequently, semi-structured interviews with selected employees were conducted in order to understand their motivation for wiki use. It could be determined that the use of wikis is always primarily for 'egoistic' goals. 'Altruistic' goals, such as 'help others', however, were rarely mentioned as a primary motivation. To promote the use of enterprise wikis, it is therefore necessary to further the 'egoistic' application scenarios.||0||0|
|Safety measures for social computing in Wiki learning environment||Patel A.
|International Journal of Information Security and Privacy||English||2012||Wikis are social networking systems that allow users to freely intermingle at different levels of communication such as collaborative learning, chatting, and group communications. Although a great idea and goal, wikis are particularly vulnerable due to their open medium and lack of a clear plan of defense. Personal data can be misused for virtual insults or for financial gain. Wikis are an example of social computing for collaborative learning, joint editing, brainstorming, and virtual socializing, which is a ripe environment for hacking, deception, abuse, and misuse. Thus, wikis need comprehensive security measures, including privacy, trust, security, audit, and digital forensics, to protect users and system resources. This paper identifies and explores the needs of secure social computing and supporting information systems as places for interaction, data collection, and manipulation for wikis. It does this by reviewing the literature and related works and by proposing a safety measure framework for a secure and trustworthy medium together with privacy, audit, and digital forensic investigative functions in wiki environments. These can then aid design and usage in social computing environments, giving users comfort and confidence without worry about abuse and cybercrime.||0||0|
|Supporting multilingual discussion for collaborative translation||Noriyuki Ishida
|Proceedings of the 2012 International Conference on Collaboration Technologies and Systems, CTS 2012||English||2012||In recent years, collaborative translation has become more and more important for translation volunteers to share knowledge among different languages; Wikipedia translation activity is a typical example. During collaborative translation, users with different mother tongues frequently discuss certain words or expressions to understand the content of the original article and to decide on the correct translation. To support such multilingual discussions, we propose an approach to embedding a service-oriented multilingual infrastructure with discussion functions in collaborative translation systems, where discussions can be automatically translated into different languages with machine translators, dictionaries, and so on. Moreover, we propose a Meta Translation Algorithm adapted to the features of discussions in collaborative translation, where discussion articles often consist of expressions in different languages. Further, we implement the proposed approach on LiquidThreads, a BBS on Wikipedia, and apply it to multilingual discussion for Wikipedia translation to verify the effectiveness of this research.||0||0|
|Cooperative WordNet editor for lexical semantic acquisition||Szymanski J.||Communications in Computer and Information Science||English||2011||The article describes an approach for building a WordNet semantic dictionary in a collaborative paradigm. The presented system enables gathering lexical data in a Wikipedia-like style. The core of the system is a user-friendly interface based on a component for interactive graph navigation. The component has been used to present the WordNet semantic network on a web page, and it enables modification of its content by a distributed group of people.||0||0|
|Editing knowledge resources: The wiki way||Francesco Ronzano
|International Conference on Information and Knowledge Management, Proceedings||English||2011||The creation, customization, and maintenance of knowledge resources are essential for fostering the full deployment of Language Technologies. The definition and refinement of knowledge resources are time- and resource-consuming activities. In this paper we explore how the Wiki paradigm for online collaborative content editing can be exploited to gather massive social contributions from common Web users in editing knowledge resources. We discuss the Wikyoto Knowledge Editor, also called Wikyoto. Wikyoto is a collaborative Web environment that enables users with no knowledge engineering background to edit the multilingual network of knowledge resources exploited by KYOTO, a cross-lingual text mining system developed in the context of the KYOTO European Project.||0||0|
|FlowWiki: A Wiki Based Platform for Ad-hoc Collaborative Workflows||Jae-Yoon Jung
|Knowledge-Based Systems||English||2011||Traditional workflow management systems provide rich capabilities for designing, executing, and monitoring well-defined collaborative processes. Yet, for many occasions of collaboration, we often do not have sufficient information about who will participate, what activities people will carry out, and how the entire workflow will change. Accordingly, the problem of managing flexible workflows has been receiving increasing attention during the last decade. This paper presents a novel approach by which collaborative workflows can be configured independently as needed by participants and managed in an ad hoc way. Motivated by the emerging paradigm of collective intelligence, the proposed platform, named FlowWiki, provides a set of useful mechanisms to enable dynamic collaborations without requiring a prescribed collaboration model. FlowWiki is an extension of a conventional wiki system, and it aims to manage collaborative workflows flexibly by allowing on-demand workflow configuration and event-driven interactions.||0||0|
|How to teach digital library data to swim into research||Schindler C.
|ACM International Conference Proceeding Series||English||2011||Virtual research environments (VREs) aim to enhance research practice and have been identified as drivers for changes in libraries. This paper argues that VREs in combination with Semantic Web technologies offer a range of possibilities to align research with library practices. This main claim of the article is exemplified by a metadata integration process of bibliographic data from libraries to a VRE which is based on Semantic MediaWiki. The integration process rests on three pillars: MediaWiki as a web-based repository, Semantic MediaWiki annotation mechanisms, and semi-automatic workflow management for the integration of digital resources. Thereby, needs of scholarly research practices and capacities for interactions are taken into account. The integration process is part of the design of Semantic MediaWiki for Collaborative Corpora Analysis (SMW-CorA) which uses a concrete research project in the history of education as a reference point for an infrastructural distribution. Semantic MediaWiki thus provides a light-weight environment offering a framework for re-using heterogeneous resources and a flexible collaborative way of conducting research.||0||0|
|Probabilistic quality assessment of articles based on learning editing patterns||Jangwhan Han
|2011 International Conference on Computer Science and Service System, CSSS 2011 - Proceedings||English||2011||As a new model of distributed, collaborative information source, such as Wikipedia, is emerging, its content is constantly being generated, updated, and maintained by various users, and its data quality varies from time to time. Thus the quality assessment of the content is a pressing concern. We observe that each article usually goes through a series of editing phases, such as building structure, contributing text, and discussing text, gradually reaching its final quality state, and that articles of different quality classes exhibit specific edit cycle patterns. We propose a new approach to Assess Quality based on an article's Editing History (AQEH) for a specific domain as follows. First, each article's editing history is transformed into a state sequence using a Hidden Markov Model (HMM). Second, edit cycle patterns are extracted for each quality class, and each quality class is further refined into quality corpora by clustering. Each quality class is then clearly represented by a series of quality corpora, and each quality corpus is described by a group of frequently co-occurring edit cycle patterns. Finally, article quality can be determined in a probabilistic sense by comparing the article with the quality corpora. Experimental results demonstrate that our method can capture and predict web article quality accurately and objectively.||0||0|
|Tribler: P2P media search and sharing||Zeilemaker N.
|MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops||English||2011||Tribler is an open-source software project facilitating search, streaming and sharing content using P2P technology. Over 800 000 people have used Tribler since the project started in 2005. The Tribler P2P core supports BitTorrent-compatible downloading, video on demand and live streaming. Aside from a regular desktop GUI that runs on multiple OSes, it can be installed as a browser plug-in, currently used by Wikipedia. Additionally, it runs on a 450 MHz processor, showcasing future TV support. We continuously work on extensions and test out novel research ideas within our user base, resulting in sub-second content search, a reputation system for rewarding upload, and channels for content publishing and spam prevention. Presently, 1200 channels have been created, enabling rich multimedia communities without requiring any server. Copyright 2011 ACM.||0||0|
|WikiTeams: How do they achieve success?||Piotr Turek
|IEEE Potentials||English||2011||Web 2.0 technology and so-called social media are among the most popular (among users and researchers alike) Internet technologies today. Among them, Wiki technology - created to simplify HTML editing and enable open, collaborative editing of pages by ordinary Web users - occupies an important place. Wiki is increasingly adopted by businesses as a useful form of knowledge management and sharing, creating "corporate Wikis." However, the most widely known application of Wiki technology - Wikipedia - is, according to many analysts, more than just an open encyclopedia that uses Wiki.||0||0|
|Wikipedia world map: Method and application of map-like wiki visualization||Pang C.-L.
|WikiSym 2011 Conference Proceedings - 7th Annual International Symposium on Wikis and Open Collaboration||English||2011||Wikis are popular platforms for collaborative editing. In volunteer-driven wikis such as Wikipedia, which attracts millions of authors editing articles on a diverse range of topics, contributors' editing activity results in a certain semantic coverage of topic areas. Obtaining an understanding of a given wiki's semantic coverage is not easy. To solve this problem, we have devised a method for visualizing a wiki in a way similar to a geographic map. We have applied our method to Wikipedia and generated visualizations for several Wikipedia language editions. This paper presents our wiki visualization method and its application.||0||0|
|"What I know is...": Establishing credibility on wikipedia talk pages||Meghan Oxley
|WikiSym 2010||English||2010||This poster presents a new theoretical framework and research method for studying the relationship between specific types of authority claims and the attempts of contributors to establish credibility in online, collaborative environments. We describe a content analysis method for coding authority claims based on linguistic and rhetorical cues in naturally occurring, text-based discourse. We present results from a preliminary analysis of a sample of Wikipedia talk page discussions focused on recent news events. This method provides a novel framework for capturing and understanding these persuasion-oriented behaviors, and shows potential as a tool for online communication research, including automated text analysis using trained natural language processing systems.||0||0|
|A Requirements Maturity Measurement Approach based on SKLSEWiki||Peng R.
|Proceedings - International Computer Software and Applications Conference||English||2010||With the development of IT, the scale and complexity of information systems have increased dramatically, and the number of related stakeholders has grown sharply as a result. How to promote requirements negotiation among large numbers of stakeholders has become a focus of attention. Wiki, as a lightweight documentation and distributed collaboration platform, has demonstrated its capability in distributed requirements elicitation and documentation. Most efforts are devoted to constructing friendly user interfaces and collaborative editing capabilities. In this paper, a new concept, requirement maturity, is proposed to represent the degree of stability a requirement has reached through the negotiation process. A Requirement Maturity Measurement Approach based on Wiki uses requirement maturity as a threshold to select requirements. Thus, the requirements that have reached a stable status through full negotiation can be identified. A platform, SKLSEWiki, is developed to validate the approach.||0||0|
|A human and social sciences wiktionary in a peer-to-peer network||Khelifa L.N.
|2010 International Conference on Machine and Web Intelligence, ICMWI 2010 - Proceedings||English||2010||This paper presents the integration of a multicultural and multilingual wiktionary in the human and social sciences into a peer-to-peer network. This on-line dictionary was developed as part of the FSP project to allow researchers from both sides of the Mediterranean Sea to exchange and share knowledge in the human and social sciences domain. The present extension allows off-line collaborative editing, scalability, management of inter-wiki links, an advanced search for constructing the global sheet by interrogating specific peers, and finally a wiki page replication strategy to ensure data availability. The system architecture and the prototype are presented.||0||0|
|A model for open semantic hyperwikis||Philip Boulain
|Lecture Notes in Computer Science||English||2010||Wiki systems have developed over the past years as lightweight, community-editable, web-based hypertext systems. With the emergence of semantic wikis such as Semantic MediaWiki, these collections of interlinked documents have also gained a dual role as ad-hoc RDF graphs. However, their roots lie in the limited hypertext capabilities of the World Wide Web: embedded links, without support for features like composite objects or transclusion. Collaborative editing on wikis has been hampered by redundancy; much of the effort spent on Wikipedia is used keeping content synchronised and organised. We have developed a model for a system, which we have prototyped and are evaluating, which reintroduces ideas from the field of hypertext to help alleviate this burden. In this paper, we present a model for what we term an 'open semantic hyperwiki' system, drawing from both past hypermedia models and the informal model of modern semantic wiki systems. An 'open semantic hyperwiki' is a reformulation of the popular semantic wiki technology in terms of the long-standing field of hypermedia, which then highlights and resolves the omissions of hypermedia technology made by the World Wide Web and the applications built around its ideas. In particular, our model supports first-class linking, where links are managed separately from nodes. This is then enhanced by the system's ability to embed links into other nodes and separate them out again, allowing for a user editing experience similar to HTML-style embedded links, while still gaining the advantages of separate links. We add to this transclusion, which allows for content sharing by including the content of one node into another, and edit-time transclusion, which allows users to edit pages containing shared content without the need to follow a sequence of indirections to find the actual text they wish to modify. Our model supports more advanced linking mechanisms, such as generic links, which allow words in the wiki to be used as link endpoints. The development of this model has been driven by our prior experimental work on the limitations of existing wikis and user interaction. We have produced a prototype implementation which provides first-class links, transclusion, and generic links.||0||0|
|Deep Diffs: Visually exploring the history of a document||Shannon R.
|Proceedings of the Workshop on Advanced Visual Interfaces AVI||English||2010||Software tools are used to compare multiple versions of a textual document to help a reader understand the evolution of that document over time. These tools generally support the comparison of only two versions of a document, requiring multiple comparisons to be made to derive a full history of the document across multiple versions. We present Deep Diffs, a novel visualisation technique that exposes the multiple layers of history of a document at once, directly in the text, highlighting areas that have changed over multiple successive versions, and drawing attention to passages that are new, potentially unpolished or contentious. These composite views facilitate the writing and editing process by assisting memory and encouraging the analysis of collaboratively-authored documents. We describe how this technique effectively supports common text editing tasks and heightens participants' understanding of the process in collaborative editing scenarios like wiki editing and paper writing.||0||0|
|Docx2Go: Collaborative editing of fidelity reduced documents on mobile devices||Puttaswamy K.P.N.
|MobiSys'10 - Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services||English||2010||Docx2Go is a new framework to support editing of shared documents on mobile devices. Three high-level requirements influenced its design: the need to adapt content, especially textual content, on the fly according to the quality of the network connection and the form factor of each device; support for concurrent, uncoordinated editing on different devices, whose effects will later be merged on all devices in a convergent and consistent manner without sacrificing the semantics of the edits; and a flexible replication architecture that accommodates both device-to-device and cloud-mediated synchronization. Docx2Go supports on-the-go editing for XML documents, such as documents in Microsoft Word and other commonly used formats. It combines the best practices from content adaptation systems, weakly consistent replication systems, and collaborative editing systems, while extending the state of the art in each of these fields. The implementation of Docx2Go has been evaluated based on a workload drawn from Wikipedia. Copyright 2010 ACM.||0||0|
|Enacting argumentative web in semantic wikipedia||Mircea I.S.
|Proceedings - 9th RoEduNet IEEE International Conference, RoEduNet 2010||English||2010||This research advocates the idea of combining argumentation theory with the social web technology, aiming to enact large scale or mass argumentation. The proposed framework allows mass-collaborative editing of structured arguments in the style of semantic wikipedia. The long term goal is to apply the abstract machinery of argumentation theory to more practical applications based on human generated arguments, such as deliberative democracy, business negotiation, or self-care.||0||0|
|Factors impeding Wiki use in the enterprise: A case study||Holtzblatt L.J.
|Conference on Human Factors in Computing Systems - Proceedings||English||2010||Our research explored factors that impacted the use of wikis as a tool to support the dissemination of knowledge within an enterprise. Although we primarily talked to a population of wiki contributors and readers, we discovered two major factors which contributed to staff's unwillingness to share information on a wiki under certain circumstances. First, we uncovered a reluctance to share specific information due to a perceived extra cost, the nature of the information, the desire to share only "finished" content, and sensitivities to the openness of the sharing environment. Second, we discovered a heavy reliance on other, non-wiki tools based on a variety of factors including work practice, lack of guidelines, and cultural sensitivities. Our findings have several implications for how an enterprise may more fully reap the benefits of wiki technology. These include implementation of incentive structures, support for dynamic access control, documenting clear guidelines and policies, and making wikis more usable.||0||0|
|Metadata repository management using the MediaWiki interoperability framework a case study: The keytonature project||Veja C.F.M.
|EChallenges e-2010 Conference||English||2010||In the KeyToNature project a user-centred and collaborative approach for metadata repository management was developed. KeyToNature is an EU project to enhance the knowledge of biodiversity by improving the availability of digital and non-digital media along with digital tools for the identification of living organisms throughout Europe. To improve the ability to search and access information, metadata are provided and integrated into a metadata repository. This paper presents a method utilizing the web-based MediaWiki system as part of a low-tech interoperability and repository layer for data providers, end users, developers, and project partners. Because the level of technological expertise of the data providers varies greatly, a solution accessible for non-expert data providers was developed. The main features of this method are automatic metadata repository management and an ontological approach with ingestion workflows integrated into the MediaWiki collaborative framework. Extensive user testing shows performance advantages of the method and attests to its usefulness in the application area. This practice-oriented method can be adopted by other projects aiming at collaborative knowledge acquisition and automatic metadata repository management, regardless of the domain of discourse.||0||0|
|Modeling and implementing collaborative editing systems with transactional techniques||Wu Q.
|Proceedings of the 6th International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2010||English||2010||Many collaborative editing systems have been developed for coauthoring documents. These systems generally have different infrastructures and support a subset of interactions found in collaborative environments. In this paper, we propose a transactional framework with two advantages. First, the framework is generic, as demonstrated by its capability of modeling four types of existing products: RCS, MediaWiki, Google Docs, and Google Wave. Second, the framework can be layered on top of a modern database management system to reuse its transaction processing capabilities for data consistency control in both centralized and replicated editing systems. We detail the programming interfaces and the synchronization protocol of our transactional framework and demonstrate its usage through concrete examples. We also describe a prototype implementation of this framework over Oracle Berkeley DB High Availability, a replicated transactional database management system.||0||0|
|Negotiating with angry mastodons: The Wikipedia policy environment as genre ecology||Morgan J.T.
|Proceedings of the 16th ACM International Conference on Supporting Group Work, GROUP'10||English||2010||Groups collaborating in online spaces on complex, extended projects develop behavioral conventions and agreed-upon practices to structure and regulate their interactions and work. Collaborators on Wikipedia have developed a multi-tiered policy environment to document a set of evolving principles, processes, and rules to facilitate productive group collaboration. Previous quantitative studies have noted this hierarchical structure, but have evaluated the policy environment as a singular entity rather than investigating potential differences between the three main regulatory genres that enable it. These studies also excluded essays, the least official regulatory genre, from their analyses. We perform a comparative content analysis of all three genres (policies, guidelines, and essays) and demonstrate that they focus on different areas of community regulation. Drawing on the theory of genre ecologies we discuss the possible role of unofficial genres such as essays in articulating and regulating work practices in online, organized collaborative work.||0||1|
|Online video using BitTorrent and HTML5 applied to wikipedia||Bakker A.
|2010 IEEE 10th International Conference on Peer-to-Peer Computing, P2P 2010 - Proceedings||English||2010||Wikipedia started a project in order to enable users to add video and audio to their Wiki pages. The technical downside of this is that its bandwidth requirements will increase manifold. BitTorrent-based peer-to-peer technology from P2P-Next (a European research project) is explored to handle this bandwidth surge. We discuss the impact on the BitTorrent piece picker and outline our "tribe" protocol for seamless integration of P2P into the HTML5 video and audio elements. Ongoing work on libswift, an enhanced transport protocol that runs over UDP and integrates NAT/firewall puncturing, is also described.||0||0|
|Rating the raters: A reputation system for Wiki-like domains||Pantola A.V.
|SIN'10 - Proceedings of the 3rd International Conference of Security of Information and Networks||English||2010||Collaborative sites like Wikipedia allow the public to contribute contents to a particular domain to ensure a site's growth. However, a major concern with such sites is the credibility of the information posted. Malicious and "lazy" authors can intentionally or accidentally contribute entries that are inaccurate. This paper presents a user-driven reputation system called Rater Rating that encourages authors to review entries in collaborative sites. It uses concepts adopted from reputation systems in mobile ad-hoc networks (MANETs) that promote cooperation among network nodes. Rater Rating measures the overall reputation of authors based on the quality of their contribution and the "seriousness" of their ratings. Simulations were performed to verify the algorithm's potential in measuring the credibility of ratings made by various raters (i.e., good and lazy). The results show that only 1 out of 4 raters needs to be a good rater for the algorithm to be effective. Copyright 2010 ACM.||0||0|
|Scholarly knowledge development and dissemination in an international context: Approaches and tools for higher education||Willis J.
|Computers in the Schools||English||2010||This paper looks at the process of collaboratively creating and disseminating information resources, such as journals, books, papers, and multimedia resources in higher education. This process has been facilitated and encouraged by two relatively new movements, open-source and, especially, open access. The most definitive expression of the principles of open access is the Budapest Open Access Initiative. It calls for the creation of journals that are freely available via the Internet to anyone. The broad principles of open access can be the foundation for creating many types of information resources, from online textbooks to sophisticated instructional videos. What distinguishes such open access resources is that they are distributed without charge to users and that most of the individual and institutional authors give permission for them to be revised, remixed, and reformed by users, who may then distribute the "new" version of the resource. Much of the work on open access information resources is collaborative and involves international teams with diverse experiences and areas of expertise. Such collaboration is not easy, but there is a growing set of electronic tools that support such work. The electronic toolbox for collaboratively creating new information resources includes tools that can serve as "electronic hallways" where potential collaborators can meet and interact informally; gateway Web sites and document repositories that support the exchange of information; Web tools that support groups with special interests; tools for supporting project teams; collaborative writing support systems including file sharing, document exchange, and version control software; wikis where a team can collaboratively write and revise documents; and project management software. There are also many avenues for disseminating information resources.
These include open-access journals and the software packages that support them such as the Open Journal Systems package from the Public Knowledge Project, preprint and repository archives and the software for creating such archives (e.g., dspace, Fedora, Joomla, and Drupal), Web resources for indexing and locating relevant information, and international as well as virtual conferences and the software for operating such meetings. This paper explores the different approaches to both creating and disseminating information resources for higher education and evaluates some of the most commonly used software options for supporting these activities.||0||0|
|Semantic MediaWiki interoperability framework from a semantic social software perspective||Cornelia Veja
|2010 9th International Symposium on Electronics and Telecommunications, ISETC'10 - Conference Proceedings||English||2010||This paper presents two collaborative Social-Software-driven approaches for the interoperability of multimedia resources used in the KeyToNature project. The first approach, using MediaWiki as a low-level interoperability framework, was presented in our previous work. The second, the Semantic MediaWiki interoperability framework for multimedia resources, is presented in this paper and is still in progress. We argue that different approaches are needed, depending on the context and intention of multimedia resource use.||0||0|
|SocialTrust++: Building community-based trust in social information systems||Caverlee J.
|Proceedings of the 6th International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2010||English||2010||Social information systems - popularized by Facebook, Wikipedia, Twitter, and other social websites - are emerging as a powerful new paradigm for distributed social-powered information management. While there has been growing interest in these systems by businesses, government agencies, and universities, there remain important open challenges that must be addressed if the potential of these social systems is to be fully realized. For example, the presence of poor quality users and users intent on manipulating the system can disrupt the quality of socially-powered information and knowledge sharing applications. In this paper, we outline the SocialTrust++ project at Texas A&M University. The overall research goal of the SocialTrust++ project is to develop, analyze, deploy, and test algorithms for building, enabling, and leveraging community-based trust in Social Information Systems. Concretely, we are developing a trustworthy community-based information platform so that each user in a Social Information System can have transparent access to the community's trust perspective to enable more effective and efficient social information access.||0||0|
|SocialWiki: Bring order to wiki systems with social context||Haifeng Zhao
|Lecture Notes in Computer Science||English||2010||A huge amount of administrative effort is required for large wiki systems to produce and maintain high quality pages with existing naive access control policies. This paper introduces SocialWiki, a prototype wiki system which leverages the power of social networks to automatically manage reputation and trust for wiki users based on the content they contribute and the ratings they receive. SocialWiki also utilizes interests to facilitate collaborative editing. Although a wiki page is visible to everyone, it can only be edited by a group of users who share similar interests and have a certain level of trust with each other. The editing privilege is circulated among these users to prevent or reduce vandalism and spam, and to encourage user participation by adding social context to the revision process of a wiki page. By presenting the design and implementation of this proof-of-concept system, we show that social context can be used to build an efficient, self-adaptive and robust collaborative editing system.||0||0|
|The wisdom of crowds in government 2.0: Information paradigm evolution toward Wiki-government||Nam T.||16th Americas Conference on Information Systems 2010, AMCIS 2010||English||2010||This essay, exploring the peer-to-peer collaborative atmosphere penetrating Wikivism, crowd-sourcing and open-source movement, identifies a new paradigm of public information as evolution toward Wiki-government. Citizen participants can collectively create public information via various platforms enabled by Web 2.0 technologies. Under the new participatory paradigm that a large number of individual citizens and government cocreate public information, not only do Wiki-oriented government agencies benefit from crowd wisdom, but citizens also learn from their colleague citizens. Crowd-sourcing to collect the wisdom of crowds is categorized into four types by matching between the quantity and the quality of participation: civic-sourcing, mob-sourcing, professionalism, and fiasco. For Wiki-government, a mass of well-informed and concerned participants in civic-sourcing make more desirable outcomes for a society than fewer, poorly-informed and unconcerned people. Thus, civic-sourcing promises greater advantages for government over professionalism and mob-sourcing. Three strategies for civic-sourcing (Wiki/open-sourcing, contest, or social networking) can be employed through different working mechanisms, with different motivators for participation, and under different approaches to human nature of key participants.||0||0|
|Towards social argumentative machines||Indrie S.
|Proceedings - 2010 IEEE 6th International Conference on Intelligent Computer Communication and Processing, ICCP10||English||2010||This research advocates the idea of combining argumentation theory with the social web technology, aiming to enact large scale or mass argumentation. The proposed framework allows mass-collaborative editing of structured arguments in the style of semantic wikipedia. The Argnet system was developed based on the Semantic MediaWiki framework and on the Argument Interchange Format ontology.||0||0|
|Active learning in computer science courses in higher education||Serbec I.N.
|IADIS International Conference on Cognition and Exploratory Learning in Digital Age, CELDA 2009||English||2009||Innovative learning activities, based on constructivism, were applied in courses for students of Computer Science at the Faculty of Education. We observed students' learning behaviour as well as their actions, preferences, and learning patterns in different stages of the learning process, supported by the e-learning environment. Students engaged in all these activities had an opportunity to develop competences for team work and collaborative learning. Active and collaborative forms of learning were used to facilitate higher order thinking skills and to develop assessment skills. We used Bloom's Digital Taxonomy to analyse the usage of digital tools which facilitate different phases of learning. Active and collaborative forms of learning, such as mini-performances supported by workshop, autonomous learning supported by video-content with interactive questions and answers, collaborative editing of wikis with peer assessment, pair programming, explorative learning, discovery learning, reflections, self-reflections, and creation of exercises for knowledge assessment are used to facilitate higher order thinking skills.||0||0|
|Cyber engineering co-intelligence digital ecosystem: The GOFASS methodology||Leong P.
|2009 3rd IEEE International Conference on Digital Ecosystems and Technologies, DEST '09||English||2009||Co-intelligence, also known as collective or collaborative intelligence, is the harnessing of human knowledge and intelligence that allows groups of people to act together in ways that seem to be intelligent. Co-intelligence Internet applications such as Wikipedia are the first steps toward developing digital ecosystems that support collective intelligence. Peer-to-peer (P2P) systems are well fitted to co-intelligence digital ecosystems because they allow each service client machine to act also as a service provider without any central hub in the network of cooperative relationships. However, dealing with server farms, clusters and meshes of wireless edge devices will be the norm in the next generation of computing, yet most present P2P systems have been designed with a fixed, wired infrastructure in mind. This paper proposes a methodology for cyber engineering intelligent-agent-mediated co-intelligence digital ecosystems. Our methodology caters for co-intelligence digital ecosystems with wireless edge devices working with service-oriented information servers.||0||0|
|Logoot: A scalable optimistic replication algorithm for collaborative editing on P2P networks||Stephane Weiss
|Proceedings - International Conference on Distributed Computing Systems||English||2009||Massive collaborative editing becomes a reality through leading projects such as Wikipedia. This massive collaboration is currently supported with a costly central service. In order to avoid such costs, we aim to provide a peer-to-peer collaborative editing system. Existing approaches to build distributed collaborative editing systems either do not scale in terms of number of users or in terms of number of edits. We present the Logoot approach, which scales in both dimensions while ensuring the causality, consistency and intention preservation criteria. We evaluate the Logoot approach and compare it to others using a corpus of all the edits applied to a set of the most edited and biggest pages of Wikipedia.||0||0|
|Methopedia - Pedagogical design community for European educators||Ryberg T.
|8th European Conference on eLearning 2009, ECEL 2009||English||2009||The paper will discuss theoretical, methodological and technical aspects of the community based Methopedia wiki (www.methopedia.eu), which has been developed as a part of the EU-funded collaborative research project "Community of Integrated Blended Learning in Europe" (COMBLE; www.comble-project.eu). Methopedia is a wiki and social community aimed at facilitating knowledge transfer between trainers/educators from different institutions or countries through interactive peer-to-peer support, and sharing of learning practices. We describe how Methopedia has been developed through engaging practitioners in workshops with the aim of collecting known learning activities, designs and approaches, and how the models for sharing learning practices have been developed by drawing on practitioners' experiences, ideas and needs. We present and analyse the outcome of the workshops and discuss how practitioners have informed the practical design and theoretical issues regarding the design of Methopedia. The workshops have led to redesigns, and a number of important issues and problems have emerged. In the paper, we therefore present and discuss the socio-technical design of Methopedia, which is based on open source Wiki and Social Networking technologies. We describe the issues, functionalities and needs that have emerged from the workshops, such as metadata (taxonomy & tags), localised versions (multi-lingual) and the need for visual descriptions. Furthermore, we discuss the templates trainers/educators can use to describe and share their learning designs or learning activities, e.g. what categories would be helpful, how much metadata is relevant, and how standardised or flexible the templates should be. We also discuss the theoretical considerations underlying the descriptive model of the templates by drawing on research within learning design and the educational pattern design approach.
In particular we focus on exploring designs and descriptions of singular or sequences of learning activities. Furthermore, we discuss some of the tools and concepts under development as part of the work on Methopedia, such as a flash based tool to structure learning processes, a pictorial language for visualising learning activities/designs and how we aim to connect to existing networks for educators/trainers and initiatives similar to Methopedia.||0||0|
|Multi-synchronous collaborative semantic wikis||Charbel Rahhal
|Lecture Notes in Computer Science||English||2009||Semantic wikis have opened an interesting way to mix Web 2.0 advantages with the Semantic Web approach. However, compared to other collaborative tools, wikis do not support all collaborative editing modes, such as offline work or multi-synchronous editing. The lack of multi-synchronous support is problematic, especially when working with semantic wikis. In these systems, it is often important to change multiple pages simultaneously in order to refactor the semantic wiki structure. In this paper, we present a new model of semantic wiki called Multi-Synchronous Semantic Wiki (MS2W). This model extends semantic wikis with multi-synchronous support that allows the creation of a P2P network of semantic wikis. Semantic wiki pages can be replicated on several semantic servers. The MS2W ensures CCI consistency on these pages by relying on the Logoot algorithm.||0||0|
|Peer vote: A decentralized voting mechanism for P2P collaboration systems||Bocek T.
|Lecture Notes in Computer Science||English||2009||Peer-to-peer (P2P) systems achieve scalability, fault tolerance, and load balancing with a low-cost infrastructure, characteristics from which collaboration systems, such as Wikipedia, can benefit. A major challenge in P2P collaboration systems is to maintain article quality after each modification in the presence of malicious peers. A way of achieving this goal is to allow modifications to take effect only if a majority of previous editors approve the changes through voting. The absence of a central authority makes voting a challenge in P2P systems. This paper proposes the fully decentralized voting mechanism PeerVote, which enables users to vote on modifications in articles in a P2P collaboration system. Simulations and experiments show the scalability and robustness of PeerVote, even in the presence of malicious peers.||0||0|
|Personal knowledge management for knowledge workers using social semantic technologies||Hyeoncheol Kim
|International Journal of Intelligent Information and Database Systems||English||2009||Knowledge workers have different applications and resources in heterogeneous environments for doing their knowledge tasks, and they often need to solve a problem by combining several resources. Typical personal knowledge management (PKM) systems do not provide effective ways of representing a knowledge worker's unstructured knowledge or ideas. To better support their knowledge activities, we implement Wiki-based social Network Thin client (WANT), a wiki-based semantic tagging system for collaborative and communicative knowledge creation and maintenance for a knowledge worker. We also suggest the social semantic cloud of tags (SCOT) ontology to represent tag data at a semantic level and combine this ontology in WANT. WANT supports a wide scope of social activities through online mash-up services and interlinks resources with desktop and web environments. Our approach provides basic functionalities such as creating, organising and searching knowledge at the individual level, as well as enhancing social connections among knowledge workers based on their activities.||0||0|
|SAVVY Wiki: A context-oriented collaborative knowledge management system||Takafumi Nakanishi
|WikiSym||English||2009||This paper presents a new Wiki called SAVVY Wiki that realizes context-oriented, collective and collaborative knowledge management environments that are able to reflect users' intentions and recognitions. Users can collaboratively organize fragmentary knowledge with the help of the SAVVY Wiki. Fragmentary knowledge, in this case, implies existing Wiki content, multimedia content on the web, and so on. Users select and allocate fragmentary knowledge in different contexts onto the SAVVY Wiki. Owing to this operation, it is ensured that related pages belong to the same contexts. That is, users can find correlations among the pages in a Wiki. The SAVVY Wiki provides new collective knowledge created from fragmentary knowledge, depending on contexts, in accordance with the users' collaborative operations. Various collaborative working environments have been developed for the sharing of collective knowledge. Most current Wikis have a collaborative editing mode for every page, as a platform to enable a collaborative working environment. In order to understand an arbitrary concept thoroughly, it is necessary to find correlations among the various threads of content, depending on the users' purpose, task or interest. In a Wiki system, it is important to realize a collaborative editing environment with correlation among pages depending on the contexts. In this paper, we present a method to realize the SAVVY Wiki and describe its prototype system under development.||0||0|
|Supporting personal semantic annotations in P2P semantic wikis||Torres D.
|Lecture Notes in Computer Science||English||2009||In this paper, we propose to extend Peer-to-Peer Semantic Wikis with personal semantic annotations. Semantic Wikis are one of the most successful Semantic Web applications. In semantic wikis, wiki pages are annotated with semantic data to facilitate navigation, information retrieval and ontology emergence. Semantic data represents the shared knowledge base which describes the common understanding of the community. However, in a collaborative knowledge building process the knowledge is basically created by individuals who are involved in a social process. Therefore, it is fundamental to support personal knowledge building in a differentiated way. Currently there are no available semantic wikis that support both personal and shared understandings. In order to overcome this problem, we propose a P2P collaborative knowledge building process and extend semantic wikis with personal annotation facilities to express personal understanding. In this paper, we detail the personal semantic annotation model and show its implementation in P2P semantic wikis. We also detail an evaluation study which shows that personal annotations demand less cognitive effort than semantic data and are very useful for enriching the shared knowledge base.||0||0|
|The small worlds of wikipedia: Implications for growth, quality and sustainability of collaborative knowledge networks||Myshkin Ingawale
|15th Americas Conference on Information Systems 2009, AMCIS 2009||English||2009||This work is a longitudinal network analysis of the interaction networks of Wikipedia, a free, user-led, collaboratively generated online encyclopedia. Making a case for representing Wikipedia as a knowledge network, and using the lens of contemporary graph theory, we attempt to unravel its knowledge creation process and growth dynamics over time. Typical small-world characteristics of short path-length and high clustering have important theoretical implications for knowledge networks. We show Wikipedia's small-world nature to be increasing over time, while also uncovering power laws and assortative mixing. Investigating the process by which an apparently un-coordinated, diversely motivated swarm of assorted contributors create and maintain remarkably high quality content, we find an association between Quality and Structural Holes. We find that a few key high-degree, cluster-spanning nodes ('hubs') hold the growing network together, and discuss implications for the network's growth and emergent quality.||0||0|
|Undo in peer-to-peer semantic wikis||Charbel Rahhal
|CEUR Workshop Proceedings||English||2009||The undo mechanism is an essential feature in collaborative editing systems. Most popular semantic wikis support a revert feature, and some provide an undo feature to remove any modification at any time. However, this undo feature does not always succeed. Supporting the undo mechanism in P2P semantic wikis has never been tackled. In this paper, we present an undo approach for Swooki, the first P2P semantic wiki. We identify the problems to resolve in order to achieve such a mechanism for P2P semantic wikis. We give a definition of undo and the properties that it must ensure. This approach allows both a revert and a permanent, successful undo.||0||0|
|WordVenture - Cooperative WordNet editor: Architecture for lexical semantic acquisition||Szymanski J.||KEOD 2009 - 1st International Conference on Knowledge Engineering and Ontology Development, Proceedings||English||2009||This article presents an architecture for acquiring lexical semantics in a collaborative approach paradigm. The system provides functionality for editing semantic networks in a wikipedia-like style. The core of the system is a user-friendly interface based on interactive graph navigation. It is used for semantic network presentation and simultaneously provides modification functionality.||0||0|
|A qualitative analysis on collaborative learning experience of student journalists using Wiki||Ma W.W.K.
|Lecture Notes in Computer Science||English||2008||Education in journalism emphasizes internships, apprenticeships, and other opportunities to learn journalism by doing journalism; however, most computer-mediated communication tools do not have such a provision. The fully open structure of Wiki matches the principles of learning journalism while, from a technical point of view, Wiki provides a very easy way for users to report, write and edit. In a case study, a group of undergraduate journalism students were exposed to a student-written Wiki to jointly compose news reporting. Analysis of student journalists' responses to the open-ended questions revealed revision as the core processing capability of Wiki. The motivational factors for revision include accuracy (fact checking), story enrichment, and personal interest in the news topic. In addition, learners are also affected by the social interactions among the community users within Wiki. The qualitative data shows students both value the process and face challenges in managing the complexity of shared editing.||0||0|
|Collaborative editing for improved usefulness and usability of transcript-enhanced webcasts||Munteanu C.
|Conference on Human Factors in Computing Systems - Proceedings||English||2008||One challenge in facilitating skimming or browsing through archives of on-line recordings of webcast lectures is the lack of text transcripts of the recorded lecture. Ideally, transcripts would be obtainable through Automatic Speech Recognition (ASR). However, current ASR systems can only deliver, in realistic lecture conditions, a Word Error Rate of around 45% - above the accepted threshold of 25%. In this paper, we present the iterative design of a webcast extension that engages users to collaborate in a wiki-like manner on editing the ASR-produced imperfect transcripts, and show that this is a feasible solution for improving the quality of lecture transcripts. We also present the findings of a field study carried out in a real lecture environment investigating how students use and edit the transcripts. Copyright 2008 ACM.||0||0|
|Exploiting XML structure to improve information retrieval in peer-to-peer systems||Winter J.||ACM SIGIR 2008 - 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Proceedings||English||2008||With the advent of XML as a standard for representation and exchange of structured documents, a growing number of XML-documents are being stored in Peer-to-Peer (P2P) networks. Current research on P2P search engines proposes the use of Information Retrieval (IR) techniques to perform content-based search, but does not take into account structural features of documents. P2P systems typically have no central index, thus avoiding single points of failure, but distribute all information among participating peers. Accordingly, a querying peer has only limited access to the index information and should select carefully which peers can help answer a given query by contributing resources such as local index information or CPU time for ranking computations. Bandwidth consumption is a major issue. To guarantee scalability, P2P systems have to reduce the number of peers involved in the retrieval process. As a result, the retrieval quality in terms of recall and precision may suffer substantially. In the proposed thesis, document structure is considered as an extra source of information to improve the retrieval quality of XML-documents in a P2P environment. The thesis centres on the following questions: how can structural information help to improve the retrieval of XML-documents in terms of result quality such as precision, recall, and specificity? Can XML structure support the routing of queries in distributed environments, especially the selection of promising peers? How can XML IR techniques be used in a P2P network while minimizing bandwidth consumption and considering performance aspects?
To answer these questions and to analyze possible achievements, a search engine is proposed that exploits structural hints expressed explicitly by the user or implicitly by the self-describing structure of XML-documents. Additionally, more focused and specific results are obtained by providing ranked retrieval units that can be either XML-documents as a whole or the most relevant passages of these documents. XML information retrieval techniques are applied in two ways: to select those peers participating in the retrieval process, and to compute the relevance of documents. The indexing approach includes both content and structural information of documents. To support efficient execution of multi-term queries, index keys consist of rare combinations of (content, structure)-tuples. Performance is increased by using only fixed-sized posting lists: frequent index keys are combined with each other iteratively until the new combination is rare, with a posting list size under a pre-set threshold. All posting lists are sorted by taking into account classical IR measures such as term frequency and inverted term frequency as well as weights for potential retrieval units of a document, with a slight bias towards documents on peers with good collections regarding the current index key and with good peer characteristics such as online times, available bandwidth, and latency. When extracting the posting list for a specific query, a re-ordering of the posting list is performed that takes into account the structural similarity between key and query. According to this pre-ranking, peers are selected that are expected to hold information about potentially relevant documents and retrieval units. The final ranking is computed in parallel on those selected peers. The computation is based on an extension of the vector space model and distinguishes between weights for different structures of the same content.
This allows weighting XML elements with respect to their discriminative power, e.g. a title will be weighted much higher than a footnote. Additionally, relevance is computed as a mixture of content relevance and structural similarity between a given query and a potential retrieval unit. Currently, a first prototype for P2P Information Retrieval of XML-documents called SPIRIX is being implemented. Experiments to evaluate the proposed techniques and use of structural hints will be performed on a distributed version of the INEX Wikipedia Collection.||0||0|
|Flexible concurrency control for real-time collaborative editors||Imine A.||Proceedings - International Conference on Distributed Computing Systems||English||2008||Real-time Collaborative Editors (RCE) provide computer support for simultaneously modifying shared documents, such as articles, wiki pages and programming source code, by dispersed users. Due to data replication, Operational Transformation (OT) is considered the efficient and safe method for consistency maintenance in the literature of collaborative editors. Indeed, it is aimed at ensuring convergence of copies even though the users' updates are executed in any order on different copies. Unfortunately, existing OT algorithms often fail to achieve this objective. Moreover, these algorithms have limited scalability with the number of users as they use vector timestamps to enforce causality dependency. In this paper, we present a novel framework for managing collaborative editing work in a scalable and decentralized fashion. It may be deployed easily on P2P networks as it supports dynamic groups where users can leave and join at any time.||0||0|
|Flood little, cache more: Effective result-reuse in P2P IR systems||Carl Zimmer
|Lecture Notes in Computer Science||English||2008||State-of-the-art Peer-to-Peer Information Retrieval (P2P IR) systems suffer from their lack of response time guarantee, especially with scale. To address this issue, a number of techniques for caching of multi-term inverted list intersections and query results have been proposed recently. Although these enable speedy query evaluations with low network overheads, they fail to consider the potential impact of caching on result quality improvements. In this paper, we propose the use of a cache-aware query routing scheme that not only reduces the response delays for a query, but also presents an opportunity to improve the result quality while keeping the network usage low. In this regard, we make three-fold contributions in this paper. First of all, we develop a cache-aware, multi-round query routing strategy that balances between query efficiency and result quality. Next, we propose to aggressively reuse the cached results of even subsets of a query towards an approximate caching technique that can drastically reduce the bandwidth overheads, and study the conditions under which such a scheme can retain good result quality. Finally, we empirically evaluate these techniques over a fully functional P2P IR system, using a large-scale Wikipedia benchmark, and using both synthetic and real-world query workloads. Our results show that our proposal to combine result caching with multi-round, cache-aware query routing can reduce network traffic by more than half while doubling the result quality.||0||0|
|Information literacy: Moving beyond Wikipedia||Welker A.L.
|Geotechnical Special Publication||English||2008||In the past, finding information was the challenge. Today, the challenge our students face is to sift through and evaluate the incredible amount of information available. This ability to find and evaluate information is sometimes referred to as information literacy. Information literacy relates to a student's ability to communicate, but, more importantly, information literate persons are well-poised to learn throughout life because they have learned how to learn. A series of modules to address information literacy were created in a collaborative effort between faculty in the Civil and Environmental Engineering Department at Villanova and the librarians at Falvey Memorial Library. These modules were integrated throughout the curriculum, from sophomore to senior year. Assessment is based on modified ACRL (Association of College and Research Libraries) outcomes. This paper will document the lessons learned in the implementation of this program and provide concrete examples of how to incorporate information literacy into geotechnical engineering classes. Copyright ASCE 2008.||0||0|
|Making More Wikipedians: Facilitating Semantics Reuse for Wikipedia Authoring||Linyun Fu
|The Semantic Web||English||2008||Wikipedia, a killer application in Web 2.0, has embraced the power of collaborative editing to harness collective intelligence. It can also serve as an ideal Semantic Web data source due to its abundance, influence, high quality and good structure. However, the heavy burden of building up and maintaining such an enormous and ever-growing online encyclopedic knowledge base still rests on a very small group of people. Many casual users may still find it difficult to write high-quality Wikipedia articles. In this paper, we use RDF graphs to model the key elements in Wikipedia authoring, and propose an integrated solution to make Wikipedia authoring easier based on RDF graph matching, with the aim of making more Wikipedians. Our solution facilitates semantics reuse and provides users with: 1) a link suggestion module that suggests and auto-completes internal links between Wikipedia articles for the user; 2) a category suggestion module that helps the user place her articles in the correct categories. A prototype system is implemented and experimental results show significant improvements over existing solutions to the link and category suggestion tasks. The proposed enhancements can be applied to attract more contributors and relieve the burden of professional editors, thus enhancing the current Wikipedia to make it an even better Semantic Web data source.||0||0|
|RIKI: A Wiki-based knowledge sharing system for collaborative research projects||Rhee S.K.
|Lecture Notes in Computer Science||English||2008||During a collaborative research project, each member's knowledge and progress need to be managed and shared with other members. For effective knowledge sharing, each member needs to be able to express their own knowledge within the given project context and easily find and understand other members' knowledge. In this paper, we present our RIKI prototype that supports group communication and knowledge sharing in research projects via the Wiki-based platform. The main aim of RIKI implementation is to manage the shared knowledge semantically and to provide users with straightforward access to necessary information.||0||0|
|Robust content-driven reputation||Krishnendu Chatterjee
Luca de Alfaro
|Proceedings of the ACM Conference on Computer and Communications Security||English||2008||In content-driven reputation systems for collaborative content, users gain or lose reputation according to how their contributions fare: authors of long-lived contributions gain reputation, while authors of reverted contributions lose reputation. Existing content-driven systems are prone to Sybil attacks, in which multiple identities, controlled by the same person, perform coordinated actions to increase their reputation. We show that content-driven reputation systems can be made resistant to such attacks by taking advantage of the fact that the reputation increments and decrements depend on content modifications, which are visible to all. We present an algorithm for content-driven reputation that prevents a set of identities from increasing their maximum reputation without doing any useful work. Here, work is considered useful if it causes content to evolve in a direction that is consistent with the actions of high-reputation users. We argue that content modifications that require no effort, such as the insertion or deletion of arbitrary text, are invariably non-useful. We prove a truthfulness result for the resulting system, stating that users who wish to perform a contribution do not gain by employing complex contribution schemes, compared to simply performing the contribution at once. In particular, neither splitting the contribution into multiple portions nor employing the coordinated actions of multiple identities yields additional reputation. Taken together, these results indicate that content-driven systems can be made robust with respect to Sybil attacks. Copyright 2008 ACM.||0||0|
|SWOOKI: A peer-to-peer semantic wiki||Charbel Rahhal
|CEUR Workshop Proceedings||English||2008||In this paper, we propose to combine the advantages of semantic wikis and P2P wikis in order to design a peer-to-peer semantic wiki. The main challenge is how to merge wiki pages that embed semantic annotations. Merging algorithms used in P2P wiki systems have been designed for linear text and not for semantic data. In this paper, we evaluate two optimistic replication algorithms to build a P2P semantic wiki.||0||0|
|Scalaris: Reliable transactional P2P key/value store - Web 2.0 hosting with Erlang and Java||Schutt T.
|Erlang'08: Proceedings of the 2008 SIGPLAN Erlang Workshop||English||2008||We present Scalaris, an Erlang implementation of a distributed key/value store. It uses, on top of a structured overlay network, replication for data availability and majority-based distributed transactions for data consistency. In combination, this implements the ACID properties on a scalable structured overlay. By directly mapping the keys to the overlay without hashing, arbitrary key ranges can be assigned to nodes, thereby allowing better load-balancing than would be possible with traditional DHTs. Consequently, Scalaris can be tuned for fast data access by taking, e.g., the nodes' geographic location or the regional popularity of certain keys into account. This improves Scalaris' lookup speed in datacenter or cloud computing environments. Scalaris is implemented in Erlang. We describe the Erlang software architecture, including the transactional Java interface to access Scalaris. Additionally, we present a generic design pattern to implement a responsive server in Erlang that serializes update operations on a common state, while concurrently performing fast asynchronous read requests on the same state. As a proof of concept we implemented a simplified Wikipedia frontend and attached it to the Scalaris data store backend. Wikipedia is a challenging application. It requires - besides thousands of concurrent read requests per second - serialized, consistent write operations. For Wikipedia's category and backlink pages, keys must be consistently changed within transactions. We discuss how these features are implemented in Scalaris and show its performance.||0||0|
|Smarter, better, stronger, together: A closer look at how collaboration is transforming the enterprise||No author name available||EContent||English||2008||Various aspects of the project named We are smarter than me: How to unleash the power of crowds in your business, undertaken by Barry Libert and Jon Spector with the support of two American business schools, are discussed. The book was written using a wiki-based community that invited 1 million people, including students, faculty and alumni from the fields of technology and management, to contribute ideas. The authors posed questions regarding the success of community approaches for marketing, business development, distribution and other business practices. The implementation of web-based social communities and services in enterprises has been made possible with the use of Web 2.0 technologies. Web 2.0 has many potential benefits and, in spite of some limitations, it will continue to gain ground in the business world as more and more companies adopt it.||0||0|
|Social software: Fun and games, or business tools?||Warr W.A.||Journal of Information Science||English||2008||This is the era of social networking, collective intelligence, participation, collaborative creation, and borderless distribution. Every day we are bombarded with more publicity about collaborative environments, news feeds, blogs, wikis, podcasting, webcasting, folksonomies, social bookmarking, social citations, collaborative filtering, recommender systems, media sharing, massive multiplayer online games, virtual worlds, and mash-ups. This sort of anarchic environment appeals to the digital natives, but which of these so-called 'Web 2.0' technologies are going to have a real business impact? This paper addresses the impact that issues such as quality control, security, privacy and bandwidth may have on the implementation of social networking in hide-bound, large organizations.||0||0|
|XWiki concerto: A P2P wiki system supporting disconnected work||Gérôme Canals
|Lecture Notes in Computer Science||English||2008||This paper presents the XWiki Concerto system, the P2P version of the XWiki server. This system is based on replicating wiki pages across a network of wiki servers. The approach, based on the Woot algorithm, has been designed to be scalable and to support the dynamic aspects of P2P networks and network partitions. These characteristics make our system capable of supporting disconnected editing and sub-groups, making it very flexible and usable.||0||0|
|Integrity in open collaborative authoring systems||Jensen C.D.||IFIP International Federation for Information Processing||English||2007||Open collaborative authoring systems have become increasingly popular within the past decade. The benefits of such systems are best demonstrated by the Wiki and some of the tremendously popular applications built on Wiki technology, in particular Wikipedia, a free encyclopaedia collaboratively edited by Internet users with a minimum of administration. One of the most serious problems that has emerged in open collaborative authoring systems relates to the quality, especially completeness and correctness, of information. Inaccuracies in Wikipedia have been rumoured to cause students to fail courses, innocent people have been associated with the killing of John F. Kennedy, etc. Improving the correctness, completeness and integrity of information in collaboratively authored documents is therefore of vital importance to the continued success of such systems. In this paper we propose an integrity mechanism for open collaborative authoring systems based on a combination of classic integrity mechanisms from computer security and reputation systems. While the mechanism provides a reputation-based assessment of the trustworthiness of the information in a document, the primary purpose is to prevent untrustworthy authors from compromising the integrity of the document.||0||0|
|MSG-052 knowledge network for federation architecture and design||Ohlund G.
|Fall Simulation Interoperability Workshop 2007||English||2007||Development of distributed simulations is a complex process requiring extensive experience, in-depth knowledge and a certain skill set for the architecture, design, development and systems integration required for a federation to meet its operational, functional and technical requirements. Federation architecture and design is the blueprint that forms the basis for federation-wide agreements on how to conceive and build a federation. Architecture and design issues are continuously being addressed during federation development. Knowledge of "good design" is gained through hands-on experience, trial-and-error and experimentation. This kind of knowledge, however, is seldom reused and rarely shared in an effective way. This paper presents an ongoing effort conducted by MSG-052 "Knowledge Network for Federation Architecture and Design" within the NATO Research and Technology Organisation (NATO/RTO) Modelling and Simulation group (NMSG). The main objective of MSG-052 is to initiate a "Knowledge Network" to promote development and sharing of information and knowledge about common federation architecture and design issues among NATO/PfP (Partnership for Peace) countries. By Knowledge Network, we envision a combination of a Community of Practice (CoP), various organisations and Knowledge Bases. A CoP, consisting of federation development experts from the NATO/PfP nations, will foster the development of state-of-the-art federation architecture and design solutions, and provide a Knowledge Base for the Modelling and Simulation (M&S) community as a whole. As part of the work, existing structures and tools for knowledge capture, management and utilization will be explored, refined and used when appropriate; for instance, the work previously done under the MSG-027 PATHFINDER Integration Environment provides lessons learned that could benefit this group.
The paper will explore the concept of a Community of Practice and reveal the ideas and findings within the MSG-052 Management Group concerning ways of establishing and managing a Federation Architecture and Design CoP. It will also offer several views on the concept of operations for a collaborative effort, combining voluntary contributions as well as assigned tasks. Amongst the preliminary findings was the notion of a Wiki-based Collaborative Environment in which a large portion of our work is conducted and which also represents our current Knowledge Base. Finally, we present some of our main challenges and vision for future work.||0||0|
|Reading and writing with Wikis: Progress and plans||Cliff Kussmaul
|Creativity and Cognition 2007, CC2007 - Seeding Creativity: Tools, Media, and Environments||English||2007||This paper describes an investigation of ways to use wikis to support and improve reading, writing, and related skills. The primary objective is to develop activities that can be adapted to a variety of settings. The paper describes a set of successful activities, and discusses the effects of using a wiki, lessons learned, and future directions.||0||0|
|TWiki and WetPaint: Two wikis in academic environments||Libby Hemphill
|GROUP'07 - Proceedings of the 2007 International ACM Conference on Supporting Group Work||English||2007||This paper describes a community-based effort to preserve organizational knowledge and to orientate newcomers to a graduate school. It presents a very brief review of recent research on wiki use in corporate and organizational environments and initial data from two wiki implementation iterations within our academic community. We contrast use of a TWiki with that of a WetPaint wiki. Our data suggest that with low barriers to participation and a great deal of patience, wikis can be useful stores for community information and knowledge sharing.||0||0|
|WikiNavMap: A visualisation to supplement team-based wikis||Ullman A.J.
|Conference on Human Factors in Computing Systems - Proceedings||English||2007||Wikis are an invaluable tool for quickly and easily creating and editing a collection of web pages. Their use is particularly interesting in small teams to serve as a support for group communication, for co-ordination, as well as for creating collaborative document products. In spite of the very real appeal of the wiki for these purposes, there is a serious challenge due to their complexity. Team members can have difficulty identifying the structure and salient elements of the wiki. This paper describes the design of WikiNavMap, an alternative visual representation for wikis, which provides an overview of the wiki structure. Based on analysis of student wikis, we identified factors that help team members identify which wiki pages are currently relevant to them. We hypothesised that a structural overview coupled with the visual representations of these factors could assist users with wiki navigation decisions. We report a preliminary evaluation with a large group wiki, created over a full university semester by a group of ten users. The results are promising for a small wiki but point to challenges in coping with the complexity of a larger one.||0||1|
|Extracting trust from domain analysis: A case study on the wikipedia project||Pierpaolo Dondio
|Lecture Notes in Computer Science||English||2006||The problem of identifying trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publications. Wikipedia is the most extraordinary example of this phenomenon and, although a few mechanisms have been put in place to improve contributions quality, trust in Wikipedia content quality has been seriously questioned. We thought that a deeper understanding of what in general defines high-standard and expertise in domains related to Wikipedia - i.e. content quality in a collaborative environment - mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in Wikipedia context. Our evaluation, conducted on about 8,000 articles representing 65% of the overall Wikipedia editing activity, shows that the new trust evidence that we extracted from Wikipedia allows us to transparently and automatically compute trust values to isolate articles of great or low quality.||0||2|
|Fotowiki: Distributed map enhancement service||Chen W.-Y.
|Proceedings of the 14th Annual ACM International Conference on Multimedia, MM 2006||English||2006||Fotowiki (FW) is a wiki-based map service that integrates visual and textual information with maps. FW divides a geographical area into sub-areas. An individual responsible for providing information about a sub-area enters collected data into a wiki page. FW uploads the distributed wiki pages and overlays the information on the map. This demonstration shows FW's architecture and functionalities.||0||0|
|Pocket RikWik: A mobile wiki supporting online and offline collaboration||Huang W.-C.
|AusWeb 2006: 12th Australasian World Wide Web Conference||English||2006||Wikis are a popular collaboration technology. They support the collaborative editing of web pages through a simple mark-up language. The Wikipedia site is perhaps the best example of how wikis can be used. There are many different wikis, all with their own special extended features beyond the basic collaborative editing of web pages. In this paper we investigate how wikis can be made mobile; that is, how wiki forms of collaborative editing can be achieved through mobile devices such as smart phones. Mobile devices are becoming ubiquitous and powerful. Thus it is advantageous for people to get the benefits of wikis in a mobile setting. However, mobile devices present their own challenges, such as limited screen size, bandwidth and battery life; they also have intermittent connectivity. We have investigated and built a prototype mobile wiki which addresses these issues and which enables collaboration through mobile devices. The system comprises a cut-down wiki which runs on the mobile device. This communicates with a main central wiki to cache pages for offline use. This hoarding process also enables new pages to be created. On re-connection, edited and new pages are synchronized with the main wiki server. Communication, and hence hoarding, is adaptive depending on the characteristics of the mobile device. When sitting in a powered cradle, eager downloading and synchronization of pages is supported. During mobile operation, pages are cached lazily on demand to minimize power use and to save the limited and expensive bandwidth. Finally, a pluggable page rendering engine enables pages to be rendered in different ways to suit different sized screens. This enables simple collaborative working while online and offline through smart mobile devices. The prototype system has been implemented using .NET. © 2006. Wei-Che Huang.||0||0|
|Scalable information sharing utilizing decentralized P2P networking integrated with centralized personal and group media tools||Guozhen Z.
|Proceedings - International Conference on Advanced Information Networking and Applications, AINA||English||2006||We proposed a collaborative information sharing environment based on P2P networking technology, to support communication among special groups with given tasks, ensure fast information exchange, increase the productivity of working groups, and reduce maintenance and administration costs in our previous work. However, for a social growing community, not only the information exchange/sharing functions are necessary, but also solutions to support users with idea and knowledge publication tools for private purpose or public use are essential. Some private message (personal idea and experience) posting tools (e.g., weblog) and group collaborative knowledge editing tools (e.g., Wikis) are used in practice; the merits of these tools have been recognized. In this paper, we propose a scalable information sharing solution, which integrates decentralized P2P networking with centralized personal/group media tools. This solution combines the effective tools, such as weblog and Wiki, into P2P-based collaborative groupware system, to facilitate infinite, growing and scalable information management and sharing for individuals and groups.||0||0|
|WikiFactory: An ontology-based application for creating domain-oriented wikis||Angelo Di Iorio
|Lecture Notes in Computer Science||English||2006||Wikis play a leading role among web publishing environments, being collaborative tools used for fast and easy writing and sharing of content. Although powerful and widely used, wikis do not support users in the aided generation of content specific to a given domain but still require manual, time-consuming and error-prone interventions. On the other hand, semantic portals support users in browsing, searching and managing content related to a given domain by exploiting ontologies. In this paper we propose a specific application of web ontologies to wikis: exploiting an ontological description of a domain in order to deploy a customized wiki for that specific domain. We describe the design of an ontology-based framework, named WikiFactory, that helps users automatically generate a complex and complete wiki website related to a specific area of interest with little effort. In order to show the applicability of our framework, we present a specific case study that describes the main WikiFactory capabilities in constructing the wiki website for a Computer Science Department in a University.||0||0|
|Story-lines: A case study of online learning using narrative analysis||Yukawa J.||Computer Supported Collaborative Learning 2005: The Next 10 Years - Proceedings of the International Conference on Computer Supported Collaborative Learning 2005, CSCL 2005||English||2005||Narrative analysis has both research and pedagogical advantages for use in CSCL. Narrative theory provides multidisciplinary perspectives and methods from diverse fields. Stories are a way of thinking, making meaning, and showing constructivism in action. This paper discusses the advantages of narrative analysis for interpreting online discourse; presents features, methodological challenges, and procedures; and presents some findings from a case study of online learning. Narrative analysis uses both text and online "talk" to construct a holistic view of the learning experience involving cognition, affect, and interaction.||0||0|