| Wiki systems|
Wiki systems is included as a keyword or extra keyword in 0 datasets, 0 tools, and 6 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Opportunities for using Wiki technologies in building digital library models||Mammadov E.C.O.||Library Hi Tech News||English||2014||Purpose: The purpose of this article is to research the open access and encyclopedia-structured methodology of building digital libraries. In Azerbaijani libraries, one of the most challenging topics is organizing digital resources (books, audio-video materials, etc.). Wiki technologies introduce easy, collaborative and open tools, making them possible to apply in building digital libraries. Design/methodology/approach: This paper looks at current practices, and the ways of organizing information resources to make them more systematized, open and accessible. These activities are valuable for rural libraries, which are smaller and less well funded than main and central libraries in cities. Findings: The main finding of this article is how to organize digital resource management in libraries using the Wiki ideology. Originality/value: Wiki technologies determine the ways of building digital library network models which are structurally different from already known models, as well as new directions in forming the information society and solving the problems encountered.||0||0|
|Supporting wiki users with natural language processing||Bahar Sateli||WikiSym 2012||English||2012||We present a "self-aware" wiki system, based on the MediaWiki engine, that can develop and organize its content using state-of-the-art techniques from the Natural Language Processing (NLP) and Semantic Computing domains. This is achieved with an architecture that integrates novel NLP solutions within the MediaWiki environment to allow wiki users to benefit from modern text mining techniques. As concrete applications, we present how the enhanced MediaWiki engine can be used for biomedical literature curation, cultural heritage data management, and software requirements engineering.||0||0|
|Content neutrality for Wiki systems: From neutral point of view (NPOV) to every point of view (EPOV)||Cap C.H.||Proceedings of the 4th International Conference on Internet Technologies and Applications, ITA 11||English||2011||The neutral point of view (NPOV) cornerstone of Wikipedia (WP) is challenged for next generation knowledge bases. An empirical test is made with two WP articles. A case is built for content neutrality as a new, every point of view (EPOV) guiding principle. The architectural implications of content neutrality are discussed and translated into novel concepts of Wiki architectures. Guidelines for implementing this architecture are presented. Although NPOV is heavily criticized, the contribution avoids ideological controversy but rather focuses on the benefits and characteristics of the novel approach.||0||0|
|Creating dynamic wiki pages with section-tagging||D. Helic; A. Us Saeed||CEUR Workshop Proceedings||English||29 June 2009||Authoring and editing processes in wiki systems are often tedious. The sheer amount of information makes it difficult for authors to organize related information in a way that is easily accessible and retrievable for future reference. Social bookmarking systems provide possibilities to tag and organize related resources that can later be retrieved by navigating so-called tag clouds. Usually, tagging systems do not offer a possibility to tag sections of resources, only a resource as a whole. However, authors of new wiki pages are typically interested only in certain parts of other wiki pages that are related to their current editing process. This paper describes a new approach applied in a wiki-based online encyclopedia that allows authors to tag interesting sections of wiki pages. The tags are then used to dynamically create new wiki pages out of the tagged sections for further editing.||0||0|
|Knowledge capturing tools for domain experts: Exploiting named entity recognition and n-ary relation discovery for knowledge capturing in e-science||Brocker L.||CEUR Workshop Proceedings||English||2007||The success of the Semantic Web depends on the availability of content marked up using its description languages. Although the idea has been around for nearly a decade, the amount of Semantic Web content available is still fairly small. This is despite the existence of many digital archives containing lots of high quality collections which would, appropriately marked up, greatly enhance the reach of the Semantic Web. The archives themselves would benefit as well, by improved opportunities for semantic search, navigation and interconnection with other archives. The main challenge lies in the fact that ontology creation at the moment is a very detailed and complicated process. It mostly requires the service of an ontology engineer, who designs the ontology in accordance with domain experts. The software tools available, be it from the text engineering or the ontology creation disciplines, reflect this: they are built for engineers, not for domain experts. In order to really tap the potential of the digital collections, tools are needed that support the domain experts in marking up the content they understand better than anyone else. This paper presents an integrated approach to knowledge capturing and subsequent ontology creation, called WIKINGER, that aims at empowering domain experts to prepare their content for inclusion into the Semantic Web. This is done by largely automating the process through the use of named entity recognition and relation discovery.||0||0|
|HPCBugBase: An experience base for HPC defects||Nakamura T.||Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, SC'06||English||2006||We present the design and implementation of HPCBugBase, an experience base for high performance computing (HPC) software defects. Our goal is to accumulate empirical knowledge about commonly occurring defects in HPC codes using an incremental approach. This knowledge is structured so that HPC practitioners such as programmers and tool builders can use it to reduce debugging costs, as well as provide feedback which becomes incorporated into the system. By building the experience base, we expect to help the process of making explicit the knowledge about recurring defects that otherwise cannot be shared. The current system is built on a Wiki system, which allows incremental accumulation of data at various levels of abstraction. We implement additional analysis functions that do not exist in a generic Wiki system as custom plug-ins. We have populated the system with data collected from software engineering studies from the DARPA High Productivity Computer Systems Project.||0||0|