Version control

From WikiPapers

Version control is included as a keyword or extra keyword in 0 datasets, 0 tools, and 5 publications.

Datasets

There are no datasets for this keyword.

Tools

There are no tools for this keyword.


Publications

Each publication entry lists: Title, Author(s), Published in, Language, Date, Abstract, R, C.

Title: WikiWho: Precise and Efficient Attribution of Authorship of Revisioned Content
Author(s): Fabian Flöck, Maribel Acosta
Published in: World Wide Web Conference 2014
Language: English
Date: 2014
Abstract: Revisioned text content is present in numerous collaboration platforms on the Web, most notably wikis. Tracking authorship of text tokens in such systems has many potential applications, such as identifying main authors for licensing reasons or tracing collaborative writing patterns over time. In this context, two main challenges arise. First, such an authorship tracking system must be precise in its attributions to be reliable for further processing. Second, it has to run efficiently even on very large datasets, such as Wikipedia. As a solution, we propose a graph-based model to represent revisioned content and an algorithm over this model that tackles both issues effectively. We describe the optimal implementation and design choices when tuning it to a wiki environment. We further present a gold standard of 240 tokens from English Wikipedia articles annotated with their origin. This gold standard was created manually and confirmed by multiple independent users of a crowdsourcing platform. It is the first gold standard of this kind and quality, and our solution achieves an average of 95% precision on this dataset. We also perform a first-ever precision evaluation of the state-of-the-art algorithm for the task, exceeding it by over 10% on average. Our approach outperforms the state of the art in execution time by one order of magnitude, as we demonstrate on a sample of over 240 English Wikipedia articles. We argue that the roughly 10% increase in the size of an optional materialization of our results, compared to the baseline, is a favorable trade-off given the large advantage in runtime performance.
R: 0, C: 0

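The abstract above concerns attributing each token of a revisioned document to the revision (and author) that introduced it. The sketch below is not the WikiWho algorithm; it is a naive diff-based baseline, with hypothetical names (Revision, attribute_tokens), meant only to illustrate the attribution task the paper addresses.

```python
# Naive token-level authorship attribution over a list of revisions.
# NOT the WikiWho algorithm; a minimal diff-based illustration of the task.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Revision:
    author: str
    tokens: list[str]  # the revision's text, already tokenized


def attribute_tokens(revisions: list[Revision]) -> list[tuple[str, str]]:
    """Return (token, origin author) pairs for the latest revision.

    Tokens matched against the previous revision keep their earlier origin;
    tokens that first appear in a revision are attributed to its author.
    """
    origins: list[tuple[str, str]] = []
    for rev in revisions:
        new_origins: list[tuple[str, str]] = []
        matcher = SequenceMatcher(a=[t for t, _ in origins], b=rev.tokens)
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "equal":
                new_origins.extend(origins[i1:i2])            # keep earlier origin
            else:
                new_origins.extend((t, rev.author) for t in rev.tokens[j1:j2])
        origins = new_origins
    return origins


if __name__ == "__main__":
    revs = [
        Revision("alice", "the cat sat".split()),
        Revision("bob", "the black cat sat down".split()),
    ]
    print(attribute_tokens(revs))
    # [('the', 'alice'), ('black', 'bob'), ('cat', 'alice'), ('sat', 'alice'), ('down', 'bob')]
```

Such a baseline is cheap but brittle (moved or reintroduced text is re-attributed); the precision and efficiency issues this causes are exactly what the paper's graph-based model targets.
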
Title: A Linked Data platform for mining software repositories
Author(s): Keivanloo I., Forbes C., Hmood A., Erfani M., Neal C., Peristerakis G., Rilling J.
Published in: IEEE International Working Conference on Mining Software Repositories
Language: English
Date: 2012
Abstract: The mining of software repositories involves the extraction of both basic and value-added information from existing software repositories. These repositories are mined by different stakeholders (e.g., researchers, managers) to extract facts for various purposes. To avoid unnecessary pre-processing and analysis steps, sharing and integration of both basic and value-added facts are needed. In this research, we introduce SeCold, an open and collaborative platform for sharing software datasets. SeCold provides the first online software-ecosystem Linked Data platform that supports data extraction and on-the-fly inter-dataset integration from major version control, issue tracking, and quality evaluation systems. In its first release, the dataset contains about two billion facts, such as source code statements, software licenses, and code clones from 18,000 software projects. In its second release, the SeCold project will contain additional facts mined from issue trackers and versioning systems. Our approach is based on the same fundamental principle as Wikipedia: researchers and tool developers share analysis results obtained from their tools by publishing them as part of the SeCold portal, thereby making them an integrated part of the global knowledge domain. The SeCold project is an official member of the Linked Data dataset cloud and is currently the eighth-largest online dataset available on the Web.
R: 0, C: 0

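To make the "facts as Linked Data" idea concrete, here is a minimal sketch of how a single version-control fact (a commit) could be expressed and published as RDF. The namespace and predicate names are hypothetical and do not reflect SeCold's actual vocabulary; the sketch only illustrates the general Linked Data approach the abstract describes.

```python
# A hypothetical software-repository fact expressed as RDF triples.
# Namespace and predicates are invented for illustration, not SeCold's schema.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/secold-sketch/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

commit = EX["commit/abc123"]                       # one version-control fact
g.add((commit, RDF.type, EX.Commit))
g.add((commit, EX.inProject, EX["project/sample-project"]))
g.add((commit, EX.author, Literal("Jane Developer")))
g.add((commit, EX.message, Literal("Fix null-pointer check in parser")))

# Serializing as Turtle makes the fact shareable and linkable with other
# datasets, which is the core of on-the-fly inter-dataset integration.
print(g.serialize(format="turtle"))
```
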
Title: Wikis in scholarly publishing
Author(s): Daniel Mietchen, Gregor Hagedorn, Konrad U. Förstner, M. Fabiana Kubke, Claudia Koltzenburg, Mark Hahnel, Lyubomir Penev
Published in: Information Services and Use
Language: English
Date: 2011
Abstract: Scientific research is a process concerned with the creation, collective accumulation, contextualization, updating and maintenance of knowledge. Wikis provide an environment that allows knowledge to be collectively accumulated, contextualized, updated and maintained in a coherent and transparent fashion. Here, we examine the potential of wikis as platforms for scholarly publishing. In the hope of stimulating further discussion, the article itself was drafted on Species ID (http://species-id.net), a wiki that hosts a prototype for wiki-based scholarly publishing, where it can be updated, expanded or otherwise improved.
R: 0, C: 1

Title: Scholarly knowledge development and dissemination in an international context: Approaches and tools for higher education
Author(s): Willis J., Baron J., Lee R.-A., Gozza-Cohen M., Currie A.
Published in: Computers in the Schools
Language: English
Date: 2010
Abstract: This paper looks at the process of collaboratively creating and disseminating information resources, such as journals, books, papers, and multimedia resources, in higher education. This process has been facilitated and encouraged by two relatively new movements, open source and, especially, open access. The most definitive expression of the principles of open access is the Budapest Open Access Initiative, which calls for the creation of journals that are freely available via the Internet to anyone. The broad principles of open access can be the foundation for creating many types of information resources, from online textbooks to sophisticated instructional videos. What distinguishes such open access resources is that they are distributed without charge to users and that most of the individual and institutional authors give permission for them to be revised, remixed, and reformed by users, who may then distribute the "new" version of the resource. Much of the work on open access information resources is collaborative and involves international teams with diverse experiences and areas of expertise. Such collaboration is not easy, but there is a growing set of electronic tools that support such work. The electronic toolbox for collaboratively creating new information resources includes tools that can serve as "electronic hallways" where potential collaborators can meet and interact informally; gateway Web sites and document repositories that support the exchange of information; Web tools that support groups with special interests; tools for supporting project teams; collaborative writing support systems, including file sharing, document exchange, and version control software; wikis where a team can collaboratively write and revise documents; and project management software. There are also many avenues for disseminating information resources. These include open-access journals and the software packages that support them, such as the Open Journal Systems package from the Public Knowledge Project; preprint and repository archives and the software for creating such archives (e.g., DSpace, Fedora, Joomla, and Drupal); Web resources for indexing and locating relevant information; and international as well as virtual conferences and the software for operating such meetings. This paper explores the different approaches to both creating and disseminating information resources for higher education and evaluates some of the most commonly used software options for supporting these activities.
R: 0, C: 0

Title: China Physiome Project: A comprehensive framework for anatomical and physiological databases from the China digital human and the visible rat
Author(s): Han D., Qiaoling Liu, Luo Q.
Published in: Proceedings of the IEEE
Language: English
Date: 2009
Abstract: The study of connections between biological structure and function, as well as between anatomical data and mechanical or physiological models, has become increasingly significant with the rapid advancement of experimental and computational physiology. The China Physiome Project (CPP) is dedicated to optimizing the exploration of these connections through standardization and integration of structural datasets and their derivatives from cryosectional images, together with various standards, collaboration mechanisms, and online services. The CPP framework incorporates three-dimensional anatomical models of human and rat anatomy, finite-element models of the whole-body human skeleton, and multiparticle radiological dosimetry data for both human and rat computational phantoms. The ontology of CPP was defined using MeSH, with all standardized model descriptions implemented in M3L, an XML-based multiscale modeling language. Services provided, based on the wiki concept, include collaborative research, model version control, data sharing, and online analysis of M3L documents. As a sample case, a multiscale model of the human heart, in which familial hypertrophic cardiomyopathy was studied through structure-function relations from the genetic to the organ level, is integrated into the framework to demonstrate multiscale physiological modeling based on CPP.
R: 0, C: 0
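
The abstract mentions version control of M3L model documents as one of the wiki-style services. The sketch below is a generic content-hash approach to tracking revisions of XML model files; it is not CPP's actual mechanism, and the file names and registry layout are hypothetical, serving only to illustrate the idea of versioning model documents.

```python
# Generic content-based version tracking for XML model documents.
# Not CPP's mechanism; file names and registry layout are hypothetical.
import hashlib
import json
from pathlib import Path


def model_version_id(model_path: Path) -> str:
    """Derive a stable version identifier from the document's content."""
    return hashlib.sha256(model_path.read_bytes()).hexdigest()[:12]


def register_version(model_path: Path, registry_path: Path) -> str:
    """Record the model's current version in a simple JSON registry."""
    registry = {}
    if registry_path.exists():
        registry = json.loads(registry_path.read_text())
    version = model_version_id(model_path)
    registry.setdefault(model_path.name, [])
    if version not in registry[model_path.name]:
        registry[model_path.name].append(version)   # new revision detected
    registry_path.write_text(json.dumps(registry, indent=2))
    return version


if __name__ == "__main__":
    # Hypothetical example: track revisions of a heart model description.
    print(register_version(Path("heart_model.m3l.xml"), Path("registry.json")))
```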