Model-aware Wiki Analysis Tools: the Case of HistoryFlow
Authors: Oscar Díaz, Gorka Puente
Wikis are becoming mainstream, and studies confirm that they are finding their way into organizations. This paper focuses on requirements for analysis tools for corporate wikis. Corporate wikis differ from their grown-up counterparts such as Wikipedia: first, they tend to be much smaller; second, they require analysis to be customized to their own domains. So far, most analysis tools target large wikis, where handling large bulks of data efficiently is paramount. This leads analysis tools to access the wiki database directly, which binds the tool to the wiki engine and hence jeopardizes customizability and interoperability. Corporate wikis, however, are not so big, while customizability is a desirable feature. This change in requirements advocates decoupling analysis tools from the underlying wiki engines. Our approach argues for characterizing analysis tools in terms of their abstract analysis model (e.g. a graph model, a contributor model). How this analysis model is then mapped into wiki-implementation terms is left to the wiki administrator. The administrator, as the domain expert, can better assess the right terms and granularity at which to conduct the analysis. This accounts for suitability and interoperability gains. The approach is borne out for HistoryFlow, an IBM tool for visualizing evolving wiki pages and the interactions of multiple wiki authors.
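To illustrate the decoupling the abstract argues for, the sketch below builds an abstract "contributor model" from engine-agnostic revision records instead of querying the wiki database directly. This is a hypothetical illustration, not the authors' implementation; the function name, record fields, and sample data are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a model-aware analysis step: engine-specific
# revision records are mapped into an abstract contributor model
# (contributor -> set of pages edited), so the analysis tool never
# touches the wiki database schema itself.
from collections import defaultdict

def build_contributor_model(revisions):
    """Map revision records to an abstract contributor model."""
    model = defaultdict(set)
    for rev in revisions:
        # Each record only needs a page title and a contributor name;
        # how these are extracted from a concrete wiki engine is left
        # to the wiki administrator, as the paper proposes.
        model[rev["user"]].add(rev["page"])
    return dict(model)

# Illustrative revision records (field names are assumptions,
# not tied to any particular engine's schema).
revisions = [
    {"page": "Main_Page", "user": "alice"},
    {"page": "Main_Page", "user": "bob"},
    {"page": "Sandbox", "user": "alice"},
]

model = build_contributor_model(revisions)
print(sorted(model["alice"]))  # pages edited by alice
```

A tool such as HistoryFlow could then visualize this model without knowing which wiki engine produced the underlying records, which is the interoperability gain the abstract claims.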
This publication has 8 references. Only those references related to wikis are included here:
- "Analysis of the Wikipedia Category Graph for NLP Applications"
This publication is probably cited by other works, but no citing articles are currently available in WikiPapers.