|Related keyword(s)||Information visualization, statistics|
visualisation is included as keyword or extra keyword in 0 datasets, 21 tools and 94 publications.
There are no datasets for this keyword.
|Tool||Operating System(s)||Language(s)||Programming language(s)||License||Description||Image|
|HistoryFlow||Windows||English||HistoryFlow is a tool for visualizing dynamic, evolving documents and the interactions of multiple collaborating authors. In its current implementation, HistoryFlow is used to visualize the evolutionary history of wiki pages on Wikipedia.|
|StatMediaWiki||GNU/Linux||English||Python||GPLv3||StatMediaWiki is a project that aims to create a tool to collect and aggregate information available in a MediaWiki installation. Results are static HTML pages including tables and graphics that can help to analyze the wiki status and development, or a CSV file for custom processing.|
|Wiki Category Matrix Visualization||Cross-platform||English||Java||Educational Community License||Wiki Category Matrix Visualization is a tool that generates a visual representation of data sizes across topics of a multi-level category hierarchy in matrix form. It provides a "big picture" overview of topics in terms of categorization.|
|Wiki Explorator||Ruby||Wiki Explorator is a Ruby library for scientific research on wikis (and other CMS, with a focus on MediaWiki), providing interactive exploration, statistics and visualization of (network) data.|
|WikiTrip||WikiTrip lets users take a trip through the creation process of any Wikipedia page in any language edition. It is an interactive web tool that empowers its users by providing an insightful visualization of two kinds of information about the Wikipedians who edited the selected page: their location in the world and their gender.|
|WikiEvidens||Cross-platform||English||Python||GPLv3||WikiEvidens is a visualization and statistical tool for wikis.|
|WikiNavMap||WikiNavMap visualises the tickets, wiki pages and milestones in the Trac environment.|
|WikiTracer||English||WikiTracer is a web service providing platform-independent analytics and comparative growth statistics for wikis.|
|WikiVis (FH-KL)||Java||GPL||WikiVis (FH-KL) is a tool to analyze Wikipedia from several aspects. Its main objective is to visualize the conclusions of this examination, which focuses on the editing frequency and relevance of articles and categories as well as the activity of users.|
|WikiVis (UM)||Cross-platform||English||Java||Educational Community License||WikiVis (UM) provides an interactive visualization of the Wikipedia information space, primarily as a means of navigating the category hierarchy as well as the article network. The project is implemented in Java, utilizing the Java 3D package.|
|Wikimedia counter||GPL||Wikimedia counter is a near real-time edit counter for all Wikimedia projects.|
|WikipediaVision||Web||English||WikipediaVision is a web-based tool that shows anonymous edits to Wikipedia (almost) in real-time.|
|Wikistream||English||Wikistream is a web application that shows the stream of changes across Wikimedia projects in real time.|
|Wikiswarm||Cross-platform||English||Java||Wikiswarm generates code_swarm event logs from the Wikipedia API.|
|Wikitweets||Wikitweets is a visualization of how Wikipedia is cited on Twitter.|
|Wmcharts||Wmcharts is a set of graphs about Wikimedia projects.|
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|MIGSOM: A SOM algorithm for large scale hyperlinked documents inspired by neuronal migration||Kotaro Nakayama
|Lecture Notes in Computer Science||English||2014||The SOM (Self Organizing Map), one of the most popular unsupervised machine learning algorithms, maps high-dimensional vectors into low-dimensional data (usually a 2-dimensional map). The SOM is widely known as a "scalable" algorithm because of its capability to handle large numbers of records. However, it is effective only when the vectors are small and dense. Although a number of studies on making the SOM scalable have been conducted, technical issues on scalability and performance for sparse high-dimensional data such as hyperlinked documents still remain. In this paper, we introduce MIGSOM, an SOM algorithm inspired by new discovery on neuronal migration. The two major advantages of MIGSOM are its scalability for sparse high-dimensional data and its clustering visualization functionality. In this paper, we describe the algorithm and implementation in detail, and show the practicality of the algorithm in several experiments. We applied MIGSOM to not only experimental data sets but also a large scale real data set: Wikipedia's hyperlink data.||0||0|
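Several entries on this page (MIGSOM above, and the category-visualization papers below) build on the classic Self-Organizing Map. As a rough illustration of the underlying algorithm only — not the MIGSOM variant, whose neuronal-migration mechanism is specific to the paper — here is a minimal SOM sketch in plain Python, with an invented toy grid size and decay schedule:

```python
import math
import random

def train_som(data, rows=4, cols=4, epochs=60, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organizing Map on a list of equal-length vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Initialize each grid node with a small random weight vector.
    weights = {(r, c): [rng.uniform(-0.1, 0.1) for _ in range(dim)]
               for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)               # learning rate decays linearly
        sigma = sigma0 * (1 - frac) + 0.5   # neighborhood radius shrinks
        for x in data:
            # Best matching unit: the node whose weights are closest to x.
            bmu = min(weights, key=lambda n: sum((w - v) ** 2
                                                 for w, v in zip(weights[n], x)))
            # Pull the BMU and its grid neighbors toward x.
            for (r, c), w in weights.items():
                d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma * sigma))  # Gaussian neighborhood
                weights[(r, c)] = [wi + lr * h * (xi - wi)
                                   for wi, xi in zip(w, x)]
    return weights

def bmu_of(weights, x):
    """Map a vector to its best matching grid node."""
    return min(weights, key=lambda n: sum((w - v) ** 2
                                          for w, v in zip(weights[n], x)))
```

After training, distinct input clusters land on distant grid nodes, which is what makes the SOM usable as a 2-D "map" of high-dimensional documents.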
|Visualizing large-scale human collaboration in Wikipedia||Biuk-Aghai R.P.
|Future Generation Computer Systems||English||2014||Volunteer-driven large-scale human-to-human collaboration has become common in the Web 2.0 era. Wikipedia is one of the foremost examples of such large-scale collaboration, involving millions of authors writing millions of articles on a wide range of subjects. The collaboration on some popular articles numbers hundreds or even thousands of co-authors. We have analyzed the co-authoring across entire Wikipedias in different languages and have found it to follow a geometric distribution in all the language editions we studied. In order to better understand the distribution of co-author counts across different topics, we have aggregated content by category and visualized it in a form resembling a geographic map. The visualizations produced show that there are significant differences of co-author counts across different topics in all the Wikipedia language editions we visualized. In this article we describe our analysis and visualization method and present the results of applying our method to the English, German, Chinese, Swedish and Danish Wikipedias. We have evaluated our visualization against textual data and found it to be superior in usability, accuracy, speed and user preference. © 2013 Elsevier B.V. All rights reserved.||0||0|
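The geometric distribution of co-author counts reported above can be checked on one's own wiki data with a one-line maximum-likelihood fit: for a geometric distribution on {1, 2, 3, ...}, the MLE is p = 1/mean. A small illustrative sketch with invented counts, not the authors' code:

```python
def fit_geometric(counts):
    """MLE of p for a geometric distribution on {1, 2, 3, ...}: p = 1 / mean."""
    mean = sum(counts) / len(counts)
    return 1.0 / mean

def geometric_pmf(k, p):
    """P(K = k) = (1 - p)^(k - 1) * p for k = 1, 2, 3, ..."""
    return (1.0 - p) ** (k - 1) * p

# Toy co-author counts per article (hypothetical data).
counts = [1, 1, 1, 2, 2, 3]
p_hat = fit_geometric(counts)
```

Comparing the empirical frequency of each count against `geometric_pmf(k, p_hat)` is a quick sanity check of how well the geometric model fits.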
|WikiReviz: An edit history visualization for wiki systems||Wu J.
|Lecture Notes in Computer Science||English||2014||Wikipedia maintains a linear record of edit history with article content and meta-information for each article, which conceals precious information on how each article has evolved. This demo describes the motivation and features of WikiReviz, a visualization system for analyzing edit history in Wikipedia and other wiki systems. From the official exported edit history of a single Wikipedia article, WikiReviz reconstructs the derivation relationships among revisions precisely and efficiently by revision graph extraction, and indicates meaningful article evolution progress by edit summarization.||0||0|
|Art History on Wikipedia, a Macroscopic Observation||Doron Goldfarb
|ArXiv||English||20 April 2013||How are articles about art historical actors interlinked within Wikipedia? Led by this question, we seek an overview on the link structure of a domain-specific subset of Wikipedia articles. We use an established domain-specific person name authority, the Getty Union List of Artist Names (ULAN), in order to externally identify relevant actors. Besides containing consistent biographical person data, this database also provides associative relationships between its person records, serving as a reference link structure for comparison. As a first step, we use mappings between the ULAN and English DBpedia provided by the Virtual International Authority File (VIAF). This way, we are able to identify 18,002 relevant person articles. Examining the link structure between these resources reveals interesting insight about the high level structure of art historical knowledge as it is represented on Wikipedia.||4||1|
|3D Wikipedia: Using online text to automatically label and navigate reconstructed geometry||Russell B.C.
|ACM Transactions on Graphics||English||2013||We introduce an approach for analyzing Wikipedia and other text, together with online photos, to produce annotated 3D models of famous tourist sites. The approach is completely automated, and leverages online text and photo co-occurrences via Google Image Search. It enables a number of new interactions, which we demonstrate in a new 3D visualization tool. Text can be selected to move the camera to the corresponding objects, 3D bounding boxes provide anchors back to the text describing them, and the overall narrative of the text provides a temporal guide for automatically flying through the scene to visualize the world as you read about it. We show compelling results on several major tourist sites.||0||0|
|A novel map-based visualization method based on liquid modelling||Biuk-Aghai R.P.
|ACM International Conference Proceeding Series||English||2013||Many applications produce large amounts of data, and information visualization has been successfully applied to help make sense of this data. Recently geographic maps have been used as a metaphor for visualization, given that most people are familiar with reading maps, and several visualization methods based on this metaphor have been developed. In this paper we present a new visualization method that aims to improve on existing map-like visualizations. It is based on the metaphor of liquids poured onto a surface that expand outwards until they touch each other, forming larger areas. We present the design of our visualization method and an evaluation we have carried out to compare it with an existing visualization. Our new visualization has better usability, leading to higher accuracy and greater speed of task performance.||0||0|
|Analyzing multi-dimensional networks within mediawikis||Brian C. Keegan
|Proceedings of the 9th International Symposium on Open Collaboration, WikiSym + OpenSym 2013||English||2013||The MediaWiki platform supports popular socio-technical systems such as Wikipedia as well as thousands of other wikis. This software encodes and records a variety of relationships about the content, history, and editors of its articles such as hyperlinks between articles, discussions among editors, and editing histories. These relationships can be analyzed using standard techniques from social network analysis; however, extracting relational data from Wikipedia has traditionally required specialized knowledge of its API, information retrieval, network analysis, and data visualization that has inhibited scholarly analysis. We present a software library called the NodeXL MediaWiki Importer that extracts a variety of relationships from the MediaWiki API and integrates with the popular NodeXL network analysis and visualization software. This library allows users to query and extract a variety of multidimensional relationships from any MediaWiki installation with a publicly-accessible API. We present a case study examining the similarities and differences between different relationships for the Wikipedia articles about "Pope Francis" and "Social media." We conclude by discussing the implications this library has for both theoretical and methodological research as well as community management, and outline future work to expand the capabilities of the library.||0||0|
|Making sense of open data statistics with information from Wikipedia||Hienert D.
|Lecture Notes in Computer Science||English||2013||Today, more and more open data statistics are published by governments, statistical offices and organizations like the United Nations, The World Bank or Eurostat. This data is freely available and can be consumed by end users in interactive visualizations. However, additional information is needed to enable laymen to interpret these statistics in order to make sense of the raw data. In this paper, we present an approach to combine open data statistics with historical events. In a user interface we have integrated interactive visualizations of open data statistics with a timeline of thematically appropriate historical events from Wikipedia. This can help users to explore statistical data in several views and to get related events for certain trends in the timeline. Events include links to Wikipedia articles, where details can be found and the search process can be continued. We have conducted a user study to evaluate if users can use the interface intuitively, if relations between trends in statistics and historical events can be found and if users like this approach for their exploration process.||0||0|
|Visitpedia: Wiki article visit log visualization for event exploration||Sun Y.
|Proceedings - 13th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2013||English||2013||This paper proposes an interactive visualization tool, Visitpedia, to detect and analyze social events based on Wikipedia visit history. It helps users discover real-world events behind the data and study how these events evolve over time. Different from previous work based on on-line news or similar text corpora, we choose Wikipedia visit counts as our data source since the visit count data better reflect user concerns of social events. We tackle the event-based task from a time-series pattern perspective rather than semantic perspective. Various visualization and user interaction techniques are integrated in Visitpedia. Two case studies are conducted to demonstrate the effectiveness of Visitpedia.||0||0|
|Building for social translucence: A domain analysis and prototype system||David W. McDonald
|English||2012||The relationships and work that facilitate content creation in large online contributor systems are not always visible. Social translucence is a stance toward the design of systems that allows users to better understand collaborative system participation through awareness of contributions and interactions. Like many socio-technical constructs, social translucence is not something that can be simply added after a system is built; it should be at the core of system design. In this paper, we conduct a domain analysis to understand the space of architectural support required to facilitate social translucence in systems. We describe an instantiation of those requirements as a system architecture that relies on data from Wikipedia and illustrate how translucence can be propagated to some basic visualizations which we have created for Wikipedia users. We close with some reflections on the state of social translucence research and some openings for this important design perspective.||0||0|
|Event-centric search and exploration in document collections||Strotgen J.
|Proceedings of the ACM/IEEE Joint Conference on Digital Libraries||English||2012||Textual data ranging from corpora of digitized historic documents to large collections of news feeds provide a rich source for temporal and geographic information. Such types of information have recently gained a lot of interest in support of different search and exploration tasks, e.g., by organizing news along a timeline or placing the origin of documents on a map. However, for this, temporal and geographic information embedded in documents is often considered in isolation. We claim that through combining such information into (chronologically ordered) event-like features interesting and meaningful search and exploration tasks are possible. In this paper, we present a framework for the extraction, exploration, and visualization of event information in document collections. For this, one has to identify and combine temporal and geographic expressions from documents, thus enriching a document collection by a set of normalized events. Traditional search queries then can be enriched by conditions on the events relevant to the search subject. Most important for our event-centric approach is that a search result consists of a sequence of events relevant to the search terms and not just a document hit-list. Such events can originate from different documents and can be further explored, in particular events relevant to a search query can be ordered chronologically. We demonstrate the utility of our framework by different (multilingual) search and exploration scenarios using a Wikipedia corpus.||0||0|
|Exploration and visualization of administrator network in wikipedia||Yousaf J.
|Lecture Notes in Computer Science||English||2012||Wikipedia has become one of the most widely used knowledge systems on the Web. It contains resources and information of varying quality contributed by different sets of authors. A special group of authors named administrators plays an important role for content quality in Wikipedia. Understanding the behaviors of administrators in Wikipedia can facilitate the management of the Wikipedia system, and empower applications such as article recommendation and finding expert administrators for given articles. This paper addresses the exploration and visualization of the administrator network in Wikipedia. The administrator network is first constructed using co-editing relationships, and six characteristics are proposed to describe the behaviors of administrators in Wikipedia from different perspectives. Quantified calculation of these characteristics is then put forward using social network analysis techniques. A topic model is used to relate the content of Wikipedia to the interest diversity of administrators. Based on the MediaWiki history records from January 2010 to January 2011, we develop an administrator exploration prototype system which can rank the selected characteristics for administrators and can be used as a decision support system. Furthermore, some meaningful observations show that the administrator network is a healthy small-world community, with a strong centralization of the network around some hubs/stars, indicating a considerable nucleus of very active administrators that seems to be omnipresent. The ranking of these top administrators is found to be consistent with the number of barnstars awarded to them.||0||0|
|Feeling the pulse of a wiki: Visualization of recent changes in Wikipedia||Biuk-Aghai R.P.
|ACM International Conference Proceeding Series||English||2012||Large wikis such as Wikipedia attract large numbers of editors continuously editing content. It is difficult to observe what editing activity goes on at any given moment, what editing patterns can be observed, and which are the currently active editors and articles. We introduce the design and implementation of an information visualization tool for streaming data on recent changes in wikis that aims to address this difficulty, show examples of our visualizations from English Wikipedia, and present several patterns of editing activity that we can visually identify using our tool.||0||0|
|Learning from history: Predicting reverted work at the word level in wikipedia||Jeffrey Rzeszotarski
|English||2012||Wikipedia's remarkable success in aggregating millions of contributions can pose a challenge for current editors, whose hard work may be reverted unless they understand and follow established norms, policies, and decisions and avoid contentious or proscribed terms. We present a machine learning model for predicting whether a contribution will be reverted based on word level features. Unlike previous models relying on editor-level characteristics, our model can make accurate predictions based only on the words a contribution changes. A key advantage of the model is that it can provide feedback on not only whether a contribution is likely to be rejected, but also the particular words that are likely to be controversial, enabling new forms of intelligent interfaces and visualizations. We examine the performance of the model across a variety of Wikipedia articles.||0||0|
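The word-level revert-prediction model described above is not reproduced here. As a loose illustration of the general idea — predicting whether an edit will be reverted from bag-of-words features — the following is a minimal Naive Bayes classifier in plain Python; the training examples are invented, and this is a generic substitute for, not a reimplementation of, the paper's model:

```python
import math
from collections import Counter

def train_nb(examples):
    """examples: list of (tokens, reverted_bool) pairs. Returns a model dict."""
    word_counts = {True: Counter(), False: Counter()}
    class_counts = Counter()
    for tokens, label in examples:
        class_counts[label] += 1
        word_counts[label].update(tokens)
    vocab = set(word_counts[True]) | set(word_counts[False])
    return {"wc": word_counts, "cc": class_counts, "vocab": vocab}

def predict_reverted(model, tokens):
    """True if the edit's words make a revert more likely (Laplace smoothing)."""
    total = sum(model["cc"].values())
    v = len(model["vocab"])
    scores = {}
    for label in (True, False):
        score = math.log(model["cc"][label] / total)       # class prior
        denom = sum(model["wc"][label].values()) + v       # smoothed total
        for tok in tokens:
            score += math.log((model["wc"][label][tok] + 1) / denom)
        scores[label] = score
    return scores[True] > scores[False]
```

A per-word score (the log-likelihood ratio of each token under the two classes) would give the kind of word-level feedback the paper argues enables new interfaces and visualizations.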
|LensingWikipedia: Parsing text for the interactive visualization of human history||Vadlapudi R.
|IEEE Conference on Visual Analytics Science and Technology 2012, VAST 2012 - Proceedings||English||2012||Extracting information from text is challenging. Most current practices treat text as a bag of words or word clusters, ignoring valuable linguistic information. Leveraging this linguistic information, we propose a novel approach to visualize textual information. The novelty lies in using state-of-the-art Natural Language Processing (NLP) tools to automatically annotate text which provides a basis for new and powerful interactive visualizations. Using NLP tools, we built a web-based interactive visual browser for human history articles from Wikipedia.||0||0|
|Pattern for python||De Smedt T.
|Journal of Machine Learning Research||English||2012||Pattern is a package for Python 2.4+ with functionality for web mining (Google + Twitter + Wikipedia, web spider, HTML DOM parser), natural language processing (tagger/chunker, n-gram search, sentiment analysis, WordNet), machine learning (vector space model, k-means clustering, Naive Bayes + k-NN + SVM classifiers) and network analysis (graph centrality and visualization). It is well documented and bundled with 30+ examples and 350+ unit tests. The source code is licensed under BSD and available from http://www.clips.ua.ac.be/pages/pattern.||0||0|
|Publishing statistical data on the web||Salas P.E.R.
|Proceedings - IEEE 6th International Conference on Semantic Computing, ICSC 2012||English||2012||Statistical data is one of the most important sources of information, relevant for large numbers of stakeholders in the governmental, scientific and business domains alike. In this article, we overview how statistical data can be managed on the Web. With OLAP2 Data Cube and CSV2 Data Cube we present two complementary approaches on how to extract and publish statistical data. We also discuss the linking, repair and the visualization of statistical data. As a comprehensive use case, we report on the extraction and publishing on the Web of statistical data describing 10 years of life in Brazil.||0||0|
|Self organizing maps for visualization of categories||Szymanski J.
|Lecture Notes in Computer Science||English||2012||Visualization of Wikipedia categories using Self Organizing Maps shows an overview of categories and their relations, helping to narrow down search domains. By selecting particular neurons, this approach enables retrieval of conceptually similar categories. Evaluation of neural activations indicates that they form coherent patterns that may be useful for building user interfaces for navigation over category structures.||0||0|
|Self-organization with additional learning based on category mapping and its application to dynamic news clustering||Toyota T.
|IEEJ Transactions on Electronics, Information and Systems||Japanese; English||2012||Internet news consists of texts from various fields; when new text data is added, the number of dimensions of the feature vectors of a Self-Organizing Map (SOM) increases rapidly, and the results cannot be reflected in learning. Furthermore, it is difficult for users to interpret the learning results, because the SOM cannot produce label information for each cluster. To solve these problems, we propose a SOM with additional learning and dimensionality reduction by category mapping, based on the category structure of Wikipedia. In this method, an input vector is generated from each text and the corresponding Wikipedia categories extracted from Wikipedia articles. Input vectors are formed in the common category, taking the hierarchical structure of the Wikipedia category system into consideration. The proposed method solves the problem of reconfiguration of vector elements caused by dynamic changes in the text, and prevents information loss in newly obtained index terms.||0||0|
|ViDaX: An interactive semantic data visualisation and exploration tool||Dumas B.
|Proceedings of the Workshop on Advanced Visual Interfaces AVI||English||2012||We present the Visual Data Explorer (ViDaX), a tool for visualising and exploring large RDF data sets. ViDaX enables the extraction of information from RDF data sources and offers functionality for the analysis of various data characteristics as well as the exploration of the corresponding ontology graph structure. In addition to some basic data mining features, our interactive semantic data visualisation and exploration tool offers various types of visualisations based on the type of data. In contrast to existing semantic data visualisation solutions, ViDaX also offers non-expert users the possibility to explore semantic data based on powerful automatic visualisation and interaction techniques without the need for any low-level programming. To illustrate some of ViDaX's functionality, we present a use case based on semantic data retrieved from DBpedia, a semantic version of the well-known Wikipedia online encyclopedia, which forms a major component of the emerging linked data initiative.||0||0|
|WikiTrip: Animated visualization over time of geo-location and gender of Wikipedians who edited a page||Paolo Massa
|WikiSym 2012||English||2012||In this short paper, we present WikiTrip, a web tool we created and released as open source which provides a visualization over time of two kinds of information about the Wikipedians who edited a selected page: their location in the world and their gender. We also describe evidence that pages on a language edition of Wikipedia which receive most attention in terms of edits from countries where the language is not primarily spoken are about TV shows and stars, football teams or specific geographic locations.||0||0|
|A self organizing document map algorithm for large scale hyperlinked data inspired by neuronal migration||Kotaro Nakayama
|Proceedings of the 20th International Conference Companion on World Wide Web, WWW 2011||English||2011||Web document clustering is one of the research topics that is being pursued continuously due to the large variety of applications. Since Web documents usually have variety and diversity in terms of domains, content and quality, one of the technical difficulties is to find a reasonable number and size of clusters. In this research, we pay attention to SOMs (Self Organizing Maps) because of their capability of visualized clustering that helps users to investigate characteristics of data in detail. The SOM is widely known as a "scalable" algorithm because of its capability to handle large numbers of records. However, it is effective only when the vectors are small and dense. Although several research efforts on making the SOM scalable have been conducted, technical issues on scalability and performance for sparse high-dimensional data such as hyperlinked documents still remain. In this paper, we introduce MIGSOM, an SOM algorithm inspired by a recent discovery on neuronal migration. The two major advantages of MIGSOM are its scalability for sparse high-dimensional data and its clustering visualization functionality. In this paper, we describe the algorithm and implementation, and show the practicality of the algorithm by applying MIGSOM to a huge scale real data set: Wikipedia's hyperlink data.||0||0|
|CATE: Context-aware timeline for entity illustration||Tuan T.A.
|Proceedings of the 20th International Conference Companion on World Wide Web, WWW 2011||English||2011||Wikipedia has become one of the most authoritative information sources on the Web. Each article in Wikipedia provides a portrait of a certain entity. However, such a portrait is far from complete. An informative portrait of an entity should also reveal the context the entity belongs to. For example, for a person, major historical, political and cultural events that coincide with her life are important and should be included in that person's portrait. Similarly, the person's interactions with other people are also important. All this information should be summarized and presented in an appealing and interactive visual interface that enables users to quickly scan the entity's portrait. We demonstrate CATE which is a system that utilizes Wikipedia to create a portrait of a given entity of interest. We provide a visualization tool that summarizes the important events related to the entity. The novelty of our approach lies in seeing the portrait of an entity in a broader context, synchronous with its time.||0||0|
|Innovation management in enterprises: Collaborative trend analysis using web 2.0 technologies||Kaiser I.
|Proceedings of the IADIS International Conferences - Web Based Communities and Social Media 2011, Social Media 2011, Internet Applications and Research 2011, Part of the IADIS, MCCSIS 2011||English||2011||Through early trend recognition in the business environment and its specific processing within innovation management, companies can achieve long-term market success. A particular challenge is the systematic identification, gathering, structuring and evaluation of trends. Web 2.0 technologies, and especially wikis, which allow several people to maintain and use content simultaneously, are eminently suitable for an efficient process of continuous collection and analysis of relevant market trends. In this paper, trend management processes are introduced and it is demonstrated how trends can be collected, structured and communicated within the enterprise using a customized wiki. The trend assessment draws, inter alia, on crowdsourcing methods, resulting in an extensive evaluation basis. In addition, the presented approach includes a visualization of the trends and their assessment for decision support. A case study of the global polymer solutions supplier REHAU AG demonstrates the use of the methodology in practice.||0||0|
|Language resources extracted from Wikipedia||Vrandecic D.
|KCAP 2011 - Proceedings of the 2011 Knowledge Capture Conference||English||2011||Wikipedia provides a substantial amount of text for more than a hundred languages. This includes languages for which no reference corpora or other linguistic resources are easily available. We have extracted background language models built from the content of Wikipedia in various languages. The models generated from Simple and English Wikipedia are compared to language models derived from other established corpora. The differences between the models in regard to term coverage, term distribution and correlation are described and discussed. We provide access to the full dataset and create visualizations of the language models that can be used for exploratory analysis. The paper describes the newly released dataset for 33 languages, and the services that we provide on top of them.||0||0|
|Learning-Oriented Assessment of Wiki Contributions: How to Assess Wiki Contributions in a Higher Education Learning Setting||Emilio J. Rodríguez-Posada, Juan Manuel Dodero-Beardo
|International Conference on Computer Supported Education||English||2011||Computer-Supported Collaborative Learning based on wikis offers new ways of collaboration and encourages participation. When the number of contributions from students increases, traditional assessment procedures of e-learning settings suffer from scalability problems. In a wiki-based learning experience, some automatic tools are required to support the assessment of such great amounts of data. We have studied readily available analysis tools for the MediaWiki platform, that have complementary input, work modes and output. We comment our experience in two Higher Education courses, one using HistoryFlow and another using StatMediaWiki, and discuss the advantages and drawbacks of each system.||0||0|
|Map-like Wikipedia overview visualization||Pang C.-I.
|Proceedings of the 2011 International Conference on Collaboration Technologies and Systems, CTS 2011||English||2011||Wikis, such as Wikipedia, have become increasingly popular in recent years. They allow anyone to easily contribute to collaboratively written content. To better organize content, users in Wikipedia assign categories to articles, or create new categories if needed. The resulting semantic coverage of a wiki's articles over its categories is worth studying but not easy to obtain. To provide a better understanding, we created an approach to visualize an entire wiki by creating a graphical representation that is similar to a geographical map. This enables even untrained users, as well as people outside the field of computer science, to obtain an easily understandable overview of a wiki.||0||0|
|Maximum covariance unfolding: Manifold learning for bimodal data||Mahadevan V.
|Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011||English||2011||We propose maximum covariance unfolding (MCU), a manifold learning algorithm for simultaneous dimensionality reduction of data from different input modalities. Given high dimensional inputs from two different but naturally aligned sources, MCU computes a common low dimensional embedding that maximizes the cross-modal (inter-source) correlations while preserving the local (intra-source) distances. In this paper, we explore two applications of MCU. First we use MCU to analyze EEG-fMRI data, where an important goal is to visualize the fMRI voxels that are most strongly correlated with changes in EEG traces. To perform this visualization, we augment MCU with an additional step for metric learning in the high dimensional voxel space. Second, we use MCU to perform cross-modal retrieval of matched image and text samples from Wikipedia. To manage large applications of MCU, we develop a fast implementation based on ideas from spectral graph theory. These ideas transform the original problem for MCU, one of semidefinite programming, into a simpler problem in semidefinite quadratic linear programming.||0||0|
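The cross-modal correlation idea at the heart of MCU can be illustrated, in heavily simplified form, with classical canonical correlation analysis (CCA). The numpy sketch below is a toy stand-in, not the semidefinite-programming formulation the paper describes (CCA maximizes linear cross-modal correlation but does not preserve local intra-source distances); all data and names here are invented for illustration.

```python
import numpy as np

# Two aligned "modalities" generated from a shared 2-d latent signal,
# standing in for naturally aligned sources such as EEG and fMRI.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))                      # shared latent factors
X = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))

def cca_correlations(X, Y, k=2):
    """Top-k canonical correlations between row-aligned samples X and Y."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Orthonormal bases of each view's column space, then the SVD of
    # their cross-product gives the canonical correlations.
    Ux, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Yc, full_matrices=False)
    corr = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return corr[:k]

print(cca_correlations(X, Y))   # two correlations near 1 (shared latent)
```

Because both views are driven by the same two latent factors with small noise, the top two canonical correlations come out close to 1, while MCU would additionally constrain the embedding to keep each source's local geometry.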
|Mobile topigraphy: Large-scale tag cloud visualization for mobiles||Matsubayashi T.
|Proceedings of the 20th International Conference Companion on World Wide Web, WWW 2011||English||2011||We introduce a new mobile topigraphy system that uses the contour map metaphor to display large-scale tag clouds. We introduce the technical issues for topigraphy, and recent requirements for and developments in mobile interfaces. We also present some applications for our mobile topigraphy system and describe the assessment on two initial applications.||0||0|
|VisualWikiCurator: A corporate wiki plugin||Nicholas Kong
|Conference on Human Factors in Computing Systems - Proceedings||English||2011||Knowledge workers who maintain corporate wikis face high costs for organizing and updating content on wikis. This problem leads to low adoption rates and compromises the utility of such tools in organizations. We describe a system that seeks to reduce the interactions costs of updating and organizing wiki pages by combining human and machine intelligence. We then present preliminary results of an ongoing lab-based evaluation of the tool with knowledge workers.||0||0|
|Visualizing revisions and building semantic network in Wikipedia||Hao L.
|Proceedings - 2011 International Conference on Cloud and Service Computing, CSC 2011||English||2011||Wikipedia, one of the largest online encyclopedias, is comparable to Britannica. Articles are subject to day-to-day changes by authors, and each such change is recorded as a new revision. In this paper, we visualize an article's revisions and build the semantic network between articles. First, we analyze the differences between an article's revisions and use color to show the changes. Second, using the articles' category information, we construct a semantic network of the relationships between articles.||0||0|
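The first step the abstract describes, computing revision differences and classifying each fragment so a front end can colour it, can be sketched with the standard library's difflib. This is a minimal illustration of the general technique, not the paper's implementation.

```python
import difflib

def revision_diff(old: str, new: str):
    """Yield (tag, text) pairs over word tokens, where tag is one of
    'equal', 'insert', 'delete', 'replace' - the change class a UI
    could map to a colour."""
    a, b = old.split(), new.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        # For deletions the text only exists in the old revision.
        yield tag, " ".join(a[i1:i2] if tag == "delete" else b[j1:j2])

old = "Wikipedia is a free encyclopedia edited by volunteers"
new = "Wikipedia is a free online encyclopedia written by volunteers"
for tag, text in revision_diff(old, new):
    print(f"{tag:8s} {text}")
```

Concatenating every non-delete fragment reconstructs the new revision, which is the invariant a colour-coded revision view relies on.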
|WikiDev 2.0: Facilitating software development teams||Fokaefs M.
|Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR||English||2011||Software development is fundamentally a collaborative task. Developers, sometimes geographically distributed, collectively work on different parts of a project. The challenge of ensuring that their contributions consistently build on one another is a major concern for collaborative development and implies concerns with effective communication, task administration and exchange of documents and information concerning the project. In this demo, we present WikiDev 2.0, a lightweight wiki-based tool suite that enhances collaboration within software development teams. WikiDev 2.0 integrates information from multiple development tools and displays the results through its wikibased front-end. The tool also offers several analysis techniques and visualizations that improve the project-status awareness of the team.||0||0|
|Wikipedia world map: Method and application of map-like wiki visualization||Pang C.-L.
|WikiSym 2011 Conference Proceedings - 7th Annual International Symposium on Wikis and Open Collaboration||English||2011||Wikis are popular platforms for collaborative editing. In volunteer-driven wikis such as Wikipedia, which attracts millions of authors editing articles on a diverse range of topics, contributors' editing activity results in certain semantic coverage of topic areas. Obtaining an understanding of a given wiki's semantic coverage is not easy. To solve this problem, we have devised a method for visualizing a wiki in a way similar to a geographic map. We have applied our method to Wikipedia, and generated visualizations for several Wikipedia language editions. This paper presents our wiki visualization method and its application.||0||0|
|A machine learning approach to link prediction for interlinked documents||Kc M.
|Lecture Notes in Computer Science||English||2010||This paper provides an explanation of how a recently developed machine learning approach, namely the Probability Measure Graph Self-Organizing Map (PM-GraphSOM), can be used for the generation of links between referenced or otherwise interlinked documents. This new generation of SOM models is capable of projecting generic graph structured data onto a fixed sized display space. Such a mechanism is normally used for dimension reduction, visualization, or clustering purposes. This paper shows that the PM-GraphSOM training algorithm "inadvertently" encodes relations that exist between the atomic elements in a graph. If the nodes in the graph represent documents, and the links in the graph represent the reference (or hyperlink) structure of the documents, then it is possible to obtain a set of links for a test document whose link structure is unknown. A significant finding of this paper is that the described approach is scalable in that links can be extracted in linear time. It will also be shown that the proposed approach is capable of predicting the pages which would be linked to a new document, and is capable of predicting the links to other documents from a given test document. The approach is applied to web pages from Wikipedia, a relatively large XML text database consisting of many referenced documents.||0||0|
|Algorithm Visualization: The state of the field||Shaffer C.A.
|ACM Transactions on Computing Education||English||2010||We present findings regarding the state of the field of Algorithm Visualization (AV) based on our analysis of a collection of over 500 AVs. We examine how AVs are distributed among topics, who created them and when, their overall quality, and how they are disseminated. There does exist a cadre of good AVs and active developers. Unfortunately, we found that many AVs are of low quality, and coverage is skewed toward a few easier topics. This can make it hard for instructors to locate what they need. There are no effective repositories of AVs currently available, which puts many AVs at risk for being lost to the community over time. Thus, the field appears in need of improvement in disseminating materials, propagating known best practices, and informing developers about topic coverage. These concerns could be mitigated by building community and improving communication among AV users and developers.||0||0|
|Cognitive abilities and the measurement of world wide web usability||Campbell S.G.
|Proceedings of the Human Factors and Ergonomics Society||English||2010||Usability of an interface is an emergent property of the system and the user; it does not exist independently of either one. For this reason, characteristics of the user which affect his or her performance on a task can affect the apparent usability of the interface in a usability study. We propose and investigate, using a Wikipedia information-seeking task, a model relating spatial abilities and performance measures for system usability. In the context of World Wide Web (WWW) site usability, we found that spatial visualization ability and system experience predicted system effectiveness measures, while spatial orientation ability, spatial visualization ability, and general computer experience predicted system efficiency measures. We suggest possible extensions and further tests of this model. Copyright 2010 by Human Factors and Ergonomics Society, Inc. All rights reserved.||0||0|
|Collaborative educational geoanalytics applied to large statistics temporal data||Jern M.||CSEDU 2010 - 2nd International Conference on Computer Supported Education, Proceedings||English||2010||Recent advances in Web 2.0 graphics technologies have the potential to make a dramatic impact on developing collaborative geovisual analytics that analyse, visualize, communicate and present official statistics. In this paper, we introduce novel "storytelling" means for the experts to first explore large, temporal and multidimensional statistical data, then collaborate with colleagues and finally embed dynamic visualization into Web documents e.g. HTML, Blogs or MediaWiki to communicate essential gained insight and knowledge. The aim is to let the analyst (author) explore data and simultaneously save important discoveries and thus enable sharing of gained insights over the Internet. Through the story mechanism facilitating descriptive metatext, textual annotations hyperlinked through the snapshot mechanism and integrated with interactive visualization, the author can let the reader follow the analyst's way of logical reasoning. This emerging technology could in many ways change the terms and structures for learning.||0||0|
|Deep Diffs: Visually exploring the history of a document||Shannon R.
|Proceedings of the Workshop on Advanced Visual Interfaces AVI||English||2010||Software tools are used to compare multiple versions of a textual document to help a reader understand the evolution of that document over time. These tools generally support the comparison of only two versions of a document, requiring multiple comparisons to be made to derive a full history of the document across multiple versions. We present Deep Diffs, a novel visualisation technique that exposes the multiple layers of history of a document at once, directly in the text, highlighting areas that have changed over multiple successive versions, and drawing attention to passages that are new, potentially unpolished or contentious. These composite views facilitate the writing and editing process by assisting memory and encouraging the analysis of collaboratively-authored documents. We describe how this technique effectively supports common text editing tasks and heightens participants' understanding of the process in collaborative editing scenarios like wiki editing and paper writing.||0||0|
|Efficient visualization of content and contextual information of an online multimedia digital library for effective browsing||Mishra S.
|Proceedings - 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT 2010||English||2010||In this paper, we present a few innovative techniques for visualization of content and contextual information of a multimedia digital library for effective browsing. A traditional collection visualization portal often depicts some metadata or a short synopsis, which is quite inadequate for assessing the documents. We have designed a novel web portal that incorporates a few preview facilities to disclose an abstract of the contents. Moreover, we place the documents on Google Maps to make their geographical context explicit. A semantic network, created automatically around the collection, brings out other contextual information from external knowledge resources like Wikipedia, which is used for navigating the collection. This paper also reports economical hosting techniques using Amazon Cloud.||0||0|
|IChase: Supporting exploration and awareness of editing activities on Wikipedia||Riche N.H.
|Proceedings of the Workshop on Advanced Visual Interfaces AVI||English||2010||To increase its credibility and preserve the trust of its readers, Wikipedia needs to ensure a good quality of its articles. To that end, it is critical for Wikipedia administrators to be aware of contributors' editing activity to monitor vandalism, encourage reliable contributors to work on specific articles, or find mentors for new contributors. In this paper, we present iChase, a novel interactive visualization tool to provide administrators with better awareness of editing activities on Wikipedia. Unlike the currently used visualizations that provide only page-centric information, iChase visualizes the trend of activities for two entity types, articles and contributors. iChase is based on two heatmaps (one for each entity type) synchronized to one timeline. It allows users to interactively explore the history of changes by drilling down into specific articles, contributors, or time points to access the details of the changes. We also present a case study to illustrate how iChase can be used to monitor editing activities of Wikipedia authors, as well as a usability study. We conclude by discussing the strengths and weaknesses of iChase. Copyright 2010 ACM.||0||0|
|Interactive visualization and navigation of web search results revealing community structures and bridges||Sallaberry A.
|Proceedings - Graphics Interface||English||2010||With the information overload on the Internet, organization and visualization of web search results so as to facilitate faster access to information is a necessity. The classical methods present search results as an ordered list of web pages ranked in terms of relevance to the searched topic. Users thus have to scan text snippets or navigate through various pages before finding the required information. In this paper we present an interactive visualization system for content analysis of web search results. The system combines a number of algorithms to present a novel layout methodology which helps users to analyze and navigate through a collection of web pages. We have tested this system with a number of data sets and have found it very useful for the exploration of data. Different case studies are presented based on searching different topics on Wikipedia through Exalead's search engine.||0||0|
|Model-aware wiki analysis tools: The case of HistoryFlow||Diaz O.
|WikiSym 2010||English||2010||Wikis are becoming mainstream. Studies confirm how wikis are finding their way into organizations. This paper focuses on requirements for analysis tools for corporate wikis. Corporate wikis differ from their grown-up counterparts such as Wikipedia. First, they tend to be much smaller. Second, they require analysis to be customized for their own domains. So far, most analysis tools focus on large wikis where handling large bulks of data efficiently is paramount. This tends to make analysis tools access the wiki database directly. This binds the tool to the wiki engine, hence jeopardizing customizability and interoperability. However, corporate wikis are not so big, while customizability is a desirable feature. This change in requirements advocates for analysis tools to be decoupled from the underlying wiki engines. Our approach argues for characterizing analysis tools in terms of their abstract analysis model (e.g. a graph model, a contributor model). How this analysis model is then mapped into wiki-implementation terms is left to the wiki administrator. The administrator, as the domain expert, can better assess the right terms and granularity at which to conduct the analysis. This accounts for suitability and interoperability gains. The approach is borne out for HistoryFlow, an IBM tool for visualizing evolving wiki pages and the interactions of multiple wiki authors.||0||0|
|Pixel-oriented visualization of change in social networks||Klaus Stein
|Proceedings - 2010 International Conference on Advances in Social Network Analysis and Mining, ASONAM 2010||English||2010||We propose a new approach to visualize social networks. Most common network visualizations rely on graph drawing. While without doubt useful, graphs suffer from limitations like cluttering and important patterns may not be realized especially when networks change over time. Our approach adapts pixel-oriented visualization techniques to social networks as an addition to traditional graph visualizations. The visualization is exemplified using social networks based on corporate wikis.||0||0|
|Real-time aggregation of wikipedia data for visual analytics||Boukhelifa N.
|VAST 10 - IEEE Conference on Visual Analytics Science and Technology 2010, Proceedings||English||2010||Wikipedia has been built to gather encyclopedic knowledge using a collaborative social process that has proved its effectiveness. However, the workload required for raising the quality and increasing the coverage of Wikipedia is exhausting the community. Based on several participatory design sessions with active Wikipedia contributors (a.k.a. Wikipedians), we have collected a set of measures related to Wikipedia activity that, if available and visualized effectively, could spare these Wikipedians a great deal of monitoring time, allowing them to focus on the quality and coverage of Wikipedia instead of spending their time navigating heavily to track vandals and copyright infringements. However, most of these measures cannot be computed on the fly using the available Wikipedia API. Therefore, we have designed an open architecture called WikiReactive to compute incrementally and maintain several aggregated measures on the French Wikipedia. This aggregated data is available as a Web Service and can be used to overlay information on Wikipedia articles through Wikipedia skins, or for new services for Wikipedians or people studying Wikipedia. This article describes the architecture, its performance and some of its uses.||0||0|
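The incremental-aggregation idea behind an architecture like WikiReactive can be sketched as follows: instead of recomputing measures from the full revision history on every request, counters are updated as each edit event arrives. The class and event fields below are hypothetical, invented for illustration; they are not WikiReactive's actual API.

```python
from collections import defaultdict

class EditAggregator:
    """Maintain aggregated activity measures incrementally as edit
    events stream in (hypothetical sketch, not the WikiReactive API)."""

    def __init__(self):
        self.edits_per_article = defaultdict(int)
        self.edits_per_user = defaultdict(int)
        self.reverts = 0

    def on_edit(self, article, user, is_revert=False):
        # O(1) per event: no pass over the full history is ever needed.
        self.edits_per_article[article] += 1
        self.edits_per_user[user] += 1
        if is_revert:
            self.reverts += 1

agg = EditAggregator()
events = [("Paris", "alice", False), ("Paris", "bob", True),
          ("Lyon", "alice", False)]
for article, user, is_revert in events:
    agg.on_edit(article, user, is_revert)
print(agg.edits_per_article["Paris"], agg.edits_per_user["alice"], agg.reverts)
```

The resulting counters are exactly the kind of pre-aggregated data a web service can serve cheaply for overlaying on article pages.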
|Talking about data: Sharing richly structured information through blogs and wikis||Benson E.
|Proceedings of the 19th International Conference on World Wide Web, WWW '10||English||2010||The web has dramatically enhanced people's ability to communicate ideas, knowledge, and opinions. But the authoring tools that most people understand, blogs and wikis, primarily guide users toward authoring text. In this work, we show that substantial gains in expressivity and communication would accrue if people could easily share richly structured information in meaningful visualizations. We then describe several extensions we have created for blogs and wikis that enable users to publish, share, and aggregate such structured information using the same workflows they apply to text. In particular, we aim to preserve those attributes that make blogs and wikis so effective: one-click access to the information, one-click publishing of content, natural authoring interfaces, and the ability to easily copy-and-paste information and visualizations from other sources.||0||0|
|The afterlife of 'living deliverables': Angels or zombies?||Wild F.
|CEUR Workshop Proceedings||English||2010||Within the STELLAR project, we provide the possibility to use living documents for the collaborative writing work on deliverables. Compared to 'normal' deliverables, 'living' deliverables come into existence much earlier than their delivery deadline and are expected to 'live on' after their official delivery to the European Commission. They are expected to foster collaboration. Within this contribution we investigate how these deliverables have been used over the first 16 months of the project. We therefore propose a set of new analysis methods facilitating social network analysis on publicly available revision history data. With this instrumentarium, we critically look at whether the living deliverables have been successfully used for collaboration and whether their 'afterlife' beyond the contractual deadline had turned them into 'zombies' (still visible, but with no or little live editing activity). The results show that the observed deliverables display signs of life, but often in connection with a topical change and in conjunction with changes in the pattern of collaboration.||0||0|
|ThinkFree: Using a visual wiki for IT knowledge management in a tertiary institution||Christian Hirsch
|WikiSym 2010||English||2010||We describe ThinkFree, an industrial Visual Wiki application which provides a way for end users to better explore knowledge of IT Enterprise Architecture assets that is held within a large enterprise wiki. The application was motivated by the difficulty users were facing navigating and understanding enterprise architecture information in a large corporate wiki. ThinkFree provides a graph based interactive visualization of IT assets which are described using the Freebase semantic wiki. It is used to visualize relationships between those assets and navigate between them. We describe the motivation for the development of ThinkFree, its design and implementation. Our experiences in corporate rollout of the application are discussed, together with the strengths and weaknesses of the approach we have taken and lessons learned from ThinkFree's development and deployment.||0||0|
|Timely YAGO: Harvesting, querying, and visualizing temporal knowledge from Wikipedia||Yafang Wang
|Advances in Database Technology - EDBT 2010 - 13th International Conference on Extending Database Technology, Proceedings||English||2010||Recent progress in information extraction has shown how to automatically build large ontologies from high-quality sources like Wikipedia. But knowledge evolves over time; facts have associated validity intervals. Therefore, ontologies should include time as a first-class dimension. In this paper, we introduce Timely YAGO, which extends our previously built knowledge base YAGO with temporal aspects. This prototype system extracts temporal facts from Wikipedia infoboxes, categories, and lists in articles, and integrates these into the Timely YAGO knowledge base. We also support querying temporal facts, by temporal predicates in a SPARQL-style language. Visualization of query results is provided in order to better understand the dynamic nature of knowledge. Copyright 2010 ACM.||0||0|
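The core data-model idea, facts carrying validity intervals that can be queried by time point, can be illustrated with a few lines of Python. This is an invented toy, not Timely YAGO's representation or query language; the facts and function names are assumptions for illustration.

```python
# Each fact is (subject-attribute, value, (valid_from, valid_until)),
# with a half-open [start, end) validity interval in years.
facts = [
    ("GermanyCapital", "Bonn",   (1949, 1990)),
    ("GermanyCapital", "Berlin", (1990, 9999)),   # 9999 = still valid
]

def valid_at(facts, subject, year):
    """Return the values of `subject` whose validity interval covers
    `year` - a point-in-time query over temporal facts."""
    return [value for s, value, (start, end) in facts
            if s == subject and start <= year < end]

print(valid_at(facts, "GermanyCapital", 1975))   # ['Bonn']
print(valid_at(facts, "GermanyCapital", 2000))   # ['Berlin']
```

A SPARQL-style temporal predicate, as the abstract describes, would express the same `start <= t < end` containment test declaratively over the knowledge base.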
|VikiBuilder: End-user specification and generation of visual wikis||Christian Hirsch
|ASE'10 - Proceedings of the IEEE/ACM International Conference on Automated Software Engineering||English||2010||With the need to make sense out of large and constantly growing information spaces, tools to support information management are becoming increasingly valuable. In prior work we proposed the "Visual Wiki" concept to describe and implement web-based information management applications. By focusing on the integration of two promising approaches, visualizations and collaboration tools, our Visual Wiki work explored synergies and demonstrated the value of the concept. Building on this, we introduce "VikiBuilder", a Visual Wiki meta-tool, which provides end-user supported modeling and automatic generation of Visual Wiki instances. We describe the design and implementation of the VikiBuilder including its architecture, a domain specific visual language for modeling Visual Wikis, and automatic generation of those. To demonstrate the utility of the tool, we have used it to construct a variety of different Visual Wikis. We describe the construction of Visual Wikis and discuss the strengths and weaknesses of our meta-tool approach.||0||0|
|Visual Semantic Client a visualization tool for semantic content||Wahl H.
|ICETC 2010 - 2010 2nd International Conference on Education Technology and Computer||English||2010||The University of Applied Sciences Technikum Wien is a fast-growing education organization that currently offers a set of 12 bachelor and 14 master degree programs. Coordination of lectures, and therefore quality management, has become more and more difficult. Knowledge management in terms of lecture contents and the professional skills of lecturers seems to be an unsolvable task. As a matter of fact, nobody is able to overlook all teaching details of the whole university. Although information is available in several databases and documents, even getting an overview of all the detailed content of a single degree program turns out to be impossible. To overcome this problem, the Technikum Wien started a project to extract selected information from documents and store it in a Semantic Wiki by automatically setting up entities and their relations. To improve the usage of the semantic content, a software tool to browse the information categories and their relations has been developed. The "Visual Semantic Client" visualizes entities with their attributes and allows following or searching their relations. The paper shows the concepts behind it, the system architecture and the current state of development.||0||0|
|Visualizing co-evolution of individual and collective knowledge||Joachim Kimmerle
|Information Communication and Society||English||2010||This paper describes how processes of knowledge building with wikis may be visualized, citing the user-generated online encyclopedia Wikipedia as an example. The underlying theoretical basis is a framework for collaborative knowledge building with wikis that describes knowledge building as a co-evolution of individual and collective knowledge. These co-evolutionary processes may be visualized graphically, applying methods from social network analysis, especially those methods that take dynamic changes into account. For this purpose, we have undertaken to analyse, on the one hand, the temporal development of a Wikipedia article and related articles that are linked to this core article. On the other hand, we analysed the temporal development of those users who worked on these articles. The resulting graphics show an analogous process, both with regard to the articles that refer to the core article and to the users involved. These results provide empirical support for the co-evolution model.||0||0|
|Visualizing empires decline||Cruz P.
|ACM SIGGRAPH 2010 Posters, SIGGRAPH '10||English||2010||This is an information visualization project that narrates the decline of the British, French, Portuguese and Spanish empires during the 19th and 20th centuries. These empires were the main maritime empires in terms of land area during the referred centuries [Wikipedia]. The land area of the empires and its former colonies is continuously represented in the simulation. The size of the empires varies during the simulation as they gain, or lose, territories. The graphic representation forms were selected to attain a narrative that depicts the volatility, instability and dynamics of the expansion and decline of the empires. Furthermore, the graphic representation also aims at emphasizing the contrast between their maximum and current size, and portraying the contemporary heritage and legacy of the empires.||0||0|
|Visualizing large-scale RDF data using subsets, summaries, and sampling in oracle||Sundara S.
|Proceedings - International Conference on Data Engineering||English||2010||The paper addresses the problem of visualizing large scale RDF data via a 3-S approach, namely by using: 1) Subsets: to present only relevant data for visualisation; both static and dynamic subsets can be specified, 2) Summaries: to capture the essence of RDF data being viewed; summarized data can be expanded on demand thereby allowing users to create hybrid (summary-detail) fisheye views of RDF data, and 3) Sampling: to further optimize visualization of large-scale data where a representative sample suffices. The visualization scheme works with both asserted and inferred triples (generated using RDF(S) and OWL semantics). This scheme is implemented in Oracle by developing a plug-in for the Cytoscape graph visualization tool, which uses functions defined in an Oracle PL/SQL package, to provide fast and optimized access to the Oracle Semantic Store containing RDF data. Interactive visualization of a synthesized RDF data set (LUBM 1 million triples), two native RDF datasets (Wikipedia 47 million triples and UniProt 700 million triples), and an OWL ontology (eClassOwl with a large class hierarchy including over 25,000 OWL classes, 5,000 properties, and 400,000 class-properties) demonstrates the effectiveness of our visualization scheme.||0||0|
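The third "S", sampling, can be sketched in a few lines: draw a fixed-size, reproducible random sample of triples so that a graph far too large to draw in full can still be visualized. The triples below are synthetic placeholders; the real system reads them from Oracle's semantic store.

```python
import random

# Synthetic stand-in for a large RDF graph (subject, predicate, object).
triples = [(f"node{i}", "linksTo", f"node{i + 1}") for i in range(10_000)]

def sample_triples(triples, k, seed=0):
    """Return a reproducible random sample of k triples; a fixed seed
    keeps the sampled view stable across redraws."""
    return random.Random(seed).sample(triples, k)

view = sample_triples(triples, 100)
print(len(view))   # 100
```

A production system would sample more carefully (e.g. preserving connectivity or stratifying by predicate), but even uniform sampling already bounds the number of nodes and edges the graph renderer has to lay out.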
|Where would you go on your next vacation? A framework for visual exploration of attractive places||Kisilevich S.
|2nd International Conference on Advanced Geographic Information Systems, Applications, and Services, GEOProcessing 2010||English||2010||Tourists face a great challenge when they gather information about places they want to visit. Geographically tagged information in the form of Wikipedia pages, local tourist information pages, dedicated web sites and the massive amount of information provided by Google Earth is publicly available and commonly used. But processing this information is a time-consuming activity. Our goal is to make the search for attractive places simpler for the common user and provide researchers with methods for exploration and analysis of attractive areas. We assume that an attractive place is characterized by large amounts of photos taken by many people. This paper presents a framework in which we demonstrate a systematic approach for visualization and exploration of attractive places as a zoomable information layer. The presented technique utilizes density-based clustering of image coordinates and smart color scaling to produce interactive visualizations using Google Earth Mashups. We show that our approach can be used as a basis for detailed analysis of attractive areas. In order to demonstrate our method, we use real-world geo-tagged photo data obtained from Flickr and Panoramio to construct interactive visualizations of virtually every region of interest in the world.||0||0|
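The underlying density assumption, that attractive places are where many photos cluster, can be illustrated with a crude grid-based density pass (a simplified stand-in for the density-based clustering the paper uses). Coordinates and thresholds below are invented for illustration.

```python
from collections import Counter

def attractive_cells(coords, cell=0.01, min_photos=3):
    """Bucket (lat, lon) photo coordinates into a grid of `cell` degrees
    and keep cells whose photo count reaches `min_photos`, as candidate
    'attractive place' hot spots."""
    counts = Counter((round(lat / cell), round(lon / cell))
                     for lat, lon in coords)
    return {c: n for c, n in counts.items() if n >= min_photos}

# Five photos near one landmark, two near another (toy coordinates).
photos = [(48.8583, 2.2945)] * 5 + [(48.8600, 2.3266)] * 2
hot = attractive_cells(photos)
print(hot)   # one hot cell with 5 photos; the 2-photo cell is dropped
```

Real density-based clustering (e.g. DBSCAN-style neighbourhoods) avoids the grid's arbitrary cell boundaries, but the thresholding idea, keep regions whose photo density passes a minimum, is the same.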
|WikipediaViz: Conveying article quality for casual wikipedia readers||Fanny Chevalier
|IEEE Pacific Visualization Symposium 2010, PacificVis 2010 - Proceedings||English||2010||As Wikipedia has become one of the most used knowledge bases worldwide, the problem of the trustworthiness of the information it disseminates becomes central. With WikipediaViz, we introduce five visual indicators integrated to the Wikipedia layout that can keep casual Wikipedia readers aware of important meta-information about the articles they read. The design of WikipediaViz was inspired by two participatory design sessions with expert Wikipedia writers and sociologists who explained the clues they used to quickly assess the trustworthiness of articles. According to these results, we propose five metrics for Maturity and Quality assessment of Wikipedia articles and their accompanying visualizations to provide the readers with important clues about the editing process at a glance. We also report and discuss the results of the user studies we conducted. Two preliminary pilot studies show that all our subjects trust Wikipedia articles almost blindly. With the third study, we show that WikipediaViz significantly reduces the time required to assess the quality of articles while maintaining a good accuracy.||0||0|
|Deep Thought: web-based system for managing and presentation of research and student projects||Gregar T.
|CSEDU 2009 - Proceedings of the 1st International Conference on Computer Supported Education||English||2009||There are plenty of projects solved each day at academic venues: small in-term student projects without any real-world use, bachelor and diploma theses, and large interdisciplinary or internationally supported projects. Each of them has its own set of requirements for how to manage it. The aim of our paper is to describe these requirements, and to show how we tried to satisfy them. As a result of further analysis we designed and implemented the system Deep Thought (under development since autumn 2007), which unites the management of distinct categories of projects in one portal. The system is based on open-source technology; it is modular and hence capable of integrating heterogeneous tools such as a version control system, a wiki, and project presentation and management. This paper also introduces the aims of future development of the system, such as interoperability with other management systems or a better connection with the lecture content and teaching process.||0||0|
|FolksoViz: A semantic relation-based folksonomy visualization using the Wikipedia corpus||Kangpyo Lee
|10th ACIS Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, SNPD 2009, In conjunction with IWEA 2009 and WEACR 2009||English||2009||Tagging is one of the most popular services in Web 2.0, and folksonomy is a representation of collaborative tagging. The tag cloud has been the one and only visualization of folksonomies; it provides, however, no information about the relations between tags. In this paper, targeting del.icio.us tag data, we propose a technique, FolksoViz, for automatically deriving semantic relations between tags and for visualizing the tags and their relations. In order to find the equivalence, subsumption, and similarity relations, we apply various rules and models based on the Wikipedia corpus. The derived relations are visualized effectively. The experiment shows that FolksoViz manages to find the correct semantic relations with high accuracy.||0||0|
|Interactive visualization tools for exploring the semantic graph of large knowledge spaces||Christian Hirsch
|CEUR Workshop Proceedings||English||2009||While the amount of available information on the Web is increasing rapidly, the problem of managing it becomes more difficult. We present two applications, Thinkbase and Thinkpedia, which aim to make Web content more accessible and usable by utilizing visualizations of the semantic graph as a means to navigate and explore large knowledge repositories. Both of our applications implement a similar concept: they extract semantically enriched content from large knowledge spaces (Freebase and Wikipedia, respectively), create an interactive graph-based representation from it, and combine this with the original text-based content in a single interface. We describe the design and implementation of our applications, and provide a discussion based on an informal evaluation.||0||0|
|KaitoroBase: Visual exploration of software architecture documents||Su M.T.
|ASE2009 - 24th IEEE/ACM International Conference on Automated Software Engineering||English||2009||This paper describes a software architecture documentation tool (KaitoroBase) built within the Thinkbase Visual Wiki to provide support for non-linear navigation and visualization of Software Architecture Documents (SADs) produced using the Attribute-Driven Design (ADD) method. This involves constructing the meta-model for the SAD in Freebase, which provides the foundation for the graph-based interactive visualization enabled by Thinkbase. The resulting tool displays a graphical, high-level structure of the SAD, allows for exploratory search and non-linear navigation, and at the same time connects to the low-level details of SADs in a wiki.||0||0|
|Lightweight document semantics processing in e-learning||Gregar T.
|Proceedings of I-KNOW 2009 - 9th International Conference on Knowledge Management and Knowledge Technologies and Proceedings of I-SEMANTICS 2009 - 5th International Conference on Semantic Systems||English||2009||There are plenty of projects aimed at incorporating semantic information into present-day document processing. The main problem is their real-world usability. E-learning is one of the areas that can take advantage of semantically described documents. In this paper we introduce a framework of cooperating tools which can help extract, store, and visualize semantics in this area.||0||0|
|Rv you're dumb: Identifying discarded work in wiki article history||Ekstrand M.D.
|Proceedings of the 5th International Symposium on Wikis and Open Collaboration, WiKiSym 2009||English||2009||Wiki systems typically display article history as a linear sequence of revisions in chronological order. This representation hides deeper relationships among the revisions, such as which earlier revision provided most of the content for a later revision, or when a revision effectively reverses the changes made by a prior revision. These relationships are valuable in understanding what happened between editors in conflict over article content. We present methods for detecting when a revision discards the work of one or more other revisions, a means of visualizing these relationships in-line with existing history views, and a computational method for detecting discarded work. We show through a series of examples that these tools can aid mediators of wiki content disputes by making salient the structure of the ongoing conflict. Further, the computational tools provide a means of determining whether or not a revision has been accepted by the community of editors surrounding the article.||0||1|
|SSnetViz: A visualization engine for heterogeneous semantic social networks||Lim E.-P.
|ACM International Conference Proceeding Series||English||2009||SSnetViz is an ongoing research project to design and implement a visualization engine for heterogeneous semantic social networks. A semantic social network is a multi-modal network that contains nodes representing different types of people or object entities, and edges representing relationships among them. When multiple heterogeneous semantic social networks are to be visualized together, SSnetViz provides a suite of functions to store them and to integrate them for searching and analysis. We illustrate these functions using social networks related to terrorism research, one crafted by domain experts and another derived from Wikipedia.||0||0|
|Vispedia: On-demand data integration for interactive visualization and exploration||Bryan Chan
|SIGMOD-PODS'09 - Proceedings of the International Conference on Management of Data and 28th Symposium on Principles of Database Systems||English||2009||Wikipedia is an example of the large, collaborative, semi-structured data sets emerging on the Web. Typically, before these data sets can be used, they must be transformed into structured tables via data integration. We present Vispedia, a Web-based visualization system which incorporates data integration into an iterative, interactive data exploration and analysis process. This reduces the upfront cost of using heterogeneous data sets like Wikipedia. Vispedia is driven by a keyword-query-based integration interface implemented using a fast graph search. The search occurs interactively over DBpedia's semantic graph of Wikipedia, without depending on the existence of a structured ontology. This combination of data integration and visualization enables a broad class of non-expert users to more effectively use the semi-structured data available on the Web.||0||0|
|Visualizing cooperative activities with ellimaps: The case of wikipedia||Otjacques B.
|Lecture Notes in Computer Science||English||2009||Cooperation has become a key word in the emerging Web 2.0 paradigm. The nature and motivations of the various behaviours related to such cooperative activities remain, however, incompletely understood. Information visualization tools can play a crucial role from this perspective in analysing the collected data. This paper presents a prototype for visualizing data about Wikipedia's history with a technique called ellimaps. In this context the recent CGD algorithm is used in order to increase the scalability of the ellimaps approach.||0||0|
|Visualizing intellectual connections among philosophers using the hyperlink & semantic data from Wikipedia||Athenikos S.J.
|Proceedings of the 5th International Symposium on Wikis and Open Collaboration, WiKiSym 2009||English||2009||Wikipedia, with its unique structural features and rich user-generated content, is being increasingly recognized as a valuable knowledge source that can be exploited for various applications. The objective of the ongoing project reported in this paper is to create a Web-based knowledge portal for digital humanities based on the data extracted from Wikipedia (and other data sources). In this paper we present the interesting results we have obtained by extracting and visualizing various connections among 300 major philosophers using the structured data available in Wikipedia.||0||0|
|What's in Wikipedia? Mapping topics and conflict using socially annotated category structure||Aniket Kittur
|Conference on Human Factors in Computing Systems - Proceedings||English||2009||Wikipedia is an online encyclopedia which has undergone tremendous growth. However, this same growth has made it difficult to characterize its content and coverage. In this paper we develop measures to map Wikipedia using its socially annotated, hierarchical category structure. We introduce a mapping technique that takes advantage of socially-annotated hierarchical categories while dealing with the inconsistencies and noise inherent in the distributed way that they are generated. The technique is demonstrated through two applications: mapping the distribution of topics in Wikipedia and how they have changed over time; and mapping the degree of conflict found in each topic area. We also discuss the utility of the approach for other applications and datasets involving collaboratively annotated category hierarchies.||0||1|
|A visual-analytic toolkit for dynamic interaction graphs||Yang X.
|Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining||English||2008||In this article we describe a visual-analytic tool for the interrogation of evolving interaction network data such as those found in social, bibliometric, WWW and biological applications. The tool we have developed incorporates common visualization paradigms such as zooming, coarsening and filtering while naturally integrating information extracted by a previously described event-driven framework for characterizing the evolution of such networks. The visual front-end provides features that are specifically useful in the analysis of interaction networks, capturing the dynamic nature of both individual entities as well as interactions among them. The tool provides the user with the option of selecting multiple views, designed to capture different aspects of the evolving graph from the perspective of a node, a community or a subset of nodes of interest. Standard visual templates and cues are used to highlight critical changes that have occurred during the evolution of the network. A key challenge we address in this work is that of scalability - handling large graphs both in terms of the efficiency of the back-end, and in terms of the efficiency of the visual layout and rendering. Two case studies based on bibliometric and Wikipedia data are presented to demonstrate the utility of the toolkit for visual knowledge discovery.||0||0|
|Applying Web 2.0 design principles in the design of cooperative applications||Pinkwart N.||Lecture Notes in Computer Science||English||2008||"Web 2.0" is a term frequently mentioned in the media - apparently, applications such as Wikipedia, Social Network Services, Online Shops with integrated recommender systems, or Sharing Services like flickr, all of which rely on users' activities, contributions, and interactions as a central factor, are fascinating for the general public. This leads to a success of these systems that seemingly exceeds the impact of most "traditional" groupware applications that have emerged from CSCW research. This paper discusses differences and similarities between novel Web 2.0 tools and more traditional CSCW applications in terms of technologies, system design and success factors. Based on this analysis, the design of the cooperative learning application LARGO is presented to illustrate how Web 2.0 success factors can be considered in the design of cooperative environments.||0||0|
|Can you ever trust a wiki? Impacting perceived trustworthiness in wikipedia||Aniket Kittur
|English||2008||Wikipedia has become one of the most important information resources on the Web by promoting peer collaboration and enabling virtually anyone to edit anything. However, this mutability also leads many to distrust it as a reliable source of information. Although there have been many attempts at developing metrics to help users judge the trustworthiness of content, it is unknown how much impact such measures can have on a system that is perceived as inherently unstable. Here we examine whether a visualization that exposes hidden article information can impact readers' perceptions of trustworthiness in a wiki environment. Our results suggest that surfacing information relevant to the stability of the article and the patterns of editor behavior can have a significant impact on users' trust across a variety of page types.||0||0|
|FolksoViz: A subsumption-based folksonomy visualization using wikipedia texts||Kangpyo L.
|Proceedings of the 17th International Conference on World Wide Web 2008, WWW'08||English||2008||In this paper, targeting del.icio.us tag data, we propose a method, FolksoViz, for deriving subsumption relationships between tags by using Wikipedia texts, and for visualizing a folksonomy. To realize this method, we propose a statistical model for deriving subsumption relationships based on the frequency of each tag in the Wikipedia texts, as well as a TSD (Tag Sense Disambiguation) method for mapping each tag to a corresponding Wikipedia text. The derived subsumption pairs are visualized effectively on the screen. The experiment shows that FolksoViz manages to find the correct subsumption pairs with high accuracy.||0||0|
|On visualizing heterogeneous semantic networks from multiple data sources||Maureen
|Lecture Notes in Computer Science||English||2008||In this paper, we focus on the visualization of heterogeneous semantic networks obtained from multiple data sources. A semantic network comprising a set of entities and relationships is often used for representing knowledge derived from textual data or database records. Although the semantic networks created for the same domain at different data sources may cover a similar set of entities, these networks could also be very different because of naming conventions, coverage, view points, and other reasons. Since digital libraries often contain data from multiple sources, we propose a visualization tool to integrate and analyze the differences among multiple social networks. Through a case study on two terrorism-related semantic networks derived from Wikipedia and Terrorism Knowledge Base (TKB) respectively, the effectiveness of our proposed visualization tool is demonstrated.||0||0|
|Semi-automated map generation for concept gaming||Lauri Lahti
|MCCSIS'08 - IADIS Multi Conference on Computer Science and Information Systems; Proceedings of Computer Graphics and Visualization 2008 and Gaming 2008: Design for Engaging Experience Soc. Interaction||English||2008||Conventional learning games often have limited flexibility to address the individual needs of a learner. The concept gaming approach provides a frame for handling conceptual structures that are defined by a concept map. A single concept map can be used to create many alternative games, and these can be chosen so that personal learning goals are taken well into account. However, the workload of creating new concept maps and sharing them effectively seems to easily hinder the adoption of concept gaming. We now propose a new semi-automated map generation method for concept gaming. Due to the fast increase in open-access knowledge available on the Web, the articles of the Wikipedia encyclopedia were chosen to serve as a source for concept map generation. Based on a given entry name the proposed method produces hierarchical concept maps that can be freely explored and modified. Variants of this approach could be successfully implemented in a wide range of educational tasks. In addition, ideas for further development of concept gaming are proposed.||0||0|
|The ViskiMap toolkit: Extending MediaWiki with topic maps||Espiritu C.
|Lecture Notes in Business Information Processing||English||2008||In this paper, we present our ViskiMap systems, ENWiC (EduNuggets Wiki Crawler) and Annoki (Annotation wiki), for intelligent visualization of Wikis. In recent years, e-Learning has emerged as an appealing extension to traditional teaching. To some extent, the appeal of e-Learning derives from the great potential of information and knowledge sharing on the web, which has become a de-facto library to be used by students and instructors for educational purposes. Wiki's collaborative authoring nature makes it a very attractive tool to use for e-Learning purposes. Unfortunately, the web's text-based navigational structure becomes insufficient as the Wiki grows in size, and this backlash can hinder students from taking full advantage of the information available. The objective behind ViskiMap is to provide students with an intelligent interface for navigating Wikis and other similar large-scale websites. ViskiMap makes use of graphic organizers to visualize the relationships between content pages, so that students can easily get an understanding of the content elements and their relations, as they navigate through the Wiki pages. We describe ViskiMap's automated visualization process, and its user interfaces for students to view and navigate the Wiki in a meaningful manner, and for instructors to further enhance the visualization. We also discuss our usability study for evaluating the effectiveness of ENWiC as a Wiki Interface.||0||0|
|Thinkbase: A visual semantic Wiki||Christian Hirsch
|CEUR Workshop Proceedings||English||2008||Thinkbase is a visual navigation and exploration tool for Freebase, an open, shared database of the world's knowledge. Thinkbase extracts the contents, including semantic relationships, from Freebase and visualizes them using an interactive visual representation. Providing a focus-plus-context view, the visualization is displayed along with the Freebase article. Thinkbase provides a proof of concept of how visualizations can improve and support Semantic Web applications. The application is available via http://thinkbase.cs.auckland.ac.nz.||0||0|
|Vispedia: Interactive visual exploration of Wikipedia data via search-based integration||Bryan Chan
|IEEE Transactions on Visualization and Computer Graphics||English||2008||Wikipedia is an example of the collaborative, semi-structured data sets emerging on the Web. These data sets have large, non-uniform schema that require costly data integration into structured tables before visualization can begin. We present Vispedia, a Web-based visualization system that reduces the cost of this data integration. Users can browse Wikipedia, select an interesting data table, then use a search interface to discover, integrate, and visualize additional columns of data drawn from multiple Wikipedia articles. This interaction is supported by a fast path search algorithm over DBpedia, a semantic graph extracted from Wikipedia's hyperlink structure. Vispedia can also export the augmented data tables produced for use in traditional visualization systems. We believe that these techniques begin to address the "long tail" of visualization by allowing a wider audience to visualize a broader class of data. We evaluated this system in a first-use formative lab study. Study participants were able to quickly create effective visualizations for a diverse set of domains, performing data integration as needed.||0||0|
|Visualizing Wiki-supported knowledge building: Co-evolution of individual and collective knowledge||Andreas Harrer
|WikiSym 2008 - The 4th International Symposium on Wikis, Proceedings||English||2008||It is widely accepted that wikis are valuable tools for successful collaborative knowledge building. In this paper, we describe how processes of knowledge building with wikis may be visualized, citing Wikipedia as an example. The underlying theoretical basis of our paper is the framework for collaborative knowledge building with wikis, as introduced by Cress and Kimmerle. This model describes collaborative knowledge building as a co-evolution of individual and collective knowledge, or of cognitive and social systems respectively. These co-evolutionary processes may be visualized graphically, applying methods from social network analysis, especially those methods that take dynamic changes into account. For this purpose, we have undertaken to analyze, on the one hand, the temporal development of an article in the German version of Wikipedia and related articles that are linked to this core article. On the other hand, we analyzed the temporal development of those users who worked on these articles. The resulting graphics show an analogous process, both with regard to the articles that refer to the core article and to the users involved. These results provide empirical support for the co-evolution model. Some implications of our findings and the potential for future research on collaborative knowledge building with wikis and on the application of social network analysis are discussed at the end of the article.||0||3|
|XWiki concerto: A P2P wiki system supporting disconnected work||Gérôme Canals
|Lecture Notes in Computer Science||English||2008||This paper presents the XWiki Concerto system, the P2P version of the XWiki server. This system is based on replicating wiki pages on a network of wiki servers. The approach, based on the Woot algorithm, has been designed to be scalable and to support the dynamic aspect of P2P networks and network partitions. These characteristics make our system capable of supporting disconnected editing and sub-groups, making it very flexible and usable.||0||0|
|ZAME: Interactive large-scale graph visualization||Elmqvist N.
|IEEE Pacific Visualisation Symposium 2008, PacificVis - Proceedings||English||2008||We present the Zoomable Adjacency Matrix Explorer (ZAME), a visualization tool for exploring graphs at a scale of millions of nodes and edges. ZAME is based on an adjacency matrix graph representation aggregated at multiple scales. It allows analysts to explore a graph at many levels, zooming and panning with interactive performance from an overview to the most detailed views. Several components work together in the ZAME tool to make this possible. Efficient matrix ordering algorithms group related elements. Individual data cases are aggregated into higher-order meta-representations. Aggregates are arranged into a pyramid hierarchy that allows for on-demand paging to GPU shader programs to support smooth multiscale browsing. Using ZAME, we are able to explore the entire French Wikipedia - over 500,000 articles and 6,000,000 links - with interactive performance on standard consumer-level computer hardware.||0||0|
|Us vs. Them: Understanding social dynamics in wikipedia with revert graph visualizations||Bongwon Suh
|VAST IEEE Symposium on Visual Analytics Science and Technology 2007, Proceedings||English||2007||Wikipedia is a wiki-based encyclopedia that has become one of the most popular collaborative on-line knowledge systems. As in any large collaborative system, as Wikipedia has grown, conflicts and coordination costs have increased dramatically. Visual analytic tools provide a mechanism for addressing these issues by enabling users to more quickly and effectively make sense of the status of a collaborative environment. In this paper we describe a model for identifying patterns of conflicts in Wikipedia articles. The model relies on users' editing history and the relationships between user edits, especially revisions that void previous edits, known as "reverts". Based on this model, we constructed Revert Graph, a tool that visualizes the overall conflict patterns between groups of users. It enables visual analysis of opinion groups and rapid interactive exploration of those relationships via detail drill-downs. We present user patterns and case studies that show the effectiveness of these techniques, and discuss how they could generalize to other systems.||0||4|
|Visualizing Activity on Wikipedia with Chromograms||Martin Wattenberg
Fernanda B. Viégas
|English||2007||To investigate how participants in peer production systems allocate their time, we examine editing activity on Wikipedia, the well-known online encyclopedia. To analyze the huge edit histories of the site's administrators we introduce a visualization technique, the chromogram, that can display very long textual sequences through a simple color coding scheme. Using chromograms we describe a set of characteristic editing patterns. In addition to confirming known patterns, such as reacting to vandalism events, we identify a distinct class of organized systematic activities. We discuss how both reactive and systematic strategies shed light on self-allocation of effort in Wikipedia, and how they may pertain to other peer-production systems.||0||1|
|WikiNavMap: A visualisation to supplement team-based wikis||Ullman A.J.
|Conference on Human Factors in Computing Systems - Proceedings||English||2007||Wikis are an invaluable tool for quickly and easily creating and editing a collection of web pages. Their use is particularly interesting in small teams to serve as a support for group communication, for co-ordination, as well as for creating collaborative document products. In spite of the very real appeal of the wiki for these purposes, there is a serious challenge due to their complexity. Team members can have difficulty identifying the structure and salient elements of the wiki. This paper describes the design of WikiNavMap, an alternative visual representation for wikis, which provides an overview of the wiki structure. Based on analysis of student wikis, we identified factors that help team members identify which wiki pages are currently relevant to them. We hypothesised that a structural overview coupled with the visual representations of these factors could assist users with wiki navigation decisions. We report a preliminary evaluation with a large group wiki, created over a full university semester by a group of ten users. The results are promising for a small wiki but point to challenges in coping with the complexity of a larger one.||0||1|
|Analyzing the effectiveness of collaborative condition monitoring using adaptive measure||Han H.-S.
|WSEAS Transactions on Information Science and Applications||English||2006||Newly opened community systems such as wikis support full freedom of expression for all users. A wiki is a powerful hypertext-based collaborative system and a conversational knowledge management system. The method of group knowledge construction in a wiki depends on social interaction. However, some factors interrupt the social interaction needed for collaboration in a wiki. First, the linked structure is hidden and continuously changing. Furthermore, the linked structure has become more and more complex, and this complexity interrupts collaborative condition monitoring and group collaboration. We develop and test new adaptive measures proposed in this paper to decide whether or not a group's collaborative condition is satisfied. The visualization of these scores provides a basis for examining the collaborative condition and for deciding the starting point of the next activity in process-oriented learning. The results of the experiment show that changes in score values track collaboration status, and we analyze the effectiveness of collaborative condition monitoring from the viewpoint of pedagogy.||0||0|
|Focused Access to Wikipedia||Sigurbjornsson
|Proceedings DIR-2006||English||2006||Wikipedia is a "free" online encyclopedia. It contains millions of entries in many languages and is growing at a fast pace. Due to its volume, search engines play an important role in giving access to the information in Wikipedia. The "free" availability of the collection makes it an attractive corpus for information retrieval experiments. In this paper we describe the evaluation of a search engine that provides focused search access to Wikipedia, i.e., a search engine which gives direct access to individual sections of Wikipedia pages. The main contributions of this paper are twofold. First, we introduce Wikipedia as a test corpus for information retrieval experiments in general and for semi-structured retrieval in particular. Second, we demonstrate that focused XML retrieval methods can be applied to a wider range of problems than searching scientific journals in XML format, including accessing reference works.||0||0|
|Graphingwiki - A semantic wiki extension for visualising and inferring protocol dependency||Juhani Eronen
|CEUR Workshop Proceedings||English||2006||This paper introduces the Graphingwiki extension to MoinMoin Wiki. Graphingwiki enables deepened analysis of the wiki data by augmenting it with semantic data in a simple, practical and easy-to-use manner. Visualisation tools are used to clarify the resulting body of knowledge so that only the data essential for a usage scenario is displayed. Logic inference rules can be applied to perform automated reasoning based on the data. Perceiving dependencies among network protocols presents an example use case of the framework. The use case was applied in practice in mapping the effects of software vulnerabilities on critical infrastructures.||0||0|
|Studying cooperation and conflict between authors with history flow visualizations||Fernanda B. Viégas
|Conference on Human Factors in Computing Systems||English||2004||The Internet has fostered an unconventional and powerful style of collaboration: “wiki” web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.||3||23|
- See also: List of visualization tools.