| Network analysis|
Network analysis is included as a keyword or extra keyword in 0 datasets, 0 tools, and 18 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Explaining authors' contribution to pivotal artifacts during mass collaboration in the Wikipedia's knowledge base||Iassen Halatchliyski||International Journal of Computer-Supported Collaborative Learning||English||2014||This article discusses the relevance of large-scale mass collaboration for computer-supported collaborative learning (CSCL) research, adhering to a theoretical perspective that views collective knowledge both as substance and as participatory activity. In an empirical study using the German Wikipedia as a data source, we explored collective knowledge as manifested in the structure of artifacts that were created through the collaborative activity of authors with different levels of contribution experience. Wikipedia's interconnected articles were considered at the macro level as a network and analyzed using a network analysis approach. The focus of this investigation was the relation between the authors' experience and their contribution to two types of articles: central pivotal articles within the artifact network of a single knowledge domain and boundary-crossing pivotal articles within the artifact network of two adjacent knowledge domains. Both types of pivotal articles were identified by measuring the network position of artifacts based on network analysis indices of topological centrality. The results showed that authors with specialized contribution experience in one domain predominantly contributed to central pivotal articles within that domain. Authors with generalized contribution experience in two domains predominantly contributed to boundary-crossing pivotal articles between the knowledge domains. Moreover, article experience (i.e., the number of articles in both domains an author had contributed to) was positively related to the contribution to both types of pivotal articles, regardless of whether an author had specialized or generalized domain experience. We discuss the implications of our findings for future studies in the field of CSCL. © 2013 International Society of the Learning Sciences, Inc. and Springer Science+Business Media New York.||0||0|
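The indices of topological centrality mentioned in this abstract can be illustrated with a minimal sketch in plain Python (the article titles and links below are hypothetical, not the study's data): degree centrality over a small undirected article network, where the highest-scoring node plays the "pivotal" bridging role between two domains.

```python
# Hypothetical article link network (undirected edges), for illustration only.
edges = [
    ("Physics", "Mathematics"),
    ("Physics", "Statistics"),
    ("Mathematics", "Statistics"),
    ("Statistics", "Psychology"),
    ("Psychology", "Sociology"),
]

def degree_centrality(edges):
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    n = len(degree)
    # Normalize by the maximum possible degree (n - 1).
    return {node: d / (n - 1) for node, d in degree.items()}

centrality = degree_centrality(edges)
# "Statistics" bridges the two clusters and scores highest.
print(max(centrality, key=centrality.get))  # → Statistics
```

Degree centrality is only the simplest such index; studies like this one typically combine it with betweenness or eigenvector-based measures to identify boundary-crossing articles.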
|La connaissance est un réseau: Perspective sur l’organisation archivistique et encyclopédique||Martin Grandjean||Les Cahiers du Numérique||French||2014||Network analysis does not revolutionize our objects of study; it revolutionizes the researcher's perspective on them. Organized as a network, information becomes relational. It makes potentially possible the creation of new information, as with an encyclopedia whose links between records weave a web that can be analyzed in terms of structural characteristics, or with an archive directory whose hierarchy is fundamentally altered by an index recomposing the information-exchange network within a group of people. On the basis of two examples of tools for managing, preserving, and enhancing knowledge, the online encyclopedia Wikipedia and the archives of the Intellectual Cooperation of the League of Nations, this paper discusses the relationship between researchers and their object understood as a whole.|
|A Malicious Bot Capturing System using a Beneficial Bot and Wiki||Takashi Yamanoue||Journal of Information Processing||English||February 2013||Locating malicious bots in a large network is problematic because the internal firewalls and network address translation (NAT) routers of the network unintentionally contribute to hiding the bots' host address and malicious packets. However, eliminating firewalls and NAT routers merely for locating bots is generally not acceptable. In the present paper, we propose an easy-to-deploy, easy-to-manage network security control system for locating a malicious host behind internal secure gateways. The proposed network security control system consists of a remote security device and a command server. The remote security device is installed as a transparent link (implemented as an L2 switch) between the subnet and its gateway in order to detect a host that has been compromised by a malicious bot in a target subnet, while minimizing the impact of deployment. The security device is controlled remotely by 'polling' the command server in order to eliminate the NAT traversal problem and to be firewall friendly. Since the remote security device exists in transparent, remotely controlled, robust security gateways, we regard this device as a beneficial bot. We adopt a web server with wiki software as the command server in order to take advantage of its power of customization, ease of use, and ease of deployment of the server.||5||2|
|Analyzing multi-dimensional networks within mediawikis||Brian C. Keegan||Proceedings of the 9th International Symposium on Open Collaboration, WikiSym + OpenSym 2013||English||2013||The MediaWiki platform supports popular socio-technical systems such as Wikipedia as well as thousands of other wikis. This software encodes and records a variety of relationships about the content, history, and editors of its articles such as hyperlinks between articles, discussions among editors, and editing histories. These relationships can be analyzed using standard techniques from social network analysis; however, extracting relational data from Wikipedia has traditionally required specialized knowledge of its API, information retrieval, network analysis, and data visualization that has inhibited scholarly analysis. We present a software library called the NodeXL MediaWiki Importer that extracts a variety of relationships from the MediaWiki API and integrates with the popular NodeXL network analysis and visualization software. This library allows users to query and extract a variety of multidimensional relationships from any MediaWiki installation with a publicly-accessible API. We present a case study examining the similarities and differences between different relationships for the Wikipedia articles about "Pope Francis" and "Social media." We conclude by discussing the implications this library has for both theoretical and methodological research as well as community management and outline future work to expand the capabilities of the library.||0||0|
|Monitoring network structure and content quality of signal processing articles on wikipedia||Lee T.C.||ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings||English||2013||Wikipedia has become a widely-used resource on signal processing. However, the freelance-editing model of Wikipedia makes it challenging to maintain a high content quality. We develop techniques to monitor the network structure and content quality of Signal Processing (SP) articles on Wikipedia. Using metrics to quantify the importance and quality of articles, we generate a list of SP articles on Wikipedia arranged in the order of their need for improvement. The tools we use include the HITS and PageRank algorithms for network structure, crowdsourcing for quantifying article importance and known heuristics for article quality.||0||0|
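The PageRank algorithm named in this abstract can be sketched in a few lines of plain Python; the article titles and link structure below are hypothetical illustrations, not the paper's data or code.

```python
# Hypothetical article -> outgoing links, for illustration only.
links = {
    "Fourier transform": ["Signal processing", "Convolution"],
    "Convolution": ["Signal processing", "Fourier transform"],
    "Sampling": ["Signal processing", "Nyquist rate"],
    "Nyquist rate": ["Sampling", "Signal processing"],
    "Signal processing": ["Fourier transform", "Sampling"],
}

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Start each node at the teleportation baseline.
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        # Each node splits its damped rank evenly over its outgoing links.
        for node, outgoing in links.items():
            share = damping * rank[node] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# The article with the most inbound links accumulates the highest rank.
print(max(ranks, key=ranks.get))  # → Signal processing
```

A ranking like this, combined with a quality heuristic, yields the paper's ordering of articles by their need for improvement: high importance plus low quality sorts first.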
|Network analysis of user generated content quality in Wikipedia||Myshkin Ingawale||Online Information Review||English||2013||Purpose - Social media platforms allow near-unfettered creation and exchange of user generated content (UGC). Drawing from network science, the purpose of this paper is to examine whether high and low quality UGC differ in their connectivity structures in Wikipedia (which consists of interconnected user generated articles). Design/methodology/approach - Using Featured Articles as a proxy for high quality, a network analysis was undertaken of the revision history of six different language Wikipedias, to offer a network-centric explanation for the emergence of quality in UGC. Findings - The network structure of interactions between articles and contributors plays an important role in the emergence of quality. Specifically the analysis reveals that high-quality articles cluster in hubs that span structural holes. Research limitations/implications - The analysis does not capture the strength of interactions between articles and contributors. The implication of this limitation is that quality is viewed as a binary variable. Extensions to this research will relate strength of interactions to different levels of quality in UGC. Practical implications - The findings help harness the "wisdom of the crowds" effectively. Organisations should nurture users and articles at the structural hubs from an early stage. This can be done through appropriate design of collaborative knowledge systems and development of organisational policies to empower hubs. Originality/value - The network centric perspective on quality in UGC and the use of a dynamic modelling tool are novel. The paper is of value to researchers in the area of social computing and to practitioners implementing and maintaining such platforms in organisations.||0||0|
|Staying in the Loop: Structure and Dynamics of Wikipedia's Breaking News Collaborations||Brian Keegan||WikiSym||English||August 2012||Despite the fact that Wikipedia articles about current events are more popular and attract more contributions than typical articles, canonical studies of Wikipedia have only analyzed articles about pre-existing information. We expect the co-authoring of articles about breaking news incidents to exhibit high-tempo coordination dynamics which are not found in articles about historical events and information. Using 1.03 million revisions made by 158,384 users to 3,233 English Wikipedia articles about disasters, catastrophes, and conflicts since 1990, we construct “article trajectories” of editor interactions as they coauthor an article. Examining a subset of this corpus, our analysis demonstrates that articles about current events exhibit structures and dynamics distinct from those observed among articles about non-breaking events. These findings have implications for how collective intelligence systems can be leveraged to process and make sense of complex information.||0||0|
|Breaking news on Wikipedia: Dynamics, structures, and roles in high-tempo collaboration||Brian C. Keegan||English||2012||The goal of my research is to evaluate how distributed virtual teams are able to use socio-technical systems like Wikipedia to self-organize and respond to complex tasks. I examine the roles Wikipedians adopt to synthesize content about breaking news events out of a noisy and complex information space. Using data from Wikipedia's revision histories as well as from other sources like IRC logs, I employ methods in content analysis, statistical network analysis, and trace ethnography to illuminate the multilevel processes which sustain these temporary collaborations as well as the dynamics of how they emerge and dissolve.||0||0|
|Do editors or articles drive collaboration? Multilevel statistical network analysis of wikipedia coauthorship||Brian Keegan||English||2012||Prior scholarship on Wikipedia's collaboration processes has examined the properties of either editors or articles, but not the interactions between both. We analyze the coauthorship network of Wikipedia articles about breaking news demanding intense coordination and compare the properties of these articles and the editors who contribute to them to articles about contemporary and historical events. Using p*/ERGM methods to test a multi-level, multi-theoretical model, we identify how editors' attributes and editing patterns interact with articles' attributes and authorship history. Editors' attributes like prior experience have a stronger influence on collaboration patterns, but article attributes also play significant roles. Finally, we discuss the implications our findings and methods have for understanding the socio-material duality of collective intelligence systems beyond Wikipedia.||0||1|
|Network Analysis of User Generated Content Quality in Wikipedia||Myshkin Ingawale||Online Information Review||2012||Social media platforms allow near-unfettered creation and exchange of User Generated Content (UGC). We use Wikipedia, which consists of interconnected user generated articles. Drawing from network science, we examine whether high and low quality UGC in Wikipedia differ in their connectivity structures. Using featured articles as a proxy for high quality, we undertake a network analysis of the revision history of six different language Wikipedias to offer a network-centric explanation for the emergence of quality in UGC. The network structure of interactions between articles and contributors plays an important role in the emergence of quality. Specifically, the analysis reveals that high quality articles cluster in hubs that span structural holes. The analysis does not capture the strength of interactions between articles and contributors. The implication of this limitation is that quality is viewed as a binary variable. Extensions to this research will relate strength of interactions to different levels of quality in user generated content. Practical implications - Our findings help harness the ‘wisdom of the crowds’ effectively. Organizations should nurture users and articles at the structural hubs, from an early stage. This can be done through appropriate design of collaborative knowledge systems and development of organizational policies to empower hubs. Originality - The network-centric perspective on quality in UGC and the use of a dynamic modeling tool are novel. The paper is of value to researchers in the area of social computing and to practitioners implementing and maintaining such platforms in organizations.||0||0|
|Staying in the loop: Structure and dynamics of Wikipedia's breaking news collaborations||Brian Keegan||WikiSym 2012||English||2012||Despite the fact that Wikipedia articles about current events are more popular and attract more contributions than typical articles, canonical studies of Wikipedia have only analyzed articles about pre-existing information. We expect the co-authoring of articles about breaking news incidents to exhibit high-tempo coordination dynamics which are not found in articles about historical events and information. Using 1.03 million revisions made by 158,384 users to 3,233 English Wikipedia articles about disasters, catastrophes, and conflicts since 1990, we construct "article trajectories" of editor interactions as they coauthor an article. Examining a subset of this corpus, our analysis demonstrates that articles about current events exhibit structures and dynamics distinct from those observed among articles about non-breaking events. These findings have implications for how collective intelligence systems can be leveraged to process and make sense of complex information.||0||0|
|Wikis: Transactive memory systems in digital form||Jackson P.||ACIS 2012 : Proceedings of the 23rd Australasian Conference on Information Systems||English||2012||Wikis embed information about authors, tags, hyperlinks and other metadata into the information they create. Wiki functions use this metadata to provide pointers which allow users to track down, or be informed of, the information they need. In this paper we provide a firm theoretical conceptualization for this type of activity by showing how this metadata provides a digital foundation for a Transactive Memory System (TMS). TMS is a construct from group psychology which defines directory-based knowledge sharing processes to explain the phenomenon of "group mind". We analyzed the functions and data of two leading Wiki products to understand where and how they support the TMS. We then modeled and extracted data from these products into a network analysis product. The results confirmed that Wikis are a TMS in digital form. Network analysis highlights its characteristics as a "knowledge map", suggesting useful extensions to the internal "TMS" functions of Wikis.||0||0|
|Hot off the Wiki: Dynamics, Practices, and Structures in Wikipedia’s Coverage of the Tōhoku Catastrophes||Brian Keegan||WikiSym||English||2011||Wikipedia editors are uniquely motivated to collaborate around current and breaking news events. However, the speed, urgency, and intensity with which these collaborations unfold also impose a substantial burden on editors’ abilities to effectively coordinate tasks and process information. We analyze the patterns of activity on Wikipedia following the 2011 Tōhoku earthquake and tsunami to understand the dynamics of editor attention and participation, novel practices employed to collaborate on these articles, and the resulting coauthorship structures which emerge between editors and articles. Our findings have implications for supporting future coverage of breaking news articles, theorizing about motivations to participate in online community, and illuminating Wikipedia’s potential role in storing cultural memories of catastrophe.||0||0|
|Defining a universal actor content-element model for exploring social and information networks considering the temporal dynamic||Muller C.||Proceedings of the 2009 International Conference on Advances in Social Network Analysis and Mining, ASONAM 2009||English||2009||The emergence of the Social Web offers new opportunities for scientists to explore open virtual communities. Various approaches have appeared in terms of statistical evaluation, descriptive studies and network analyses, which pursue an enhanced understanding of existing mechanisms developing from the interplay of technical and social infrastructures. Unfortunately, at the moment, all these approaches are separate and no integrated approach exists. This gap is filled by our proposal of a concept which is composed of a universal description model, temporal network definitions, and a measurement system. The approach addresses the necessary interpretation of Social Web communities as dynamic systems. In addition to the explicated models, a software tool is briefly introduced employing the specified models. Furthermore, a scenario is used where an extract from the Wikipedia database shows the practical application of the software.||0||0|
|Analyzing Wiki-based Networks to Improve Knowledge Processes in Organizations||Müller||Journal of Universal Computer Science, 14(4)||2008||Increasingly, wikis are used to support existing corporate knowledge exchange processes. They are an appropriate software solution to support knowledge processes. However, it is not yet proven whether wikis are an adequate knowledge management tool or not. This paper presents a new approach to analyze existing knowledge exchange processes in wikis based on network analysis. Because of their dynamic characteristics, four perspectives on wiki networks are introduced to investigate the interrelationships between people, information, and events in a wiki information space. As an analysis method, Social Network Analysis (SNA) is applied to uncover existing structures and temporal changes. A scenario data set of an analysis conducted with a corporate wiki is presented. The outcomes of this analysis were utilized to improve the existing corporate knowledge processes.||0||0|
|Collaborative knowledge semantic graph image search||Shieh J.-R.||Proceeding of the 17th International Conference on World Wide Web 2008, WWW'08||English||2008||In this paper, we propose a Collaborative Knowledge Semantic Graphs Image Search (CKSGIS) system. It provides a novel way to conduct image search by utilizing the collaborative nature of Wikipedia and by performing network analysis to form semantic graphs for search-term expansion. The collaborative article editing process used by Wikipedia's contributors is formalized as bipartite graphs that are folded into networks between terms. When a user types in a search term, CKSGIS automatically retrieves an interactive semantic graph of related terms that allows users to easily find related images not limited to a specific search term. The interactive semantic graph then serves as an interface to retrieve images through existing commercial search engines. This method significantly saves users' time by avoiding the multiple search keywords that are usually required in generic search engines. It benefits both naive users who do not possess a large vocabulary and professionals who look for images on a regular basis. In our experiments, 85% of the participants favored the CKSGIS system over commercial search engines.||0||0|
|Using Semantic Graphs for Image Search||Shieh J.-R.||2008 IEEE International Conference on Multimedia and Expo, ICME 2008 - Proceedings||English||2008||In this paper, we propose a Semantic Graphs for Image Search (SGIS) system, which provides a novel way for image search by utilizing collaborative knowledge in Wikipedia and network analysis to form semantic graphs for search-term suggestion. The collaborative article editing process of Wikipedia's contributors is formalized as bipartite graphs that are folded into networks between terms. When a user types in a search term, SGIS automatically retrieves an interactive semantic graph of related terms that allows users to easily find related images not limited to a specific search term. The interactive semantic graph then serves as an interface to retrieve images through existing commercial search engines. This method significantly saves users' time by avoiding the multiple search keywords that are usually required in generic search engines. It benefits both naive users who do not possess a large vocabulary (e.g., students) and professionals who look for images on a regular basis. In our experiments, 85% of the participants favored the SGIS system over commercial search engines.||0||0|
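The bipartite-folding step described in the two abstracts above can be sketched as follows (the editors and article titles are hypothetical, not the authors' data or code): each contributor's set of edited articles induces links between the articles they co-edited, and the fold counts shared editors as edge weights in the resulting term network.

```python
from collections import Counter
from itertools import combinations

# Hypothetical editor -> set of articles edited (one side of the bipartite graph).
edits = {
    "alice": {"Graph theory", "Network analysis", "Sociology"},
    "bob":   {"Graph theory", "Network analysis"},
    "carol": {"Sociology", "Anthropology"},
}

def fold(edits):
    """Project the bipartite editor-article graph onto the article side."""
    weights = Counter()
    for articles in edits.values():
        # Every pair of articles sharing this editor gains one unit of weight.
        for a, b in combinations(sorted(articles), 2):
            weights[(a, b)] += 1
    return weights

term_network = fold(edits)
print(term_network[("Graph theory", "Network analysis")])  # → 2 shared editors
```

Term pairs with high weight in the folded network are the "related terms" such a system would surface in its interactive semantic graph for query expansion.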
|The mathematical structure of cyberworlds||Ohmori K.||Proceedings - 2007 International Conference on Cyberworlds, CW'07||English||2007||The mathematical structure of cyberworlds is clarified based on the duality of the homology lifting property and the homotopy extension property. The duality gives bottom-up and top-down methods to model, design and analyze the structure of cyberworlds. The set of homepages representing a cyberworld is transformed into a finite state machine. In the development of the cyberworld, a sequence of finite state machines is obtained. This sequence has a homotopic property, which is clarified by mapping a finite state machine to a simplicial complex. Wikipedia, bottom-up network construction and top-down network analysis are described as examples.||0||0|