Stephen Barrett

From WikiPapers

Stephen Barrett is an author.

Publications

Only those publications related to wikis are shown here.
Each entry lists: Title | Keyword(s) | Published in | Language | Date | Abstract | R | C
Title: The importance of Human Mental Workload in Web design
Keywords: Human factors, Human Mental Workload, Interaction Design, Usability, Web Design
Published in: WEBIST 2012 - Proceedings of the 8th International Conference on Web Information Systems and Technologies
Language: English
Date: 2012
Abstract: The focus of this study is the introduction of the construct of Human Mental Workload (HMW) in Web design, aimed at supporting current interaction design practices. An experiment has been conducted using the original Wikipedia and Google web interfaces and two slightly modified versions of each. Three subjective psychological mental workload assessment techniques (NASA-TLX, Workload Profile and SWAT), together with a well-established usability assessment tool (SUS), have been adopted. T-tests have been performed to study the statistical significance of the differences between the original and modified web pages, in terms of the workload required by typical tasks and perceived usability. Preliminary results show that, in one ideal case, increments of usability correspond to decrements of generated workload, confirming the negative impact of the structural changes on the interface. In another case, changes are significant in terms of usability but not in terms of generated workload, thus raising research questions and underlining the importance of Human Mental Workload in Interaction Design.
R: 0 | C: 0
Title: Computational trust in web content quality: A comparative evaluation on the Wikipedia project
Keywords: Computational trust, Content quality, Wikipedia
Published in: Informatica (Ljubljana)
Language: English
Date: 2007
Abstract: The problem of identifying useful and trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. It is not hard to predict that in the future the direct reliance on this material will expand, and the problem of evaluating the trustworthiness of this kind of content will become crucial. The Wikipedia project represents the most successful and discussed example of such online resources. In this paper we present a method to predict the trustworthiness of Wikipedia articles based on computational trust techniques and a deep domain-specific analysis. Our assumption is that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia - i.e. content quality in a collaborative environment - mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. We present a series of experiments. The first is a case study of a specific category of articles; the second is an evaluation over 8,000 articles representing 65% of the overall Wikipedia editing activity. We report encouraging results on the automated evaluation of Wikipedia content using our domain-specific expertise method. Finally, in order to appraise the value added by using domain-specific expertise, we compare our results with the ones obtained with a pre-processed cluster analysis, where complex expertise is mostly replaced by training and automatic classification of common features.
R: 0 | C: 0
Title: Presumptive selection of trust evidence
Keywords: Computational trust, Presumptive reasoning, Wikipedia
Published in: Proceedings of the International Conference on Autonomous Agents
Language: English
Date: 2007
Abstract: This paper proposes a generic method for identifying elements in a domain that can be used as trust evidences. As an alternative to external infrastructure-based approaches relying on certificates or user recommendations, we propose a computation based on evidences gathered directly from application elements that have been recognized to have a trust meaning. When the selection of evidences is done using a dedicated infrastructure or users' collaboration, it remains a well-bounded problem. Instead, when evidences must be selected directly from domain activity, selection is generally unsystematic and subjective, typically resulting in an unbounded problem. To address these issues, our paper proposes a general methodology for selecting trust evidences among elements of the domain under analysis. The method uses presumptive reasoning combined with a human-based and intuitive notion of trust. Using the method, the problem of evidence selection becomes the critical analysis of the identified evidences' plausibility against the situation and their logical consistency. We present an evaluation, in the context of the Wikipedia project, in which trust predictions based on evidences identified by our method are compared to a computation based on domain-specific expertise.
R: 0 | C: 0
Title: Extracting Trust from Domain Analysis: A Case Study on the Wikipedia Project
Published in: Autonomic and Trusted Computing
Language: English
Date: 2006
Abstract: The problem of identifying trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. Wikipedia is the most extraordinary example of this phenomenon and, although a few mechanisms have been put in place to improve the quality of contributions, trust in the quality of Wikipedia content has been seriously questioned. We thought that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia - i.e. content quality in a collaborative environment - mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. Our evaluation, conducted on about 8,000 articles representing 65% of the overall Wikipedia editing activity, shows that the new trust evidence that we extracted from Wikipedia allows us to transparently and automatically compute trust values to isolate articles of high or low quality.
R: 0 | C: 2
Title: Extracting trust from domain analysis: A case study on the Wikipedia project
Published in: Lecture Notes in Computer Science
Language: English
Date: 2006
Abstract: The problem of identifying trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. Wikipedia is the most extraordinary example of this phenomenon and, although a few mechanisms have been put in place to improve the quality of contributions, trust in the quality of Wikipedia content has been seriously questioned. We thought that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia - i.e. content quality in a collaborative environment - mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. Our evaluation, conducted on about 8,000 articles representing 65% of the overall Wikipedia editing activity, shows that the new trust evidence that we extracted from Wikipedia allows us to transparently and automatically compute trust values to isolate articles of high or low quality.
R: 0 | C: 2