|Related keyword(s)||authorship, quality|
accuracy is included as a keyword or extra keyword in 0 datasets, 0 tools, and 18 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Analysis of the accuracy and readability of herbal supplement information on Wikipedia||Phillips J.||Journal of the American Pharmacists Association||English||2014||Objective: To determine the completeness and readability of information found in Wikipedia for leading dietary supplements and assess the accuracy of this information with regard to safety (including use during pregnancy/lactation), contraindications, drug interactions, therapeutic uses, and dosing. Design: Cross-sectional analysis of Wikipedia articles. Interventions: The contents of Wikipedia articles for the 19 top-selling herbal supplements were retrieved on July 24, 2012, and evaluated for organization, content, accuracy (as compared with information in two leading dietary supplement references) and readability. Main Outcome Measures: Accuracy of Wikipedia articles. Results: No consistency was noted in how much information was included in each Wikipedia article, how the information was organized, what major categories were used, and where safety and therapeutic information was located in the article. All articles in Wikipedia contained information on therapeutic uses and adverse effects but several lacked information on drug interactions, pregnancy, and contraindications. Wikipedia articles had 26%-75% of therapeutic uses and 76%-100% of adverse effects listed in the Natural Medicines Comprehensive Database and/or Natural Standard. Overall, articles were written at a 13.5-grade level, and all were at a ninth-grade level or above. Conclusion: Articles in Wikipedia in mid-2012 for the 19 top-selling herbal supplements were frequently incomplete, of variable quality, and sometimes inconsistent with reputable sources of information on these products. Safety information was particularly inconsistent among the articles. Patients and health professionals should not rely solely on Wikipedia for information on these herbal supplements when treatment decisions are being made.||0||0|
|Wikipedia Usage Estimates Prevalence of Influenza-Like Illness in the United States in Near Real-Time||McIver D.J.||PLoS Computational Biology||English||2014||Circulating levels of both seasonal and pandemic influenza require constant surveillance to ensure the health and safety of the population. While up-to-date information is critical, traditional surveillance systems can have data availability lags of up to two weeks. We introduce a novel method of estimating, in near-real time, the level of influenza-like illness (ILI) in the United States (US) by monitoring the rate of particular Wikipedia article views on a daily basis. We calculated the number of times certain influenza- or health-related Wikipedia articles were accessed each day between December 2007 and August 2013 and compared these data to official ILI activity levels provided by the Centers for Disease Control and Prevention (CDC). We developed a Poisson model that accurately estimates the level of ILI activity in the American population, up to two weeks ahead of the CDC, with an absolute average difference between the two estimates of just 0.27% over 294 weeks of data. Wikipedia-derived ILI models performed well through both abnormally high media coverage events (such as during the 2009 H1N1 pandemic) and unusually severe influenza seasons (such as the 2012-2013 influenza season). Wikipedia usage accurately estimated the week of peak ILI activity 17% more often than Google Flu Trends data and was often more accurate in its measure of ILI intensity. With further study, this method could potentially be implemented for continuous monitoring of ILI activity in the US and to provide support for traditional influenza surveillance tools.||0||0|
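The McIver et al. abstract describes regressing CDC ILI levels on Wikipedia article-view rates with a Poisson model. A minimal single-predictor sketch of that idea follows, fitted by gradient ascent on the Poisson log-likelihood; the function name, synthetic data, and optimizer are illustrative assumptions, not the authors' actual model.

```python
import math

def fit_poisson(x, y, lr=0.01, steps=20000):
    # Model: y_i ~ Poisson(exp(b0 + b1 * x_i)), i.e. expected ILI counts
    # grow log-linearly with the article-view predictor x_i.
    # The Poisson log-likelihood is concave, so gradient ascent converges.
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)   # model mean for this observation
            g0 += yi - mu                 # dL/db0
            g1 += (yi - mu) * xi          # dL/db1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic example: scaled weekly view counts and consistent ILI levels.
views = [0, 1, 2, 3, 4, 5]
ili = [math.exp(0.5 + 0.3 * v) for v in views]
b0, b1 = fit_poisson(views, ili)
```

In practice one would fit many article-view predictors at once and validate against held-out CDC weeks, as the study does.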
|Development and evaluation of an ensemble resource linking medications to their indications||Wei W.-Q.||Journal of the American Medical Informatics Association||English||2013||Objective: To create a computable MEDication Indication resource (MEDI) to support primary and secondary use of electronic medical records (EMRs). Materials and methods: We processed four public medication resources, RxNorm, Side Effect Resource (SIDER) 2, MedlinePlus, and Wikipedia, to create MEDI. We applied natural language processing and ontology relationships to extract indications for prescribable, single-ingredient medication concepts and all ingredient concepts as defined by RxNorm. Indications were coded as Unified Medical Language System (UMLS) concepts and International Classification of Diseases, 9th edition (ICD9) codes. A total of 689 extracted indications were randomly selected for manual review for accuracy using dual-physician review. We identified a subset of medication-indication pairs that optimizes recall while maintaining high precision. Results: MEDI contains 3112 medications and 63 343 medication-indication pairs. Wikipedia was the largest resource, with 2608 medications and 34 911 pairs. For each resource, estimated precision and recall, respectively, were 94% and 20% for RxNorm, 75% and 33% for MedlinePlus, 67% and 31% for SIDER 2, and 56% and 51% for Wikipedia. The MEDI high-precision subset (MEDI-HPS) includes indications found within either RxNorm or at least two of the three other resources. MEDI-HPS contains 13 304 unique indication pairs regarding 2136 medications. The mean±SD number of indications for each medication in MEDI-HPS is 6.22±6.09. The estimated precision of MEDI-HPS is 92%. Conclusions: MEDI is a publicly available, computable resource that links medications with their indications as represented by concepts and billing codes. MEDI may benefit clinical EMR applications and reuse of EMR data for research.||0||0|
|Assessing the accuracy and quality of Wikipedia entries compared to popular online encyclopaedias||Imogen Casebourne||English||2 August 2012||8||0|
|Reverts Revisited: Accurate Revert Detection in Wikipedia||Fabian Flöck||Hypertext and Social Media 2012||English||June 2012||Wikipedia is commonly used as a proving ground for research in collaborative systems. This is likely due to its popularity and scale, but also to the fact that large amounts of data about its formation and evolution are freely available to inform and validate theories and models of online collaboration. As part of the development of such approaches, revert detection is often performed as an important pre-processing step in tasks as diverse as the extraction of implicit networks of editors, the analysis of edit or editor features and the removal of noise when analyzing the emergence of the content of an article. The current state of the art in revert detection is based on a rather naïve approach, which identifies revision duplicates based on MD5 hash values. This is an efficient, but not very precise technique that forms the basis for the majority of research based on revert relations in Wikipedia. In this paper we prove that this method has a number of important drawbacks - it only detects a limited number of reverts, while simultaneously misclassifying too many edits as reverts, and not distinguishing between complete and partial reverts. This is very likely to hamper the accurate interpretation of the findings of revert-related research. We introduce an improved algorithm for the detection of reverts, based on word tokens added or deleted, which addresses these drawbacks. We report on the results of a user study and other tests demonstrating the considerable gains in accuracy and coverage by our method, and argue for a positive trade-off, in certain research scenarios, between these improvements and our algorithm's increased runtime.||13||0|
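The MD5 baseline this paper critiques can be sketched in a few lines: a revision whose digest repeats an earlier revision's digest restored the page verbatim. This is an illustrative reconstruction of the naïve identity-revert heuristic, not the authors' improved token-based algorithm; function and variable names are assumptions.

```python
import hashlib

def find_identity_reverts(revisions):
    # revisions: ordered list of revision-text strings for one article.
    # A repeated MD5 digest means the page text returned verbatim to an
    # earlier state. As the paper notes, this misses partial reverts and
    # can misclassify coincidentally identical revisions as reverts.
    seen = {}       # digest -> index of first revision with that text
    reverts = []    # (reverting revision index, index of state restored)
    for i, text in enumerate(revisions):
        digest = hashlib.md5(text.encode("utf-8")).hexdigest()
        if digest in seen:
            reverts.append((i, seen[digest]))
        else:
            seen[digest] = i
    return reverts
```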
|Doctors use, but don’t rely totally on, Wikipedia||English||2012||0||0|
|Quality of Internet information in pediatric otolaryngology: A comparison of three most referenced websites||Volsky P.G.||International Journal of Pediatric Otorhinolaryngology||English||2012||Objective: Patients commonly refer to Internet health-related information. To date, no quantitative comparison of the accuracy and readability of common diagnoses in Pediatric Otolaryngology exists. Study aims: (1) identify the three most frequently referenced Internet sources; (2) compare the content accuracy and (3) ascertain user-friendliness of each site; (4) inform practitioners and patients of the quality of available information. Methods: Twenty-four diagnoses in pediatric otolaryngology were entered in Google and the top five URLs for each were ranked. Articles were accessed for each topic in the three most frequently referenced sites. Standard rubrics were developed to include proprietary scores for content, errors, navigability, and validated metrics of readability. Results: Wikipedia, eMedicine, and NLM/NIH MedlinePlus were the most referenced sources. For content accuracy, eMedicine scored highest (84%; p<0.05) over MedlinePlus (49%) and Wikipedia (46%). The highest incidence of errors and omissions per article was found in Wikipedia (0.98 ± 0.19), more than twice that of eMedicine (0.42 ± 0.19; p<0.05). Errors were similar between MedlinePlus and both eMedicine and Wikipedia. On ratings for user interface, which incorporated Flesch-Kincaid Reading Level and Flesch Reading Ease, MedlinePlus was the most user-friendly (4.3 ± 0.29). This was nearly twice that of eMedicine (2.4 ± 0.26) and slightly greater than Wikipedia (3.7 ± 0.3). All differences were significant (p<0.05). There were 7 topics for which articles were not available on MedlinePlus. Conclusions: Knowledge of the quality of available information on the Internet improves pediatric otolaryngologists' ability to counsel parents. The top web search results for pediatric otolaryngology diagnoses are Wikipedia, MedlinePlus, and eMedicine. Online information varies in quality, with a 46-84% concordance with current textbooks. eMedicine has the most accurate, comprehensive content and fewest errors, but is more challenging to read and navigate. Both Wikipedia and MedlinePlus have lower content accuracy and more errors; however, MedlinePlus is the simplest of all to read, at a ninth-grade level.||0||0|
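Several of the studies listed here score readability with the Flesch-Kincaid grade level, which is computed from average words per sentence and syllables per word. A minimal sketch follows, assuming a simple vowel-group heuristic for syllables (real readability tools use more careful syllable counting); the function name is illustrative.

```python
import re

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

For example, a short monosyllabic sentence scores below grade 0, while long clause-heavy abstracts like those above land in the college range.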
|Topic pages: PLoS Computational Biology meets Wikipedia||Wodak S.J.||PLoS Computational Biology||English||2012||[No abstract available]||0||0|
|Accuracy and completeness of drug information in Wikipedia: an assessment||Natalie Kupferberg, Bridget McCrate Protus||Journal of the Medical Library Association||English||October 2011||8||2|
|How Accurate is Wikipedia?||Natalie Wolchover||LiveScience||English||24 January 2011||Numerous studies have rated Wikipedia's accuracy. On the whole, the web encyclopedia is fairly reliable, but Life's Little Mysteries' own small investigation produced mixed results.||0||1|
|Factual Accuracy and Trust in Information: The Role of Expertise||Teun Lucassen, Jan Maarten Schraagen||Journal of the American Society for Information Science & Technology||English||2011||In the past few decades, the task of judging the credibility of information has shifted from trained professionals (e.g., editors) to end users of information (e.g., casual Internet users). Because these end users lack training in this task, it is highly relevant to research their behavior. In this article, we propose a new model of trust in information, in which trust judgments are dependent on three user characteristics: source experience, domain expertise, and information skills. Applying any of these three characteristics leads to different features of the information being used in trust judgments; namely source, semantic, and surface features (hence, the name 3S-model). An online experiment was performed to validate the 3S-model. In this experiment, Wikipedia articles of varying accuracy (semantic feature) were presented to Internet users. Trust judgments of domain experts on these articles were largely influenced by accuracy whereas trust judgments of novices remained mostly unchanged. Moreover, despite the influence of accuracy, the percentage of trusting participants, both experts and novices, was high in all conditions. Along with the rationales provided for such trust judgments, the outcome of the experiment largely supports the 3S-model, which can serve as a framework for future research on trust in information.||0||0|
|Patient-oriented cancer information on the internet: A comparison of Wikipedia and a professionally maintained database||Rajagopalan M.S.||Journal of Oncology Practice||English||2011||Purpose: A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that the content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-orientated National Cancer Institute's Physician Data Query (PDQ) comprehensive cancer database. Methods: For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies by using an appraisal form. Reliability was assessed by using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length. Results: Evaluators were able to rapidly assess articles (18 minutes/article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, less than 2% of information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9, 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, NS; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference. Conclusion: Although the wiki resource had similar accuracy and depth as the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention.||0||1|
|The effects of wikis on foreign language students' writing performance||Alshumaimeri Y.||Procedia - Social and Behavioral Sciences||English||2011||This study investigated the use of wikis in improving writing skills among 42 male students at the Preparatory Year (PY) at King Saud University in Saudi Arabia. Research questions investigated writing accuracy and quality. Performance results on pre- and post-tests revealed that both groups improved significantly over time in both accuracy and quality. However, the experimental group significantly outperformed the control group in both accuracy and quality of writing in the post-test. The implications of the results are that wikis can benefit teachers and students by improving their writing skills in accuracy and quality in a collaborative environment.||0||0|
|Wikipedia as a Data Source for Political Scientists: Accuracy and Completeness of Coverage||Adam R. Brown||PS: Political Science & Politics||English||2011||In only 10 years, Wikipedia has risen from obscurity to become the dominant information source for an entire generation. However, any visitor can edit any page on Wikipedia, which hardly fosters confidence in its accuracy. In this article, I review thousands of Wikipedia articles about candidates, elections, and officeholders to assess both the accuracy and the thoroughness of Wikipedia's coverage. I find that Wikipedia is almost always accurate when a relevant article exists, but errors of omission are extremely frequent. These errors of omission follow a predictable pattern. Wikipedia's political coverage is often very good for recent or prominent topics but is lacking on older or more obscure topics.||10||2|
|A tale of information ethics and encyclopædias; Or, is Wikipedia just another internet scam?||Gorman G.E.||Online Information Review||English||2007||Purpose - This paper seeks to look at the question of accuracy of content regarding Wikipedia and other internet encyclopædias. Design/methodology/approach - By looking at other sources, the paper considers whether the information contained within Wikipedia can be relied on to be accurate. Findings - Wikipedia poses as an encyclopædia when by no stretch of the definition can it be termed such; therefore, it should be subject to regulation. Originality/value - The paper highlights the issue that, without regulation, content cannot be relied on to be accurate.||0||2|
|A wiki that knows where it is being used: Social hazard or social service?||Maria Plummer||Association for Information Systems - 13th Americas Conference on Information Systems, AMCIS 2007: Reaching New Heights||English||2007||This study assesses reactions to a wiki enhanced with context-aware features that enable users to learn about people, places, and events in their proximity. In a physically compact enclave such as the urban university in which this wiki is being implemented, context-aware applications can support a hybrid community in which individuals develop and sustain physical and virtual social ties. Participants in this study were first-time users. They were given a guided tour of the wiki and then their impressions, concerns and intention to use were elicited through a semi-structured interview. Participants were enthusiastic about the prospects of the wiki in assisting them in learning about events and interesting places on campus, and in exchanging information. However, they were concerned about issues such as privacy, accuracy, and the potential for intentional misuse of the system. Privacy concerns were based primarily on a misconception of the location-aware feature of the wiki. These findings can guide designers and implementers on the desirable and possibly undesirable features of such a system.||0||0|
|Wikipedia vs The Old Guard||PC Pro||English||2007||0||0|
|Internet encyclopaedias go head to head||Jim Giles||Nature||English||14 December 2005||Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds.||0||50|