Communications in Computer and Information Science

From WikiPapers


Publications

Only those publications related to wikis already available at WikiPapers are shown here.
Title Author(s) Keyword(s) Language Date Abstract R C
Exploiting Wikipedia for Evaluating Semantic Relatedness Mechanisms Ferrara F.
Tasso C.
English 2014 The semantic relatedness between two concepts is a measure that quantifies the extent to which two concepts are semantically related. In the area of digital libraries, several mechanisms based on semantic relatedness methods have been proposed. Visualization interfaces, information extraction mechanisms, and classification approaches are just some examples of mechanisms where semantic relatedness methods can play a significant role and have been successfully integrated. Due to the growing interest of researchers in areas like Digital Libraries, Semantic Web, Information Retrieval, and NLP, various approaches have been proposed for automatically computing semantic relatedness. However, despite the growing number of proposed approaches, there are still significant critical issues in evaluating the results returned by different methods. The limitations of existing evaluation mechanisms prevent an effective evaluation, and several works in the literature emphasize that the exploited approaches are rather inconsistent. In order to overcome this limitation, we propose a new evaluation methodology where people provide feedback about the semantic relatedness between concepts explicitly defined in digital encyclopedias. In this paper, we specifically exploit Wikipedia for generating a reliable dataset. 0 0
Tagging Scientific Publications Using Wikipedia and Natural Language Processing Tools Lopuszynski M.
Bolikowski L.
Natural Language Processing
Tagging document collections
Wikipedia
English 2014 In this work, we compare two simple methods of tagging scientific publications with labels reflecting their content. As a first source of labels, Wikipedia is employed; the second label set is constructed from the noun phrases occurring in the analyzed corpus. We examine the statistical properties and the effectiveness of both approaches on a dataset consisting of the abstracts of 0.7 million scientific documents deposited in the arXiv preprint collection. We believe that the obtained tags can later be applied as useful document features in various machine learning tasks (document similarity, clustering, topic modelling, etc.). 0 0
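As a rough illustration of the labelling idea in the abstract above (not the authors' implementation), the sketch below tags an abstract by matching candidate labels against its text; the two tiny label sets are placeholders standing in for the Wikipedia-derived and noun-phrase-derived label sources.

```python
# Minimal sketch (not the paper's code): tag an abstract by exact label matching.
# The label sets are assumed toy examples, not real Wikipedia or corpus data.
import re

wikipedia_labels = {"machine learning", "semantic web", "information retrieval"}
noun_phrase_labels = {"topic modelling", "document similarity", "preprint collection"}

def tag_abstract(text, labels):
    """Return the labels that occur verbatim (case-insensitively) in the text."""
    lowered = text.lower()
    return {label for label in labels if re.search(r"\b" + re.escape(label) + r"\b", lowered)}

abstract = "We apply topic modelling and information retrieval methods to the preprint collection."
print(tag_abstract(abstract, wikipedia_labels | noun_phrase_labels))
```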
A distributed ontology repository management approach based on semantic wiki Rao G.
Feng Z.
Xiaolong Wang
Liu R.
Distributed ontology
Ontology inconsistency
Semantic wiki
English 2013 As the foundation of the Semantic Web, ontologies on the Web have grown to a scale of tens of billions. Furthermore, the creation process of the repository takes place through open collaboration. However, this openness and collaboration make the problem of repository inconsistency even worse. Semantic wikis provide a new approach to building a large-scale, unified semantic knowledge base. This paper focuses on the relevant problems, technologies, and applications of a semantic-wiki-based ontology repository, combining semantic wiki technologies with a distributed ontology repository. A distributed ontology repository management approach and platform based on a semantic wiki is presented. The platform is divided into three layers: a distributed ontology management layer, a business logic layer, and an application performance layer. Self-maintenance and optimization of the distributed ontology repository are implemented by the management module using ontology reasoning, ontology view extraction, and ontology segmentation. A unified repository interface, which provides knowledge storage and query services to Semantic Web applications, is exposed through a knowledge bus mechanism that encapsulates the distributed ontologies. In the business logic layer, wiki and ontology operations are mapped so that wiki pages and ontology resources are managed through a mapping between wiki entries and ontology resources. In the application performance layer, a friendly interface for building the repository is provided by combining entry information display with semantic information extraction. 0 0
Evaluating article quality and editor reputation in Wikipedia Lu Y.
Lei Zhang
Jing-Woei Li
Editor reputation
Factor graph
Quality evaluation
English 2013 We study a novel problem of quality and reputation evaluation for Wikipedia articles. We pose a difficult and interesting question: how to generate reasonable article quality scores and editor reputations within a single framework at the same time? In this paper, we propose a dual-wing factor graph (DWFG) model, which utilizes the mutual reinforcement between articles and editors to generate article quality and editor reputation. To learn the proposed factor graph model, we further design an efficient algorithm. We conduct experiments to validate the effectiveness of the proposed model. By leveraging belief propagation between articles and editors, our approach obtains a significant improvement over several alternative methods (SVM, LR, PR, CRF). 0 0
Measuring the Compositionality of Arabic Multiword Expressions Saif A.
Ab Aziz M.J.
Omar N.
Multiword expression
Semantic compositionality
Semantic similarity
Wikipedia
English 2013 This paper presents a method for measuring the compositionality score of multiword expressions (MWEs). Using Wikipedia (WP) as a lexical resource, multiword expressions are identified from the titles of Wikipedia articles that are made up of more than one word, without further processing. For the semantic representation, the method exploits the hierarchical taxonomy in Wikipedia to represent a concept (single word or multiword) as a feature vector containing the WP articles that belong to the concept's categories and sub-categories. Literality and multiplicative composition scores, both based on semantic similarity, are used to measure the compositionality of an MWE. The proposed method is evaluated by comparing the compositionality scores against a dataset of human judgments for 100 Arabic noun-noun compounds. 0 0
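A minimal sketch of the general idea behind multiplicative composition scoring, not the authors' method: each concept is represented as a vector over Wikipedia categories, and compositionality is approximated by the similarity between the MWE's own vector and the element-wise product of its constituents' vectors. The vectors below are invented toy values.

```python
# Sketch under stated assumptions: category vectors and the composition function
# are placeholders, not the paper's actual feature space.
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def compositionality(mwe_vec, part_vecs):
    composed = np.ones_like(mwe_vec, dtype=float)
    for p in part_vecs:
        composed *= p                      # multiplicative composition of constituents
    return cosine(mwe_vec, composed)       # high score -> expression looks compositional

# Toy category vectors (not real Wikipedia data)
hot_dog = np.array([0.1, 0.0, 0.9])        # the MWE mostly lives in "food" categories
hot, dog = np.array([0.8, 0.2, 0.1]), np.array([0.1, 0.9, 0.2])
print(compositionality(hot_dog, [hot, dog]))   # low score suggests non-compositionality
```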
Metadata management of models for resources and environment based on Web 2.0 technology Lu Y.M.
Sheng L.
Wu S.
Yue T.X.
Extended-metadata
Model metadata
Tag
Wiki
XSD
English 2013 The paper firstly introduces the standard framework of model metadata as well as its composition, content, and meaning. It is held that model metadata should consist of an identifier, sphere of application, model parameters, principles, performance, run conditions, management information, references, and case information. Then we explain the virtual community for publishing, sharing, and maintaining model metadata. Finally, we elaborate on the expression of the model metadata standard based on XML Schema Definition (XSD), the extended metadata based on tags, and the publishing of model metadata based on a wiki. Based on Web 2.0 technology, the traditional model metadata, created only by the modeler, is extended to support extended metadata created by model users or domain experts, which includes feedback on model evaluation and suggestions. The user metadata is refined from a large set of messy individual tags, which reflect implicit knowledge about the models. 0 0
Publishing CLOD of dangerous chemicals based on semantic MediaWiki Deng H.
Gu J.
Zheng X.
CLOD
DBpedia
Semantic MediaWiki
English 2013 To address the problem of integrating massive, distributed information about dangerous chemicals, and the gap in Chinese semantic knowledge bases, this paper proposes a method of constructing a CLOD (Chinese Linked Open Data) for that field based on Semantic MediaWiki, making use of the Chinese and English Wikipedia versions, where each page corresponds to an instance in DBpedia. To make the information semantic and machine-readable, we extract instances and metadata from Baidu Baike, add annotations, link them to LOD, and publish them on the web, so that they can be shared and interconnected with other domains. Experiments show that the proposed method is able to answer rich queries. 0 0
Talking topically to artificial dialog partners: Emulating humanlike topic awareness in a virtual agent Alexa Breuing
Ipke Wachsmuth
Automatic topic awareness
Embodied conversational agents
Human-agent interaction
Topic detection and tracking
Wikipedia
English 2013 During dialog, humans are able to track ongoing topics, to detect topical shifts, to refer to topics via labels, and to decide on the appropriateness of potential dialog topics. As a result, they interactionally produce coherent sequences of spoken utterances assigning a thematic structure to the whole conversation. Accordingly, an artificial agent that is intended to engage in natural and sophisticated human-agent dialogs should be endowed with similar conversational abilities. This paper presents how to enable topically coherent conversations between humans and interactive systems by emulating humanlike topic awareness in the virtual agent Max. To this end, we first realized automatic topic detection and tracking on the basis of contextual knowledge provided by Wikipedia, and second adapted the agent's conversational behavior by means of the gained topic information. As a result, we contribute to improving human-agent dialogs by enabling topical talk between human and artificial interlocutors. This paper is a revised and extended version of [1]. 0 0
Agent for mining of significant concepts in DBpedia Boo V.K.
Anthony P.
Concept ranking
DBpedia
PageRank
English 2012 DBpedia.org is a community effort that tries to extract structured information from Wikipedia such that the extracted information can be queried just like a database. This information is open to the public in the form of RDF triples, which are compatible with Semantic Web standards. Various applications have been developed to utilize the structured data in DBpedia. This paper attempts to apply PageRank analysis to the link structure of DBpedia, using a mining agent to mine significant concepts in DBpedia. Based on the results, popular concepts tend to be ranked higher than less popular ones. This paper also proposes an alternative view on how PageRank analysis can be applied to the DBpedia link structure based on special characteristics of Wikipedia. The results show that even concepts with a low PageRank value can be a valuable resource for recommending pages in Wikipedia. 0 0
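For orientation, a minimal PageRank sketch on a toy link graph follows; it is not the mining agent described in the paper, and the node names are placeholders standing in for DBpedia resources.

```python
# Simplified power-iteration PageRank (no redistribution of rank from dangling nodes).
def pagerank(links, damping=0.85, iters=50):
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# Toy link structure (made-up resource names)
toy_links = {"dbr:Berlin": ["dbr:Germany"], "dbr:Hamburg": ["dbr:Germany"], "dbr:Germany": ["dbr:Berlin"]}
print(sorted(pagerank(toy_links).items(), key=lambda kv: -kv[1]))
```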
An efficient voice enabled web content retrieval system for limited vocabulary Bharath Ram G.R.
Jayakumaur R.
Narayan R.
Shahina A.
Khan A.N.
Content Retrieval
Regular Expressions
Speech to Text
Sphinx 4
Wikipedia
English 2012 Retrieval of relevant information is becoming increasingly difficult owing to the presence of an ocean of information in the World Wide Web. Users in need of quick access to specific information are subjected to a series of web redirections before finally arriving at the page that contains the required information. In this paper, an optimal voice-based web content retrieval system is proposed that makes use of an open-source speech recognition engine to deal with voice inputs. The proposed system performs a quicker retrieval of relevant content from Wikipedia and instantly presents the textual information along with the related image to the user. This search is faster than conventional web content retrieval techniques. The current system is built with a limited vocabulary but can be extended to support a larger vocabulary. Additionally, the system is also scalable to retrieve content from a few other sources of information apart from Wikipedia. 0 0
Chinese named entity recognition and disambiguation based on wikipedia Yajie Miao
Yajuan L.
Qun L.
Jinsong S.
Hao X.
Named Entity Disambiguation
Named entity recognition
Wikipedia
English 2012 This paper presents a method for named entity recognition and disambiguation based on Wikipedia. First, we build a Wikipedia database using the open-source tool JWPL. Second, we extract the definition term from the first sentence of each Wikipedia page and use it as external knowledge in named entity recognition. Finally, we achieve named entity disambiguation using Wikipedia disambiguation pages and contextual information. The experiments show that the use of Wikipedia features can improve the accuracy of named entity recognition. 0 0
Choosing better seeds for entity set expansion by leveraging wikipedia semantic knowledge Qi Z.
Kang Liu
Jun Zhao
Information extraction
Seed set refinement
Semantic knowledge
English 2012 Entity Set Expansion, which refers to expanding a human-input seed set to a more complete set belonging to the same semantic category, is an important task for open information extraction. Because human-input seeds may be ambiguous, sparse, etc., the quality of the seeds has a great influence on expansion performance, as has been shown by much previous research. To improve seed quality, this paper proposes a novel method that chooses better seeds from the original input ones. In our method, we leverage Wikipedia semantic knowledge to measure the semantic relatedness and ambiguity of each seed. Moreover, to avoid seed sparseness, we use a web corpus to measure its popularity. Lastly, we use a linear model to combine these factors to determine the final selection. Experimental results show that new seed sets chosen by our method can improve expansion performance by an average of up to 13.4% over randomly selected seed sets. 0 0
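To make the selection idea concrete, here is a minimal sketch (not the paper's model): each candidate seed gets a linear score combining relatedness, (negated) ambiguity, and popularity, and the top-k are kept. The weights and per-seed feature values are invented for illustration.

```python
# Sketch under stated assumptions: feature values and weights are placeholders.
def select_seeds(candidates, k=3, w_rel=1.0, w_amb=1.0, w_pop=0.5):
    scored = [
        (w_rel * f["relatedness"] - w_amb * f["ambiguity"] + w_pop * f["popularity"], seed)
        for seed, f in candidates.items()
    ]
    return [seed for _, seed in sorted(scored, reverse=True)[:k]]

candidates = {
    "Paris":  {"relatedness": 0.9, "ambiguity": 0.8, "popularity": 0.9},  # highly ambiguous
    "Lyon":   {"relatedness": 0.8, "ambiguity": 0.3, "popularity": 0.6},
    "Nantes": {"relatedness": 0.7, "ambiguity": 0.2, "popularity": 0.4},
    "Amazon": {"relatedness": 0.4, "ambiguity": 0.9, "popularity": 0.9},
}
print(select_seeds(candidates, k=2))   # ambiguous seeds are demoted despite popularity
```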
Linking folksonomies to knowledge organization systems Jakob Voss Crowdsourcing
Digital libraries (cs.DL)
Information retrieval (cs.IR)
Linked data
Mapping
SKOS
Social tagging
English 2012 This paper demonstrates enrichment of set-model folksonomies with hierarchical links and mappings to other knowledge organization systems. The process is exemplified with social tagging practice in Wikipedia and in Stack Exchange. The extended folksonomies are created by crowdsourcing tag names and descriptions to translate them to linked data in SKOS. 0 0
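As a rough illustration of the target representation (not the paper's pipeline), the following sketch expresses a crowdsourced tag as a SKOS concept with a hierarchical link and a mapping to another knowledge organization system, using the rdflib library; all URIs are made up.

```python
# Sketch under stated assumptions: example.org URIs are placeholders.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/tags/")
g = Graph()
g.bind("skos", SKOS)

tag = EX["python"]
g.add((tag, SKOS.prefLabel, Literal("python", lang="en")))
g.add((tag, SKOS.broader, EX["programming-language"]))          # crowdsourced hierarchical link
g.add((tag, SKOS.exactMatch, URIRef("http://example.org/other-kos/Python")))  # mapping to another KOS

print(g.serialize(format="turtle"))
```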
Predicting user tags using semantic expansion Chandramouli K.
Piatrik T.
Izquierdo E.
Evaluation
Speech recognition
Tag prediction
User-contributed metadata
Video indexing
English 2012 Manually annotating content such as Internet videos is an intellectually expensive and time-consuming process. Furthermore, keywords and community-provided tags lack consistency and present numerous irregularities. Addressing the challenge of simplifying and improving the process of tagging online videos, which is potentially not bound to any particular domain, in this paper we present an algorithm for predicting user tags from the associated textual metadata. Our approach is centred on extracting named entities by exploiting complementary textual resources such as Wikipedia and WordNet. More specifically, to facilitate the extraction of semantically meaningful tags from a largely unstructured textual corpus, we developed a natural language processing framework based on the GATE architecture. Extending the functionality of the built-in GATE named entities, the framework integrates a bag-of-articles algorithm for effectively searching Wikipedia and extracting relevant articles. The proposed framework has been evaluated against the MediaEval 2010 Wild Wild Web dataset, which consists of a large collection of Internet videos. 0 0
Summarizing definition from wikipedia articles Zeyu Zheng
Zhu X.
Definition
Summary
Wikipedia
English 2012 Definitional questions are quite important, since users often want to get a brief overview of a specific topic. It is a more challenging task to answer definitional questions than factoid questions. Since Wikipedia provides a wealth of structural or semi-structural information which covers a large number of topics, such sources will benefit the generation of definitions. In this paper, we propose a method to summarize definition from multiple related Wikipedia articles. First, we introduce the Wikipedia concepts model to represent the semantic elements in Wikipedia articles. Second, we further utilize multiple related articles, rather than a single article, to generate definition. The experiment results on TREC-QA demonstrate the effectiveness of our proposed method. The Wikipedia concept model outperforms the word model. Introducing multiple related articles helps find more essential nuggets. 0 0
TAPIR: Wiki-based task and personal information management supporting subjective process management Riss U.V. Personal semantic desktop wiki
Subject-orientation
Task management
English 2012 We introduce a subject-driven approach to integrated process, task, and information management for knowledge workers. This approach is realized in the Task and Personal Information Rendering (TAPIR) extension of the Semantic Mediawiki that we present in this paper. The focus is placed on eliciting subjective process information from daily task management. The approach starts from the insight that individuals' motivation to provide relevant process information can be increased if they directly benefit from their contributions. TAPIR uses process relevant information to support users in their task management. Hereby it fosters S-BPM by gathering subjective process information that can be used for organizational purposes. 0 0
Using a semantic wiki to improve the consistency and analyzability of functional requirements Jun Ma
Yao W.
Zhang Z.
Nummenmaa J.
Functional Requirements
Requirement Engineering
Semantic MediaWiki
English 2012 Even though the software industry seems to have matured beyond its initial stage, software projects' success rates are still low. This is mainly because of a lack of correct, unambiguous, complete, and consistent descriptions of software requirements. How to specify and represent requirements correctly, unambiguously, and completely, and how to reach a common understanding among stakeholders in software projects, have become high-priority issues. The aim of this paper is to study ways of specifying and representing the semantic information of functional requirements in software projects. We propose a meta-model and implement it as semantic forms in Semantic MediaWiki. Our approach enables the building of functional requirements on a common semantic basis, thereby improving the analyzability and consistency of the requirements. The wiki environment also enables asynchronous collaboration to create and maintain the functional requirements. 0 0
Adding semantic extension to wikis for enhancing cultural heritage applications Leclercq E.
Savonnet M.
Cultural Heritage application
Ontology Engineering
Semantic wiki
English 2011 Wikis are appropriate systems for community-authored content. In the past few years, they have shown themselves to be particularly suitable for collaborative work in cultural heritage. In this paper, we highlight how wikis can be relevant solutions for building cooperative applications in domains characterized by a rapid evolution of knowledge. We point out the capabilities of a semantic extension to provide better content quality, to improve searching, to support complex queries, and finally to cater to different types of users. We describe the CARE project and explain the conceptual modeling approach. We detail the architecture of WikiBridge, a semantic wiki which allows simple, n-ary, and recursive annotations as well as consistency checking. A specific section is dedicated to the ontology design, which is the compulsory foundational knowledge for the application. 0 0
Approach of Web2.0 application pattern applied to the information teaching Li G.
Liu M.
Zhe Wang
Chen W.
Blogs
Information Teaching
Web 2.0
Wiki
English 2011 This paper firstly focuses on the development and function of Web 2.0 from an educational perspective. Secondly, it introduces the features and theoretical foundation of Web 2.0. Then, the application pattern used in information teaching, based on the introduction described above, is elaborated and shown to be an effective way of increasing educational productivity. Lastly, this paper presents related cases and teaching resources for reference. 0 0
Comparison of different ontology-based query expansion algorithms for effective image retrieval Leung C.H.C.
Yanyan Li
Concept distance
Image retrieval
Ontology
Query expansion
English 2011 We study several semantic concept-based query expansion and re-ranking schemes and compare different ontology-based expansion methods in image search and retrieval. In particular, we exploit two concept similarities from different concept expansion ontologies: WordNet similarity and Wikipedia similarity. Furthermore, we compare the keywords' semantic distance with the precision of image search results obtained with query expansion according to the different concept expansion algorithms. We also compare the image retrieval precision of searching with the expanded query and with the original plain query. Preliminary experiments demonstrate that the two proposed retrieval mechanisms have the potential to outperform unaided approaches. 0 0
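For a concrete feel of WordNet-based expansion (not the authors' system), the sketch below ranks candidate expansion terms by their WordNet path similarity to a query term. It assumes NLTK with the WordNet corpus downloaded (nltk.download("wordnet")); the query and candidates are toy examples.

```python
# Sketch under stated assumptions: a simple WordNet-similarity ranking of expansion terms.
from nltk.corpus import wordnet as wn

def wordnet_similarity(word_a, word_b):
    """Best path similarity over all synset pairs, or 0.0 if no path exists."""
    scores = [
        sa.path_similarity(sb) or 0.0
        for sa in wn.synsets(word_a)
        for sb in wn.synsets(word_b)
    ]
    return max(scores, default=0.0)

query, candidates = "car", ["automobile", "vehicle", "banana"]
ranked = sorted(candidates, key=lambda c: wordnet_similarity(query, c), reverse=True)
print(ranked)   # expansion terms closest to the query come first
```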
Cooperative WordNet editor for lexical semantic acquisition Szymanski J. Acquisition
Collaborative editing
Lexical semantic
Semantic dictionaries
Wordnet
English 2011 The article describes an approach for building a WordNet semantic dictionary in a collaborative paradigm. The presented system enables the gathering of lexical data in a Wikipedia-like style. The core of the system is a user-friendly interface based on a component for interactive graph navigation. The component has been used to present the WordNet semantic network on a web page, and it allows its content to be modified by a distributed group of people. 0 0
Creating and Exploiting a Hybrid Knowledge Base for Linked Data Zareen Syed
Tim Finin
Information extraction
Knowledge base
Linked data
Semantic web
Wikipedia
English 2011 Twenty years ago Tim Berners-Lee proposed a distributed hypertext system based on standard Internet protocols. The Web that resulted fundamentally changed the ways we share information and services, both on the public Internet and within organizations. That original proposal contained the seeds of another effort that has not yet fully blossomed: a Semantic Web designed to enable computer programs to share and understand structured and semi-structured information easily. We will review the evolution of the idea and technologies to realize a Web of Data and describe how we are exploiting them to enhance information retrieval and information extraction. A key resource in our work is Wikitology, a hybrid knowledge base of structured and unstructured information extracted from Wikipedia. 0 0
English-to-Korean cross-lingual link detection for Wikipedia Marigomen R.
Kang I.-S.
Keyword extraction
Wikipedia
Word sense disambiguation
English 2011 In this paper, we introduce a method for automatically discovering possible links between documents in different languages. We utilized the large collection of articles in Wikipedia as our resource for keyword extraction, word sense disambiguation, and the creation of a bilingual dictionary. Our system uses this set of methods so that, given an English text or input document, it automatically determines important words or phrases within the context and links them to corresponding Wikipedia articles in other languages. In this system we use the Korean Wikipedia corpus as the target for linking. 0 0
Enterprise wikis - Types of use, benefits and obstacles: A multiple-case study Stocker A.
Tochtermann K.
Case Study
Enterprise 2.0
Knowledge Sharing
Web 2.0
Wiki
English 2011 In this paper we present the results of our explorative multiple-case study investigating enterprise wikis in three Austrian cases. Our contribution was highly motivated by the ongoing discussion on Enterprise 2.0 in science and practice, and by the lack of well-grounded empirical research on how enterprise wikis are actually designed, implemented and, more importantly, utilized. We interviewed 7 corporate experts responsible for wiki operation and surveyed about 150 employees who are supposed to facilitate their daily business by using the wikis. The combination of qualitative data from the expert interviews and quantitative data from the user survey allows very interesting insights to be generated. Our cross-case analysis reveals commonalities and differences in usage motives, editing behaviour, individual and collective benefits, and obstacles, and, more importantly, derives a set of success factors to guide managers in future wiki projects. 0 0
Generation of hypertext for web-based learning based on wikification Lui A.K.-F.
Ng V.S.-C.
Tsang E.K.M.
Ho A.C.H.
Hypertext generation
Web-based learning
Wikification
Wikipedia
English 2011 This paper presents a preliminary study into the conversion of plain text documents into hypertext for web-based learning. The novelty of this approach is the generation of two types of hyperlinks: links to Wikipedia articles for exploratory learning, and self-referencing links for elaboration and references. Hyperlink generation is based on two rounds of wikification. The first round wikifies a set of source documents so that the wikified source documents can be semantically compared to Wikipedia articles using existing link-based measures. The second round of wikification then evaluates each hyperlink in the wikified source documents and checks if there is a semantically related source document that can replace the current target Wikipedia article. While a preliminary evaluation of a prototype implementation suggested that the approach is feasible, relatively few self-referencing links could be generated using a test set of course text. 0 0
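A minimal sketch of the second wikification round described above (not the authors' code): each hyperlink that currently points to a Wikipedia article is redirected to a source document if one is semantically close enough. The `relatedness` function is a hypothetical stand-in for the link-based measure the paper relies on.

```python
# Sketch under stated assumptions: relatedness, thresholds, and document ids are placeholders.
def rewrite_links(doc_links, source_docs, relatedness, threshold=0.7):
    """doc_links: mapping anchor text -> Wikipedia article title."""
    rewritten = {}
    for anchor, wiki_title in doc_links.items():
        best_doc, best_score = None, 0.0
        for doc_id in source_docs:
            score = relatedness(wiki_title, doc_id)
            if score > best_score:
                best_doc, best_score = doc_id, score
        # keep the Wikipedia link unless a source document is sufficiently related
        rewritten[anchor] = best_doc if best_score >= threshold else wiki_title
    return rewritten

# Toy relatedness: 1.0 when the document id contains the article title, else 0.0
toy_rel = lambda title, doc_id: 1.0 if title.lower() in doc_id.lower() else 0.0
print(rewrite_links({"recursion": "Recursion"}, ["unit3_recursion.txt"], toy_rel))
```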
Peer assessment using wiki to enhance their mastery of the Chinese language English 2011 0 0
Planning for a successful corporate wiki English 2011 0 0
Proof-of-concept design of an ontology-based computing curricula management system Tang A.
Abdur Rahman A.
Computing Ontology
Curricula development
Curriculum wiki
MAS-CommonKADS
Software agents
English 2011 The management of curricula development activities is time-consuming and labor-intensive. The accelerated nature of Computing technological advances exacerbates the complexity of such activities. A Computing Curricula Management System (CCMS) is proposed as a Proof-of-Concept (POC) design that utilizes an ontology as a knowledge source that interacts with a Curriculum Wiki facility through ontological agents. The POC design exploits agent-interaction models that had already been analyzed through the application of the Conceptualization and Analysis phases of the MAS-CommonKADS Agent-Oriented Methodology. Thereafter, the POC design of the CCMS is developed in the Design phase. The paper concludes with a discussion of the resulting contribution, limitations, and future work. 0 0
Sequential supervised learning for hypernym discovery from Wikipedia Litz B.
Langer H.
Malaka R.
Hidden Markov models
Hypernym discovery
Information extraction
Sequential supervised learning
Syntactic-semantic tagging
English 2011 Hypernym discovery is an essential task for building and extending ontologies automatically. In comparison with the whole Web as a source for information extraction, online encyclopedias provide far more structure and reliability. In this paper we propose a novel approach that combines syntactic and lexical-semantic information to identify hypernymic relationships. We compiled semi-automatically and manually created training data and a gold standard for evaluation from the first sentences of the German version of Wikipedia. We trained a sequential supervised learner with a semantically enhanced tagset. The experiments showed that the cleanliness of the data is far more important than its amount. Furthermore, it was shown that bootstrapping is a viable approach to improving the results. Our approach outperformed competitive lexico-syntactic patterns by 7%, leading to an F1-measure of over 0.91. 0 0
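For context, a minimal sketch of a lexico-syntactic baseline of the kind the paper compares against (not the sequential supervised learner itself): a regular expression that pulls a hypernym candidate out of a German definition sentence of the form "X ist ein(e) Y". The pattern and example are illustrative only.

```python
# Sketch under stated assumptions: a single toy "ist ein(e)" pattern, far from full coverage.
import re

PATTERN = re.compile(r"^(?P<term>.+?)\s+ist\s+ein(?:e|er)?\s+(?P<hypernym>\w+)", re.IGNORECASE)

def extract_hypernym(first_sentence):
    match = PATTERN.match(first_sentence)
    return (match.group("term"), match.group("hypernym")) if match else None

print(extract_hypernym("Ein Dackel ist eine Hunderasse."))   # -> ('Ein Dackel', 'Hunderasse')
```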
Developing personal learning environments based on calm technologies Fiaidhi J. English 2010 Educational technology is constantly evolving and growing, and it is inevitable that this progression will continually offer new and interesting advances in our world. The instigation of calm technologies for the delivery of education is another new approach now emerging. Calm technology aims to reduce the "excitement" of information overload by letting the learner select what information is at the center of their attention and what information needs to stay at the periphery. In this paper we report on the adaptation of calm technologies in an educational setting, with emphasis on the need to cater to the preferences of the individual learner in order to respond to the challenge of providing truly learner-centered, accessible, personalized, and flexible learning. Central to the calm computing vision is the notion of representing learning objects as widgets, harvesting widgets from the periphery based on semantic wikis, as well as widget garbage collection from the virtual/central learning memory. 0 0
Evaluation study of pedagogical methods and e - Learning material via web 2.0 for hearing impaired people Vrettaros J.
Argiri K.
Stavrou P.
Hrissagis K.
Drigas A.
Blogs
E-learning
Empirical study
Lip - reading
Social networking
Video - sign language
Web 2.0
Wiki
English 2010 The primary goal of this paper is to study whether Web 2.0 tools such as blogs, wikis, social networks, and typical hypermedia, as well as techniques such as lip-reading, video sign language, and learning activities, are appropriate for learning purposes for deaf and hard-of-hearing people. In order to check the extent to which the choices mentioned above are compatible with the features of the specific group and maximize the learning results, we designed an empirical study which is presented below. The study was conducted in the context of SYNERGIA, a Leonardo da Vinci project of the Lifelong Learning Programme, in the Multilateral Projects Transfer of Innovation section. The evaluation was conducted on data gathered through questionnaire analysis. 0 0
Learning assessment using wikis: Integrated or LMS independent? Forment M.A.
De Pedro X.
Casan M.J.
Piguillem J.
Galanis N.
E-Learning
Education
LMS
Social Learning
Wiki
English 2010 Wikis have a potentially huge educational value. This article outlines a feature set that wiki engines need in order to successfully host collaborative educational scenarios using wiki technology. One of the first issues to solve is the need for assessment methodologies supported by the software; the second is the choice between using an integrated wiki engine inside the Learning Management System (LMS) or an external standalone wiki engine. The advantages and disadvantages of both options for this second issue are discussed, with each choice presenting different implications as far as individual student assessment, feedback, and grading are concerned. Among the expected results, the most notable are incentives to incorporate wikis in the teaching procedure, significant enhancements in usability, as well as allowing teachers to provide more timely written feedback on their students' individual contributions to wiki-based activities, on top of the usual numerical grading. This paper presents the conclusions of five years of experience working with wikis in education and developing improvements to open-source wiki engines, and of building from scratch the new wiki engine for the LMS Moodle 2.0. 0 0
Let me tell you something about (Y)our culture? Mac An Airchinnigh M. Digital re-discovery of culture
Empathy
Keyimage
Memory
Museum of innocence
Ontology
English 2010 Each person is born into a culture that is mediated by the mother tongue. Further development of the person is often associated with schooling and education. At an early age some persons will come into contact with other cultures especially if living in a cosmopolitan city or through frequent travel. Such intercultural contact consists of exposure to another tongue, initially aural, and images of the other, perhaps in the form of dress, or architecture, and so on. In the digital world of 2010 those who surf the electronic wave constantly dip in and out of many cultures. Those who normally use Wikipedia in English might over time also refer to a version of an article in another tongue. Those who are frequent users of YouTube might be curious enough to watch a video clip in Turkish or in Greek as well as the usual English, in the context of a history lesson in school. Culture in the digital world needs to be supported and sustained. Are you looking for something? Try Google or Bing or...You have found something you want to share? Post a video clip, or a photograph, or a piece of music. But how shall we keep track of this digital culture? Why would we want to? In this paper we will address the fundamental problem of how to manage cultural information in an integrated fashion in the world of Art. To be specific we will use Bulgarian Art to inform one aspect of Turkish culture. 0 0
Mining relations between wikipedia categories Szymanski J. English 2010 The paper concerns the problem of automatic category system creation for a set of documents connected with references. The presented approach has been evaluated on the Polish Wikipedia, where two graphs have been analyzed: the Wikipedia category graph and the article graph. The links between Wikipedia articles have been used to create a new category graph with weighted edges. We compare the created category graph with the original Wikipedia category graph, testing its quality in terms of coverage. 0 0
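A minimal sketch of one plausible way to derive a weighted category graph from article links (not necessarily the paper's exact construction): every link between two articles adds weight between every pair of their categories. The article titles and categories below are toy placeholders.

```python
# Sketch under stated assumptions: toy data, undirected weighted category edges.
from collections import Counter
from itertools import product

def category_graph(article_links, article_categories):
    """article_links: (source, target) pairs; article_categories: article -> set of categories."""
    edge_weights = Counter()
    for src, dst in article_links:
        for c1, c2 in product(article_categories.get(src, ()), article_categories.get(dst, ())):
            if c1 != c2:
                edge_weights[frozenset((c1, c2))] += 1
    return edge_weights

links = [("Pies", "Wilk"), ("Wilk", "Las")]                       # toy Polish article titles
cats = {"Pies": {"Ssaki"}, "Wilk": {"Ssaki", "Drapiezne"}, "Las": {"Ekosystemy"}}
print(category_graph(links, cats))
```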
Semantic relatedness approach for named entity disambiguation Gentile A.L.
Zhang Z.
Linsi Xia
Iria J.
English 2010 Natural language is a means of expressing and discussing concepts, objects, and events, i.e., it carries semantic content. One of the ultimate aims of Natural Language Processing techniques is to identify the meaning of the text, providing effective ways to make a proper linkage between textual references and their referents, that is, real world objects. This work addresses the problem of giving a sense to proper names in a text, that is, automatically associating words representing Named Entities with their referents. The proposed methodology for Named Entity Disambiguation is based on semantic relatedness scores obtained with a graph-based model over Wikipedia. We show that, without building a bag-of-words representation of the text, but only considering the named entities within the text, the proposed paradigm achieves results competitive with the state of the art on two different datasets. 0 0
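To illustrate the relatedness-driven idea (not the authors' graph model), the sketch below picks, for one mention, the candidate entity with the highest average relatedness to the other entities in the text. The `relatedness` function and its scores are hypothetical.

```python
# Sketch under stated assumptions: toy relatedness scores, single-mention disambiguation.
def disambiguate(candidates, context_entities, relatedness):
    """candidates: list of possible Wikipedia pages for one mention."""
    def avg_rel(candidate):
        if not context_entities:
            return 0.0
        return sum(relatedness(candidate, e) for e in context_entities) / len(context_entities)
    return max(candidates, key=avg_rel)

toy_scores = {("Java (programming language)", "Python (programming language)"): 0.9,
              ("Java (island)", "Python (programming language)"): 0.1}
rel = lambda a, b: toy_scores.get((a, b), toy_scores.get((b, a), 0.0))
print(disambiguate(["Java (programming language)", "Java (island)"],
                   ["Python (programming language)"], rel))
```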
Semantically enriched tools for the knowledge society: Case of project management and presentation Talas J.
Gregar T.
Pitner T.
Fresnel
Project management
RDF
Semantic wiki
Web 2.0
English 2010 Working with semantically rich data is one of the stepping stones to the knowledge society. In recent years, gathering, processing, and using semantic data have made great progress, particularly in the academic environment. However, the advantages of semantic description remain commonly undiscovered by the "common user", including people from academia and the IT industry who could otherwise profit from the capabilities of contemporary semantic systems in the areas of project management and/or technology-enhanced learning. Mostly, the root cause lies in the complexity and non-transparency of mainstream semantic applications. The semantic tool for project management and presentation consists mainly of a module for the semantic annotation of wiki pages integrated into the project management system Trac. It combines the dynamic, easy-to-use nature and applicability of a wiki for project management with the advantages of a semantically rich and accurate approach. The system is released as open source (OS) and is used to manage student and research projects at the authors' research lab. 0 0
The implications of information democracy and digital socialism for public libraries Oguz E.S.
Kajberg L.
Collective intelligence
Information democracy
Public libraries
Web 2.0
English 2010 In these times, public libraries in many countries have increasingly come under pressure from developments within the information landscape. Thus, not least because of the massive digitization of information resources, the proliferation and popularity of search engines, in particular Google, and the booming technologies of Web 2.0, public libraries find themselves in a very complex situation. In fact, the easy-to-use technologies of Web 2.0 challenge the basic principles of information services provision undertaken by libraries. The new digital information environment and social software tools such as blogs, wikis and social networking sites have fuelled a discussion of the future of public libraries as information providers. After all, there seems to be a need for public libraries to reorient their aims and objectives and to redefine their service identity. At the same time, search engines, and especially Google, are increasingly coming under scrutiny. Thus, the analysis results referred to here show that the conception of information and the underlying purpose of Google differ from those of public libraries. Further, an increasing amount of criticism is being directed at collaborative spaces (typically Wikipedia) and social networks (e.g. MySpace), and it is pointed out that these social media are not as innocent and unproblematic as they may seem. In discussing the survival of public libraries and devising an updated role for libraries in the age of Google and social media, attention should be given to fleshing out a new vision for the public library as a provider of alternative information and as an institution supporting information democracy. 0 0
Using a semantic wiki for documentation management in very small projects English 2010 0 0
Using wikis to learn computer programming Gonzalez-Ortega D.
Diaz-Pernas F.J.
Martinez-Zarzuela M.
Anton-Rodriguez M.
Diez-Higuera J.F.
Boto-Giralda D.
De La Torre-Diez I.
Collaborative learning.
Computer programming learning
Wiki
English 2010 In this paper, we analyze the suitability of wikis in education, especially for learning computer programming, and present a wiki-based teaching innovation activity carried out in the first course of Telecommunication Engineering over two academic years. The activity consisted of the creation of a wiki to collect errors made by students while they were coding programs in the C language. The activity was framed within a collaborative learning strategy, in which all the students had to collaborate and be responsible for the final result, but also within a competitive learning strategy, in which the groups had to compete to make original, meaningful contributions to the wiki. The use of a wiki for learning computer programming was very satisfactory. A wiki allows the continuous monitoring of the students' work; the students become publishers and evaluators of content rather than mere consumers of information, in an active learning approach. 0 0
An empirical study on the use of web 2.0 by Greek adult instructors in educational procedures Vrettaros J.
Tagoulis A.
Giannopoulou N.
Drigas A.
Blogs
E-learning
Empirical study
Facebook
Social software
Web 2.0
Wiki
Youtube
English 2009 This paper presents an empirical study and its results. The empirical study was designed around a pilot training program which was conducted in order to learn whether, and to what extent, Greek educators can learn to use and even adopt Web 2.0 tools and services in the educational process, whether the type of learning is distance learning, blended learning, or learning that takes place in the traditional classroom. 0 0
Quantitative analysis of the top ten wikipedias Felipe Ortega
Gonzalez-Barahona J.M.
Gregorio Robles
Collaborative development
Growth metrics
Quantitative analysis
Wikipedia
English 2008 In a few years, Wikipedia has become one of the information systems with the largest audience on the Internet. Based on a relatively simple architecture, it has proven to be capable of supporting the largest and most diverse community of collaborative authorship worldwide. Using a quantitative methodology (analyzing public Wikipedia databases), we describe the main characteristics of the 10 largest language editions and the authors that work on them. The methodology is generic enough to be used on the rest of the editions, providing a convenient framework for developing a complete quantitative analysis of Wikipedia. Among other parameters, we study the evolution of the number of contributions and articles, their size, and the differences in contributions by different authors, inferring some relationships between contribution patterns and content. These relationships reflect (and in part explain) the evolution of the different language editions so far, as well as their future trends. 0 0