Harvesting models from web 2.0 databases
|Author(s)||Diaz O., Puente G., Canovas Izquierdo J.L., Garcia Molina J.|
|Published in||Software and Systems Modeling|
|Keyword(s)||Data re-engineering, Databases, Harvesting, Model-driven engineering, Web2.0 (Extra: Competitive advantage, Database schemas, Domain-specific languages, Extraction process, MediaWiki, Metamodel, Model extraction, Semantic markup, Third-party applications (apps), Competition, Database systems, Engines, Internet, Problem-oriented languages, Semantics, Social networking (online), Web services, Digital storage)|
Data rather than functionality are the sources of competitive advantage for Web2.0 applications such as wikis, blogs and social networking websites. This valuable information might need to be capitalized by third-party applications or be subject to migration or data analysis. Model-Driven Engineering (MDE) can be used for these purposes. However, MDE first requires obtaining models from the wiki/blog/website database (a.k.a. model harvesting). This can be achieved through SQL scripts embedded in a program. However, this approach leads to laborious code that exposes the iterations and table joins that serve to build the model. By contrast, a Domain-Specific Language (DSL) can hide these "how" concerns, leaving the designer to focus on the "what", i.e. the mapping of database schemas to model classes. This paper introduces Schemol, a DSL tailored for extracting models out of databases which considers Web2.0 specifics. Web2.0 applications are often built on top of general frameworks (a.k.a. engines) that set the database schema (e.g., MediaWiki, Blojsom). Hence, table names offer little help in automating the extraction process. In addition, Web2.0 data tend to be annotated. User-provided data (e.g., wiki articles, blog entries) might contain semantic markups which provide helpful hints for model extraction. Unfortunately, these data end up being stored as opaque strings. Therefore, there exists a considerable conceptual gap between the source database and the target metamodel. Schemol offers extractive functions and view-like mechanisms to confront these issues. Examples using Blojsom as the blog engine are available for download.
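To make the contrast concrete, the following is a minimal sketch of the embedded-SQL harvesting approach the abstract criticizes: the iterations and table joins that build the model must be spelled out by hand. The table and column names are illustrative only (not the actual Blojsom schema), and the model is represented as plain dictionaries rather than a real metamodel; Schemol's own syntax is not shown here.

```python
import sqlite3

# Illustrative blog schema (hypothetical names, not the real Blojsom tables).
SETUP = """
CREATE TABLE entry   (id INTEGER PRIMARY KEY, title TEXT, blog_id INTEGER);
CREATE TABLE comment (id INTEGER PRIMARY KEY, entry_id INTEGER, author TEXT);
"""

def harvest(conn):
    """Embedded-SQL model harvesting: the 'how' concerns (loops, joins,
    object assembly) are explicit in the program, which is what a DSL
    like Schemol would hide behind schema-to-class mappings."""
    model = []
    for entry_id, title in conn.execute("SELECT id, title FROM entry"):
        # A second, hand-written query per entry realizes the join
        # between entries and their comments.
        comments = [
            {"author": author}
            for (author,) in conn.execute(
                "SELECT author FROM comment WHERE entry_id = ?", (entry_id,)
            )
        ]
        model.append({"class": "Post", "title": title, "comments": comments})
    return model

conn = sqlite3.connect(":memory:")
conn.executescript(SETUP)
conn.execute("INSERT INTO entry VALUES (1, 'Hello MDE', 1)")
conn.execute("INSERT INTO comment VALUES (1, 1, 'alice')")
print(harvest(conn))
```

Even in this tiny example, the traversal order and the join condition are baked into imperative code; a declarative mapping would instead state only that `entry` rows become `Post` instances and `comment` rows become their contained comments.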
This publication is likely cited by others, but no citing articles are available in WikiPapers. Cited 4 time(s).