ICSOFT 2010 Abstracts


Area 1 - Enterprise Software Technology

Short Papers
Paper Nr: 95
Title:

A VIDEO SURVEILLANCE MODEL INTEGRATION IN SMALL AND MEDIUM ENTERPRISES

Authors:

Dan Benta and Stefan I. Nitchi

Abstract: The rapid evolution of the Internet inevitably leads toward the adoption of new technologies. To survive in increasingly fierce competition, companies must keep pace with new trends and developments. Software solutions should be adapted without changing the structure of the company. A common case is the classical surveillance and monitoring system that switches from analogue to digital and from a system accessed inside the company to a system accessed via IP. Wireless technologies are used on a large scale; they are flexible, cheap, and accessible, and they require no wiring or other disruptive elements. The evolution of wireless communications has been closely tied to the development of communication networks. A video surveillance and monitoring system (VSaMS) is a useful tool for securing targets or premises and for monitoring activity within a perimeter. Access to the system allows real-time monitoring, recording, and access to recordings. The choice of standard depends on the location, the geographical area, and the equipment. The aim of this paper is to highlight existing standards and solutions and to propose a model for a VSaMS. Experimental results are also presented.

Paper Nr: 98
Title:

THE INFERENCE EFFICIENCY PROBLEM IN BUSINESS AND TECHNOLOGICAL RULES MANAGEMENT SYSTEMS

Authors:

Andrzej Macioł and Barbara Baster

Abstract: In this paper we present the results of our work on the development of a rule engine for the automated interpretation of business rules. Our experience and experimental results show that the knowledge description for this purpose may be stored in the form of relational databases. The aim of the experimental research presented in this paper was to determine the degree to which the organization of the knowledge base and the assumed inference strategy influence the efficiency of the inference process itself. Experiments proved that, owing to the application of mechanisms characteristic of relational databases, a knowledge base can easily be arranged so as to maximize the efficiency of inference. The efficiency of inference is strongly influenced by a preliminary transformation of the knowledge from a set of examples or random rules into an arranged form.

Paper Nr: 102
Title:

PARALLELISM, ADAPTATION AND FLEXIBLE DATA ACCESS IN ON-DEMAND ERP SYSTEMS

Authors:

Vadym Borovskiy

Abstract: On-premise enterprise resource planning (ERP) systems are costly to maintain and adapt to specific needs. To lower the cost of ERP systems, an on-demand consumption model can be employed. This requires ERP systems to support multi-tenancy and multi-threading to enable the consolidation of multiple businesses onto the same operational system. To simplify the adaptation of ERP systems to customer-specific requirements, such systems must natively support extensions, meaning that customer-specific behavior must be factored out from the system and placed into an extension module. In this paper we propose the architecture of an ERP system that i) exploits parallelism and ii) is able to accommodate custom requirements by means of enterprise composite applications. We emphasize the importance of ERP data accessibility and contribute the concept of a business object query language that allows fine-grained queries to be built. All suggestions made in the paper have been prototyped.

Paper Nr: 149
Title:

Towards the Automatic Identification of Violations to the Normalized Systems Design Theorems

Authors:

Kris Ven, David Bellens, Philip Huysmans and Dieter Van Nuffel

Abstract: Contemporary organizations are operating in increasingly volatile environments and must be able to respond quickly to changes in their environment. Given the importance of information technology within organizations, the evolvability of information systems will to a large degree determine how quickly organizations are able to react to such changes. Unfortunately, current information systems struggle to provide the required levels of evolvability. Recently, the Normalized Systems approach has been proposed to address this issue. The Normalized Systems approach is based on the systems-theoretic concept of stability to ensure the evolvability of information systems. To this end, it proposes four design theorems that act as constraints on the modular structure of software. In this paper, we explore the feasibility of building a tool that is able to automatically identify manifestations of violations of these Normalized Systems design theorems in the source code of information systems. This would help organizations identify limitations to the evolvability of their information systems. We describe how a prototype of such a tool was developed, and illustrate how it can help to analyze the source code of an existing application.

Paper Nr: 153
Title:

FEATURE ASSEMBLY MODELLING: A NEW TECHNIQUE FOR MODELLING VARIABLE SOFTWARE

Authors:

Lamia Abo Zaid, Olga De Troyer and Frederic Kleinermann

Abstract: For over two decades, feature modelling techniques have been used in the software research community for domain analysis and the modelling of variable software. However, feature modelling has not found its way into industry. In this paper we present a new feature modelling technique, developed in the context of a new approach called Feature Assembly, which overcomes some of the limitations of current feature modelling techniques. We use a multi-perspective approach to deal with the complexity of large systems, we provide a simpler and easier-to-use modelling language, and, last but not least, we separate the variability specifications from the feature specifications, which allows features to be reused in different contexts.

Paper Nr: 180
Title:

Towards Modeling Large Scale Project Execution Monitoring: Project Status Model

Authors:

Ciprian-Leontin Stanciu, Dacian Tudor and Vladimir-Ioan Cretu

Abstract: Software projects are problematic given their high overruns in execution time and budget. In large-scale projects, monitoring is a very difficult task, due to the very complex relations between resources and constraints, and must be based on a well-established methodology. The outputs of the monitoring process refer mostly to the current status of the project, which must be reflected as accurately as possible. We propose a model for project status determination. It is a sub-model of a future monitoring model that is the subject of our current research. The project status model considers not only the perspective of the project manager, which defines the macro-universe of the project, but also the perspective of every worker involved in the project, each of whom can be seen as the manager of their assigned tasks, which defines the micro-universe of the worker.

Paper Nr: 221
Title:

CONTEXT-AWARE SHARING CONTROL USING HYBRID ROLES IN INTER-ENTERPRISE COLLABORATION

Authors:

Ahmad Kamran Malik

Abstract: In enterprise-based collaborations, humans working in dynamic overlapping teams controlled by their respective enterprises share personal and team-related context to accomplish their activities. The privacy of personal context becomes vital in this scenario. Personal context contains information that a user may not want to share, for example, her current location and current activity. We propose a role-based dynamic sharing control model which is owner-centric and extends the role-based access control model. We provide privacy of the owner's personal context by separating it from team-related context through the use of owner-defined roles. The owner has full control of her personal data and is able to dynamically change her own access rules when facing any new situation. We describe a role-based dynamic sharing control architecture which makes use of enterprise-defined roles as well as owner-defined roles to separate user context from team context. We evaluate our approach by providing a real-world scenario, a running example, and an implementation as a sharing control messenger using Web services in Java.

Paper Nr: 234
Title:

ONTOLOGY BASED INTEGRATION OF TRAINING SYSTEMS, The electrical power production operators domain

Authors:

Ricardo Molina-González, Guillermo Rodriguez, Víctor León and Israel Paredes

Abstract: An ontology based approach to loosely integrate independent training management systems is presented. The three systems are: the traditional training management system, the labour skills management system and the talent and innovation management system. The method first represents the data of each of the three independent systems using a simplified ontology structure, and then the integration relationships among the systems are specified and implemented.

Paper Nr: 237
Title:

Information Microsystems

Authors:

Jordi Pradel Miquel, Jose Raya and Xavier Franch

Abstract: Given their need to manage the information they have under control, organizations usually choose between two types of widely used IT solutions: 1) information systems based on databases (DBIS), which are powerful but expensive to develop and not very flexible; 2) spreadsheets, which threaten the integrity of the data and are limited in exploiting it. In this paper we propose a new type of IT solution, namely information microsystems (MicroIS), that aims at reconciling the best of these two worlds: the low development and maintenance costs, ease of use and flexibility of spreadsheets with the structure, semantics and integrity of DBIS. The goal is not to replace either of the two paradigms but to lie somewhere in between, depending on the changing needs of the organization. Of the various possible points of interest of this IT solution, the article focuses specifically on issues related to data management, introducing the conceptual model of MicroIS, the transformations and validations that can be performed around it, and the way in which the structure of the information is inferred from the data that users provide.

Paper Nr: 253
Title:

A METAMODEL INTEGRATION FOR METRICS AND PROCESSES CORRELATION

Authors:

Xabier Larrucea and Eider Iturbe

Abstract: Nowadays organizations need to improve their efficiency, mainly due to the current economic situation. Several organizations are involved in process improvement initiatives in order to become more competitive in the market. These initiatives require process definition and performance measurement activities. This paper briefly describes a metamodel integration between a metrics metamodel and software and business execution metamodels in order to support this kind of improvement initiative. In fact, this integration implies coherently controlling the Software Metrics Metamodel for metrics, the Software Process Engineering Metamodel 2.0 for defining processes, and the JBPM Process Definition Language for executing processes. This approach is supported by a prototype.

Paper Nr: 269
Title:

CONSTRUCTING AND EVALUATING SUPPLY-CHAIN SYSTEMS IN A CLOUD-CONNECTED ENTERPRISE

Authors:

Ethan Hadar and Donald Ferguson

Abstract: An enterprise that obtains its IT services from the cloud, and optionally provides some of these services to its customers via the cloud, is what we define as a cloud-connected enterprise (CCE). Consumption of IT services from the cloud and provisioning of services to the cloud define an IT supply-chain environment. Considering conceptually similar offerings from different vendors is economically attractive, as specialization in services increases the quality and cost-effectiveness of the service. The overall value of a service is composed of characteristics that may be summarized as QARCC: Quality, Agility, Risk, Capability and Cost. Trade-offs between implementing services internally and consuming services externally may depend on these characteristics and their sub-characteristics. Regardless of the origin of the services or sub-services, we propose that the construction or consumption of the solution should follow a dedicated cloud-oriented lifecycle for managing such services. The proposed incremental and iterative process fosters an agile approach of refactoring and optimization. It is based on the assumption that services change their QARCC characteristics over time due to emerging opportunities to replace sub-components. It is designed to operate in internal clouds as well as external and hybrid ones.

Paper Nr: 89
Title:

MODELLING COLLABORATIVE SERVICES: The COSEMO model

Authors:

Thoa Pham, Michel Léonard, Thang Le Dinh and Markus Helfert

Abstract: Despite the dominance of the service sector over the last decades, there is still a need for a strong foundation for service design and innovation. Little attention has been paid to service modelling, particularly in the collaboration context. Collaboration is considered one of the solutions for surviving or sustaining a business in a highly competitive environment. Collaborative services require various service providers to work together according to agreements between them, along with service consumers, in order to co-produce services. In this paper, we address crucial issues in collaborative services such as collaboration levels and the sharing of data and processes due to business interdependencies between service stakeholders. Subsequently, we propose a model for collaborative service modelling, the COSEMO model, which is able to cover the identified issues. We also apply our proposed model to the modelling of an example travelling service in order to illustrate the relevance of our modelling approach to the matter at hand.

Paper Nr: 99
Title:

APPLICATION OF RULES ENGINES IN TECHNOLOGY MANAGEMENT

Authors:

Barbara Baster, Andrzej Maciol and Bogdan Rebiasz

Abstract: In this paper we present the initial results of our research aimed at developing a tool which benefits from the virtues of BRMS and enables the support of technological decisions. Our task focused on the preparation of a set of use cases along with precise descriptions of the rules used for solving specific decision problems. For this purpose, two decision problems were analysed, covering issues such as the selection of feedstock and executive production planning. These problems were analysed in the context of a company producing cold-rolled strips in a wide dimension range and diversified grades of steel. The general conclusion, which answers the question of whether it is possible to create a tool similar to a BRE but capable of supporting technological decisions, is that it is necessary to combine two forms of knowledge representation: declarative and procedural. It is also necessary to ensure the possibility of communication between this type of instrument and external data sources as well as various types of IT tools supporting specific technological decisions.

Paper Nr: 205
Title:

COMMON SERVICES FRAMEWORK

Authors:

Louis Hoebel, Michael Kinstrey and Jeanette Bruno

Abstract: The Common Services Framework (CSF) was developed by GE's Global Research Center (GRC) as a design pattern and framework for application development. The CSF comprises a set of service-oriented APIs and components that implement the design pattern. GE GRC supports a wide diversity of R&D for GE and external customers. The motivation was a reusable, extensible, domain- and implementation-agnostic framework that could be applied across various research projects and production applications. The CSF has been developed for use in finance, diagnostics, logistics and healthcare. The design pattern is an extension of the Model-View-Controller pattern, and the reference implementation is in Java.

Paper Nr: 265
Title:

TOWARDS THE INTEGRATION OF BIOINFORMATICS DATA AND SERVICES USING SOA AND MASHUPS

Authors:

Elarbi Badidi and Larbi Esmahi

Abstract: Worldwide biological research activities are generating publicly available biological data at a phenomenal pace. Data is usually stored in different formats (FASTA, GenBank, EMBL, XML, etc.). Therefore, retrieving, analyzing, parsing, and integrating these heterogeneous data require substantial programming expertise and effort that most scientists lack. Bioinformaticians have considered several approaches to integrating heterogeneous data and software applications. Most of these integration approaches require significant computer skills. Recently, a new technology, called mashups, has emerged to simplify this integration. In this paper, we discuss widely used approaches for integrating data and applications in bioinformatics and our ongoing effort to use mashups in conjunction with Service-Oriented Architecture (SOA) for integrating data and applications in the life sciences.

Area 2 - Software Engineering

Full Papers
Paper Nr: 17
Title:

Work Product-driven Software Development Methodology Improvement

Authors:

Graham Low, Ghassan Beydoun, Brian Henderson-Sellers and Paul Bogg

Abstract: A work product is a tangible artifact used during a software development project; for example, a requirements specification or a class model diagram. Towards a general approach for evaluating and potentially improving the quality of methodologies, this paper proposes utilizing a work product-based approach to method construction, known as the "work product pool" approach to situational method engineering, to accomplish this quality improvement. Starting from the final software application and identifying work product prerequisites by working backwards through the methodology process, work product interdependencies are revealed. Using method fragments from a specific methodology (here, MOBMAS), we use this backward-chaining approach to effectively recreate that methodology. Evaluation of the artificially recreated methodology allows the identification of missing and/or extraneous method elements and of where process steps could be improved.

Paper Nr: 29
Title:

A REQUIREMENTS METAMODEL FOR RICH INTERNET APPLICATIONS

Authors:

Maria Jose Escalona, Esteban R. Luna and Gustavo Rossi

Abstract: The evolution of the Web has motivated the development of several Web design approaches to support the systematic building of Web software. Alongside constant technological advances, these methods must be continually improved to deal with a myriad of new feasible application features. In this paper we focus on the field of Rich Internet Applications (RIA); specifically, we aim to offer a solution for the treatment of Web requirements in RIA development. To this end we present WebRE+, a requirements metamodel which incorporates RIA features into the modelling repertoire. We illustrate our ideas with a meaningful example of a business intelligence application.

Paper Nr: 40
Title:

How developers test their Open Source Software products. A survey of well-known OSS projects

Authors:

Davide Tosi and Abbas Tahir

Abstract: Open Source Software (OSS) projects do not usually follow the traditional software engineering development paradigms found in textbooks, which influences the way OSS developers test their products. In this paper, we explore a set of 33 well-known OSS projects to identify how software quality assurance is performed under the OSS model. The survey investigates the main characteristics of the projects and common testing issues to understand whether a correlation exists between the complexity of a project and the quality of its testing activity. We compare the results obtained in our survey with the data collected in a previous survey by L. Zhao and S. Elbaum. Our results confirm that OSS is usually not validated enough and that its quality is therefore not sufficiently demonstrated. To reverse this negative trend, the paper suggests the use of a testing framework that can support most of the phases of a well-planned testing activity, and describes the use of Aspect-Oriented Programming (AOP) to expose dynamic quality attributes of OSS projects.

Paper Nr: 93
Title:

USING AToM3 FOR THE VERIFICATION OF WORKFLOW APPLICATIONS

Authors:

Amin B. Achouri, Leila Jemni Ben Ayed and Ahlem Ben Younes

Abstract: In this paper, we propose an approach for the verification of workflow applications using AToM3 and Event B. Workflows describe applications in which many actors take part and cooperate in order to execute operations. When composing those operations, problems such as deadlock and livelock might appear. In this context, we show how to build a meta-model for the UML activity diagram in AToM3. From this meta-model, AToM3 generates a visual tool to build and specify workflow applications, where syntactical verification is performed. Furthermore, we define a graph grammar to generate textual code from the graphically specified workflow. This code maintains information about all the activities and their dependencies. Another role of the graph grammar is to generate an Event B machine used for the verification of the workflow. Structural errors like deadlock and the absence of synchronization can be detected in the resulting Event B model.

Paper Nr: 125
Title:

A Programming Language to Facilitate the Transition from Rapid Prototyping to Efficient Software Production

Authors:

Francisco Ortin, Daniel Zapico and Miguel Garcia

Abstract: Dynamic languages are becoming increasingly popular for developing different kinds of applications, rapid prototyping being one of the scenarios where they are widely used. The dynamism offered by dynamic languages is, however, counteracted by two main limitations: no early type error detection and fewer opportunities for compiler optimizations. To obtain the benefits of both dynamically and statically typed languages, we have designed the StaDyn programming language to provide both approaches. Our language implementation keeps gathering type information at compile time, even when dynamic references are used. This type information is used to offer compile-time type error detection, direct interoperation between static and dynamic code, and better runtime performance. Following the Separation of Concerns principle, dynamically typed references can easily be turned into statically typed ones without changing the application source code, facilitating the transition from rapid prototyping to efficient software production. This paper describes the key techniques used in the implementation of StaDyn to obtain these benefits.

Paper Nr: 140
Title:

Application of service-oriented computing and model-driven development paradigms to business processes: a systematic review

Authors:

Andrea Delgado, Francisco Ruiz, Ignacio G. Rodríguez De Guzmán and Mario Piattini

Abstract: To achieve the defined value for their businesses, current organizations need to manage their business processes in an integrated manner, interconnecting the software systems that support these processes. Over the last few years, new paradigms have appeared to respond to this and other organizational and software needs: Business Process Management (BPM) and Service-Oriented Computing (SOC) which are closely interconnected. Additionally, the Model-Driven Development (MDD) paradigm has been called upon to play an important role in supporting business process implementation by software services. BPM handles the management of business processes, including their modelling, deployment, execution, analysis and improvement. Service-Oriented Computing bases software development on services, which correspond to business concepts and are created in order to perform business processes. Model-Driven Development promotes software development based on models which enable, among other things, transformations and the automatic generation of code for different platforms. With the aim of establishing the bases for research into the integration of these paradigms to support business process management in organizations, a systematic review was carried out, focusing on the current state of the literature concerning the application of service-oriented and model-driven paradigms to business processes.

Paper Nr: 163
Title:

A MODEL-BASED NARRATIVE USE CASE SIMULATION ENVIRONMENT

Authors:

Veit Hoffmann and Horst Lichter

Abstract: Since their introduction, use cases have been one of the most widespread techniques for the specification of functional requirements. Low-quality use cases often cause serious problems in later phases of the development process. Simulation of the behavior described in use cases may be an important technique for assuring the quality of use case descriptions. In this paper we present a model-based simulation environment for narrative use cases. First, we motivate the core requirements of a simulation environment and an underlying execution model. Moreover, we describe our model-based simulation approach and present some first experiences.

Paper Nr: 181
Title:

Trace Transformation Reuse to Guide Co-Evolution of Models

Authors:

Bastien Amar, Philippe Dhaussy, Bernard Coulette and Hervé Leblanc

Abstract: With the advent of languages and tools dedicated to model-driven engineering (e.g., ATL, Kermeta, EMF), as well as reference metamodels (MOF, Ecore), model-driven development processes can be used more easily. These processes are based on a large range of closely interrelated models and transformations covering the whole software development lifecycle. When a model is transformed, designers must re-implement the transformation choices for all the related models, which raises consistency problems. To prevent this, we propose trace transformation reuse to guide the co-evolution of models. The contribution of this paper is a conceptual framework in which repercussion transformations can be easily deployed. The maturity of a software engineering technology should be evaluated by the use of traceability practices.

Paper Nr: 186
Title:

Constraint Reasoning In FocalTest

Authors:

Matthieu Carlier, Arnaud Gotlieb and Catherine Dubois

Abstract: Property-based testing implies selecting test data satisfying coverage criteria on user-specified properties. However, current automatic test data generation techniques adopt direct generate-and-test approaches for this task. In FocalTest, a testing tool designed to generate test data for programs and properties written in the functional language Focal, test data are generated at random and rejected when they do not satisfy the selected coverage criteria. In this paper, we improve FocalTest with a test-and-generate approach, through the use of constraint reasoning. A particular difficulty is the generation of test data satisfying MC/DC on the precondition of a property when it contains function calls with pattern matching and higher-order functions. Our experimental results show that a non-naive implementation of constraint reasoning on these constructions outperforms traditional generation techniques when used to find test data for testing properties.

Paper Nr: 188
Title:

TOWARDS A HACKER ATTACK REPRESENTATION METHOD

Authors:

Peter Karpati, Andreas L. Opdahl and Guttorm Sindre

Abstract: Security must be addressed at an early stage of information systems development, and one must learn from previous hacker attacks to avoid similar exploits in the future. Many security threats are hard to understand for stakeholders with a less technical background. To address this issue, we present a five-step method that represents hacker intrusions diagrammatically. It lifts specific intrusions to a more general level of modelling and distils them into threats that should be avoided by a new or modified IS design. It allows involving different stakeholder groups in the process, including non-technical people who prefer simple, informal representations. For this purpose, the method combines five different representation techniques that together provide an integrated view of security attacks and system architecture. The method is illustrated with a real intrusion from the literature, and its representation techniques are tied together as a set of extensions of the UML metamodel.

Paper Nr: 229
Title:

Model-Driven Deployment of Distributed Components-based Software

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Due to architecture and environment complexity, the life cycle of distributed component-based software systems raises new challenges, hence the increased need for new techniques and tools to manage these systems. This paper focuses on software deployment, which is emerging as a new research field. Deployment is a complex process gathering the activities needed to make applications operational after development. The goal of this paper is to define a unified meta-modeling architecture for the deployment of distributed component-based software systems. To illustrate the feasibility of the approach, we present a tool called UDeploy (Unified Deployment architecture) which manages, firstly, the planning process from meta-information related to the application and the infrastructure; secondly, the generation of specific deployment descriptors related to the application and the environment (i.e. the machines connected to a network where a software system is deployed); and finally, the execution of a plan produced by means of the deployment strategies used for elaborating a deployment plan.

Short Papers
Paper Nr: 15
Title:

TOWARDS A META-MODEL FOR WEB SERVICES’ PREFERENCES

Authors:

Ghazi Al-Khatib, Said Elnaffar and Zakaria Maamar

Abstract: This paper presents a meta-model for describing the preferences of Web services. Two types of preferences are examined, namely privacy and membership. Privacy restricts the data that Web services exchange, and membership restricts the peers that Web services interact with. Both types have risen to prominence lately in response to the open and dynamic nature of the Internet. While most research work on Web services has been driven by the concerns of users, this paper stresses the concerns of the providers of Web services. Different meta-classes like Web service, functionality, and WSDL are included in the meta-model for Web services' preferences. To guarantee the satisfaction of these preferences, policies are developed in this paper.

Paper Nr: 38
Title:

X-Fee - An extensible Framework for Providing Feedback in the Internet of Services

Authors:

Anja Strunk

Abstract: In the Internet of Services, there is a big demand for feedback about services and their attributes. For example, on service marketplaces, feedback is used by service discovery to help a service user find the right service; at runtime, feedback is employed to detect and compensate for errors. Thus, the research community has suggested a large number of techniques to make feedback available. However, there is a lack of adequate feedback frameworks for implementing these techniques. In this paper we present the feedback framework X-Fee, which is highly extensible, flexible and interoperable, allowing feedback components to be easily realized and integrated into arbitrary infrastructures in the Internet of Services.

Paper Nr: 48
Title:

What is wrong with AOP?

Authors:

Adam Przybylek

Abstract: Modularity is a key concept that programmers wield in their struggle against the complexity of software systems. The implementation of crosscutting concerns in a traditional programming language (e.g. C, C#, Java) results in software that is difficult to maintain and reuse. Although modules have taken many forms over the years from functions and procedures to classes, no form has been capable of expressing a crosscutting concern in a modular way. The latest decomposition unit to overcome this problem is an aspect promoted by aspect-oriented programming (AOP). The aim of this paper is to review AOP within the context of software modularity.

Paper Nr: 62
Title:

Design Patterns with AspectJ, generics, and reflective programming

Authors:

Adam Przybylek

Abstract: Over the past decade, there has been a lot of interest in aspect-oriented programming (AOP). Hannemann and Kiczales developed AspectJ implementations of the Gang-of-Four (GoF) design patterns. Their study was continued by Hachani, Bardou, Borella, and others. However, no one has tried to improve the implementations by using generics or reflective programming. This research addresses that issue. As a result, highly reusable implementations of Decorator, Proxy, and Prototype are presented.
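
As a flavour of the reflective direction explored here, the sketch below (not code from the paper; the helper and interface names are invented for illustration) shows how java.lang.reflect.Proxy lets a Proxy-style wrapper be written once, generically, and then reused for any interface:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // A generic, reusable logging proxy: the pattern logic is written once
    // and can wrap any interface, instead of being re-coded per participant.
    public final class LoggingProxy {

        // Wraps any target behind its interface; logs each call, then delegates.
        @SuppressWarnings("unchecked")
        public static <T> T wrap(T target, Class<T> iface) {
            InvocationHandler handler = (Object proxy, Method method, Object[] args) -> {
                System.out.println("calling " + method.getName()); // crosscutting concern
                return method.invoke(target, args);                // delegate to the real subject
            };
            return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(), new Class<?>[] { iface }, handler);
        }

        interface Greeter { String greet(String name); }

        public static void main(String[] args) {
            Greeter proxied = wrap(name -> "Hello, " + name, Greeter.class);
            System.out.println(proxied.greet("world")); // logs the call, then greets
        }
    }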

Paper Nr: 82
Title:

QuEF: AN ENVIRONMENT FOR QUALITY EVALUATION ON MODEL-DRIVEN WEB ENGINEERING APPROACHES

Authors:

Francisco José Domínguez Mayo, Maria Jose Escalona, Manuel Mejías Risoto and Arturo H. Torres Zenteno

Abstract: Due to the high number and wide variety of methodologies that currently exist in the field of Model-Driven Web Engineering (MDWE), it has become necessary to evaluate the quality of the existing methodologies to provide helpful information for developers. Since proposals are constantly appearing, the need may arise not only to evaluate quality but also to find out how it can be improved. This article presents work being carried out in this field and describes the tasks involved in defining QuEF (Quality Evaluation Framework), an environment for evaluating, under objective measures, the quality of Model-Driven Web Engineering methodologies.

Paper Nr: 96
Title:

COEVOLUTIVE META-EXECUTION SUPPORT

Authors:

Gilles Dodinet, Michel Zam and Geneviève Jomier

Abstract: Despite its promises, the lack of support for the consistent coevolution of models with their meta-models and instances prevents a broader adoption of MDE. This article presents coevolution support for reflective meta-models and their instances, tightly integrated into an execution platform. The platform allows stakeholders, developers and end users to define, update and run models and their instances concurrently. Design changes are reflected immediately in the running applications hosted by the platform. Both instances and models are stored in a shared multi-version database that provides persistence, consistency and traceability support. A web-based implementation of the platform validates the approach and sets the foundations for a collaborative integrated development environment that evolves continuously.

Paper Nr: 128
Title:

WEB SERVICES FOR HIGHER INTEGRITY INSTRUMENT CONTROL

Authors:

Susan Mengel and Phillip Huffman

Abstract: This paper relates our experience in using a modified life-cycle development process, proposed herein for integrity planning, applied to web services as reusable software components in order to enhance the web services' reliability, safety, and security in an instrument control environment. Using the integrity-enhanced lifecycle, a test bed instrument control system is developed using .NET web services. A commercial web service is also included in the test bed system for comparison. Both systems are monitored over a one-year period and failure data is collected. For a further comparison, a similar instrument control system is developed to a high-quality pedigree but lacking the focus on integrity and reusable components. Most of the instrumentation is the same between the two systems; however, the comparative system uses a more traditional approach with a single, integrated software control package. As with the test bed system, this comparative system is monitored over a one-year period. The data for the two systems is compared, and the results demonstrate a significant increase in integrity for the web service-based test bed system. The failure rate for the test bed system is approximately 1 in 8100, as compared to 1 in 1600 for the comparison system.

Paper Nr: 144
Title:

SLR-TOOL: A TOOL FOR PERFORMING SYSTEMATIC LITERATURE REVIEWS

Authors:

Ana M. Fernández-Sáez, Marcela Genero and Francisco P. Romero

Abstract: Systematic literature reviews (SLRs) have been gaining a significant amount of attention from Software Engineering researchers since 2004. SLRs are considered to be a new research methodology in Software Engineering, which allows evidence to be gathered with regard to the usefulness or effectiveness of the technology proposed in Software Engineering for the development and maintenance of software products. This is demonstrated by the growing number of publications related to SLRs that have appeared in recent years. While some tools exist that can support some or all of the activities of the SLR processes defined in (Kitchenham & Charters, 2007), these are not free. The objective of this paper is to present SLR-Tool, a free tool available at http://alarcosj.esi.uclm.es/SLRTool/, to be used by researchers from any discipline, not only Software Engineering. SLR-Tool not only supports the process of performing SLRs proposed in (Kitchenham & Charters, 2007), but also provides additional functionalities such as: refining searches within the documents by applying text mining techniques; defining a classification schema in order to facilitate data synthesis; exporting the results obtained to the format of tables and charts; and exporting the references from the primary studies to the formats used in bibliographic packages such as EndNote, BibTeX or RIS. This tool has, to date, been used by members of the Alarcos Research Group and PhD students, and their perception of it is that it is both highly necessary and useful. Our purpose now is to promote the use of SLR-Tool throughout the entire research community in order to obtain feedback from other users.

Paper Nr: 159
Title:

FRAMEWORK AS SOFTWARE SERVICE (FASS) An Agile e-Toolkit to Support Agile Method Tailoring

Authors:

Asif Qumer and Brian Henderson-Sellers

Abstract: In a real software application development environment, a pre-defined or fixed methodology, whether plan-based or agile, is unlikely to be successfully adopted “off-the-shelf”. Agile methods have recognised that a method should be tailored to each situation. The purpose of this paper is to present an agile e-toolkit software service to facilitate the tailoring of agile processes in the overall context of agile method adoption and improvement. The agile e-toolkit is a web-based tool to store and manage agile practices extracted from various agile methods and frameworks. The core component of the e-toolkit is the agile knowledge-base or repository. The agile knowledge-base contains agile process fragments. Agile consultants or teams can then use agile process fragments stored in the agile knowledge-base for the tailoring of situation-specific agile processes by using a situational method engineering approach. The e-toolkit software service has been implemented using a service-oriented cloud computing technology platform (Software as a Service – SaaS). The agile e-toolkit specifications and software application details have been summarized in this paper.

Paper Nr: 162
Title:

Model Checking Is Refinement: From Computation Tree Logic to Failure Trace Testing

Authors:

Stefan Bruda and Zhiyu Zhang

Abstract: Two major systems of formal conformance testing are model checking and algebraic model-based testing. Model checking is based on some form of temporal logic. One powerful and realistic logic in use is computation tree logic (CTL), which is capable of expressing most interesting properties of processes, such as liveness and safety. Model-based testing is based on some operational semantics of processes (such as traces, failures, or both) and its associated preorders. The most fine-grained preorder besides bisimulation (mostly of theoretical importance) is based on failure traces. We show that these two most powerful variants are equivalent, in the sense that for any CTL formula there exists a set of failure trace tests that is equivalent to it. Combined with previous results, this shows that CTL and failure trace tests are equivalent.

Paper Nr: 172
Title:

AUTOMATIC GENERATION OF DATA MERGING PROGRAM CODES

Authors:

Hyeonsook Kim, Samia Oussena, Ying Zhang and Tony Clark

Abstract: Data merging is an essential part of ETL (Extract-Transform-Load) processes in building a data warehouse system. To avoid reinventing merging techniques, we propose a Data Merging Meta-model (DMM) and its transformation into executable program code in the manner of model-driven engineering. DMM allows the relationships of different model entities and their merging types to be defined at the conceptual level, and our formalized transformation, described using ATL (ATLAS Transformation Language), enables the automatic generation of PL/SQL packages to execute data merging in commercial ETL tools. With this approach, data warehouse engineers can be relieved of the burden of repetitive, complex script coding and the pain of maintaining consistency between design and implementation.

Paper Nr: 174
Title:

Towards an integrated support for traceability of quality requirements using software spectrum analysis

Authors:

Haruhiko Kaiya, Yuutarou Shimizu, Kenji Kaijiri and Kazuhisa Amemiya

Abstract: In actual software development, software engineering artifacts such as requirements documents, design diagrams and source code can be updated and changed separately and simultaneously, and they should be consistent with each other. However, maintaining such consistency is a difficult problem, especially for software quality features such as usability, reliability, efficiency and so on. Managing traceability among such artifacts is one solution, and several types of traceability techniques have already been proposed. However, there is no silver bullet for solving the problem. In this paper, we categorize current techniques for managing traceability into three types: traceability links, central model and projection traceability. We then discuss how these types of techniques cope with managing traceability for software quality features. Because projection traceability seems to be suitable for quality features and there are few implementations of it, we implement a method based on projection traceability using spectrum analysis for software quality. We also apply the method to an example to confirm the usefulness of projection traceability as well as of traceability links and the central model.

Paper Nr: 183
Title:

A Tool for User-Guided Database Application Development

Authors:

José Luis Caro and Angel Mora Bonilla

Abstract: Beyond the database normalization process, much work has been done on the use of functional dependencies (FDs): their discovery using mining techniques, their use in query optimization, the design of algorithms dealing with the implication problem, etc. Nevertheless, although much research expounds the benefits of using functional dependencies, only a few modeling tools actually use them. In this work we present CBD, a new software development tool which allows end users to specify their requirements. CBD allows users to design their own GUI for the application using forms and interface elements, and it builds a meta-data dictionary with information on functional dependencies. This data dictionary is then used to generate the unified data model and a behavior model.

Paper Nr: 185
Title:

Extending UML to Represent Interaction Roles and Variants of Design Pattern

Authors:

Keen Ngee Loo and Sai P. Lee

Abstract: A design pattern offers various descriptions, structures and behaviors as the solution to a design problem. However, there is little visual aid for the internal workings of a design pattern in visual design modeling tools. Currently, it is difficult to determine the pattern roles and variants of the interaction groups of a design pattern, as this information is not represented in the UML interaction diagram. There is a need for a consistent way to define the pattern roles participating in a design pattern interaction and whether there is a variant in each interaction group. This paper proposes extending the UML sequence diagram via a UML profile to allow designers to define and visualise the pattern roles and the different types of interaction groups for a design pattern. The proposed extensions are able to capture the two kinds of design pattern interaction variants in sequence diagrams. The approach is then applied to the Observer design pattern as an example. The extension enables tool support for the cataloguing and retrieval of design patterns' structural and behavioural information, as well as variants, in a visual design modeling tool.

Paper Nr: 189
Title:

UNIFYING SOFTWARE AND DATA REVERSE ENGINEERING - A Pattern Based Approach

Authors:

Gianluigi Viscusi, Francesca Arcelli Fontana and Marco Zanoni

Abstract: At the state of the art, object-oriented applications use data structured in relational databases by exploiting patterns such as Domain Model and Data Mapper. These approaches aim to represent data in the OO way, using objects to represent data entities. Furthermore, we point out that the identification of these patterns can reveal the link between the object model and the conceptual entities, exploiting their associations to the physical data objects. The aim of this paper is to present a unified perspective for the definition of an integrated approach to software and data reverse engineering. The discussion is carried out by means of a sample application and a comparison with results from current tools.

Paper Nr: 193
Title:

COGNITIVE INFLUENCES IN PRIORITIZING SOFTWARE REQUIREMENTS

Authors:

Nadina Martínez Carod and Alejandra Cechich

Abstract: In software development, the elicitation process, and particularly the acquisition of software requirements, is a critical success factor. Elicitation is about learning the needs of users and communicating those needs to system builders. Prioritizing requirements includes negotiation as an important issue, which becomes extremely difficult, as clients often do not know exactly what they need. To overcome this situation, aiming at improving stakeholders' negotiation, we propose reducing the gap of misunderstanding between them through the use of cognitive science. In particular, we suggest using cognitive styles to characterize people by the way they process information. In this paper, we introduce a case study showing that cognitive profiles may affect requirement understanding and prioritization. Our controlled experiment shows that considering cognitive profiles when performing elicitation might increase stakeholders' satisfaction and prioritization accuracy.

Paper Nr: 195
Title:

SOFTWARE RELEASES MANAGEMENT IN THE TRIGGER AND DATA ACQUISITION OF ATLAS EXPERIMENT

Authors:

Andrei Kazarov, Igor Soloviev, Mihai Caprini and Reiner Hauser

Abstract: ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing a framework and API for building the software pieces of the TDAQ system. It is currently composed of more than 200 software packages which are available to ATLAS users in the form of regular software releases. The software is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by software developers and software librarians in order to develop, release, deploy and maintain the TDAQ software over the long period of commissioning and running the TDAQ system. In particular, the patching and distribution model based on RPM packaging is discussed, which is important for software that is maintained for a long period on a running production system.

Paper Nr: 226
Title:

AN ASPECT-BASED APPROACH FOR CONCURRENT PROGRAMMING USING CSP FEATURES

Authors:

José E. Araújo, Henrique Rebelo, Ricardo Lima, Alexandre Mota and Fernando Castor

Abstract: The construction of large-scale parallel and concurrent applications is one of the greatest challenges faced by software engineers nowadays. The programming models for concurrency implemented by mainstream programming languages, such as Java, C, and C++, are too low-level and difficult for the average programmer to use. At the same time, the use of libraries implementing high-level concurrency abstractions (such as JCSP, a library implementing Communicating Sequential Processes for Java) requires additional learning effort and produces programs where application logic is tangled with library-specific code. In this paper we propose separating concurrency concerns (CSP code) from the development of sequential Java processes. We explore aspect-oriented programming to implement this separation of concerns. A compiler generates AspectJ code, which instruments the sequential Java program with JCSP concurrency constructs. We have conducted an experiment to evaluate the benefits of the proposed framework. We employ metrics for attributes such as separation of concerns, coupling, and size to compare our approach against the JCSP framework and thread-based approaches.
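
For context, the minimal producer/consumer below (a sketch assuming the JCSP 1.1 channel API; it is not code from the paper) illustrates the tangling the authors aim to remove: the application logic, computing squares, is interleaved with channel plumbing that their compiler would instead weave in via AspectJ:

    import org.jcsp.lang.CSProcess;
    import org.jcsp.lang.Channel;
    import org.jcsp.lang.One2OneChannel;
    import org.jcsp.lang.Parallel;

    // Plain JCSP, no aspects: library calls are mixed into the process logic.
    public class SquaresExample {
        public static void main(String[] args) {
            final One2OneChannel channel = Channel.one2one();

            CSProcess producer = () -> {
                for (int i = 1; i <= 5; i++) {
                    channel.out().write(i * i); // plumbing tangled with logic
                }
                channel.out().write(-1);        // poison value stops the consumer
            };

            CSProcess consumer = () -> {
                int value;
                while ((value = (Integer) channel.in().read()) != -1) {
                    System.out.println("received " + value);
                }
            };

            new Parallel(new CSProcess[] { producer, consumer }).run();
        }
    }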

Paper Nr: 249
Title:

A LEGACY SYSTEMS USE CASE RECOVERY METHOD

Authors:

Philippe Dugerdil

Abstract: During the development of a legacy-system reverse engineering method, we developed a technique to help recover the system's use cases. In fact, our reverse-engineering method starts with the re-documentation of the system's use cases by observing its actual users. But these use cases are never complete and accurate. In particular, the many alternative flows are often overlooked by the users. This paper presents our use-case recovery methodology as well as the techniques we implemented to identify all the flows of a legacy system's use cases. Starting from an initial use case based on the observation of the users, we gather the corresponding execution trace by running the system according to this use case. The analysis of this execution trace, coupled with a static analysis of the source code, lets us find the possible alternative execution paths of the system. The execution conditions for these paths are analyzed to establish the link to the use-case level. This lets us synthesize alternative flows for the use case. Next we run the system again following these alternative flows to uncover possible new alternative paths, until we converge to a stable use-case model.

Paper Nr: 252
Title:

Lessons from engineering: can software benefit from product based evidence of reliability?

Authors:

Neal Snooke

Abstract: This paper argues that software engineering should not overlook the lessons learned by other engineering disciplines with longer-established histories. As software engineering evolves it should focus not only on application functionality but also on mature engineering concepts such as reliability, dependability, safety, failure mode analysis, and maintenance. Software is rapidly approaching the level of maturity at which other disciplines arrived long ago, where it is not merely enough to be able to make it work (sometimes): we must be able to objectively assess quality, determine how and when it can fail, and mitigate risk as necessary. The tools to support these tasks are in general not integrated into the design and implementation stages as they are for other engineering disciplines, although recent techniques in software development have the potential to allow new types of analysis to be developed and integrated so that software can justify its claim to be engineered. Currently, software development relies primarily on development processes and testing to achieve these aims, but neither of these provides the hard design and product analysis that engineers find essential in other disciplines. This paper considers how software development can learn from other engineering analyses and investigates failure modes and effects analysis as an example.

Paper Nr: 259
Title:

EVOLVABILITY IN SERVICE ORIENTED SYSTEMS

Authors:

Anca D. Ionita and Marin Litoiu

Abstract: The paper investigates the evolution and maintenance of service oriented systems deployed in SOA and cloud infrastructures. It analyzes the challenges entailed by the frequent modifications of business environments, discussing their causes, grasping the evolution points in service architectures, studying classifications of human actors involved across the whole life cycle, as well as pointing out possible risks and difficulties encountered in the process of change. Based on the lessons learned in our study, four pillars for improving service evolvability are identified: orientation towards the users, increasing the level of abstraction, supporting automation and enabling adaptivity through feedback loops.

Paper Nr: 263
Title:

A FRAMEWORK FOR PROACTIVE SLA NEGOTIATION

Authors:

Khaled Mahbub and George Spanoudakis

Abstract: In this position paper we propose a framework for proactive SLA negotiation that integrates this process with dynamic service discovery and can hence provide integrated runtime support for both of these key activities, which are necessary in order to achieve the runtime operation of service-based systems with minimised interruptions. More specifically, our framework discovers candidate constituent services for a composite service and establishes an agreed but not enforced SLA, together with a period during which this pre-agreement can be activated should this become necessary.

Paper Nr: 20
Title:

Slicing of UML Models

Authors:

Kevin Lano and Shekoufeh Kolahdouz-Rahimi

Abstract: This paper defines techniques for the slicing of UML models, that is, for the restriction of models to those parts which specify the properties of a subset of the elements within them. The purpose of this restriction is to produce a smaller model which permits more effective analysis and comprehension than the complete model, and also to form a step in the factoring of a model. We consider class diagrams, single state machines, and communicating sets of state machines.

Paper Nr: 27
Title:

TOWARDS A ‘UNIVERSAL’ SOFTWARE METRICS TOOL - Motivation, Process and a Prototype

Authors:

Gordana Rakic, Zoran Budimac and Klaus Bothe

Abstract: In this paper we investigate the main limitations of current software metrics techniques and tools, propose a unified intermediate representation for the calculation of software metrics, and describe a promising prototype of a new metrics tool. The motivation is the evident lack of wider utilization of software metrics in raising the quality of software products.

Paper Nr: 31
Title:

HEAP GARBAGE COLLECTION WITH REFERENCE COUNTING

Authors:

Wuu Yang, Rong-Hong Jan and Huei-Ru Tseng

Abstract: In algorithms based on reference counting, a garbage-collection decision has to be made whenever a pointer x -> y is about to be destroyed. At this time, the node y may become dead even if y's reference count is not zero, because y may belong to a piece of cyclic garbage. Some aggressive collection algorithms will put y on the list of potential garbage regardless of y's reference count; later, a trace procedure starting from y will be initiated. Other, less aggressive algorithms will put y on the list of potential garbage only if y's reference count falls below a threshold, such as 1. The former approach may waste time tracing live nodes, and the latter may leave cyclic garbage uncollected indefinitely. The problem with both general approaches is that it is difficult to decide whether y is dead when the pointer x -> y is destroyed. We propose a new garbage-collection algorithm in which each node maintains two reference counters rather than one: gcount and hcount. gcount is the number of references from the global variables and from the run-time stack; hcount is the number of references from the heap. Our algorithm puts node y on the list of potential garbage if and only if y's gcount becomes 0. The better prediction made by our algorithm results in more efficient garbage collectors.
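
A minimal sketch of the split-counter bookkeeping described above (illustrative only; the abstract does not give the authors' actual data structures):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Each node keeps two counters: gcount for references from globals and the
    // run-time stack, hcount for references from other heap nodes.
    class Node {
        int gcount;
        int hcount;
        // payload and outgoing pointers omitted
    }

    class TwoCounterCollector {
        private final Deque<Node> potentialGarbage = new ArrayDeque<>();

        // A root (global/stack) pointer to y is destroyed. Per the paper's rule,
        // y becomes a candidate exactly when gcount drops to zero: it can then
        // only be kept alive through the heap, possibly by a garbage cycle.
        void rootPointerRemoved(Node y) {
            if (--y.gcount == 0) {
                potentialGarbage.push(y); // trace later, starting from y
            }
        }

        // A heap pointer x -> y is destroyed. This alone never enqueues y;
        // the candidacy decision keys only on gcount.
        void heapPointerRemoved(Node y) {
            --y.hcount;
        }
    }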
Download

Paper Nr: 44
Title:

Training Global Software Development Skills Through a Simulated Environment

Authors:

Miguel Jiménez Monasor, Aurora Vizcaino and Mario Piattini

Abstract: Training and education in Global Software Development (GSD) is a challenge that has recently emerged for companies and universities, and it entails tackling certain drawbacks caused by distance, such as communication problems, cultural and language differences, or the inappropriate use of groupware tools. We have carried out a Systematic Literature Review on the teaching of GSD, which has shown that educators should provide learners with a wide set of realistic and practical experiences, since the skills required are best learned by doing. However, this is difficult because companies are not willing to incorporate students into their projects. In this paper we present an alternative: an environment that simulates typical GSD problems and allows students and practitioners to develop skills by interacting with virtual agents from different cultures, thus avoiding the risks of involving non-qualified people in real settings.
Download

Paper Nr: 60
Title:

Color image encryption solution based on the chaotic system of Logistic and Henon

Authors:

Lifu Huang, Yunpeng Zhang, Jing Xie, Peng Sun and Yunting Huang

Abstract: The security of color images has become an important field of network information security research. To meet the security requirements of color images, and according to the characteristics of image coding and chaotic systems, this paper presents a color image encryption solution based on chaotic systems. With the help of the Logistic system, the solution generates a chaotic sequence, which is used to set the parameters and the number of iterations of the Henon system. The color image is then encrypted by iterating the Henon system multiple times. Finally, we analyse and validate the solution in theory and experiment. The results show that the encrypted image has a uniform distribution of pixel values and good diffusion, can effectively resist phase space reconstruction attacks, and offers good security and reliability.
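
As a rough illustration of this style of scheme (not the authors' exact algorithm; the coupling of the two maps and all parameters below are arbitrary choices), a logistic map can drive a Henon map whose orbit is quantized into a keystream:

    import numpy as np

    def logistic_sequence(x0, mu, n):
        xs, x = np.empty(n), x0
        for i in range(n):
            x = mu * x * (1 - x)      # logistic map
            xs[i] = x
        return xs

    def henon_keystream(a, b, n, x0=0.1, y0=0.3):
        x, y = x0, y0
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x, y = 1 - a * x * x + y, b * x   # Henon map
            out[i] = int(abs(x) * 1e6) % 256  # quantize the orbit to a byte
        return out

    def encrypt(pixels, key=(0.7, 3.99)):
        # pixels: a NumPy uint8 array. The logistic output perturbs the
        # Henon parameters (an illustrative choice, not the paper's rule).
        ls = logistic_sequence(*key, n=2)
        a, b = 1.4 - 0.01 * ls[0], 0.3 + 0.01 * ls[1]
        ks = henon_keystream(a, b, pixels.size)
        return pixels.reshape(-1) ^ ks        # XOR keystream; same call decrypts
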
Download

Paper Nr: 69
Title:

Language-Oriented Programming via DSL Stacking

Authors:

Bernhard G. Humm and Ralf Engelschall

Abstract: According to the paradigm of Language-Oriented Programming, an application for a problem should be implemented in the most appropriate domain-specific language (DSL). This paper introduces DSL stacking, an efficient method for implementing Language-Oriented Programming where DSLs and general-purpose languages are incrementally developed on top of a base language. This is demonstrated with components of a business information system that are implemented in different DSLs for Semantic Web technology in Lisp.
Download

Paper Nr: 74
Title:

MODEL-DRIVEN APPROACHES FOR SERVICE-BASED APPLICATIONS DEVELOPMENT

Authors:

Andreas Prinz and Selo Sulistyo

Abstract: Service-based systems are considered an architectural approach for managing software complexity and development. With this approach, a software application is built by defining a set of interactions of autonomous, compound, and loosely-coupled software units called services. Another way of managing software complexity is to use model-driven approaches, in which the development of a software application starts at the model level and the code implementing the application is generated automatically. This paper presents AMG (abstract, model and generate), a combination of the two approaches.
Download

Paper Nr: 173
Title:

PERFORMANCE OPTIMIZATION OF EXHAUSTIVE RULES IN GRAPH REWRITING SYSTEMS

Authors:

Tamás Mészáros, Márk Asztalos, Gergely Mezei and Hassan Charaf

Abstract: Graph rewriting-based model transformation is a well-known technique with a strong mathematical background for processing domain-specific models represented as graphs. The performance optimization techniques realized in today's graph transformation engines usually focus on optimizing a single execution of individual rules, and do not consider the optimization possibilities in repeated execution. In this paper we present a performance optimization technique called deep exhaustive matching for exhaustively executed rules. Deep exhaustive matching continues the matching of the same rule from the next possible position after a successful rewriting phase, thus achieving a noticeable performance gain.
Download

Paper Nr: 192
Title:

A Scala-Based Domain Specific Language for Structured Data Representation

Authors:

Kazuaki Maeda

Abstract: This paper describes Sibon, a new representation written in a text-based data format using Scala syntax. The design principle of Sibon is good readability and simplicity of structured data representation. An important feature of Sibon is an executable representation. Once Sibon-related definitions are loaded, the representation can be executed corresponding to the definitions. A program generator was developed to create Scala and Java programs from Sibon definitions. In the author's experience, productivity was improved in the design and implementation of programs that manipulate structured data.
Download

Paper Nr: 197
Title:

WEB TOOL FOR OBJECT ORIENTED DESIGN METRICS

Authors:

Jose R. Hilera, Luis Fernandez Sanz and Marina Cabello

Abstract: An open source web application to calculate metrics from UML class diagrams is presented. The system can process any class diagram encoded in XMI format. After processing the XMI document, a complete report can be obtained in two different formats, HTML and spreadsheet file. The application can be accessed freely on a website, and its source code is available for download.
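
The kind of XMI processing such a tool performs can be sketched as follows; XMI/UML tag names and namespaces vary by version, so those used here are assumptions rather than what the tool actually does.

    import xml.etree.ElementTree as ET

    XMI_TYPE = "{http://schema.omg.org/spec/XMI/2.1}type"  # assumed XMI 2.1 namespace

    def class_metrics(xmi_path):
        # Count methods (NOM) and attributes (NOA) per UML class in an XMI file.
        metrics = {}
        for elem in ET.parse(xmi_path).iter():
            if elem.get(XMI_TYPE) == "uml:Class":
                name = elem.get("name", "?")
                nom = sum(1 for e in elem if e.tag.endswith("ownedOperation"))
                noa = sum(1 for e in elem if e.tag.endswith("ownedAttribute"))
                metrics[name] = {"NOM": nom, "NOA": noa}
        return metrics
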
Download

Paper Nr: 199
Title:

IMPLEMENTING QVT IN A DOMAIN-SPECIFIC MODELING FRAMEWORK

Authors:

Istvan Madari, Mark Asztalos, Tamás Mészáros, Laszlo Lengyel and Hassan Charaf

Abstract: Meta Object Facility 2.0 Query/View/Transformation (QVT) is OMG’s standard for specifying model transformations, views and queries. In this paper we deal with the QVT Relations language, which is a declarative specification of model transformations between two models. The QVT Relations language offers several valuable features in practice, such as implicit trace creation support and bidirectional transformations. However, QVT lacks implementations because its specification is not yet final and is very complex. The main contribution of this paper is to show how we integrated QVT constructs into our domain-specific modeling environment to facilitate a later implementation of QVT Relations-driven bidirectional model transformation.
Download

Paper Nr: 200
Title:

TEXTUAL SYNTAX MAPPING CAN ENABLE SYNTACTIC MERGING

Authors:

László Angyal, Laszlo Lengyel, Tamás Mészáros and Hassan Charaf

Abstract: As support for textual domain-specific languages (DSLs) increases, the reconstruction of visual models from the generated textual artifacts has also come into focus. State-of-the-art bidirectional approaches support reversible text generation from models through a single syntax mapping. However, even these tools have not gone so far as to facilitate synchronization between models and generated artifacts. This paper presents the importance of synchronization and shows how these mappings can enable syntactic reconciliation for custom DSLs. Our approach provides algorithms supporting incremental DSL-driven software development, which gives developers the freedom to choose between textual and visual editing of artifacts, depending on which representation is more effective for them at a specific moment.
Download

Paper Nr: 206
Title:

SPECIFICATION AND VERIFICATION OF WORKFLOW APPLICATIONS USING A COMBINATION OF UML ACTIVITY DIAGRAMS AND EVENT B

Authors:

Ahlem Ben Younes and Leila Jemni Ben Ayed

Abstract: In this paper, we present a specification and formal verification technique for workflow applications using UML Activity Diagrams (AD) and Event B. The lack of a precise semantics for UML AD makes reasoning on models constructed with such diagrams infeasible. However, such diagrams are widely used in domains that require a certain degree of confidence. Due to economic interests, the business domain is one of these. To enhance the confidence level of UML AD, this paper provides a formal definition of their syntax and semantics. The main interest of our approach is that we chose UML AD, which are recognized to be more tractable for engineers. We outline the translation of UML AD into Event B in order to verify functional properties of workflow models (such as deadlock freedom, liveness and fairness) automatically, using powerful B support tools like B4free. We propose a solution to specify time in Event B, and we illustrate the proposed technique with an example of a workflow application.
Download

Paper Nr: 208
Title:

EXPLORING EMPIRICALLY THE RELATIONSHIP BETWEEN LACK OF COHESION IN OBJECT-ORIENTED SYSTEMS AND COUPLING AND SIZE

Authors:

Mourad Badri, Linda Badri and Fadel Touré

Abstract: Many metrics have been proposed in the last several years to measure class cohesion in object-oriented systems. Cohesion is, in fact, considered one of the most important object-oriented software attributes. The study presented in this paper aims at exploring empirically the relationship between the lack of cohesion of classes in object-oriented systems and their coupling and size. We designed and conducted an empirical study on various open source Java software systems. The experiment was conducted using several well-known code-based metrics related to cohesion, coupling and size. The results of this study provide evidence that a lack of cohesion may actually be associated with (high) coupling and (large) size.
Download

Paper Nr: 214
Title:

MUTATION TESTING STRATEGIES: A COLLATERAL APPROACH

Authors:

Mike Papadakis, Nicos Malevris and Marinos Kintis

Abstract: Mutation Testing is considered to be one of the most powerful techniques for unit testing and at the same time one of the most expensive. The principal expense of mutation is the vast number of imposed test requirements, many of which cannot be satisfied. In order to overcome these limitations, researchers have proposed many cost reduction techniques, such as selective mutation, weak mutation and a novel approach based on mutant combination, which combines first order mutants to generate second order ones. An experimental comparison involving weak mutation, strong mutation and various proposed strategies was conducted. The experiment shows that all proposed approaches are quite effective in general as they result in high collateral coverage of strong mutation (approximately 95%), while recording remarkable effort savings. Additionally, the results suggest that some of the proposed approaches are more effective than others making it possible to reduce the mutation testing application cost with only a limited impact on its effectiveness.
Download

Paper Nr: 215
Title:

An UML Activities Diagrams Translation into Event B Supporting the Specification and the Verification of Workflow Application Models

Authors:

Yousra H. Bendaly, Leila Jemni Ben Ayed and Najet Hamdi

Abstract: This paper presents the transformation of UML activity diagrams into Event B for the specification and verification of parallel and distributed workflow applications. With this transformation, UML models can be verified by verifying the derived Event B models. The design is initially expressed graphically with UML and translated into Event B. The resulting model is then enriched with invariants describing dynamic properties such as deadlock freeness, livelock freeness and reachability. The approach uses the activity diagrams meta-model.
Download

Paper Nr: 219
Title:

JSimil: A Java Bytecode Clone Detector

Authors:

Luis Quesada, Fernando Berzal and Juan Carlos Cubero

Abstract: We describe JSimil, a code clone detector that uses a novel algorithm to detect similarities in sets of Java programs at the bytecode level. The proposed clone detector emphasizes scalability and efficiency. It also supports customization through profiles that allow the user to specify matching rules, system behavior, pruning thresholds, and output details. Experimental results reveal that JSimil outperforms existing systems. It is even able to spot similarities when complex code obfuscation techniques have been applied.
Download

Paper Nr: 225
Title:

META-DESIGN PARADIGM BASED APPROACH FOR ITERATIVE RAPID DEVELOPMENT OF ENTERPRISE WEB APPLICATIONS

Authors:

Athula Ginige

Abstract: Developing enterprise software or web applications that meet user requirements within time and budget still remains a challenge. The success of these applications mostly depends on how well the user requirements have been captured. The literature shows progress has been made on two fronts; improving ways requirements are captured and increasing interaction between users and developers to detect gaps or miscommunication of requirements early in the lifecycle by using iterative rapid development approaches. This paper presents a Meta-Design paradigm based approach that builds on work already done in the area of Model Driven Web Engineering to address this issue. It includes a Meta-Model of an enterprise web application to capture the requirements and an effective way of generating the application.
Download

Paper Nr: 266
Title:

Testing in Parallel

Authors:

Zhenyu Zhang, Zijian Tong and Xiaopeng Gao

Abstract: When software evolves, its functionalities are evaluated using regression testing. In a regression testing process, a test suite is augmented, reduced, prioritized, and run on a software build version. Regression testing has been used in industry for decades; yet in some modern software activities, we find that regression testing is still not practical to apply. For example, in our experience at Sohu.com Inc., running a reduced test suite, even concurrently, may take two hours or longer. Nevertheless, in an urgent task or a continuous integration environment, version builds and regression testing requests may come more often. In such a case, it is not unusual for a new round of test suite runs to start before all the previous ones have terminated. As a solution, running test suites on different build versions in parallel may increase the efficiency of regression testing and facilitate evaluating the fitness of software evolutions. On the other hand, hardware and software resources limit the number of parallel tasks. In this paper, we raise the problem of testing in parallel, give the general problem settings, and use a pipeline presentation for data visualization. Solving this problem is expected to make regression testing practical.
Download

Area 3 - Distributed Systems

Full Papers
Paper Nr: 50
Title:

NOTES ON PRIVACY-PRESERVING DISTRIBUTED MINING AND HAMILTONIAN CYCLES

Authors:

Ray Kresman and Renren Dong

Abstract: Distributed storage and retrieval of data is both the norm and a necessity in today’s computing environment. However, sharing and dissemination of this data is subject to privacy concerns. This paper addresses the role of graph theory, especially Hamiltonian cycles, on privacy preserving algorithms for mining distributed data. We propose a new heuristic algorithm for discovering disjoint Hamiltonian cycles in the underlying network. Disjoint Hamiltonian cycles are useful in a number of applications; for example, to ensure that someone’s private data remains private even when others collude to discover the data.
Download

Short Papers
Paper Nr: 25
Title:

The Overhead of Safe Broadcast Persistency

Authors:

Francesc D. Muñoz-Escoí, Jose R. Gonzalez de Mendivil, Rubén de Juan-Marín and José E. Armendáriz-Iñigo

Abstract: Although the need for logging messages in secondary storage once they have been received has been stated in several papers assuming a recoverable failure model, none of them analysed the overhead implied by that logging when reliable broadcasts are used in a group communication system guaranteeing virtual synchrony. At first glance, it seems an excessive cost for its apparently limited advantages, but several scenarios contradict this intuition. This paper surveys some of these configurations and outlines some benefits of this persistence-related approach.
Download

Paper Nr: 33
Title:

FALL DETECTION SYSTEMS: A SOLUTION BASED ON LOW COST SENSORS

Authors:

María J. Tirado, Javier Finat, Miguel A. Laguna and José M. Marqués

Abstract: The problem of fall detection in elderly patients is particularly critical for persons who live alone or are alone most of the day. The use of information and communication technologies to facilitate their autonomy is a clear example of how technological advances can improve the quality of life of dependent people. This article presents a prototype developed with a low cost device (the gamepad of a well-known video console) using its Bluetooth communication capabilities and built-in accelerometer. The latter is much more sensitive than similar devices integrated in mobile phones and much cheaper than industrial accelerometers. Besides its stand-alone use, the system can be connected to a generic remote monitoring system that has been developed as a software product line for use in residences for the elderly.
Download

Paper Nr: 61
Title:

An improved high-density knapsack-type public key cryptosystem

Authors:

Yu Zhu, Yunpeng Zhang, Yunting Huang and Xia Lin

Abstract: Almost all knapsack-type public key cryptosystems have been proven unsafe. To solve this problem, more secure public key cryptographic algorithms are urgently needed. This article first discusses the basic theory of knapsack-type public key cryptography and the methods used to attack knapsack public keys. Then, it analyses the scheme of [1] and points out potential defects in its cryptographic safety. The article then presents an improved algorithm and discusses its safety and efficiency. The analysis shows that the improved algorithm is more secure than the original one.
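
For background, the classic (and long-broken) Merkle-Hellman knapsack construction that this line of work builds on can be sketched as follows; the paper's improved high-density scheme is not reproduced here.

    def keygen():
        private = [2, 7, 11, 21, 42, 89, 180, 354]      # superincreasing sequence
        q = 881                                         # modulus, q > sum(private)
        r = 588                                         # multiplier, gcd(r, q) == 1
        public = [(r * w) % q for w in private]         # the "hard" knapsack
        return public, (private, q, r)

    def encrypt(bits, public):
        return sum(b * p for b, p in zip(bits, public))

    def decrypt(c, private_key):
        private, q, r = private_key
        s = (c * pow(r, -1, q)) % q          # modular inverse (Python 3.8+)
        bits = []
        for w in reversed(private):          # greedy solve works because the
            bits.append(1 if s >= w else 0)  # sequence is superincreasing
            s -= w * bits[-1]
        return list(reversed(bits))
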
Download

Paper Nr: 150
Title:

A STUDY OF SECURITY APPROACHES FOR THE DEVELOPMENT OF MOBILE GRID SYSTEMS

Authors:

David G. Rosado, Eduardo Fernández-Medina and Javier Lopez

Abstract: Mobile Grid systems allow us to build highly complex information systems with various and remarkable features (interoperability between multiple security domains, cross-domain authentication and authorization, dynamic, heterogeneous and limited mobile devices, etc), which demand secure development methodologies to build quality software, offering methods, techniques and tools that facilitate the work of the entire team involved in software development. These methodologies should be supported by Grid security architectures that define the main security aspects to be considered, and by solutions to the problem of how to integrate mobile devices within Grid systems. Some approaches regarding secure development methodologies of Grid security architectures and of the integration of mobile devices in the Grid have been found in literature, and these are analyzed and studied in this paper, offering a comparison framework of all the approaches related to security in Mobile Grid environments.
Download

Paper Nr: 166
Title:

AN APPROACH TO DATA-DRIVEN ADAPTABLE SERVICE PROCESSES

Authors:

George Athanasopoulos and Aphrodite Tsalgatidou

Abstract: Within the currently forming pervasive computing environment, services and information sources thrive. Instantiations of the service-oriented computing paradigm, e.g. Web, Peer-to-Peer (P2P) and Grid services, are continuously emerging, whilst information can be collected from several information sources, e.g. materializations of the Web 2.0 and Web 3.0 trends, social networking applications and sensor networks. Within this context, the development of adaptable service-oriented processes utilizing heterogeneous services, in addition to available information, is an emerging trend. This paper presents an approach and an enabling architecture that leverage the provision of data-driven, adaptable, heterogeneous service processes. At the core of the proposed architecture is a set of interacting components that accommodate the acquisition of information, the execution of service chains and their adaptation based on collected information.
Download

Paper Nr: 178
Title:

QUANTUM CRYPTOGRAPHY BASED KEY DISTRIBUTION IN WI-FI NETWORKS

Authors:

Shirantha Wijesekera, Xu Huang and Dharmendra Sharma

Abstract: Demand for wireless communications around the world is growing. IEEE 802.11 wireless networks, also known as Wi-Fi, are among the most popular wireless networks, with tens of millions of users across the globe. Hence, providing secure communication for wireless networks has become one of the prime concerns. We have proposed a Quantum Key Distribution (QKD) based novel protocol to exchange the encryption key in Wi-Fi networks. In this paper, we present the protocol modifications made to the existing IEEE 802.11 standard to implement the proposed QKD based key exchange.
Download

Paper Nr: 190
Title:

DECENTRALIZED SYSTEM FOR MONITORING AND CONTROL OF RAIL TRAFFIC IN EMERGENCIES: A new distributed support tool for rail traffic management

Authors:

Roberto Carballedo, Itziar Salaberria, Asier Perallos and Unai Gutierrez

Abstract: Traditionally, rail traffic management is performed automatically using centralized systems based on wired sensors and electronic elements fixed on the tracks. These systems, called Centralized Traffic Control (CTC) systems, are robust and highly available, but when they fail, traffic management must be done manually. This paper is the result of four years of work with railway companies on the development of a distributed support tool for rail traffic control and management. The new system combines train-side systems and terrestrial applications that exchange information via a hybrid mobile and radio wireless communications architecture.
Download

Paper Nr: 201
Title:

AN EXTENSIBLE, MULTI-PARADIGM MESSAGE-ORIENTED MOBILE MIDDLEWARE

Authors:

Yuri Bezerra and Gledson Elias da Silveira

Abstract: Message-oriented middleware (MOM) platforms are usually based on asynchronous, peer-to-peer interaction styles, leading to more loosely coupled architectures. As a consequence, MOMs have the potential to support the development of networked mobile applications. However, MOM platforms have been implemented under a limited set of message-based communication paradigms, each one being specifically adapted to a given application domain or network model. In such a context, this paper proposes a mobile middleware solution that offers a comprehensive set of extensible, message-based communication paradigms, such as publish/subscribe, message queues and tuple spaces. Supported by a Software Product Line (SPL) approach, the proposed middleware is suitable for constrained devices, as all supported communication paradigms share and reuse a reasonable number of software components that deal with common messaging features. Additionally, by means of an extensible design, new communication paradigms can easily be accommodated, and existing ones can be removed in order to better fit more constrained devices.
Download

Paper Nr: 251
Title:

PySense: Python decorators for Wireless Sensor Macroprogramming

Authors:

Davide Carboni

Abstract: PySense aims at bringing wireless sensor (and “internet of things”) macroprogramming to the audience of Python programmers. WSN macroprogramming is an emerging approach in which the network is seen as a whole and the programmer focuses only on the application logic. The PySense runtime environment partitions the code and transmits code snippets to the right nodes, finding a balance between energy consumption and computing performance.
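
The abstract does not show PySense's API, so the following decorator-based interface is purely a guess at what such macroprogramming annotations might look like; every name in it is hypothetical.

    import functools

    _REMOTE_REGISTRY = {}   # hypothetical: snippets a runtime could deploy to nodes

    def on_node(role):
        # Mark a function as eligible to run on sensor nodes with the given role.
        def wrap(fn):
            _REMOTE_REGISTRY[fn.__name__] = (role, fn)
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                # A real runtime would decide here whether to run locally or
                # ship the snippet to a node, trading energy for performance.
                return fn(*args, **kwargs)
            return inner
        return wrap

    @on_node(role="temperature-sensor")
    def average_temperature(readings):
        return sum(readings) / len(readings)
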
Download

Paper Nr: 7
Title:

NETWORK CONVERGENCE AND MODELING: Design of Interconnecting SW for Intranets and Fieldbuses

Authors:

Miroslav Sveda

Abstract: The paper deals with current software architectures for intermediate systems for Intranet and short-range wireless interconnections. This article brings two case studies founded on real-world applications that demonstrate another input to network convergence and network modeling in software architecture development, stemming from design experience with industrial network applications and metropolitan networking. The first case study focuses on the IEEE 1451 family of standards, which provides a design framework for creating applications based not only on the IP/Ethernet profile but also on ZigBee. The second case study explores how security and safety properties of Intranets can be verified under every network configuration using model checking.
Download

Paper Nr: 10
Title:

Artificial Immune System Framework for Pattern Extraction in Distributed Environment

Authors:

Rafal Pokrywka

Abstract: Information systems today are dynamic, heterogeneous environments and places where a lot of critical data is stored and processed. Such an environment is usually built over many virtualization layers on top of a backbone of hardware and network. The key problem within this environment is to find, in real time, valuable information among large sets of data. In this article a framework for a pattern extraction system based on an artificial immune system is presented and discussed. As an example, a system for anomalous pattern extraction for intrusion detection in a computer network is presented.
Download

Paper Nr: 36
Title:

AN ASSESSMENT OF HEURISTICS FOR FAST SCHEDULING OF GRID JOBS

Authors:

Wolfgang Suess, Alexander Quinte, Florian Moeser, Wilfried Jakob and Karl-Uwe Stucky

Abstract: Due to the dynamic nature of the grid and the frequent arrival of new jobs, rescheduling of already planned and new jobs is a permanent process that is in need of good and fast planning algorithms. This paper extends previous work and deals with newly implemented heuristics for our Global Optimizing Resource Broker and Allocator GORBA. Of a range of possibly usable heuristics, the most promising ones have been chosen for implementation and evaluation. They serve two purposes: firstly, the heuristics are used to quickly generate feasible schedules; secondly, these schedules go into the start population of a subsequent run of the Evolutionary Algorithm incorporated in GORBA for improvement. Both are also used for evaluation. The effect of the selected heuristics is compared to the best simple heuristic used in the first version of GORBA. The investigation is based on two synthetically generated benchmarks representing a load of 300 grid jobs each. A formal definition of the scheduling problem is given, together with an assessment of its complexity. The results of the evaluation underline the described intricacy of the problem, because none of the heuristics performs better than our simple one, although they work well on other, presumably easier, problems.
Download

Paper Nr: 87
Title:

The task graph assignment for KASKADA platform

Authors:

Henryk Krawczyk and Jerzy Proficz

Abstract: Comparing the evaluation results for both types of optimisation, we can see that BFD (except BFD/SSP together) is the best for fragmentation optimisation and works quite well for latency. The HLT algorithm, as might be expected, is the best for latency optimisation, but performs extremely poorly for fragmentation.
Download

Paper Nr: 126
Title:

Structured Use-Cases as a Basis for Self-Management of Distributed Systems

Authors:

Reza Haydarlou, Martijn Warnier, Michel Oey and Frances Brazier

Abstract: Automated support for the management of complex distributed object-oriented systems is a challenge: self-management of such systems is the goal. This paper presents a use-case based approach to self-management of such systems, focusing on autonomic monitoring and diagnosis. The existing notion of use-case has been extended to different levels of system design: explicitly specifying system behavior at different levels and the relations between these levels, coupling structural models to these descriptions when and where appropriate. The proposed model is illustrated with a small example.
Download

Paper Nr: 220
Title:

IT infrastructure design and implementation considerations for the ATLAS TDAQ system

Authors:

Sergio Ballestrero, Alexandr Zaytsev and Alexander Bogdanchikov

Abstract: This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms, and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment now serves more than 3000 users subdivided into approximately 300 categories corresponding to their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS experiment provides 340 racks for hardware components and 4 MW of cooling capacity. The estimated data flow rate exported by the ATLAS TDAQ system for future long term analysis is about 2.5 PB/year. The number of CPU cores installed in the system will exceed 10000 during 2010.
Download

Paper Nr: 245
Title:

A SOFTWARE FRAMEWORK TO SUPPORT AGRICULTURE ACTIVITIES USING REMOTE SENSING AND HIGH PERFORMANCE COMPUTING

Authors:

Shamim Akhter and Kento AIDA

Abstract: Agricultural activity monitoring, including quantifying irrigation scheduling, tracing soil hydraulic properties, and generating the crop calendar, is very important for ensuring food security. Farmers want this information on a regular basis. Additionally, large-scale agricultural activity monitoring requires aggregating information from Remote Sensing (RS) images, and that type of processing takes a huge amount of computational time. Thus, optimization of the computational time is a vital requirement. In such cases, High Performance Computing (HPC) can help to reduce the processing time by increasing the computational resources. Moreover, web-based technology can contribute an understandable, efficient and effective monitoring system. Still, research merging the domains of RS image processing, agriculture and HPC has remained mainly hypothetical or conjectural rather than practically implemented. Thus, this research contributes a new software system to support agricultural activities in real time using both RS and HPC. The main purpose of the system is to serve valuable crop parameter information to farmers through a web-based system in real time. Additionally, we discuss in detail the implementation issues of the proposed software system.
Download

Area 4 - Data Management

Full Papers
Paper Nr: 86
Title:

DATABASE AUTHENTICATION BY DISTORTION FREE WATERMARKING

Authors:

Sukriti Bhattacharya and Agostino Cortesi

Abstract: In this paper we introduce a distortion-free watermarking technique that strengthens the verification of the integrity of relational databases, using a public zero-distortion authentication mechanism based on the Abstract Interpretation framework. The watermarking technique is partition based. The partitioning can be seen as a virtual grouping, which changes neither the values of the table’s elements nor their physical positions. Instead of inserting the watermark directly into a database partition, we treat the watermark as an abstract representation of that concrete partition, such that any change in the concrete domain is reflected in its abstract counterpart. The main idea is to generate a gray-scale image of the partition as its watermark, which serves as a tamper detection procedure, followed by a public zero-distortion authentication mechanism to verify ownership.
Download

Paper Nr: 92
Title:

DISCOVERING LARGE SCALE MANUFACTURING PROCESS MODELS FROM TIMED DATA: Application to STMicroelectronics’ Production Processes

Authors:

Pamela Viale, Jacques Pinaton, Nabil Benayadi and Marc Le Goc

Abstract: Modeling the manufacturing process of complex products like electronic chips is crucial to maximizing the quality of production. The Process Mining methods developed over the past decade aim at modeling such manufacturing processes from the timed messages contained in the database of the supervision system of the process. Such processes can be complex, making it difficult to apply the usual Process Mining algorithms. This paper proposes to apply the Stochastic Approach framework to model large scale manufacturing processes. A series of timed messages is considered as a sequence of class occurrences and is represented with a Markov chain from which models are deduced with abductive reasoning. Because sequences can be very long, a notion of process phase based on a concept of class of equivalence is defined to cut the sequences so that a model of a phase can be produced locally. The model of the whole manufacturing process is then obtained from the concatenation of the models of the different phases. This paper presents the application of this method to model STMicroelectronics’ manufacturing processes. STMicroelectronics’ interest in modeling its manufacturing processes stems from the necessity to detect discrepancies between the real processes and the experts’ definitions of them.
Download

Paper Nr: 97
Title:

Searching Keyword-lacking Files Based on Latent Interfile Relationships

Authors:

Tetsutaro Watanabe, Takashi Kobayashi and Haruo Yokota

Abstract: Current information technologies require file systems to contain so many files that searching for desired files is a major problem. To address this problem, desktop search tools using full-text search techniques have been developed. However, files lacking any given keywords, such as picture files and the source data of experiments, cannot be found by tools based on full-text search, even if they are related to the keywords. It is even harder to find files located in different directories from the files that include the keywords. In this paper, we propose a method for searching for files that lack keywords but do have an association with them. The proposed method derives relationship information from file access logs on the file server, based on the concept that files opened by a user within a particular time period are related. We have implemented the proposed method and evaluated its effectiveness by experiment. The evaluation results indicate that the proposed method is capable of finding keyword-lacking files and has superior precision and recall compared with full-text and directory-search methods.
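
A simplified reading of the core heuristic, sketched in Python with an invented window size and scoring scheme:

    from collections import defaultdict
    from itertools import combinations

    WINDOW = 300.0   # seconds; assumed co-access window

    def relatedness(access_log):
        # access_log: list of (timestamp, user, path), sorted by timestamp.
        scores, by_user = defaultdict(float), defaultdict(list)
        for ts, user, path in access_log:
            by_user[user].append((ts, path))
        for events in by_user.values():
            for (t1, f1), (t2, f2) in combinations(events, 2):
                if f1 != f2 and abs(t2 - t1) <= WINDOW:
                    scores[frozenset((f1, f2))] += 1.0   # co-accessed: related
        return scores

    def expand_results(hits, scores, k=5):
        # Add the k keyword-lacking files most related to the full-text hits.
        candidates = defaultdict(float)
        for pair, s in scores.items():
            a, b = tuple(pair)
            if a in hits and b not in hits:
                candidates[b] += s
            elif b in hits and a not in hits:
                candidates[a] += s
        return sorted(candidates, key=candidates.get, reverse=True)[:k]
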
Download

Paper Nr: 133
Title:

A Domain-related Authority Model for Web Pages based on Source and Related Information

Authors:

Liu Yang, Chunping Li and Ming Gu

Abstract: The Internet has become a great source for searching and acquiring information, while the authority of the resources is difficult to evaluate. In this paper we propose a domain-related authority model which aims to calculate the authority of web pages in a specific domain using the source and related information. These two factors, together with link structure, are what we mainly consider in our model. We also add the domain knowledge to adapt to the characteristics of the domain. Experiments on the finance domain show that our model is able to provide good authority scores and ranks for web pages and is helpful for people to better understand the pages.
Download

Paper Nr: 158
Title:

OBSERVATION-BASED FINE GRAINED ACCESS CONTROL FOR RELATIONAL DATABASES

Authors:

Raju Halder and Agostino Cortesi

Abstract: Fine Grained Access Control (FGAC) provides users with access to non-confidential database information while preventing unauthorized leakage of confidential data. It provides two extreme views of the database information: completely public or completely hidden. In this paper, we propose an Observation-based Fine Grained Access Control (OFGAC) mechanism based on the Abstract Interpretation framework, where data are made accessible at various levels of abstraction. In this setting, unauthorized users are not able to infer the exact content of a cell containing confidential information, while they are allowed to get partial information out of it, according to their access rights. Different levels of sensitivity of the information correspond to different levels of abstraction. In this way, we can tune different parts of the same database content according to different levels of abstraction at the same time. The traditional FGAC can be seen as a special case of the OFGAC framework.
Download

Paper Nr: 203
Title:

BUILDING A VIRTUAL VIEW OF HETEROGENEOUS DATA SOURCE VIEWS

Authors:

Clelio Quattrocchi, Maria Tortorella, Lerina Aversano and Roberto Intonti

Abstract: In order to make possible the analysis of data stored in heterogeneous data sources, it may be necessary first to build an aggregated view of these sources, also referred to as a virtual view. The problem is that the data sources can use different technologies and represent the same information in different ways. The use of a virtual view allows unified access to heterogeneous data sources without knowing the details of each single source. This paper proposes an approach for creating a virtual view of the views of heterogeneous data sources. The approach provides features for automatic schema matching and schema merging. It exploits both syntax-based and semantic-based techniques for performing the matching; it also considers both semantic and contextual features of the concepts. The usefulness of the approach is validated through a case study.
Download

Short Papers
Paper Nr: 54
Title:

A Structured Wikipedia for Mathematics

Authors:

Henry Lin

Abstract: In this paper, we propose a new idea for developing a collaborative online system for storing mathematical work, similar to Wikipedia but much more suitable for storing mathematical results and concepts. The main idea is to design a system that allows users to store mathematics in a structured manner, which makes related work easier to find. The proposed system has users use indentation to add a hierarchical structure to the mathematical results and concepts entered into the system. The hierarchical structure provided by this indentation gives users additional search functionality that is useful for finding related work. Additionally, the system automatically links related results by using the structure provided by users, and provides other useful functionality. The system is flexible in letting users decide how much structure to add to each mathematical result or concept, ensuring that contributors are not overly burdened. The system proposed in this paper serves as a starting point for discussion on new ideas for organizing mathematical results and concepts, and many open questions remain for new research.
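
The indentation-to-hierarchy idea can be sketched in a few lines of Python (the data model is invented for illustration): indented entries become children of the nearest shallower entry, giving results a searchable tree structure.

    def parse_outline(lines, indent=2):
        # Turn indented lines into a tree of {"text", "children", "depth"} nodes.
        root = {"text": None, "children": [], "depth": -1}
        stack = [root]
        for line in lines:
            depth = (len(line) - len(line.lstrip())) // indent
            node = {"text": line.strip(), "children": [], "depth": depth}
            while stack[-1]["depth"] >= depth:
                stack.pop()
            stack[-1]["children"].append(node)
            stack.append(node)
        return root["children"]

    outline = parse_outline([
        "Group theory",
        "  Lagrange's theorem",
        "    Corollary: the order of an element divides the group order",
    ])
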
Download

Paper Nr: 56
Title:

AUTOMATIC MINING OF HUMAN ACTIVITY AND ITS RELATIONSHIPS FROM CGM

Authors:

Hiroyuki Nakagawa, Minh Nguyen, Yasuyuki Tahara, Takahiro Kawamura and Akihiko Ohsuga

Abstract: The goal of this paper is to describe a method to automatically extract the basic attributes of an activity, namely actor, action, object, time and location, and the relationships (transition and cause) between activities in each sentence retrieved from Japanese CGM (consumer generated media). Previous work had several limitations, such as high setup cost, inability to extract all attributes, limitations on the types of sentences that can be handled, insufficient consideration of the interdependency among attributes, and inability to extract causes between activities. To resolve these problems, this paper proposes a novel approach that treats activity extraction as a sequence labeling problem and automatically builds its own training data. This approach has advantages such as domain independence, scalability, and no need for hand-tagged data. Since it is unnecessary to fix the positions and the number of attributes in activity sentences, this approach can extract all attributes and relationships between activities in a single pass over its corpus. Additionally, by converting complex sentences to simpler ones, removing stop words, and utilizing HTML tags, the Google Maps API and Wikipedia, the proposed approach can deal with complex sentences retrieved from Japanese CGM.
Download

Paper Nr: 71
Title:

ON USING THE NORMALIZED COMPRESSION DISTANCE TO CLUSTER WEB SEARCH RESULTS

Authors:

Alexandra Cernian, Liliana Dobrica, Dorin Carstoiu and Valentin Sgarciu

Abstract: Current Web search engines return long lists of ranked documents that users are forced to sift through to find relevant documents. This paper introduces a new approach for clustering Web search results, based on the notion of clustering by compression. Compression algorithms allow defining a similarity measure based on the degree of common information, and clustering methods allow grouping similar data without any previous knowledge. The clustering by compression procedure is based on a parameter-free, universal similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files. Our goal is to apply the clustering by compression algorithm in order to cluster the documents returned by a Web search engine in response to a user query.
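
The NCD itself is straightforward to compute; here is a sketch using zlib as the compressor (the NCD definition does not mandate a particular compressor):

    import zlib

    def clen(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: near 0 = very similar, near 1 = unrelated.
        cx, cy, cxy = clen(x), clen(y), clen(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Search results can then be clustered with any standard method applied
    # to the pairwise NCD matrix.
    snippets = [b"stock market crash", b"market stocks fall", b"apple pie recipe"]
    matrix = [[ncd(a, b) for b in snippets] for a in snippets]
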
Download

Paper Nr: 107
Title:

On Binary Similarity Measures for Privacy-Preserving top-N Recommendations

Authors:

Alper Bilge, Cihan Kaleli and Huseyin Polat

Abstract: Collaborative filtering (CF) algorithms fundamentally depend on similarities between users and/or items to predict individual preferences. Most CF algorithms operate on numerical rating values. However, in some cases, it might be crucial to predict whether a user purely likes or dislikes an item, which can be represented in binary form. There are various binary similarity measures, such as Kulzinsky, Sokal-Michener and Yule, to estimate the relation between two binary vectors. Although binary ratings-based CF algorithms are utilized, work remains to be done to compare the performance of binary similarity measures. Moreover, the success of CF systems depends enormously on reliable and truthful data collected from many customers, which can only be achieved if individual users' privacy is protected. In this study, we compare eight binary similarity measures in terms of accuracy while providing top-N recommendations. Moreover, we scrutinize how such measures perform in a privacy-preserving top-N recommendation process. We perform experiments based on real data. Our results show that the Dice and Jaccard similarity measures provide the best outcomes. After analyzing our results, we provide conclusions and suggestions on employing binary similarity measures in CF processes.
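
For reference, here are two of the measures named in the study, written in the usual contingency-count notation (a sketch only; the paper evaluates eight such measures):

    def _counts(u, v):
        # a = positions rated 1 by both, b = 1/0 mismatches, c = 0/1 mismatches
        a = sum(1 for x, y in zip(u, v) if x == 1 and y == 1)
        b = sum(1 for x, y in zip(u, v) if x == 1 and y == 0)
        c = sum(1 for x, y in zip(u, v) if x == 0 and y == 1)
        return a, b, c

    def jaccard(u, v):
        a, b, c = _counts(u, v)
        return a / (a + b + c) if a + b + c else 0.0

    def dice(u, v):
        a, b, c = _counts(u, v)
        return 2 * a / (2 * a + b + c) if a + b + c else 0.0

    likes_u = [1, 0, 1, 1, 0]
    likes_v = [1, 1, 1, 0, 0]
    print(jaccard(likes_u, likes_v), dice(likes_u, likes_v))  # 0.5 and about 0.667
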
Download

Paper Nr: 161
Title:

Towards a Faster Symbolic Aggregate Approximation Method

Authors:

Pierre-François Marteau and Muhammad M. Fuad

Abstract: The similarity search problem is one of the main problems in time series data mining. Traditionally, this problem was tackled by sequentially comparing the given query against all the time series in the database and returning all the time series within a predetermined threshold of that query. But the large size and high dimensionality of the time series databases in use nowadays make that scenario inefficient. There are many representation techniques that aim at reducing the dimensionality of time series so that the search can be handled faster in a lower-dimensional space. The symbolic aggregate approximation (SAX) is one of the most competitive methods in the literature. In this paper we present a new method that improves the performance of SAX by adding another exclusion condition that increases the exclusion power. This method is based on using two representations of the time series: one is SAX and the other is based on an optimal approximation of the time series. Pre-computed distances are calculated and stored offline to be used online to exclude a wide range of the search space using the two exclusion conditions. We conduct experiments which show that the new method is faster than SAX.
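
For readers unfamiliar with SAX, the discretization step itself can be sketched compactly; the breakpoints shown are the standard Gaussian quartiles for a four-letter alphabet, and none of this reflects the paper's new exclusion condition.

    import numpy as np

    BREAKPOINTS = [-0.6745, 0.0, 0.6745]   # N(0,1) quartiles, alphabet size 4

    def sax(series, n_segments, alphabet="abcd"):
        x = np.asarray(series, dtype=float)
        x = (x - x.mean()) / x.std()                  # z-normalize
        segments = np.array_split(x, n_segments)      # PAA: piecewise means
        means = [seg.mean() for seg in segments]
        # Map each segment mean to the symbol of the interval it falls in.
        return "".join(alphabet[np.searchsorted(BREAKPOINTS, m)] for m in means)

    print(sax([1, 2, 3, 4, 10, 12, 11, 2], n_segments=4))
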
Download

Paper Nr: 177
Title:

TRIOO, Keeping the Semantics of Data Safe and Sound into Object-Oriented Software

Authors:

Sergio Fernández, Diego Berrueta, Miguel García Rodríguez and Jose E. Labra Gayo

Abstract: Data management is a key factor in any software effort. Traditional solutions, such as relational databases, are rapidly losing ground in the market to more flexible approaches and data models, due to the fact that data stores built as monolithic components are not valid in many current scenarios. The World Wide Web Consortium proposes RDF as a suitable framework for modelling, describing and linking resources on the Web. Unfortunately, the current methods of accessing RDF data remain a kind of handcrafted work. The Trioo project therefore aims to provide powerful and flexible methods to access RDF datasets from object-oriented programming languages, allowing the usage of this data without negative influence on object-oriented designs and trying to keep the semantics of the data as accurate as possible.
Download

Paper Nr: 30
Title:

VASCULAR NETWORK SEMI-AUTOMATIC SEGMENTATION USING COMPUTED TOMOGRAPHY ANGIOGRAPHY

Authors:

Petr Maule and Jiri Polivka

Abstract: The article describes a simple and straightforward method for vascular network segmentation in computed tomography examinations. The proposed method is shown step by step, with illustrations of portal vein segmentation of the liver. We also describe a method of creating and exporting a mesh, and a simple way of visualizing it that is possible even from a web browser. The method was developed to provide satisfactory results in a short time and is intended to be used as geometry input for mathematical models.
Download

Area 5 - Knowledge-Based Systems

Full Papers
Paper Nr: 83
Title:

LEARNING DYNAMIC BAYESIAN NETWORKS WITH THE TOM4L PROCESS

Authors:

Ahmad Ahdab and Marc Le Goc

Abstract: This paper addresses the problem of learning a Dynamic Bayesian Network from timed data without prior knowledge of the system. One of the main problems in learning a Dynamic Bayesian Network is building and orienting the edges of the network while avoiding loops. The problem is more difficult when the data are timed. This paper proposes a new algorithm to learn the structure of a Dynamic Bayesian Network and to orient the edges from the timed data contained in a given timed database. The algorithm is based on an adequate representation of a set of sequences of timed data and uses an information-based measure of the relations between two edges. It is a part of the Timed Observation Mining for Learning (TOM4L) process, which is based on the Theory of Timed Observations. The paper illustrates the algorithm with a theoretical example before presenting the results of an application to the Apache system of the Arcelor-Mittal Steel Group, a real-world knowledge based system that diagnoses a galvanization bath.
Download

Paper Nr: 100
Title:

MALAPROPISMS DETECTION AND CORRECTION USING A PARONYMS DICTIONARY, A SEARCH ENGINE AND WORDNET

Authors:

Costin-Gabriel Chiru, Traian Rebedea, Stefan Trausan-Matu and Valentin Cojocaru

Abstract: This paper presents a method for the automatic detection and correction of malapropism errors found in documents using the WordNet lexical database, a search engine (Google) and a paronyms dictionary. The malapropisms detection is based on the evaluation of the cohesion of the local context using the search engine, while the correction is done using the whole text cohesion evaluated in terms of lexical chains built using the linguistic ontology. The correction candidates, which are taken from the paronyms dictionary, are evaluated versus the local and the whole text cohesion in order to find the best candidate that is chosen for replacement. The testing methods of the application are presented, along with the obtained results.
Download

Paper Nr: 118
Title:

A Study on Aligning Documents Using the Circle of Interest Technique

Authors:

Daniel Joseph and Cesar Marin

Abstract: In this paper we present a study on applying a technique called Circle of Interest, along with Formal Concept Analysis and Rough Set Theory to semantically align documents such as those found in a business domain. Indeed, when companies try to engage in business it becomes crucial to keep the semantics when exchanging information usually known as a business document. Typical approaches are not practical or require a high cost to implement. In contrast, we consider the concepts and their relationships discovered within an exchanged business document to find automatically an alignment to a local interpretation known as a document type. We present experimental results on applying Formal Concept Analysis as the ontological representation of documents, the Circle of Interest for selecting the most relevant document types to choose from, and Rough Set Theory for discerning among them. The results on a set of business documents show the feasibility of our approach and its direct application to a business domain.
Download

Paper Nr: 132
Title:

NLU Methodologies for Capturing Non-Redundant Information from Multi-Documents: A Survey

Authors:

Michael T. Mills and Nikolaos Bourbakis

Abstract: This paper provides a comparative survey of natural language understanding (NLU) methodologies for capturing non-redundant information from multiple documents. The scope of these methodologies is to generate a text output with reduced information redundancy and increased information coverage. The purpose of this paper is to inform the reader what methodologies exist and their features, capabilities and maturities based on evaluation criteria selected by users and developers. Tables of comparison at the end of this survey provide a quick glance of these technical attributes and maturity indicators abstracted from available information in the publications.
Download

Paper Nr: 138
Title:

Genetic Heuristics for Reducing Memory Energy Consumption in Embedded Systems

Authors:

Maha I. Aouad, René Schott and Olivier Zendra

Abstract: Nowadays, reducing memory energy has become one of the top priorities of many embedded systems designers. Given the power, cost, performance and real-time advantages of Scratch-Pad Memories (SPMs), it is not surprising that the SPM is becoming a common form of SRAM in embedded processors today. In this paper, we focus on heuristic methods for careful SPM management in order to reduce memory energy consumption. We propose genetic heuristics for memory management which are, to the best of our knowledge, new original alternatives to the best known existing heuristic (BEH). Our genetic heuristics outperform BEH: experiments performed on our benchmarks show that they consume from 76.23% up to 98.92% less energy than BEH in different configurations. In addition, they are easy to implement and do not require list sorting (contrary to BEH).
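
A toy formulation of such a genetic heuristic (ours, not the paper's encoding or energy model) treats SPM allocation as a constrained knapsack: one bit per variable says whether it lives in SPM, and fitness is the energy saved subject to the SPM capacity.

    import random

    SIZES = [4, 8, 2, 16, 6, 3]        # bytes per variable (assumed workload)
    GAINS = [40, 10, 30, 25, 60, 15]   # energy saved if placed in SPM (assumed)
    CAPACITY = 20

    def fitness(bits):
        size = sum(s for s, b in zip(SIZES, bits) if b)
        if size > CAPACITY:
            return 0                   # infeasible placement
        return sum(g for g, b in zip(GAINS, bits) if b)

    def evolve(pop_size=30, generations=100, p_mut=0.1):
        pop = [[random.randint(0, 1) for _ in SIZES] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(SIZES))     # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < p_mut else g
                                 for g in child])
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))
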
Download

Paper Nr: 160
Title:

Metamodel-Based Decision Support System For Disaster Management

Authors:

Siti H. Othman and Ghassan Beydoun

Abstract: Generally, software model developers use a general-purpose language such as the Unified Modelling Language (UML) to model their domain applications. But when the models they create do not fit the modelling needs at hand, a more specific domain modelling language offers a better alternative. In this paper, we create a Disaster Management (DM) metamodel that can be used to create a disaster management language. It will serve as a representational layer of DM expertise, leading to a DM decision support system based on combining and matching different DM activities according to the disaster at hand. The creation process of the metamodel is presented, leading to the synthesis of an initial metamodel as the main component of a decision support system to unify, facilitate and expedite access to DM expertise.
Download

Paper Nr: 232
Title:

Facet and prism based model for pedagogical indexation of texts for language learning

Authors:

Mathieu Loiseau, Georges Antoniadis and Claude Ponton

Abstract: In this article, we discuss the problem of pedagogical indexation of texts for language learning and address it under the scope of the notion of “pedagogical context”. This prompts us to propose a new version of a model based on a couple formed of two entities: prisms and facets. We first evoke the importance of material selection in the task of planning a language class in order to introduce our view of Yinger's model of planning applied to language teachers' search for texts. This is closely intermingled with the elaboration of the notion of pedagogical context, from which our model stems. This version, though similar in a way to our first attempt, provides sounder notions on which to build.
Download

Short Papers
Paper Nr: 37
Title:

EFFECTS OF EXPERT SYSTEMS IN COMPUTER BASED SUPPORT FOR CMMI IMPLEMENTATIONS

Authors:

Nilgün Gökmen and Ercan Öztemel

Abstract: Computing systems are becoming more complex in very dynamic and uncertain situations. Due to this complexity, the importance of process-focused quality approaches is increasing. Capability Maturity Model Integration (CMMI) standards and implementation practices were developed to simplify software project management and to assure the expected quality of the respective software. Realizing CMMI systems and building and monitoring the implementation practices require extensive knowledge and experience. Organizations acquire this knowledge and experience mainly through consultants, which may become too costly in many cases. Although some computer based support tools are available in the market, they still require human experts to justify the related artifacts. In this study, a knowledge-based assistant system called “CMMI Assistant” is introduced. The main aim of this tool is to support CMMI implementations by utilising expert system methodology.
Download

Paper Nr: 65
Title:

Approximate reasoning based on linguistic modifiers in a learning system

Authors:

Saoussen Bel Hadj Kacem, Amel Borgi and Moncef Tagina

Abstract: Approximate reasoning, initially introduced in the context of fuzzy logic, allows reasoning with imperfect knowledge. In a previous work, we proposed an approximate reasoning method based on linguistic modifiers in a symbolic context. To apply such reasoning, a rule base is needed. We propose in this paper to use a supervised learning system named SUCRAGE, which automatically generates multi-valued classification rules. Our reasoning is used with this rule base to classify new objects. Experimental tests and a comparative study with the two initial reasoning modes of SUCRAGE are presented. This application of approximate reasoning based on linguistic modifiers gives satisfactory results. Besides, it provides a comfortable linguistic interpretation to the human mind, thanks to the use of linguistic modifiers.
Download

Paper Nr: 101
Title:

FILLING THE GAPS USING GOOGLE 5-GRAMS CORPUS

Authors:

Costin-Gabriel Chiru, Andrei Hanganu, Traian Rebedea and Stefan Trausan-Matu

Abstract: In this paper we present a text recovery method based on probabilistic post-recognition processing of the output of an Optical Character Recognition system. The proposed method tries to fill in the gaps of missing text that result from the recognition of degraded documents. For this task, a corpus of up to 5-grams provided by Google is used. Several heuristics for applying this corpus to the task are described, after presenting the general problem and alternative solutions. These heuristics have been validated through a set of experiments, which are discussed together with the results obtained.
Download

Paper Nr: 152
Title:

A Green Decision Support System for Integrated Assembly and Disassembly Sequence Planning Using a PSO Approach

Authors:

Yuan-Jye Tseng, Feng-Yi Huang and Fang-Yu Yu

Abstract: A green decision support system is presented that integrates assembly and disassembly sequence planning and evaluates the two costs in one integrated model. In a green product life cycle, it is important to determine how a product can be disassembled before the product is planned to be assembled. For an assembled product, an assembly sequence planning model is required for assembling the product at the start, whereas a disassembly sequence planning model is needed for disassembling the product at the end. In typical approaches, the two sequences and their costs are planned and evaluated independently. In this research, a new integrated model is presented to concurrently generate and evaluate assembly and disassembly sequences. First, graph-based models are presented for representing feasible assembly and disassembly sequences. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed, in which a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The two sequences can thus be planned simultaneously with the objective of minimizing the total of assembly and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
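The sketch below shows one common way to let continuous PSO particles encode discrete sequences, using a random-key decoding; the paper's position-matrix encoding, which couples an assembly sequence with a disassembly sequence, is richer. The task set and cost function are invented placeholders.

    import random

    TASKS = ["A", "B", "C", "D"]

    def decode(priorities):
        """Random-key decoding: higher priority means earlier in the order."""
        return [t for _, t in sorted(zip(priorities, TASKS), reverse=True)]

    def cost(seq):
        """Placeholder cost: reward 'A' early and 'D' late (illustration)."""
        return seq.index("A") + (len(seq) - 1 - seq.index("D"))

    def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        dim = len(TASKS)
        xs = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        gbest = min(pbest, key=lambda p: cost(decode(p)))
        for _ in range(iters):
            for i, x in enumerate(xs):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * random.random() * (pbest[i][d] - x[d])
                                + c2 * random.random() * (gbest[d] - x[d]))
                    x[d] += vs[i][d]
                if cost(decode(x)) < cost(decode(pbest[i])):
                    pbest[i] = x[:]
            gbest = min(pbest, key=lambda p: cost(decode(p)))
        return decode(gbest)

    print(pso())  # a low-cost sequence, e.g. ['A', 'B', 'C', 'D']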
Download

Paper Nr: 168
Title:

Mining Timed Sequences to find Signatures

Authors:

Nabil Benayadi and Marc Le Goc

Abstract: We introduce the problem of mining sequential patterns among timed messages in large databases of sequences using a stochastic approach. An example of the patterns we are interested in is: in 50% of cases, an engine stop in the car occurs between 0 and 2 minutes after observing a lack of gas in the engine, which is produced between 0 and 1 minutes after the fuel tank is empty. We call these patterns "signatures". Previous research has considered equivalent patterns, but such work has three main problems: (1) the sensitivity of the algorithms to their parameter values, (2) the excessively large number of discovered patterns, and (3) the discovered patterns consider only the "after" relation (succession in time) and omit temporal constraints between pattern elements. To address these issues, we present the TOM4L process (Timed Observations Mining for Learning), which uses a stochastic representation of a given set of sequences, on which inductive reasoning coupled with abductive reasoning is applied to reduce the search space. A very simple example is used to show the efficiency of the TOM4L process against other approaches in the literature.
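The sketch below computes the kind of statistic such signatures are built on: how often one event class follows another within a bounded delay. It is only the raw counting step, not the TOM4L process itself; the toy sequence mirrors the engine example above.

    from collections import defaultdict

    # (timestamp in minutes, event class)
    SEQUENCE = [(0, "tank_empty"), (1, "gas_lack"), (2, "engine_stop"),
                (10, "tank_empty"), (11, "gas_lack"), (13, "engine_stop")]

    def timed_successions(seq, max_delay):
        """Collect the delays of every pair (a -> b) within max_delay."""
        pairs = defaultdict(list)
        for i, (t1, a) in enumerate(seq):
            for t2, b in seq[i + 1:]:
                if 0 < t2 - t1 <= max_delay:
                    pairs[(a, b)].append(t2 - t1)
        return pairs

    for (a, b), delays in timed_successions(SEQUENCE, 2).items():
        print(f"{a} -> {b}: {len(delays)} occurrences, delays {delays}")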
Download

Paper Nr: 169
Title:

SEAMLESS SOFTWARE DEVELOPMENT FOR SYSTEMS BASED ON BAYESIAN NETWORKS. An agricultural pest control system example.

Authors:

Isabel Maria Del Aguila Cano, Jose Del Sagrado, S. Túnez and Francisco J. Orellana Zubieta

Abstract: This work presents a specific solution for the development of software systems that embed both knowledge-based and non-knowledge-based functionalities, concerning the decision support process and the information management processes, respectively. When constructing a knowledge model, the processes to be performed mainly focus on describing the steps necessary to build it. Usually, approaches concentrate on adapting the software engineering life cycle to develop a knowledge model and neglect the problem of integrating it into the final software system. We propose a process model for developing software systems that use a Bayesian network as the knowledge model. To show how to apply our software process model, we include a partial view of the development process of a knowledge-based system for a real-world project related to decision making in an agricultural domain, specifically pest control in a given crop.
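As a minimal illustration of a Bayesian network serving as the knowledge model behind a decision, the sketch below computes a posterior by Bayes' rule for a two-node network; the variables, probabilities and decision threshold are invented, not taken from the project described.

    # Two-node network: PestOutbreak -> LeafDamage (all numbers invented).
    p_pest = 0.10                   # prior P(pest outbreak)
    p_damage_given_pest = 0.85      # CPT: P(damage | pest)
    p_damage_given_no_pest = 0.15   # CPT: P(damage | no pest)

    # Posterior P(pest | damage observed), by Bayes' rule.
    p_damage = (p_damage_given_pest * p_pest
                + p_damage_given_no_pest * (1 - p_pest))
    p_pest_given_damage = p_damage_given_pest * p_pest / p_damage

    print(f"P(pest | damage) = {p_pest_given_damage:.3f}")   # ~0.386
    if p_pest_given_damage > 0.3:   # illustrative decision threshold
        print("Recommend pest treatment")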
Download

Paper Nr: 202
Title:

EVALUATING AN INTELLIGENT COLLABORATIVE LEARNING ENVIRONMENT FOR UML

Authors:

Kalliopi Tourtoglou and Maria Virvou

Abstract: In this paper, we present an evaluation experiment of AUTO-COLLEAGUE conducted at the University of Piraeus. AUTO-COLLEAGUE is a collaborative learning environment for UML. Students are organized into groups and supported with a chat system to collaborate with each other. The system builds integrated individual student models with the aim of suggesting optimum groups of learners. These optimum groups allow the trainer to organize students in the way that is most effective for their performance. In other words, the strengths and weaknesses of the students are blended for the benefit of both the individuals and the groups. The student models concern the level of expertise and specific personality characteristics of the students. The results of the evaluation were quite encouraging, as they indicated improved individual performance of the students.
Download

Paper Nr: 246
Title:

THE ReALIS MODEL OF HUMAN INTERPRETERS AND ITS APPLICATION IN COMPUTATIONAL LINGUISTICS

Authors:

Gábor Alberti, Márton Károly and Judit Kleiber

Abstract: As we strive for sophisticated machine translation and reliable information extraction, we have launched a subproject pertaining to the modeling of human interpreters. The model is based on ReALIS, a new “post-Montagovian” discourse-semantic theory concerning the formal interpretation of sentences constituting coherent discourses, with a lifelong model of the lexical, interpersonal and cultural/encyclopedic knowledge of interpreters at its center, including their reciprocal knowledge about each other. Section 1 is devoted to the introduction of ReALIS. In Section 2 we provide linguistic data in order to show that intelligent language processing requires a realistic model of human interpreters. Section 3 sets out some principles of the implementation (in progress), and Section 4 demonstrates how to apply our model in computational linguistics.
Download

Paper Nr: 260
Title:

A SEMANTIC SEARCH ENGINE FOR A BUSINESS NETWORK: A Personalized Vision of the Web Applied to a Business Network

Authors:

Franco Tuveri, Manuela Angioni, Gavino Paddeu, Emanuela De Vita, Ivan Marcialis and Cristian Lai

Abstract: The Web’s evolution during the last few years shows that the advantages from the users’ point of view are not so significant. Although information is still the central element, the need to redefine the information paradigm is ever more evident, so that the net and the information can become truly user-centric through an inverse process that brings the information to the user, rather than the user to the information. New tools are needed to create a privileged window of observation on information and knowledge: each user with his specific interests. No longer a single available space of information with the same data shared by everyone: what each user needs is a specific private space of information reflecting his point of view, his way of classifying and managing information, and his network of contacts, according to the way each person chooses to experience the Web, the net and knowledge. In this paper we illustrate a part of a project named A Semantic Search Engine for a Business Network, where the introduction of natural language processing, user profiling, and automatic classification of information according to users’ personal schemas will contribute to redefining the vision of information and delineating processes of human-machine interaction.
Download

Paper Nr: 8
Title:

ONTOLOGYJAM - A Tool for Ontology Reuse

Authors:

Luis F. Piasseski, Milton Borsato and Cesar A. Tacla

Abstract: There has been notable growth in the use of ontologies in knowledge management. This is because, with ontologies, knowledge is shared and reused efficiently and clearly among all resources, be it a person or an application. For ontologies to establish confidence within an extremely competitive and flexible market, they must be created in a way that is swift and has high credibility, portability and scalability. However, there is a noted lack of tools to aid knowledge specialists in the construction of a new ontology. For this purpose, this article presents a tool that imports multiple ontologies and enables concept searches over the knowledge entities they represent. The selected knowledge can then be exported into a brand-new ontology, reusing existing knowledge with the aim of extending an ontology to make it adequate for its application.
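A strongly simplified sketch of the reuse workflow follows: search concepts across several imported ontologies and export the hits, with their parents, into a new ontology. Ontologies are reduced to concept-to-parents maps here; the names are invented and real OWL handling is far richer.

    # Each ontology is reduced to {concept: [parent concepts]}.
    ONTOLOGIES = {
        "vehicles.owl": {"Vehicle": [], "Car": ["Vehicle"], "Wheel": []},
        "logistics.owl": {"Vehicle": [], "Truck": ["Vehicle"], "Route": []},
    }

    def search_concepts(keyword):
        """Find concepts whose name contains the keyword, in any ontology."""
        return [(src, c) for src, onto in ONTOLOGIES.items()
                for c in onto if keyword.lower() in c.lower()]

    def export(selection):
        """Build a new ontology from selected concepts plus their parents."""
        new = {}
        for src, concept in selection:
            onto = ONTOLOGIES[src]
            new[concept] = onto[concept]
            for parent in onto[concept]:
                new.setdefault(parent, onto.get(parent, []))
        return new

    print(export(search_concepts("truck")))
    # -> {'Truck': ['Vehicle'], 'Vehicle': []}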
Download

Paper Nr: 51
Title:

Integration of Apriori algorithms with Case-Based Reasoning for Flight Accident Investigation

Authors:

Nan-Hsing Chiu, Pei-Da Lin and Chang E. Pu

Abstract: The analysis of flight accidents has been demonstrated to be a crucial tool for improving flight safety. Visual decision support systems can assist investigators in quickly and accurately identifying the underlying causes of accidents. This study aims at supplying a visual decision support system, based on the Apriori and case-based reasoning approaches, to assist investigators in analyzing human injuries in flight accidents. We demonstrate our approach using the aircraft configuration of flight CI611. The experimental results show that the proposed approach supports quick decisions by investigators on the basis of a visualization system.
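The sketch below shows the generic Apriori step of such an approach: mining itemsets that co-occur in at least min_support records. The toy accident records and attribute values are invented stand-ins for real investigation data; the visual front end is not sketched.

    RECORDS = [
        {"seat_rear", "no_belt", "severe_injury"},
        {"seat_rear", "no_belt", "severe_injury"},
        {"seat_front", "belt", "minor_injury"},
        {"seat_rear", "belt", "minor_injury"},
    ]

    def apriori(records, min_support):
        """Return all itemsets occurring in at least min_support records."""
        items = {i for r in records for i in r}
        frequent, k = {}, 1
        current = [frozenset([i]) for i in sorted(items)]
        while current:
            counts = {c: sum(1 for r in records if c <= r) for c in current}
            level = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(level)
            k += 1
            # Candidate generation: unions of frequent sets with size k.
            current = list({a | b for a in level for b in level
                            if len(a | b) == k})
        return frequent

    for itemset, count in apriori(RECORDS, 2).items():
        print(sorted(itemset), count)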
Download

Paper Nr: 72
Title:

A framework for information diffusion over social networks research - outlining options and challenges

Authors:

Juan Yao and Markus Helfert

Abstract: Information diffusion is a phenomenon in which new ideas or behaviours spread contagiously through social networks in the style of an epidemic. Recently, researchers have contributed a plethora of studies, approaches and theoretical contributions related to various aspects of the diffusion phenomenon. There are many options and approaches, yet only rare research articles consolidate and review them. In this paper, we contribute an overview of the most prominent approaches to the study of the diffusion phenomenon. We present a framework and research overview for this area. Our framework can assist researchers and practitioners in identifying suitable solutions and understanding the challenges in research on information diffusion over social networks.
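One prominent model in this literature is the independent cascade; the short simulation below illustrates it on a toy directed graph. The graph and the uniform activation probability are invented for the example.

    import random

    GRAPH = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    P = 0.4  # uniform per-edge activation probability

    def independent_cascade(graph, seeds, p):
        """Each newly activated node gets one chance to activate each
        inactive neighbour, with probability p per edge."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new = []
            for u in frontier:
                for v in graph[u]:
                    if v not in active and random.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        return active

    # Estimate the expected spread of seeding node "a" by Monte Carlo.
    runs = 1000
    avg = sum(len(independent_cascade(GRAPH, ["a"], P))
              for _ in range(runs)) / runs
    print(f"expected spread from 'a': {avg:.2f}")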
Download

Paper Nr: 105
Title:

CHALLENGES IN DISTRIBUTED INFORMATION SEARCH IN A SEMANTIC DIGITAL LIBRARY

Authors:

Antonio Martin and Carlos Leon

Abstract: Nowadays an enormous quantity of heterogeneous and distributed information is stored in digital libraries. Access to these collections poses a serious challenge, however, because present search techniques based on manually annotated metadata and linear replay of user-selected material do not scale effectively or efficiently to large collections. Artificial intelligence and the Semantic Web provide a common framework that allows knowledge to be shared and reused. In this paper we propose a comprehensive approach for discovering information objects in large digital collections, based on the analysis of semantic metadata recorded in those objects and the application of expert system technologies. We suggest a conceptual architecture for a semantic and intelligent search engine. OntoFAMA is a collaborative effort that proposes a new form of interaction between people and the digital library, where the latter is adapted to individuals and their surroundings. We have used the case-based reasoning methodology to develop a prototype supporting efficient knowledge retrieval from the digital library of Seville University. We concentrate on the critical issues of metadata/ontology-based search and expert systems. More specifically, we investigate, from a search perspective, possible intelligent infrastructures for constructing decentralized digital libraries where no global schema exists.
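As a minimal sketch of case-based retrieval in this setting, the snippet below matches a query against stored cases by a weighted attribute similarity and returns the nearest case. The attributes, weights and cases are invented; the actual OntoFAMA prototype also involves ontologies and expert system rules.

    CASES = [
        {"topic": "databases", "language": "en", "level": "graduate",
         "resource": "Distributed Query Processing notes"},
        {"topic": "databases", "language": "es", "level": "undergraduate",
         "resource": "Introducción a SQL"},
        {"topic": "networks", "language": "en", "level": "graduate",
         "resource": "TCP Congestion Control survey"},
    ]
    WEIGHTS = {"topic": 0.5, "language": 0.3, "level": 0.2}

    def similarity(query, case):
        """Weighted sum of exact attribute matches between query and case."""
        return sum(w for attr, w in WEIGHTS.items()
                   if query.get(attr) == case[attr])

    def retrieve(query):
        """Return the stored case most similar to the query."""
        return max(CASES, key=lambda c: similarity(query, c))

    print(retrieve({"topic": "databases", "language": "en"})["resource"])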
Download

Paper Nr: 154
Title:

A KNOWLEDGE SHARING SYSTEM FOR SOFTWARE DEVELOPERS

Authors:

Rentaro Yoshioka, Takayuki Shibata, Takanobu Sato and Kazuyuki Nakamura

Abstract: Knowledge sharing is a key factor in increasing the productivity of programmers and in maintaining the quality of programs in companies. However, programmers tend to resort to outside resources to solve their problems. This paper proposes a system to facilitate active sharing of program-related knowledge among a group of programmers in a company. The system introduces a flexible unit to define the target knowledge, a set of function tags to describe its functionality from a programming point of view, and a set of project tags to describe its environmental aspects. We illustrate the structure and classification of the tags and show, with a few examples, how this approach can decrease the workload of programmers in registering and retrieving knowledge. In addition, simple evaluation tests have been performed with an experimental implementation of the proposed system.
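The sketch below illustrates the tagging scheme: each knowledge unit carries a set of function tags and a set of project tags, and retrieval intersects both. The unit structure and tag names are assumptions for illustration, not the system's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeUnit:
        title: str
        body: str
        function_tags: set = field(default_factory=set)
        project_tags: set = field(default_factory=set)

    REPOSITORY = [
        KnowledgeUnit("CSV parsing snippet", "use csv.DictReader ...",
                      {"file-io", "parsing"}, {"billing-system"}),
        KnowledgeUnit("Retry wrapper", "exponential backoff ...",
                      {"networking", "error-handling"}, {"billing-system"}),
    ]

    def search(function_tags=frozenset(), project_tags=frozenset()):
        """Return units carrying all requested function and project tags."""
        return [u for u in REPOSITORY
                if function_tags <= u.function_tags
                and project_tags <= u.project_tags]

    for unit in search(function_tags={"parsing"}):
        print(unit.title)  # -> CSV parsing snippet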
Download

Paper Nr: 155
Title:

Timed Observations Modelling For Diagnosis Methodology: A Case Study

Authors:

Laura Pomponio and Marc Le Goc

Abstract: The TOM4D methodology constructs models at the same level of abstraction that experts use to diagnose a process; the resulting models are thus simpler and more abstract, allowing more efficient diagnosis. To this end, the CommonKADS framework, used to interpret and organize the available expert knowledge, is combined with a multi-modelling approach to describe that knowledge. This paper complements previous work on TOM4D by introducing the combined use of formal logic and the Tetrahedron of States to build models more suitable for the diagnosis task. Formal logic provides a logical interpretation of the expert's reasoning, while the Tetrahedron of States provides a physical interpretation of the process variables and allows physically impossible states to be excluded from the logical model.
Download

Paper Nr: 157
Title:

KNOWBENCH: A SEMANTIC USER INTERFACE FOR MANAGING KNOWLEDGE IN SOFTWARE DEVELOPMENT

Authors:

Dimitris Panagiotou and Gregoris Mentzas

Abstract: Modern software development consists of typically knowledge-intensive tasks, in the sense that software developers must create and share new knowledge during their daily work. In this paper we propose KnowBench, a knowledge management system integrated into the Eclipse IDE that supports developers during the software development process in producing better quality software. The goal of KnowBench is to support the whole knowledge management process when developers design and implement software, by supporting the identification, acquisition, development, distribution, preservation, and use of knowledge – the building blocks of a knowledge management system.
Download

Paper Nr: 175
Title:

APPLICATIONS OF EXPERT SYSTEM TECHNOLOGY IN THE ATLAS TDAQ CONTROLS FRAMEWORK

Authors:

Alina Corso-Radu, Giovanna L. Miotto, Raul M. Garcia, Luca Magnoni, Andrei Kazarov and John E. Sloper

Abstract: The ATLAS Trigger-DAQ system is composed of O(10000) applications running on ~1500 computers distributed over a network. To maximize the experiment's run efficiency, the Trigger-DAQ control system includes advanced verification, diagnostics and complex dynamic error-recovery tools based on an expert system. The error recovery (ER) system is responsible for analyzing and recovering from a variety of errors, both software and hardware, without stopping data-gathering operations. The verification framework allows users to develop and configure tests for any component in the system, with different levels of complexity. It can be used as a standalone test facility during the general TDAQ initialization procedure and for diagnosing problems that may occur at run time. A key role in both the recovery and verification frameworks is played by the rule-based expert system (also known as a knowledge-based system), which analyzes errors and decides on appropriate recovery actions. The system is composed of a dynamic set of rules that describe the TDAQ system behavior and an inference engine that decides which actions to perform. The system is currently used on a daily basis for the operation of the ATLAS experiment. The paper describes the architecture and implementation of the TDAQ error-recovery system and verification framework, with emphasis on the latest developments and the experience gained during the first LHC beam runs.
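The sketch below shows the rule/inference-engine split in miniature: a naive forward-chaining engine fires rules over a fact base until a fixpoint and collects recovery actions. The facts, rules and actions are invented placeholders, not the actual TDAQ knowledge base.

    RULES = [
        # (rule name, premises that must all hold, fact to assert)
        ("dead_app", {"heartbeat_lost", "host_reachable"}, "app_crashed"),
        ("dead_host", {"heartbeat_lost", "host_unreachable"}, "host_down"),
        ("restart", {"app_crashed"}, "action:restart_application"),
        ("exclude", {"host_down"}, "action:exclude_host"),
    ]

    def forward_chain(facts):
        """Fire rules until no rule adds a new fact, then return actions."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return {f for f in facts if f.startswith("action:")}

    print(forward_chain({"heartbeat_lost", "host_reachable"}))
    # -> {'action:restart_application'}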
Download