ICSOFT 2006 Abstracts


Area 1 - Distributed and Parallel Systems

Full Papers
Paper Nr: 187
Title:

ALGORITHMIC SKELETONS FOR BRANCH & BOUND

Authors:

Michael Poldner and Herbert Kuchen

Abstract: Algorithmic skeletons are predefined components for parallel programming. We present a skeleton for branch & bound problems for MIMD machines with distributed memory. This skeleton is based on a distributed work pool. We discuss two variants, one with supply-driven work distribution and one with demand-driven work distribution. This approach is compared to a simple branch & bound skeleton with a centralized work pool, which has been used in a previous version of our skeleton library Muesli. Based on experimental results for two example applications, namely the n-puzzle and the traveling salesman problem, we show that the distributed work pool is clearly better and enables good runtimes and, in particular, scalability. Moreover, we discuss some implementation aspects such as termination detection as well as overlapping computation and communication.
Download
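The work-pool organisation described above is implemented in the authors' C++ library Muesli; purely as an illustration of the underlying idea, the sketch below shows a sequential best-first branch & bound loop over a single work pool in Python (all function names are hypothetical). In the distributed variants, each worker would hold such a pool and exchange subproblems with its neighbours in a supply-driven or demand-driven way.

```python
import heapq

def branch_and_bound(root, branch, bound, is_solution, value):
    """Minimal sequential work-pool sketch of best-first branch & bound.

    branch(node)      -> iterable of child subproblems
    bound(node)       -> optimistic estimate (lower bound for minimization)
    is_solution(node) -> True if the node is a complete solution
    value(node)       -> objective value of a complete solution
    """
    best_value, best_node = float("inf"), None
    pool = [(bound(root), 0, root)]           # the (local) work pool
    counter = 1                               # tie-breaker for the heap
    while pool:
        b, _, node = heapq.heappop(pool)
        if b >= best_value:                   # prune: bound cannot improve incumbent
            continue
        if is_solution(node):
            if value(node) < best_value:
                best_value, best_node = value(node), node
            continue
        for child in branch(node):            # expand and push new subproblems
            cb = bound(child)
            if cb < best_value:
                heapq.heappush(pool, (cb, counter, child))
                counter += 1
    return best_node, best_value
```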

Paper Nr: 203
Title:

PARALLEL PROCESSING OF "GROUP-BY JOIN" QUERIES ON SHARED NOTHING MACHINES

Authors:

Mohamad Al Hajj Hassan and Mostafa Bamha

Abstract: SQL queries involving join and group-by operations are frequently used in many decision support applications. In these applications, the size of the input relations is usually very large, so the parallelization of these queries is highly recommended in order to obtain a desirable response time. The main drawbacks of the parallel algorithms presented in the literature for this kind of query are that they are very sensitive to data skew and involve expensive communication and Input/Output costs in the evaluation of the join operation. In this paper, we present an algorithm that minimizes the communication cost by performing the group-by operation before redistribution, where only tuples that will be present in the join result are redistributed. In addition, it evaluates the query without the need to materialize the result of the join operation, thus reducing the Input/Output cost of join intermediate results. The performance of this algorithm is analyzed using the scalable and portable BSP (Bulk Synchronous Parallel) cost model, which predicts a near-linear speed-up even for highly skewed data.
Download
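The algorithm itself is specified with BSP cost formulas in the paper; the Python sketch below only illustrates the core idea of aggregating locally and filtering on the join keys before any redistribution (data layout and function names are assumptions, not the authors' code).

```python
from collections import defaultdict

def local_preaggregate(partition, key, agg_col):
    """Aggregate locally on the group-by/join key before any redistribution."""
    acc = defaultdict(float)
    for row in partition:
        acc[row[key]] += row[agg_col]
    return acc                                # key -> partial SUM

def semijoin_filter(partial_aggs, join_keys_of_other_relation):
    """Ship only tuples whose key actually appears in the other relation."""
    return {k: v for k, v in partial_aggs.items() if k in join_keys_of_other_relation}

def redistribute(filtered, num_nodes):
    """Hash-partition the (already aggregated and filtered) tuples."""
    buckets = [defaultdict(float) for _ in range(num_nodes)]
    for k, v in filtered.items():
        buckets[hash(k) % num_nodes][k] += v
    return buckets
```

Each node then merges the partial sums it receives, so the full join result never has to be materialized.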

Paper Nr: 286
Title:

IMPACT OF WRAPPED SYSTEM CALL MECHANISM ON COMMODITY PROCESSORS

Authors:

Satoshi Yamada, Shigeru Kusakabe and Hideo Taniguchi

Abstract: Split-phase style transactions separate issuing a request and receiving the result of an operation into different threads. We apply this style to the system call mechanism so that a system call is split into several threads, in order to cut off the mode changes from system call execution inside the kernel. This style of system call mechanism improves throughput and is also useful in enhancing locality of reference. In this paper, we call this mechanism the Wrapped System Call (WSC) mechanism, and we evaluate the effectiveness of WSC on commodity processors. The WSC mechanism can be effective even on commodity platforms which do not have explicit multithread support. We evaluate the WSC mechanism based on a performance evaluation model using a simplified benchmark. We also apply the WSC mechanism to variants of the cp program to observe the effect on the enhancement of locality of reference. When we apply the WSC mechanism to the cp program, the combination of our split-phase style system calls and our scheduling mechanism is effective in improving throughput by reducing mode changes and exploiting locality of reference.
Download

Short Papers
Paper Nr: 92
Title:

AN APPROACH TO MULTI-AGENT COOPERATIVE SCHEDULING IN THE SUPPLY-CHAIN, WITH EXAMPLES

Authors:

Joaquim Reis

Abstract: The approach to scheduling presented in this article is applicable to multi-agent cooperative supply-chain production-distribution scheduling problems. The approach emphasises a temporal perspective on scheduling; it is based on a set of three steps each agent must perform, in which the agents communicate through an interaction protocol, and it presupposes the sharing of some specific temporal information (among other information) about the scheduling problem, for coordination. It allows the set of agents involved to conclude whether a given scheduling problem has any feasible solutions. In the first case, agent actions are prescribed to re-schedule, and so repair, a first solution if it contains constraint violations. The resulting overall agent scheduling behaviour is cooperative. We also include some results of the application of the approach based on simulations.
Download

Paper Nr: 120
Title:

TOWARDS A QUALITY MODEL FOR GRID PORTALS

Authors:

Mª Ángeles Moraga, Coral Calero, Mario Piattini and David Walker

Abstract: Researchers require multiple computing resources when conducting their computational research; this makes necessary the use of distributed resources. In response to the need for dependable, consistent and pervasive access to distributed resources, the Grid came into existence. Grid portals subsequently appeared with the aim of facilitating the use and management of distributed resources. Nowadays, many Grid portals can be found. In addition, users can change from one Grid portal to another with only a click of a mouse. So, it is very important that users regularly return to the same Grid portal, since otherwise the Grid portal might disappear. However, the only mechanism that makes users return is high quality. Therefore, in this paper and with all the above considerations in mind, we have developed a Grid portal quality model from an existing portal quality model, namely, PQM. In addition, the model produced has been applied to two specific Grid portals.
Download

Paper Nr: 170
Title:

A METHODOLOGY FOR ADAPTIVE RESOLUTION OF NUMERICAL PROBLEMS ON HETEROGENEOUS HIERARCHICAL CLUSTERS

Authors:

Wahid Nasri, Sonia Mahjoub and Slim Bouguerra

Abstract: Solving a target problem with a single algorithm, or writing portable programs that perform well on any parallel environment, is not always efficient due to the increasing diversity of existing computational supports, whose new characteristics influence the execution of parallel applications. The inherent heterogeneity and the diversity of networks of such environments represent a great challenge to efficiently implementing parallel applications for high performance computing. Our objective within this work is to propose a generic framework based on adaptive techniques for solving a class of numerical problems on cluster-based heterogeneous hierarchical platforms. Toward this goal, we use adaptive approaches to better adapt a given application to a target parallel system. We apply this methodology to a basic numerical problem, namely matrix multiplication, while determining an adaptive execution scheme minimizing the overall execution time depending on the problem and architecture parameters.
Download

Paper Nr: 188
Title:

LANGUAGE-BASED SUPPORT FOR SERVICE ORIENTED ARCHITECTURES: FUTURE DIRECTIONS

Authors:

Pablo Giambiagi, Olaf Owe, Gerardo Schneider and Anders Ravn

Abstract: The popularity of service-oriented architectures (SOA) lives on the promise of dynamic IT-supported interbusiness collaborations. Yet the programming models in use today are a poor match for the distributed, loosely-coupled, document-based SOA; and the gap widens: interoperability across organizations needs contracts to reduce risks. Thus, high-level contract models are making their way into SOA, but application developers are still left to their own devices when it comes to writing code that will comply with a contract. This paper surveys existing and future directions regarding language-based solutions to the above problem.
Download

Paper Nr: 192
Title:

A HYBRID TOPOLOGY ARCHITECTURE FOR P2P FILE SHARING SYSTEMS

Authors:

Juan P. Muñoz-Gea, Josemaría Malgosa-sanahuja, Pilar Manzanares-Lopez, Juan Carlos Sánchez-aarnoutse and Antonio M. Guirado-puerta

Abstract: Over the Internet today, there has been much interest in emerging Peer-to-Peer (P2P) networks because they provide a good substrate for creating data sharing, content distribution, and application layer multicast applications. There are two classes of P2P overlay networks: structured and unstructured. Structured networks can efficiently locate items, but the searching process is not user friendly. Conversely, unstructured networks have efficient mechanisms to search for content, but the lookup process does not take advantage of the distributed system nature. In this paper, we propose a hybrid structured and unstructured topology in order to take advantage of both kinds of networks. In addition, our proposal guarantees that if a content item is anywhere in the network, it will be reachable with probability one. Simulation results show that the behaviour of the network is stable and that the network distributes the contents efficiently to avoid network congestion.
Download

Paper Nr: 91
Title:

AN APPROACH TO MULTI-AGENT VISUAL COMPOSITION WITH MIXED STYLES

Authors:

Joaquim Reis

Abstract: Applications of computer systems that mix Art, Science and Engineering have appeared as a result of the evolution of information technologies in the last three decades. Frequently, they involve the use of Artificial Intelligence techniques and they have appeared in the fields of music, literary arts and, more recently, visual arts. This article proposes a computational system based on creative intelligent agents that, by making use of the shape grammar formalism, can support visual composition synthesis activities. In this system, each agent gives its creative contribution through a style of its own. Different modes of agent contribution can be put into perspective, for instance cooperative or non-cooperative modes, the resulting composition emerging from these contributions.
Download

Paper Nr: 252
Title:

DEVELOPING A FAULT ONTOLOGY ENGINE FOR EVALUATION OF SERVICE-ORIENTED ARCHITECTURE USING A CASE STUDY SYSTEM

Authors:

Binka Gwynne and Jie Xu

Abstract: This paper reports on the current progress of research into the development and implementation of a Fault Ontology Engine. The engine was devised to facilitate the testing and evaluation of Service-Oriented Architecture (SOA), using ontologically supported software fault injection testing mechanisms. The aims of this research stem from the importance of system evaluation and the notion that testing and evaluation methods for modern distributed systems could be better supported by autonomous software machines, due to the potential dynamics, size, and complexity of SOA and the variety of resources it offers. Fault injection testing is still generally in the domain of human expertise, experience and intuition, and machines require knowledge of testing mechanisms to develop testing strategies, with information that is formal, explicit, and in languages that they can interpret. This paper contains descriptions of experimental work carried out in order to generate information for modelling the fault and failure domains of a real-world case study system. Information from this case study system will be used to identify points of interest and target points for subsequent tests. It is hoped to show that inferences taken from known systems can be used for intelligent testing of unknown systems.
Download

Paper Nr: 263
Title:

A PEER-TO-PEER SEARCH IN DATA GRIDS BASED ON ANT COLONY OPTIMIZATION

Authors:

Uros Jovanovic and Bostjan Slivnik

Abstract: A method for (1) an efficient discovery of data in large distributed raw datasets and (2) collection of the data thus procured is considered. It is a pure peer-to-peer method without any centralized control and is therefore primarily intended for large-scale, dynamic (data) grid environments. It provides a simple but highly efficient mechanism for keeping the load it causes under control and proves especially useful if data discovery and collection are to be performed simultaneously with dataset generation. The method supports a user-specified extraction of structured metadata from raw datasets, and automatically performs aggregation of extracted metadata. It is based on the principle of ant colony optimization (ACO). The paper is focused on effective data aggregation and includes a detailed description of the modifications of the basic ACO algorithm that are needed for effective aggregation of the extracted data. Using a simulator, the method was vigorously tested on a wide set of different network topologies for different rates of data extraction and aggregation. Results of the most significant tests are included.
Download
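The paper modifies the basic ACO algorithm for metadata aggregation; the generic sketch below only illustrates the standard ACO ingredients it builds on, namely probabilistic next-hop selection and pheromone reinforcement along successful search paths (parameter names and values are illustrative, not the authors' settings).

```python
import random

def choose_next_hop(pheromone, neighbours, alpha=1.0):
    """Pick the next peer with probability proportional to pheromone^alpha."""
    weights = [pheromone[n] ** alpha for n in neighbours]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for n, w in zip(neighbours, weights):
        acc += w
        if r <= acc:
            return n
    return neighbours[-1]

def reinforce(pheromone, path, reward, rho=0.1):
    """Evaporate pheromone everywhere, then deposit reward along a successful path."""
    for n in pheromone:
        pheromone[n] *= (1.0 - rho)
    for n in path:
        pheromone[n] += reward
```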

Area 2 - Information Systems and Data Management

Full Papers
Paper Nr: 89
Title:

DISCOVERY AND AUTO-COMPOSITION OF SEMANTIC WEB SERVICES

Authors:

Philippe Larvet and Bruno Bonnin

Abstract: In order to facilitate the on-demand delivery of new services for mobile terminals as well as for fixed phones, we propose a user-centric solution based on a Semantic Service-Oriented Architecture (SSOA) for instant building and delivery of new services composed from existing Web services discovered and assembled on-the-fly. This solution, based on semantic descriptions of Web services, is made of three main mechanisms: a semantic service discoverer, transparent to the user, finds the pertinent Web services matching the user's original request, expressed vocally, by SMS, or as simple text; a semantic service composer, using the semantic descriptions of the Web services, combines and orchestrates the discovered services in order to build a new service fully matching the user's request; and a service deliverer makes the new service immediately accessible to the user.
Download

Paper Nr: 129
Title:

ADDING MORE SUPPORT FOR ASSOCIATIONS TO THE ODMG OBJECT MODEL

Authors:

Bryon Ehlmann

Abstract: The Object Model defined in the ODMG standard for object data management systems (ODMSs) provides referential integrity support for one-to-one, one-to-many, and many-to-many associations. It does not, however, provide support that enforces the multiplicities often specified for such associations in UML class diagrams, nor does it provide the same level of support for associations that is provided in relational systems via the SQL references clause. The Object Relationship Notation (ORN) is a declarative scheme that provides for the specification of enhanced association semantics. These semantics include multiplicities and are more powerful than those provided by the SQL references clause. This paper describes how ORN can be added to the ODMG Object Model and discusses algorithms that can be used to support ORN association semantics in an ODMG-compliant ODMS. The benefits of such support are improved productivity in developing object database systems and increased system reliability.
Download

Paper Nr: 167
Title:

MEASURING EFFECTIVENESS OF COMPUTING FACILITIES IN ACADEMIC INSTITUTES A NEW SOLUTION FOR A DIFFICULT PROBLEM

Authors:

Smriti Sharma and Veena Bansal

Abstract: There has been a constant effort to evaluate the success of Information Technology in organizations. This kind of investment is extremely hard to evaluate because of the difficulty in identifying tangible benefits, as well as high uncertainty about achieving the expected value. Though a lot of research has taken place in this direction, not much has been written about evaluating IT in non-profit organizations like educational institutions. Measures for evaluating the success of IT in such institutes are markedly different from those of business organizations. The purpose of this paper is to build further upon the existing body of research by proposing a new model for measuring the effectiveness of computing facilities in academic institutes. As a baseline, DeLone & McLean's model for measuring the success of Information Systems (DeLone & McLean 1992, DeLone & McLean 2003) is used, as it is the pioneering model in this regard.
Download

Paper Nr: 196
Title:

COMBINING INFORMATION EXTRACTION AND DATA INTEGRATION IN THE ESTEST SYSTEM

Authors:

Dean Williams and Alexandra Poulovassilis

Abstract: We describe an approach which builds on techniques from Data Integration and Information Extraction in order to make better use of the unstructured data found in application domains such as the Semantic Web which require the integration of information from structured data sources, ontologies and text. We describe the design and implementation of the ESTEST system which integrates available structured and semi-structured data sources into a virtual global schema which is used to partially configure an information extraction process. The information extracted from the text is merged with this virtual global database and is available for query processing over the entire integrated resource. As a result of this semantic integration, new queries can now be answered which would not be possible from the structured and semi-structured data alone. We give some experimental results from the ESTEST system in use.
Download

Paper Nr: 227
Title:

A FRAMEWORK FOR THE DEVELOPMENT AND DEPLOYMENT OF EVOLVING APPLICATIONS - Elaborating on the Model Driven Architecture Towards a Change-resistant Development Framework

Authors:

Georgios Voulalas and Georgios Evangelidis

Abstract: Software development is an R&D intensive activity, dominated by human creativity and diseconomies of scale. Current efforts focus on design patterns, reusable components and forward-engineering mechanisms as the right next stage in cutting the Gordian knot of software. Model-driven development improves productivity by introducing formal models that can be understood by computers. Through these models the problems of portability, interoperability, maintenance, and documentation are also successfully addressed. However, the problem of evolving requirements, which is more prevalent within the context of business applications, additionally calls for efficient mechanisms that ensure consistency between models and code, and enable seamless and rapid accommodation of changes, without interrupting severely the operation of the deployed application. This paper introduces a framework that supports rapid development and deployment of evolving web-based applications, based on an integrated database schema. The proposed framework can be seen as an extension of the Model Driven Architecture targeting a specific family of applications.
Download

Paper Nr: 232
Title:

SMART BUSINESS OBJECT - A New Approach to Model Business Objects for Web Applications

Authors:

Xufeng Liang and Athula Ginige

Abstract: At present, there is a growing need to accelerate the development of web applications and to support the continuous evolution of web applications due to evolving business needs. The object persistence and web interface generation capabilities in contemporary MVC (Model View Controller) web application development frameworks and the model-to-code generation capability in Model-Driven Development tools have simplified the modelling of business objects for developing web applications. However, there is still a mismatch between the current technologies and the essential support for high-level, semantic-rich modelling of web-ready business objects for rapid development of modern web applications. Therefore, we propose a novel concept called Smart Business Object (SBO) to solve the above-mentioned problem. In essence, SBOs are web-ready business objects. SBOs have high-level, web-oriented attributes such as email, URL, video, image, document, etc. This allows SBOs to be modelled at a higher level of abstraction than traditional modelling approaches. A lightweight, near-English modelling language called SBOML (Smart Business Object Modelling Language) is proposed to model SBOs. We have created a toolkit to streamline the creation (modelling) and consumption (execution) of SBOs. With these tools, we are able to build fully functional web applications in a very short time without any coding.
Download

Short Papers
Paper Nr: 42
Title:

ON THE EVALUATION OF TREE PATTERN QUERIES

Authors:

Yangjun Chen

Abstract: The evaluation of XPath expressions can be handled as a tree embedding problem. In this paper, we propose two strategies on this issue. One is based on ordered-tree embedding and the other on unordered-tree embedding. For the ordered-tree embedding, our algorithm needs only O(|T| x |P|) time and O(|T| x |P|) space, where |T| and |P| stand for the numbers of nodes in the target tree T and the pattern tree P, respectively. For the unordered-tree embedding, we give an algorithm that needs O(|T| x |P| x 2^(2k)) time, where k is the largest out-degree of any node in P.
Download
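The paper's O(|T| x |P|) algorithm is not spelled out in the abstract; for readers unfamiliar with the problem, the following naive (exponential) recursion only illustrates what ordered tree inclusion means, using a hypothetical (label, children) node representation. It is an illustration, not the authors' algorithm.

```python
def included(target_forest, pattern_forest):
    """Naive ordered tree-inclusion test: can the pattern forest be obtained from
    the target forest by deleting target nodes (a deleted node's children take
    its place, in order)?  Nodes are (label, [children]) pairs.  Exponential;
    for illustration only -- the paper's algorithm runs in O(|T| x |P|)."""
    if not pattern_forest:
        return True
    if not target_forest:
        return False
    (t_label, t_children), *t_rest = target_forest
    (p_label, p_children), *p_rest = pattern_forest
    # Option 1: match the first pattern root at the first target root.
    if t_label == p_label and included(t_children, p_children) and included(t_rest, p_rest):
        return True
    # Option 2: delete the first target root; its children take its place.
    return included(t_children + t_rest, pattern_forest)

# Example: P = a(b, c) is included in T = a(x(b), c) by deleting node x.
T = [("a", [("x", [("b", [])]), ("c", [])])]
P = [("a", [("b", []), ("c", [])])]
assert included(T, P)
```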

Paper Nr: 61
Title:

PROGRAM VERIFICATION TECHNIQUES FOR XML SCHEMA-BASED TECHNOLOGIES

Authors:

Suad Alagic, Mark Royer and David Briggs

Abstract: Representation and verification techniques for XML Schema types, structures, and applications, in a program verification system PVS are presented. Type derivations by restriction and extension as defined in XML Schema are represented in the PVS type system using predicate subtyping. Availability of parametric polymorphism in PVS makes it possible to represent XML sequences and sets via PVS theories. Powerful PVS logic capabilities are used to express complex constraints of XML Schema and its applications. Transaction verification methodology developed in the paper is grounded on declarative, logic-based specification of the frame constraints and the actual transaction updates. A sample XML application given in the paper includes constraints typical for XML schemas such as keys and referential integrity, and in addition ordering and range constraints. The developed proof strategy is demonstrated by a sample transaction verification with respect to this schema. The overall approach has a model theory based on the view of XML types and structures as theories.
Download

Paper Nr: 76
Title:

VIRTUAL MUSEUM – AN IMPLEMENTATION OF A MULTIMEDIA OBJECT-ORIENTED DATABASE

Authors:

Rodrigo Maia and Jorge Rady De Almeida Júnior

Abstract: This paper describes the main characteristics involved in the process of using multimedia content in Internet sites and presents a proposal for an implementation of an object-oriented database, in order to meet the multimedia data requirements of a dynamic website. An implementation of the proposed architecture is described, consisting of a virtual museum made for the Contemporary Art Museum of USP, called Virtual MAC, which was elected the 3rd best virtual museum in the world by INFOLAC Web 2005 (UNESCO). The main objective of Virtual MAC is to create a virtual collection of works of art and make it available on the Internet. Our analysis shows that it is more appropriate to use the Object Oriented paradigm instead of Relational Modelling due to the nature of the multimedia data and the structure of the dynamic web site used for the Virtual MAC.
Download

Paper Nr: 85
Title:

CRYSTALLIZATION OF AGILITY - Back to Basics

Authors:

Asif Qumer and Brian Henderson-Sellers

Abstract: There are a number of agile and traditional methodologies for software development. Agilists provide agile principles and agile values to characterize agile methods, but there is no clear and inclusive definition of agile methods; consequently, it is not feasible to draw a clear distinction between traditional and agile software development methods in practice. The purpose of this paper is to explain the concept of agility in detail and then to suggest a definition of agile methods that would help in the ranking or differentiation of agile methods from other available methods.
Download

Paper Nr: 118
Title:

ON CONTEXT AWARE PREDICATE SEQUENCE QUERIES

Authors:

Hagen Höpfner

Abstract: Due to the limited input capabilities of small mobile information system clients like mobile phones, supporting a descriptive query language like SQL is not a must. Furthermore, information systems with mobile clients have to address characteristics resulting from client mobility as well as from wireless communications. These additional functions can be supported by a reasonable, well-defined notation for queries. Moreover, such systems should be context aware. In this paper we present a query notation named “context aware predicate sequence queries” which respects these issues.
Download

Paper Nr: 136
Title:

USAGE TRACKING LANGUAGE: A META LANGUAGE FOR MODELLING TRACKS IN TEL SYSTEMS

Authors:

Christophe Choquet and Sébastien Iksal

Abstract: In the context of distance learning and teaching, the re-engineering process needs a feedback on the learners' usage of the learning system. The feedback is given by numerous vectors, such as interviews, questionnaires, videos or log files. We consider that it is important to interpret tracks in order to compare the designer’s intentions with the learners’ activities during a session. In this paper, we present the usage tracking language – UTL. This language is designed to be generic and we present an instantiation of a part of it with IMS-Learning Design, the representation model we chose for our three years of experiments.
Download

Paper Nr: 161
Title:

PCA-BASED DATA MINING PROBABILISTIC AND FUZZY APPROACHES WITH APPLICATIONS IN PATTERN RECOGNITION

Authors:

Luminita State, Catalina Cocianu, Panayiotis Vlamos and Viorica Stefanescu

Abstract: The aim of the paper is to develop a new learning-by-examples PCA-based algorithm for extracting skeleton information from data, to assure both good recognition performance and generalization capabilities. Here the generalization capabilities are viewed twofold: on one hand, to identify the right class for new samples coming from one of the classes taken into account and, on the other hand, to identify samples coming from a new class. The classes are represented in the measurement/feature space by continuous distributions, that is, the model is given by the family of density functions (f_h), h ∈ H, where H stands for the finite set of hypotheses (classes). The basis of the learning process is represented by samples of possibly different sizes coming from the considered classes. The skeleton of each class is given by the principal components obtained for the corresponding sample.
Download
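As a rough illustration of the per-class "skeleton" idea, the sketch below builds principal components for each class and classifies a new sample by reconstruction error, flagging a new class when no subspace fits well enough. The threshold, function names and data layout are assumptions, and the probabilistic/fuzzy machinery of the paper is not reproduced.

```python
import numpy as np

def class_skeleton(samples, n_components):
    """Per-class 'skeleton': mean and leading principal directions of the sample."""
    mean = samples.mean(axis=0)
    centred = samples - mean
    # SVD of the centred sample matrix yields the principal components.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]             # (mean, components) per class

def reconstruction_error(x, skeleton):
    mean, comps = skeleton
    centred = x - mean
    projection = comps.T @ (comps @ centred)   # project onto the class subspace
    return np.linalg.norm(centred - projection)

def classify(x, skeletons, novelty_threshold):
    """Assign x to the class whose subspace reconstructs it best;
    report a new class if no subspace fits well enough."""
    errors = {label: reconstruction_error(x, sk) for label, sk in skeletons.items()}
    label, err = min(errors.items(), key=lambda kv: kv[1])
    return "new-class" if err > novelty_threshold else label
```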

Paper Nr: 201
Title:

AN ANALYSIS OF THE EFFECTS OF SPATIAL LOCALITY ON THE CACHE PERFORMANCE OF BINARY SEARCH TREES

Authors:

Thomas B. Puzak and Chun-Hsi Huang

Abstract: The topological structure of binary search trees does not translate well into the linear nature of a computer’s memory system, resulting in high cache miss rates on data accesses. This paper analyzes the cache performance of search operations on several varieties of binary trees. Using uniform and nonuniform key distributions, the number of cache misses encountered per search is measured for Vanilla, AVL, and two types of Cache Aware Trees. Additionally, concrete measurements of the degree of spatial locality observed in the trees are provided. This allows the trees to be evaluated for situational merit, and definitive explanations of their performance to be given. Results show that the balancing operations of AVL trees effectively negate any spatial locality gained through naive allocation schemes. Furthermore, for uniform input, this paper shows that large cache lines are only beneficial to trees that consider the cache’s line size in their allocation strategy. Results in the paper demonstrate that adaptive cache aware allocation schemes that approximate the key distribution of a tree have universally better performance than static systems that favor a particular key distribution.
Download
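The measurements in the paper come from concrete experiments; purely to illustrate how misses per search can be counted, here is a toy direct-mapped cache model in which a node's "address" is its allocation index (all parameters are arbitrary, and this is not the authors' experimental setup).

```python
class CacheSim:
    """Toy direct-mapped cache: counts misses for node accesses, where
    line_size consecutively allocated nodes share one cache line."""
    def __init__(self, num_lines=256, line_size=4):
        self.num_lines, self.line_size = num_lines, line_size
        self.lines = [None] * num_lines
        self.misses = 0

    def access(self, node_index):
        line_tag = node_index // self.line_size
        slot = line_tag % self.num_lines
        if self.lines[slot] != line_tag:       # not resident: count a miss
            self.misses += 1
            self.lines[slot] = line_tag

def search(tree, key, cache):
    """BST search; tree maps allocation index -> (key, left_idx, right_idx)."""
    idx = 0                                    # root is allocated first
    while idx is not None:
        cache.access(idx)
        k, left, right = tree[idx]
        if key == k:
            return True
        idx = left if key < k else right
    return False
```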

Paper Nr: 222
Title:

A UNIFIED APPROACH FOR SOFTWARE PROCESS REPRESENTATION AND ANALYSIS

Authors:

Vassilis Gerogiannis, George Kakarontzas and Ioannis Stamelos

Abstract: This paper presents a unified approach for software process management which combines object-oriented (OO) structures with formal models based on (high-level timed) Petri nets. This pairing may prove beneficial not only for the integrated representation of software development processes, human resources and work products, but also in analysing properties and detecting errors of a software process specification before the process is put to actual use. The use of OO models provides the advantages of graphical abstraction, a high level of understanding and manageable representation of software process classes and instances. The resulting OO models are mechanically transformed into a high-level timed Petri net representation to derive a model for formally proving process properties as well as applying managerial analysis. We demonstrate the applicability of our approach by addressing a simple software process modelling example problem used in the literature to exercise various software process modelling notations.
Download

Paper Nr: 254
Title:

CLICKSTREAM DATA MINING ASSISTANCE - A Case-Based Reasoning Task Model

Authors:

Cristina Wanzeller and Orlando Belo

Abstract: This paper presents a case-based reasoning system to assist users in knowledge discovery from clickstream data. The system is especially oriented to store and make use of the knowledge acquired from the experience of solving specific clickstream data mining problems inside a corporate environment. We describe the main design, implementation and characteristics of this system. It was implemented as a prototype Web-based application, centralizing the past mining processes in a corporate memory. Its main goal is to recommend the most suited mining strategies to address the problem at hand, accepting as inputs the characteristics of the available data and the analysis requirements. The system also takes advantage of and integrates related corporate information resources, supporting a semi-automated data gathering approach.
Download

Paper Nr: 255
Title:

ADMIRE FRAMEWORK: DISTRIBUTED DATA MINING ON DATA GRID PLATFORMS

Authors:

Nhien L. Khac, Mohand-Tahar Kechadi and Joe Carthy

Abstract: In this paper, we present the ADMIRE architecture, a new framework for developing novel and innovative data mining techniques to deal with very large and distributed heterogeneous datasets in both commercial and academic applications. The main ADMIRE components are detailed, as well as its interfaces allowing the user to efficiently develop and implement their data mining application techniques on a Grid platform such as Globus Toolkit, DGET, etc.
Download

Paper Nr: 257
Title:

FORMAL FRAMEWORK FOR SEMANTIC INTEROPERABILITY

Authors:

Nadia Yaacoubi Ayadi, Mohamed Ben Ahmed and Yann Pollet

Abstract: The semantics of schema models is not explicit but is always hidden in their structures and labels. To obtain semantic interoperability we need to make their semantics explicit by taking into account both the interpretation of the labels and the structures described by the arcs. We address in this paper the issue of semantic interoperability between systems relying on semantically heterogeneous hierarchies, which have been designed for independent specific goals and activities. Given a set of generalization hierarchies, our approach places much emphasis on semantic added value by "surfacing" the intended informal meaning of concepts; for this we rely on the WordNet lexical repository. In the first part of the paper, we provide a rigorous logical framework for representing and automatically reasoning on generalization hierarchies regardless of their formalism (UML, ER diagrams, etc.). Then, we describe the SEM-INTEROP algorithm, which consists of two main steps: semantic interpretation and semantic comparison.
Download

Paper Nr: 266
Title:

USING LINGUISTIC TECHNIQUES FOR SCHEMA MATCHING

Authors:

Ozgul Unal and Hamideh Afsarmanesh

Abstract: In order to deal with the problem of semantic and schematic heterogeneity in collaborative networks, matching components among database schemas need to be identified and heterogeneity needs to be resolved, by creating the corresponding mappings in a process called schema matching. One important step in this process is the identification of the syntactic and semantic similarity among elements from different schemas, usually referred to as Linguistic Matching. The Linguistic Matching component of a schema matching and integration system, called SASMINT, is the focus of this paper. Unlike other systems, which typically utilize only a limited number of similarity metrics, SASMINT makes an effective use of NLP techniques for the Linguistic Matching and proposes a weighted usage of several syntactic and semantic similarity metrics. Since it is not easy for the user to determine the weights, SASMINT provides a component called Sampler as another novelty, to support automatic generation of weights.
Download
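SASMINT combines several syntactic and semantic metrics with weights produced by its Sampler component; the sketch below only shows the weighted-combination idea with two simple syntactic metrics (the WordNet-based semantic metrics and the learned weights are omitted, and the weights shown are invented).

```python
def levenshtein(a, b):
    """Classic edit distance, computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def syntactic_similarity(a, b):
    m = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a.lower(), b.lower()) / m

def token_similarity(a, b):
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def combined_similarity(a, b, weights=(0.6, 0.4)):
    """Weighted combination of two string metrics between schema element names."""
    w1, w2 = weights
    return w1 * syntactic_similarity(a, b) + w2 * token_similarity(a, b)

print(combined_similarity("customer_name", "client_name"))
```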

Paper Nr: 292
Title:

A DATA MINING APPROACH TO LEARNING PROBABILISTIC USER BEHAVIOR MODELS FROM DATABASE ACCESS LOG

Authors:

Mikhail Petrovskiy

Abstract: The problem of user behavior modeling arises in many fields of computer science and software engineering. In this paper we investigate a data mining approach for learning probabilistic user behavior models from database usage logs. We propose a procedure for translating database traces into a representation suitable for applying data mining methods. However, most existing data mining methods rely on the order of actions and ignore the time intervals between actions. To avoid this problem we propose a novel method based on the combination of a decision tree classification algorithm and an empirical time-dependent feature map, motivated by potential function theory. The performance of the proposed method was experimentally evaluated on real-world data. The comparison with existing state-of-the-art data mining methods has confirmed the outstanding performance of our method in predictive user behavior modeling and has demonstrated competitive results in anomaly detection.
Download
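The paper's time-dependent feature map is motivated by potential function theory; as a much simpler stand-in, the sketch below turns a database trace into per-action counts plus inter-action time-gap statistics and feeds them to a scikit-learn decision tree (the action alphabet and example traces are hypothetical, not the paper's data).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

ACTIONS = ["SELECT", "INSERT", "UPDATE", "DELETE"]      # hypothetical action alphabet

def featurize(trace):
    """trace: list of (timestamp_seconds, action).  Returns per-action counts
    plus mean and max gap between consecutive actions."""
    counts = [sum(1 for _, a in trace if a == act) for act in ACTIONS]
    times = [t for t, _ in trace]
    gaps = np.diff(times) if len(times) > 1 else np.array([0.0])
    return counts + [float(gaps.mean()), float(gaps.max())]

# Hypothetical training traces labelled by the kind of user that produced them.
traces = [
    [(0.0, "SELECT"), (0.5, "SELECT"), (10.0, "UPDATE")],
    [(0.0, "INSERT"), (0.1, "INSERT"), (0.2, "DELETE")],
]
labels = ["analyst", "batch_job"]
X = np.array([featurize(tr) for tr in traces])
model = DecisionTreeClassifier(max_depth=5).fit(X, labels)
```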

Paper Nr: 22
Title:

SOFTWARE SYNTHESIS OF THE WEB-BASED QUESTIONNAIRE SYSTEM

Authors:

Masahiro Yamamoto

Abstract: Questionnaires on the web are increasingly used in business and personal settings. However, business staff and ordinary individuals usually cannot implement them by themselves; they must ask information technology professionals to build such questionnaires, which takes time and is costly. It would therefore be very useful if business staff and individuals could easily create such questionnaires themselves. A software synthesis system for web-based questionnaire systems aimed at these users has been developed.
Download

Paper Nr: 30
Title:

INFORMATION SYSTEM DESIGN AND PROTOTYPING USING FORM TYPES

Authors:

Jelena Pavicevic, Ivan Luković, Pavle Mogin and Miro Govedarica

Abstract: The paper presents the form type concept, which generalizes the screen forms that users utilize to communicate with an information system. The concept is semantically rich enough to enable the specification of an initial set of constraints that makes it possible to generate application prototypes together with the related implementation database schema. IIS*Case is a CASE tool based on the form type concept that supports conceptual modelling of an information system and its database schema. The paper outlines how this tool can generate XML specifications of application prototypes of an information system. The aim is to improve IIS*Case through the implementation of a generator which can automatically produce an executable prototype of an information system.
Download

Paper Nr: 32
Title:

DATA MINING METHODS FOR GIS ANALYSIS OF SEISMIC VULNERABILITY

Authors:

Florin Leon and Gabriela M. Atanasiu

Abstract: This paper aims at designing data mining methods for evaluating the seismic vulnerability of regions in the built infrastructure. A supervised clustering methodology is employed, based on k-nearest neighbor graphs. Unlike other classification algorithms, the method has the advantage of taking into account any distribution of training instances and also the data topology. For the particular problem of seismic vulnerability analysis using a Geographic Information System, the gradual formation of clusters (for different values of k) allows a decision-making stakeholder to visualize more clearly the details of the cluster areas. The performance of the k-nearest neighbor graph method is tested on three classification problems, and finally it is applied to a sample from a digital map of Iaşi, a large city located in the North-Eastern part of Romania.
Download
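As an illustration of the k-nearest-neighbour-graph idea (not the paper's exact supervised procedure), the sketch below connects mutual k-nearest neighbours and returns the connected components of the resulting graph as clusters.

```python
import numpy as np

def knn_graph_clusters(points, k):
    """Connect each point to its k nearest neighbours (mutual edges only) and
    return connected components as clusters.  points: (n, d) float array."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]                 # indices of k nearest
    parent = list(range(n))                               # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]                 # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in knn[i]:
            if i in knn[j]:                               # mutual neighbours -> edge
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
print(knn_graph_clusters(pts, k=1))            # -> two clusters: [0, 1] and [2, 3]
```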

Paper Nr: 124
Title:

A BAYESIAN NETWORK TO STRUCTURE A DATA QUALITY MODEL FOR WEB PORTALS

Authors:

Angélica Caro, Coral Calero, Houari Sahraoui, Ghazwa Malak and Mario Piattini

Abstract: Technological advances and the use of the internet have favoured the appearance of a great diversity of web applications, among them Web portals. Through them, organizations develop their businesses in a highly competitive environment. One decisive factor for this competitiveness is the assurance of data quality. In previous works, a data quality model for Web portals has been developed. The model is represented as a matrix that links the user expectations of web data quality to the portal functionalities. In this matrix a set of 34 attributes were classified. However, the quality attributes in this model lack the operational structure necessary for actual assessment. In this paper we present how we have structured these attributes by means of a probabilistic approach, using Bayesian Networks. The final objective is to use the Bayesian network obtained for evaluating the data quality of a portal (or a subset of its characteristics).
Download

Paper Nr: 194
Title:

A DETECTION METHOD OF STAGNATION SYMPTOMS BY USING PROJECT PROGRESS MODELS GENERATED FROM PROJECT REPORTS

Authors:

Satoshi Tsuji, Yoshitmo Ikkai and Masanori Akiyoshi

Abstract: The purpose of this research is to extract “stagnation symptoms” from progress reports related to a research project. A stagnation symptom is defined as a portion where remarkable stagnation is seen during the progress of a project. Specifically, according to project managers, stagnation symptoms can be classified into the following three kinds: the first is a project bottleneck grasped from one document; the second is clarified by comparing a document with the most recent document; and the third is clarified from changes to a working object in a series of documents. We propose a method of extracting stagnation symptoms using structural analysis of a project’s progress. A progress model, a structural chart expressing the progress of a project, is generated from documents with label tags, which indicate prior contexts or attributes. This progress model has the following features: a multilevel layer model using degrees of detail, situation analysis using color, and relation analysis of these details and their basis using color propagation. Stagnation symptoms are automatically extracted by applying stagnation symptom extraction rules to the progress model. The proposed method was applied to a set of real progress reports, and it could extract stagnation symptoms that had previously been extracted manually.
Download

Paper Nr: 299
Title:

EFFECTIVENESS OF WEB BASED PBL USING COURSE MANAGEMENT TECHNOLOGIES: A CASE STUDY

Authors:

Havva Basak and Serdar Ayan

Abstract: Maritime education and training has typically focused on delivering practical courses for a practical vocation. In the modern environment, maritime personnel now need to be more professional, more open to change and more business-like in their thinking. This has led to changes in the education system that supports the maritime industries. Teaching thinking skills has become a major agenda for education. Problem-Based Learning is a part of this thinking. Problem-Based Learning (PBL) within a web-based environment in the delivery of an undergraduate course has been investigated. The effects were evaluated by comparing the performances of the students using web-based PBL with the outcomes of traditional PBL. The outcomes of the experiments were positive. By having real life problems as focal points and students as active problem-solvers, the learning paradigm would shift towards the attainment of higher thinking skills.
Download

Paper Nr: 307
Title:

WEB INFORMATION SYSTEM: A FOUR LEVEL ARCHITECTURE

Authors:

Roberto Paiano, Anna L. Guido and Leonardo Mangia

Abstract: Business processes are playing a very important role in companies, and their explicit introduction into Information System architecture is a must. Given the interest shown in Web Applications, it is important to introduce a new web-oriented class of software which gives the manager the possibility of operating directly with the process (we will talk about process-oriented WIS - Web Information System). It is necessary to replace the three-level logic of traditional application development (Data, Business Logic, Presentation), which hides processes in the Business Logic, with a four-level logic that allows the process level to be separated from the application level: definition and management of the processes will not be tied solely to business logic. Our research (work in progress) focuses on an innovative framework (software architecture and methodology) for Information System development that links together the know-how acquired in Web Application design and in process definition concepts.
Download

Area 3 - Knowledge Engineering

Full Papers
Paper Nr: 88
Title:

APPROXIMATE REASONING TO LEARN CLASSIFICATION RULES

Authors:

Amel Borgi

Abstract: In this paper, we propose an original use of approximate reasoning not only as a mode of inference but also as a means to refine a learning process. This work is done within the framework of the supervised learning method SUCRAGE, which is based on automatic generation of classification rules. Production rules, whose conclusions are accompanied by belief degrees, are obtained by supervised learning from a training set. These rules are then exploited by a basic inference engine: it fires only the rules with which the new observation to classify matches exactly. To introduce more flexibility, this engine was extended to an approximate inference which allows rules not too far from the new observation to be fired. In this paper, we propose to use approximate reasoning to generate new rules with widened premises: thus the imprecision of the observations is taken into account and problems due to the discretization of continuous attributes are eased. The objective is then to exploit the new base of rules with a basic inference engine, which is easier to interpret. The proposed method was implemented and experimental tests were carried out.
Download
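A toy illustration of the widened-premise idea follows: interval premises are enlarged by a tolerance before a basic engine fires exactly matching rules. SUCRAGE's actual rule generation and belief-degree computation are not reproduced, and the example rule is invented.

```python
def widen(rule, tolerance):
    """Widen every interval premise [lo, hi] of a rule by a tolerance fraction."""
    widened = {}
    for attr, (lo, hi) in rule["premises"].items():
        margin = tolerance * (hi - lo)
        widened[attr] = (lo - margin, hi + margin)
    return {"premises": widened, "conclusion": rule["conclusion"],
            "belief": rule["belief"]}

def fire(rules, observation):
    """Basic engine: a rule fires only if the observation matches every premise;
    among firing rules, return the conclusion with the highest belief degree."""
    matches = [r for r in rules
               if all(lo <= observation[a] <= hi
                      for a, (lo, hi) in r["premises"].items())]
    if not matches:
        return None
    return max(matches, key=lambda r: r["belief"])["conclusion"]

# Hypothetical rule base, widened by 10% before classification.
rules = [{"premises": {"petal_len": (1.0, 2.0)}, "conclusion": "setosa", "belief": 0.9}]
widened_rules = [widen(r, 0.10) for r in rules]
print(fire(widened_rules, {"petal_len": 2.05}))           # fires thanks to widening
```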

Paper Nr: 126
Title:

COMBINING METAHEURISTICS FOR THE JOB SHOP SCHEDULING PROBLEM WITH SEQUENCE DEPENDENT SETUP TIMES

Authors:

Miguel Á. González Fernández, María Rita Sierra Sánchez, María Del Camino Rodríguez Vela, Ramiro Varela Arias and Jorge Puente

Abstract: Job Shop Scheduling (JSS) is a hard problem that has interested researchers in various fields such as Operations Research and Artificial Intelligence during the last decades. Due to its high complexity, only small instances can be solved by exact methods, while instances of a size of practical interest should be solved by means of approximate methods guided by heuristic knowledge. In this paper we confront Job Shop Scheduling with Sequence Dependent Setup Times (SDJSS). The SDJSS problem models many real situations better than the JSS. Our approach consists in extending a genetic algorithm and a local search method that demonstrated to be efficient in solving the JSS problem. We report results from an experimental study showing that the proposed approaches are more efficient than other genetic algorithms proposed in the literature, and that they are quite competitive with some of the state-of-the-art approaches.
Download

Paper Nr: 242
Title:

MINING OF COMPLEX OBJECTS VIA DESCRIPTION CLUSTERING

Authors:

Alejandro G. Lopez, Rafael Berlanga Llavori and Roxana Danger

Abstract: In this work we present a formal framework for mining complex objects, i.e., objects characterised by a set of heterogeneous attributes and their corresponding values. First we give an introduction to the various Data Mining techniques available in the literature to extract association rules. We also show some of the drawbacks of these techniques and how our proposed solution tackles them. Then we show how applying a clustering algorithm as a pre-processing step on the data allows us to find groups of attributes and objects that provide a richer starting point for the Data Mining process. We then define the formal framework, its decision functions and its interestingness measurement rules, as well as newly designed Data Mining algorithms specifically tuned for our objectives. We also show the type of knowledge to be extracted in the form of a set of association rules. Finally we state our conclusions and propose future work.
Download

Paper Nr: 281
Title:

A PATTERN SELECTION ALGORITHM IN KERNEL PCA APPLICATIONS

Authors:

Ruixin Yang, John Tan and Menas Kafatos

Abstract: Principal Component Analysis (PCA) has been extensively used in different fields including earth science for spatial pattern identification. However, the intrinsic linear feature associated with standard PCA prevents scientists from detecting nonlinear structures. Kernel-based principal component analysis (KPCA), a recently emerging technique, provides a new approach for exploring and identifying nonlinear patterns in scientific data. In this paper, we recast KPCA in the commonly used PCA notation for earth science communities and demonstrate how to apply the KPCA technique into the analysis of earth science data sets. In such applications, a large number of principal components should be retained for studying the spatial patterns, while the variance cannot be quantitatively transferred from the feature space back into the input space. Therefore, we propose a KPCA pattern selection algorithm based on correlations with a given geophysical phenomenon. We demonstrate the algorithm with two widely used data sets in geophysical communities, namely the Normalized Difference Vegetation Index (NDVI) and the Southern Oscillation Index (SOI). The results indicate the new KPCA algorithm can reveal more significant details in spatial patterns than standard PCA.
Download
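As a simplified sketch of the proposed selection step (not the paper's full procedure), the code below computes RBF kernel PCA projections by eigendecomposing the centred Gram matrix and then ranks components by their correlation with a given index time series such as the SOI; the gamma parameter and the data layout (rows = time steps) are assumptions.

```python
import numpy as np

def kernel_pca(X, gamma, n_components):
    """RBF kernel PCA: eigendecompose the centred Gram matrix and return the
    projections of the samples onto the leading nonlinear components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one            # centre in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

def select_by_correlation(projections, index_series, top=3):
    """Rank KPCA components by |correlation| with a geophysical index (e.g. SOI)."""
    corrs = [abs(np.corrcoef(projections[:, j], index_series)[0, 1])
             for j in range(projections.shape[1])]
    return np.argsort(corrs)[::-1][:top]
```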

Short Papers
Paper Nr: 156
Title:

GEOSPATIAL PUBLISHING - Creating and Managing Geo-Tagged Knowledge Repositories

Authors:

Arno Scharl

Abstract: International media have recognized the potential of geo-browsers such as NASA World Wind and Google Earth, for example when Web and television coverage on hurricane “Katrina” used interactive geospatial projections to illustrate its path and the scale of destruction. Yet these early applications only hint at the true potential of geo-browsing technology to build and maintain virtual communities, and to revolutionize the production, distribution and consumption of media products. Investigating this potential, this paper reviews the literature on geospatial publishing with a special focus on extracting geospatial context from unstructured textual resources. A content analysis of online coverage based on a suite of text mining tools then sheds light on the popularity and adoption of geo-browsing platforms. While such platforms might help enrich a company’s portfolio of media products, they also pose a threat for existing players through attracting new competitors, e.g., independent providers of geospatial metadata or location-based services.
Download

Paper Nr: 166
Title:

PARTNER ASSESSMENT USING MADM AND ONTOLOGY FOR TELECOM OPERATORS

Authors:

Long Zhang and Xiaoyan Chen

Abstract: Nowadays, the revenue of telecom operators generated by traditional services has declined dramatically, while value-added services involving third-party value-added service providers (partners) are becoming the most prominent source of revenue growth. To regulate the behaviours of the partners and enable operators to select the best service for end users, a flexible partner assessment framework is required. This paper 1) presents a flexible partner assessment framework based on the Multiple Attribute Decision Making (MADM) method for telecom operators, to adapt to the changing requirements of value-added services; and 2) proposes an ontology to model the complicated relationships among the assessment factors, to achieve high extensibility for the increasing decision knowledge. From our study, the method adopted and the system proposed can handle the partner assessment problem and reasonably support service selection in the telecom industry.
Download
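MADM covers a family of methods; as one simple representative, the sketch below scores partners by simple additive weighting over normalized benefit and cost attributes. The attributes, weights and partner data are invented, and the paper's ontology-based factor model is not reproduced.

```python
def simple_additive_weighting(partners, weights, benefit_attrs):
    """Score each partner by a weighted sum of normalized attribute values.
    'benefit_attrs' lists attributes where larger is better; others are costs."""
    attrs = list(weights)
    lo = {a: min(p[a] for p in partners.values()) for a in attrs}
    hi = {a: max(p[a] for p in partners.values()) for a in attrs}
    scores = {}
    for name, p in partners.items():
        s = 0.0
        for a in attrs:
            span = (hi[a] - lo[a]) or 1.0
            norm = (p[a] - lo[a]) / span
            if a not in benefit_attrs:                    # cost attribute: invert
                norm = 1.0 - norm
            s += weights[a] * norm
        scores[name] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical partner data: service quality (benefit) and complaint rate (cost).
partners = {"SP-A": {"quality": 0.92, "complaints": 0.05},
            "SP-B": {"quality": 0.85, "complaints": 0.01}}
print(simple_additive_weighting(partners, {"quality": 0.7, "complaints": 0.3},
                                benefit_attrs={"quality"}))
```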

Paper Nr: 191
Title:

A RETRIEVAL METHOD OF SIMILAR QUESTION ARTICLES FROM WEB BULLETIN BOARD

Authors:

Yohei Sakurai, Soichiro Miyazaki and Masanori Akiyoshi

Abstract: This paper proposes a method for retrieving similar question articles from Web bulletin boards, which basically uses the cosine similarity index derived from a user’s query sentence and article question sentences. Since these sentences are mostly short, it is difficult to distinguish whether article question sentences are similar to a user’s query sentence or not simply by applying the conventional cosine similarity index. In an attempt to overcome this problem, our method modifies the elements of the word vectors used in the cosine similarity index, which are derived from sentence structure from the viewpoints of common words and non-common words between a user’s query sentence and article question sentences. Experimental results indicate that our proposed method is effective.
Download

Paper Nr: 240
Title:

AN EXTRACTION METHOD OF TIME-SERIES NUMERICAL DATA FROM ENTERPRISE PRESS RELEASES

Authors:

Masanori Akiyoshi, Mayu Gen, Masaki Samejima and Norihisa Komoda

Abstract: This paper addresses an extraction method of time-series numerical data from enterprise press releases for business strategy design. Business strategy consists of logical actions for continuously producing enterprise outcomes. The business strategy design process, which is partially based on competitive environment analysis, has so far relied heavily on professional skills. To enhance and accelerate the competitive environment analysis, we focus on press releases of competitors in order to extract numerical data related to products or services. Sentences in press releases are well organized and grammatically correct. Therefore such extraction can be done simply by identifying the co-occurrence of product or service keywords and unit indicators. In addition to this simple approach, we clarify specific rules for applying our method to practical press releases.
Download
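As a minimal sketch of the keyword/unit co-occurrence idea (the unit list, keywords and regular expressions are assumptions, not the authors' rules), the code below pulls number-unit pairs out of sentences that mention a product keyword.

```python
import re

UNITS = r"(million yen|billion yen|units|subscribers|%)"   # hypothetical unit indicators
NUMBER = r"([0-9][0-9,\.]*)"

def extract_figures(press_release, product_keywords):
    """Return (keyword, value, unit) triples for sentences where a product keyword
    co-occurs with a number followed by a unit indicator."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", press_release):
        hits = [kw for kw in product_keywords if kw.lower() in sentence.lower()]
        if not hits:
            continue
        for value, unit in re.findall(NUMBER + r"\s*" + UNITS, sentence):
            results.append((hits[0], value, unit))
    return results

text = ("Shipments of the X100 handset reached 1,200,000 units in March. "
        "Revenue grew to 3.5 billion yen.")
print(extract_figures(text, ["X100"]))
# -> [('X100', '1,200,000', 'units')]
```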

Paper Nr: 81
Title:

A VIEW ON THE WEB ENGINEERING NATURE OF WEB BASED EXPERT SYSTEMS

Authors:

Ioannis M. Dokas and Alexandre Alapetite

Abstract: The Web has become the ubiquitous platform for distributing information and computer services. The tough Web competition, the way people and organizations rely on Web applications, and the increasing user requirements for better services have raised their complexity. Expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. This paper argues that Web engineering and expert system principles should be combined when developing Web based expert systems. A development process model is presented that illustrates, in brief, how these principles can be combined. Based on this model, a publicly available Web based expert system called Landfill Operation Management Advisor (LOMA) was developed. In addition, the results of an accessibility evaluation on LOMA – the first ever reported on Web based expert systems – are presented. Based on this evaluation, some thoughts on accessibility guidelines specific to Web based expert systems are reported.
Download

Paper Nr: 168
Title:

MODELLING AND MANAGING KNOWLEDGE THROUGH DIALOGUE: A MODEL OF COMMUNICATION-BASED KNOWLEDGE MANAGEMENT

Authors:

Violaine Prince

Abstract: In this paper, we describe a model that relies on the following assumption: ontology negotiation and creation are necessary to make knowledge sharing and KM successful through communication. We mostly focus on the modifying process, i.e. dialogue, and we show that a dynamic modification of agents' knowledge bases can occur through message exchanges, messages being knowledge chunks to be mapped onto the agents' KBs. Dialogue takes account of both success and failure in mapping. We show that the same process helps repair its own anomalies. We describe an architecture for agents' knowledge exchange through dialogue. Finally, we conclude about the benefits of introducing dialogue features in knowledge management.
Download

Paper Nr: 218
Title:

SOME SPECIFIC HEURISTICS FOR SITUATION CLUSTERING PROBLEMS

Authors:

Boris Melnikov, Alexey Radionov, Andrey Moseev and Elena Melnikova

Abstract: The present work is a continuation of several of the authors' preceding works dedicated to a specific multiheuristic approach to discrete optimization problems. This paper considers those issues of this multiheuristic approach which relate to the problems of clustering situations. In particular, it considers the issues of the authors' approach to the problems and the description of specific heuristics for the problems. We give the description of a particular example from the group of “Hierarchical Clustering Algorithms”, which we use for clustering situations. We also give descriptions of some common methods and algorithms related to such clustering. There are two examples of metrics on sets of situations for two different problems; one of the problems is a classical discrete optimization problem and the other one is a game-playing programming problem.
Download

Paper Nr: 259
Title:

SPECIFICATION OF DEPENDENCIES FOR IT SERVICE FAULT MANAGEMENT

Authors:

Andreas Hanemann, David Schmitz, Patricia Marcu and Martin Sailer

Abstract: The provisioning of IT services is often based on a variety of resources and underlying services. To deal with this complexity the dependencies between these elements have to be well-known. In particular, dependencies are needed for tracking a failure in a higher-level service being offered to customers down to the provisioning infrastructure. Another use of dependencies is estimating the impact of an assumed or actual resource failure on the services, to allow for decision making about appropriate measures. Starting from a set of requirements, an analysis of the state of the art shows the contributions and limitations of existing research approaches and industry tools for a configuration management solution with respect to dependencies. To improve the current situation, a methodology is proposed to model dependencies for given service provisioning scenarios. The proposed dependency modeling is part of a larger solution for an overall service management information repository.
Download

Paper Nr: 262
Title:

AN ACQUISITION KNOWLEDGE PROCESS FOR SOFTWARE DEVELOPMENT - Knowledge Acquisition for a Software Process Implementation Environment

Authors:

Sandro Oliveira, Alexandre Vasconcelos, Lúcio Silva and Albérico Pena

Abstract: Knowledge must be managed efficiently in an organization through its capture, maintenance and dissemination. However, knowledge related to business process execution is distributed in documents, corporate systems and in key members' minds, making the access, preservation and distribution of this knowledge to other members more difficult. In this context, systematic knowledge acquisition processes are necessary to acquire and preserve organizational knowledge. This work presents a process to acquire organization members' tacit and explicit knowledge related to business processes, and the functionalities of a tool developed to support the execution of this process in a software development context. This tool is part of a software process implementation environment, called ImPProS, developed at CIn/UFPE – Center of Informatics/Federal University of Pernambuco.
Download

Paper Nr: 273
Title:

ROA MODULAR LDAP-BASED APPROACH TO INDUSTRIAL SOFTWARE REVISION CONTROL

Authors:

Cristina De Castro and Paolo Toppan

Abstract: A software revision control system stores and manages successive, revised versions of applications, so that every design stage can be easily backtracked. In an industrial context, revision control concerns the evolution of software installed on complex systems and plants, where the need for revision is likely to arise from many different and correlated factors. In this paper, starting from the wide bibliography available on the subject, some typical schemes are discussed for representing such factors. An LDAP-based architecture is addressed for modelling and storing their evolution.
Download

Area 4 - Programming Languages

Full Papers
Paper Nr: 147
Title:

FROM STATIC TO DYNAMIC PROCESS TYPES

Authors:

Franz Puntigam

Abstract: Process types – a kind of behavioral types – specify constraints on message acceptance for the purpose of synchronization and to determine object usage and component behavior in object-oriented languages. So far, process types have been regarded as a purely static concept for Actor languages, incompatible with inherently dynamic programming techniques. We propose solutions to the related problems, making the approach usable in more conventional dynamic and concurrent languages. The proposed approach can ensure message acceptability and support local and static checking of race-free programs.
Download

Paper Nr: 215
Title:

ASPECTBOXES – CONTROLLING THE VISIBILITY OF ASPECTS

Authors:

Alexandre Bergel, Robert Hirschfeld, Siobhan Clarke and Pascal Costanza

Abstract: Aspect composition is still a hot research topic where there is no consensus on how to express where and when aspects have to be composed into a base system. In this paper we present a modular construct for aspects, called aspectboxes, that enables aspect application to be limited to a well-defined scope. An aspectbox encapsulates class and aspect definitions. Classes can be imported into an aspectbox, defining a base system to which aspects may then be applied. Refinements and instrumentation defined by an aspect are visible only within this particular aspectbox, leaving other parts of the system unaffected.
Download

Paper Nr: 230
Title:

ON STATE CLASSES AND THEIR DYNAMIC SEMANTICS

Authors:

Ferruccio Damiani, Elena Giachino, Paola Giannini and Emanuele Cazzola

Abstract: We introduce state classes, a construct for programming objects that can be safely accessed concurrently. State classes model the notion of an object's state (intended as some abstraction over the value of its fields) that plays a key role in concurrent object-oriented programming (as the state of an object changes, so does its coordination behavior). We show how state classes can be added to Java-like languages by presenting STATEJ, an extension of JAVA with state classes. The operational semantics of the state class construct is illustrated both at an abstract level, by means of a core calculus for STATEJ, and at a concrete level, by defining a translation from STATEJ into JAVA.
Download
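
Since the abstract does not show STATEJ's concrete syntax, the following plain-Java sketch only approximates the idea behind a state class: an object whose acceptable operations depend on an explicit abstraction over its fields (a hypothetical two-state buffer), with guard and transition logic that a state-class construct would presumably let the programmer express declaratively instead of hand-coding with wait/notifyAll.

    // Plain-Java approximation of the state-class idea (hypothetical example,
    // not STATEJ syntax): the object's coordination behaviour depends on an
    // explicit state abstraction over its fields (EMPTY vs. FULL).
    public class OneSlotBuffer {
        private enum State { EMPTY, FULL }   // abstraction over the field "slot"
        private State state = State.EMPTY;
        private Object slot;

        // put() is only acceptable in state EMPTY; callers block otherwise.
        public synchronized void put(Object value) throws InterruptedException {
            while (state != State.EMPTY) {
                wait();
            }
            slot = value;
            state = State.FULL;              // state transition changes behaviour
            notifyAll();
        }

        // take() is only acceptable in state FULL.
        public synchronized Object take() throws InterruptedException {
            while (state != State.FULL) {
                wait();
            }
            Object value = slot;
            slot = null;
            state = State.EMPTY;
            notifyAll();
            return value;
        }
    }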

Paper Nr: 285
Title:

SOFTWARE IMPLEMENTATION OF THE IEEE 754R DECIMAL FLOATING-POINT ARITHMETIC

Authors:

Marius Cornea, Cristina Anderson and Charles Tsen

Abstract: The IEEE Standard 754-1985 for Binary Floating-Point Arithmetic (IEEE Std. 754, 1985) is being revised (IEEE Std. 754R Draft, 2006), and an important addition to the current text is the definition of decimal floating-point arithmetic (Cowlishaw, 2003). This is aimed mainly at providing a robust, reliable framework for financial applications that are often subject to legal requirements concerning rounding and precision of the results in the areas of banking, telephone billing, tax calculation, currency conversion, insurance, or accounting in general. Using binary floating-point calculations to approximate decimal calculations has led in the past to the existence of numerous proprietary software packages, each with its own characteristics and capabilities. New algorithms are presented in this paper which were used for a generic software implementation of the IEEE 754R decimal floating-point arithmetic, but may also be suitable for a hardware implementation. In the absence of hardware to perform IEEE 754R decimal floating-point operations, this new software package, which will be fully compliant with the standard proposal, should be an attractive option for various financial computations. The library presented in this paper uses the binary encoding method from (IEEE Std. 754R Draft, 2006) for decimal floating-point values. Preliminary performance results show one to two orders of magnitude improvement over a software package currently incorporated in GCC, which operates on values encoded using the decimal method from (IEEE Std. 754R Draft, 2006).
Download
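
The motivation for decimal arithmetic is easy to reproduce with standard Java: binary doubles cannot represent most decimal fractions exactly, whereas a decimal type (here java.math.BigDecimal, standing in for an IEEE 754R decimal format) preserves the rounding behaviour financial rules demand. The amounts and rounding mode below are purely illustrative.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class DecimalVsBinary {
        public static void main(String[] args) {
            // Binary floating point: 0.10 is not exactly representable,
            // so summing ten invoice lines of 0.10 drifts away from 1.00.
            double binarySum = 0.0;
            for (int i = 0; i < 10; i++) {
                binarySum += 0.10;
            }
            System.out.println("double sum  = " + binarySum);    // 0.9999999999999999

            // Decimal arithmetic keeps the exact decimal value and applies
            // a well-defined rounding rule, as financial regulations require.
            BigDecimal decimalSum = BigDecimal.ZERO;
            BigDecimal line = new BigDecimal("0.10");
            for (int i = 0; i < 10; i++) {
                decimalSum = decimalSum.add(line);
            }
            decimalSum = decimalSum.setScale(2, RoundingMode.HALF_EVEN);
            System.out.println("decimal sum = " + decimalSum);   // 1.00
        }
    }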

Short Papers
Paper Nr: 41
Title:

ASSOCIATIVE PROGRAMMING AND MODELING: ABSTRACTIONS OVER COLLABORATION

Authors:

Bent B. Kristensen

Abstract: Associations as abstractions over collaborations are motivated and explored. Associations are seen as first class concepts at both modeling and programming levels. Associations are seen as concepts/phenomena and possess properties. Various notations for collaboration in object-oriented programming and modeling are discussed and compared to associations. Concurrent and interleaved execution of objects is described in relation to associations.
Download

Paper Nr: 180
Title:

ON ABILITY OF ORTHOGONAL GENETIC ALGORITHMS FOR THE MIXED CHINESE POSTMAN PROBLEM

Authors:

Hiroshi Masuyama, Tetsuo Ichimori and Toshihiko Sasama

Abstract: The well-known Chinese Postman Problem has many applications, and this problem has been proved to be NP-hard in graphs where directed and non-directed edges are mixed. In this paper, in order to investigate the salient feature of orthogonal design, we designed a genetic algorithm adopting an orthogonal crossover operation to solve this (mixed Chinese Postman) problem and evaluated its ability. The results indicate that for problems of small sizes, the orthogonal genetic algorithm can find near-optimal solutions within a moderate number of generations. We confirmed that the orthogonal design shows better performance, even for graph scales where simple genetic algorithms almost never find the solution. However, the introduction of orthogonal design alone is not yet effective for Chinese Postman Problems of practical size where a solution can be obtained in less than 10^4 generations. This paper concludes that the optimal design scale of the orthogonal array for this mixed Chinese Postman Problem does not conform to the same scale as for the multimedia multicast routing problem.
Download

Paper Nr: 226
Title:

A DECLARATIVE EXECUTABLE MODEL FOR OBJECT-BASED SYSTEMS BASED ON FUNCTIONAL DECOMPOSITION

Authors:

Pierre Kelsen

Abstract: Declarative models are a commonly used approach to deal with software complexity: by abstracting away the intricacies of the implementation these models are often easier to understand than the underlying code. Popular modeling languages such as UML can however become complex to use when modeling systems in sufficient detail. In this paper we introduce a new declarative model, the EP-model, named after the basic entities it contains - events and properties - that possesses the following features: it has a small metamodel; it supports a graphical notation; it can represent both static and dynamic aspects of an application; finally, it allows executable models to be described by annotating model elements with code snippets. By leaving complex parts at the code level this hybrid approach achieves executability while keeping the basic modeling language simple.
Download

Paper Nr: 277
Title:

ZÁS - ASPECT-ORIENTED AUTHORIZATION SERVICES

Authors:

Paulo Zenida, Manuel Menezes de Sequeira, Diogo Henriques and Carlos Serrão

Abstract: This paper proposes Zás, a novel, flexible, and expressive authorization mechanism for Java. Zás has been inspired by Ramnivas Laddad’s proposal to modularize Java Authentication and Authorization Services (JAAS) using an Aspect-Oriented Programming (AOP) approach. Zás’ aims are to be simultaneously very expressive, reusable, and easy to use and configure. Zás allows authorization services to be non-invasively added to existing code. It also cohabits with a wide range of authentication mechanisms. Zás uses Java 5 annotations to specify permission requirements to access controlled resources. These requirements may be changed directly during execution. They may also be calculated by client supplied permission classes before each access to the corresponding resource. These features, together with several mechanisms for permission propagation, expression of trust relationships, depth of access control, etc., make Zás, we believe, an interesting starting point for further research on the use of AOP for authorization.
Download
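
The abstract states that Zás uses Java 5 annotations to declare permission requirements on controlled resources. The annotation and checker below are hypothetical stand-ins (not Zás's actual API); they only show how such a requirement can be attached to a method and read back reflectively, which is what an authorization aspect would intercept before each access.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // Hypothetical permission annotation, inspired by (but not identical to) Zás.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface RequiresPermission {
        String value();
    }

    class AccountService {
        @RequiresPermission("account.withdraw")
        public void withdraw(String account, double amount) {
            System.out.println("Withdrew " + amount + " from " + account);
        }
    }

    public class PermissionCheckDemo {
        public static void main(String[] args) throws Exception {
            Method m = AccountService.class.getMethod("withdraw", String.class, double.class);
            RequiresPermission req = m.getAnnotation(RequiresPermission.class);
            // An authorization aspect would perform this check before each call.
            System.out.println("Calling withdraw() requires: " + req.value());
        }
    }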

Paper Nr: 239
Title:

AVOIDING TWO-LEVEL SYSTEMS: USING A TEXTUAL ENVIRONMENT TO ADDRESS CROSS-CUTTING CONCERNS

Authors:

David Greaves

Abstract: We believe that, owing to the paucity of textual facilities in contemporary HLLs (high-level languages), large software systems frequently require an additional level of meta-programming to sufficiently address their cross-cutting concerns. A programming team can either implement its system by both writing the main application in a slightly customised language and the corresponding customised compiler for it, or it can use a macro pre-processor to provide the remaining cross-cutting requirements not found in the chosen HLL. With either method, a two-level system arises. This paper argues that textual macro-programming is an important cross-cutting medium, that existing proposals for sets of pre-defined AOP (aspect-oriented programming) join-points are overly constrictive and that a generalised meta-programming facility, based on a textual environment should instead be directly embedded in HLLs. The paper presents the semantics of the main additions required in an HLL designed with this feature. We recommend that the textual features must be compiled out as the reference semantics would generally be too inefficient if naively interpreted.
Download

Area 5 - Software Engineering

Full Papers
Paper Nr: 78
Title:

BRIDGING BETWEEN MIDDLEWARE SYSTEMS: OPTIMISATIONS USING DOWNLOADABLE CODE

Authors:

Jan Newmarch

Abstract: There are multiple middleware systems and no single system is likely to become predominant. There is therefore an interoperability requirement between clients and services belonging to different middleware systems. Typically this is done by a bridge between invocation and discovery protocols. In this paper we introduce three design patterns based on a bridging service cache manager and dynamic proxies. This is illustrated by examples including a new custom lookup service which allows Jini clients to discover and invoke UPnP services. There is a detailed discussion of the pros and cons of each pattern.
Download
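
One building block the abstract mentions is a dynamic proxy that stands in for a service from another middleware. As a rough illustration (assuming a hypothetical Clock service interface; the Jini/UPnP plumbing is omitted), java.lang.reflect.Proxy can generate such a stand-in at run time and forward each call to whatever protocol-specific handler the bridge supplies.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical service interface a client expects from its own middleware.
    interface Clock {
        String currentTime();
    }

    public class BridgeProxyDemo {
        public static void main(String[] args) {
            // The handler plays the role of the bridge: it would translate each
            // invocation into the foreign protocol (e.g. a UPnP action call).
            InvocationHandler bridge = new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] a) {
                    if (method.getName().equals("currentTime")) {
                        return "12:00 (answered via the bridged protocol)";
                    }
                    throw new UnsupportedOperationException(method.getName());
                }
            };

            Clock clock = (Clock) Proxy.newProxyInstance(
                    Clock.class.getClassLoader(),
                    new Class<?>[] { Clock.class },
                    bridge);

            System.out.println(clock.currentTime());
        }
    }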

Paper Nr: 97
Title:

MDE FOR BPM - A Systematic Review

Authors:

Jose M. Perez Cogolludo, Francisco Ruiz and Mario Piattini

Abstract: Due to the rapid change in the business processes of organizations, Business Process Management (BPM) has come into being. BPM helps business analysts to manage all concerns related to business processes, but the gap between these analysts and people who build the applications is still large. The organization’s value chain changes very rapidly; to modify simultaneously the systems that support the business management process is impossible. MDE (Model Driven Engineering) is a good support for transferring these business process changes to the systems that implement these processes. Thus, by using any MDE approach, such as MDA, the alignment between business people and software engineering should be improved. To discover the different proposals that exist in this area, a systematic review was performed. As a result, the OMG’s Business Process Definition Metamodel (BPDM) has been identified as the standard that will be the key for the application of MDA for BPM.
Download

Paper Nr: 100
Title:

EXPLORING FEASIBILITY OF SOFTWARE DEFECTS ORTHOGONAL CLASSIFICATION

Authors:

Davide Falessi and Giovanni Cantone

Abstract: Defect categorization is the basis of many works that relate to software defect detection. The assumption is that different subjects assign the same category to the same defect. Because this assumption was questioned, our decision was to study the phenomenon, with the aim of providing empirical evidence. Because defects can be categorized by using different criteria, and the experience of the involved professionals in using such a criterion could affect the results, our further decisions were: (i) to focus on the IBM Orthogonal Defect Classification (ODC); (ii) to involve professionals after having stabilized process and materials with students. This paper is concerned with our basic experiment. We analyze a benchmark of more than two thousand data points that we obtained from twenty-four segments of code, each segment seeded with one defect, and one hundred twelve sophomores, trained for six hours, and then assigned to classify those defects in a controlled environment for three continual hours. The focus is on: discrepancy among categorizers, and orthogonality, affinity, effectiveness, and efficiency of categorizations. Results show: (i) training is necessary to achieve orthogonal and effective classifications, and obtain agreement between subjects, (ii) efficiency is five minutes per defect classification on average, (iii) there is affinity between some categories.
Download

Paper Nr: 117
Title:

DEVELOPING A CONFIGURATION MANAGEMENT MODEL FOR USE IN THE MEDICAL DEVICE INDUSTRY

Authors:

Fergal M. Caffery, Rory O'Connor and Gerry Coleman

Abstract: This paper outlines the development of a Configuration Management model for the MEDical device software industry (CMMED). The paper details how medical device regulations associated with Configuration Management (CM) may be satisfied by adopting less than half of the practices from the CM process area of the Capability Maturity Model Integration (CMMI). It also investigates how the CMMI CM process area may be extended with additional practices that are outside the remit of the CMMI, but are required in order to satisfy medical device regulatory guidelines.
Download

Paper Nr: 271
Title:

ENGINEERING A COMPONENT LANGUAGE: COMPJAVA

Authors:

Hans Schmid and Marco Pfeifer

Abstract: After the initial enthusiasm about the new generation of component languages like ArchJava, ComponentJ and ACOEL, a closer inspection and use of these languages identified, alongside their strong points, some smaller but disturbing drawbacks. These might impede a wider acceptance of component languages, which would be harmful since the integration of architecture description with a programming language increases the emphasis on, and consequently the quality of, application architecture. Therefore, we took an engineering approach to the construction of a new Java-based component language without these drawbacks. That means we derived general component language requirements; designed a first language version meeting the requirements and developed a compiler; used it in several projects; and re-iterated three times through the same cycle with improved language versions. The result, called CompJava, which should be fairly stable by now, is presented in the paper.
Download

Short Papers
Paper Nr: 87
Title:

A DETECTION METHOD OF FEATURE INTERACTIONS FOR TELECOMMUNICATION SERVICES USING NEW EXECUTION MODEL

Authors:

Sachiko Kawada, Masayuki Shimokura and Tadashi Ohta

Abstract: A service that behaves normally on its own may behave differently when initiated together with another service. This undesirable behavior is called a feature interaction. In investigating the international benchmark for detecting interactions in telecommunication services, it was found that many interactions that do not actually occur (called “seeming interactions” in this paper) were mis-detected. The reason for the mis-detection of seeming interactions is that interactions were detected using a state transition model which does not properly represent the process flow in a real system. Since seeming interactions cause an increase in the time taken for resolving interactions, avoiding mis-detection is an important issue. In this paper, a problem in implementing a detection system without mis-detecting seeming interactions is clarified and its solution is proposed. In addition, a new interaction detection method, which adopts the proposed solution and is based on a specification execution model that properly reflects the process flow in a real system, is proposed.
Download

Paper Nr: 98
Title:

ADVANCES ON TESTING SAFETY-CRITICAL SOFTWARE - Goal-driven Approach, Prototype-tool and Comparative Evaluation

Authors:

Guido Pennella, Christian Di Biagio, Gianfranco Pesce and Giovanni Cantone

Abstract: The reference company for this paper – the Italian branch of a multinational organization that works in the domain of safety-critical systems – evaluated the major tools that the market provides for testing safety-critical software as not sufficiently featured for its quality improvement goals. Consequently, in order to investigate the space of possible solutions, if any, the company's research lab started an academic cooperation, which led to shared knowledge and eventually to the establishment of a common research team. Once we had transformed those goals into detailed technical requirements, and evaluated that it was possible to realize them conveniently in a tool, we proceeded to analyze, construct, and eventually utilize in the field the prototype “Software Test Framework”. This tool allows non-intrusive measurements on different hardware-software targets of a distributed system running under one or more Unix-standard OSs, e.g. LynxOS, AIX, Solaris, and Linux. The tool acquires and graphically displays the real-time flow of data, enabling users to verify and validate software products, diagnose and resolve emerging performance problems quickly, and enact regression testing. This paper reports on the characteristics of Software Test Framework, its architecture, and results from a case study. Based on a comparison of results with previous tools, we can say that Software Test Framework is leading to a new concept of tool for the domain of safety-critical software.
Download

Paper Nr: 102
Title:

GENERIC FEATURE MODULES: TWO-STAGED PROGRAM CUSTOMIZATION

Authors:

Sven Apel, Martin Kuhlemann and Thomas Leich

Abstract: With feature-oriented programming (FOP) and generics, programmers have proper means for structuring software so that its elements can be reused and extended. This paper addresses the issue of whether both approaches are equivalent. While FOP targets large-scale building blocks and compositional programming, generics provide fine-grained customization at the type level. We contribute an analysis that reveals the individual capabilities of both approaches with respect to program customization. From this, we extract guidelines for programmers on which approach suffices in which situations. Furthermore, we present a fully implemented language proposal that integrates FOP and generics in order to combine their strengths. Our approach facilitates two-staged program customization: (1) selecting sets of features; (2) parameterizing features subsequently. This allows a broader spectrum of code reuse to be covered – reflected by proper language-level mechanisms. We underpin our proposal by means of a case study.
Download
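
As a rough plain-Java illustration (the paper's own language proposal is not shown in the abstract), the two customization stages it describes can be mimicked like this: first select a feature (here an optional logging refinement, approximated by a subclass; a real FOP language would use a separate feature module), and only then parameterize the selected composition at type level with generics.

    import java.util.ArrayList;
    import java.util.List;

    // Base component: a minimal generic container (type-level customization).
    class Repository<T> {
        protected final List<T> items = new ArrayList<T>();
        public void add(T item) { items.add(item); }
        public int size() { return items.size(); }
    }

    // Optional feature, approximated by a subclass refinement.
    class LoggingRepository<T> extends Repository<T> {
        @Override
        public void add(T item) {
            System.out.println("adding: " + item);
            super.add(item);
        }
    }

    public class TwoStageDemo {
        public static void main(String[] args) {
            // Stage 1: select the feature set (with or without logging).
            // Stage 2: parameterize the selected composition with a type.
            Repository<String> repo = new LoggingRepository<String>();
            repo.add("hello");
            System.out.println("size = " + repo.size());
        }
    }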

Paper Nr: 113
Title:

USING PRE-REQUIREMENTS TRACING TO INVESTIGATE REQUIREMENTS BASED ON TACIT KNOWLEDGE

Authors:

Andrew Stone and Pete Sawyer

Abstract: Pre-requirements specification tracing concerns the identification and maintenance of relationships between requirements and the knowledge and information used by analysts to inform the requirements' formulation. However, such tracing is often not performed as it is a time-consuming process. This paper presents a tool for retrospectively identifying pre-requirements traces by working backwards from requirements to the documented records of the elicitation process, such as interview transcripts or ethnographic reports. We present a preliminary evaluation of our tool's performance using a case study. One of the key goals of our work is to identify requirements that have weak relationships with the source material. There are many possible reasons for this, but one is that they embody tacit knowledge. Although we do not investigate the nature of tacit knowledge in RE, we believe that even helping to identify the probable presence of tacit knowledge is useful. This is particularly true for circumstances when requirements' sources need to be understood during, for example, the handling of change requests.
Download

Paper Nr: 163
Title:

REACTIVE, DISTRIBUTED AND AUTONOMIC COMPUTING ASPECTS OF AS-TRM

Authors:

E. Vassev, H. Kuang, Olga Ormandjieva and J. Paquet

Abstract: The main objective of this research is a rigorous investigation of an architectural approach for developing and evolving reactive autonomic (self-managing) systems, and for continuous monitoring of their quality. In this paper, we draw upon our research experience and the experience of other autonomic computing researchers to discuss the main aspects of Autonomic Systems Timed Reactive Model (AS-TRM) architecture and demonstrate its reactive, distributed and autonomic computing nature. To our knowledge, ours is the first attempt to model reactive behavior in the autonomic systems.
Download

Paper Nr: 174
Title:

USING LINGUISTIC PATTERNS FOR IMPROVING REQUIREMENTS SPECIFICATION

Authors:

Carlos Videira, David Ferreira and Alberto Rodrigues da Silva

Abstract: The lack of quality results in the development of information systems is certainly a good reason to justify the presentation of new research proposals, especially those that address the most critical areas of that process, such as the requirements specification task. In this paper, we describe how linguistic patterns can be used to improve the quality of requirements specifications, using them as the basis for a new requirements specification language, called ProjectIT-RSL, and how a series of validation mechanisms can be applied to guarantee the consistency and correctness of the written requirements with the syntactic and semantic rules of the language.
Download

Paper Nr: 204
Title:

LEARNING EFFECTIVE TEST DRIVEN DEVELOPMENT - Software Development Projects in an Energy Company

Authors:

Wing A. Law

Abstract: The tests needed to prove, verify, and validate a software application are determined before the software application is developed. This is the essence of test driven development, an agile practice built upon sound software engineering principles. When applied effectively, this practice can have many benefits. The question becomes how to effectively adopt test driven development. This paper describes the experiences and lessons learned by two teams who adopted test driven development methodology for software systems developed at TransCanada. The overall success of test driven methodology is contingent upon the following key factors: experienced team champion, well-defined test scope, supportive database environment, repeatable software design pattern, and complementary manual testing. All of these factors and the appropriate test regime will lead to a better chance of success in a test driven development project.
Download

Paper Nr: 210
Title:

AN APPLICATION OF THE 5-S ACTIVITY THEORETIC REQUIREMENTS METHOD

Authors:

Robert Brown, Peter Hyland and Ian Piper

Abstract: Requirements analysis in highly interactive systems necessarily involves eliciting and analysing informal and complex stakeholder utterances. We investigate if Activity Theory may provide a useful basis for a new method. Preliminary results indicate that Activity Theory may cope well with problems of this kind, and may indeed offer some improvements.
Download

Paper Nr: 221
Title:

A SYSTEMATIC REVIEW MEASUREMENT IN SOFTWARE ENGINEERING - State-of-the-art in Measures

Authors:

Oswaldo Gómez, Hanna Oktaba, Mario Piattini and Félix García Rubio

Abstract: The present work provides a summary of the state of the art in software measures by means of a systematic review of the current literature. Nowadays, many companies need to answer the following questions: How to measure? When to measure? What to measure? There have been many efforts to answer these questions, and this has resulted in a large amount of data that is sometimes confusing and unclear. This information needs to be properly processed and classified in order to provide a better overview of the current situation. We have used a Measurement Software Ontology to classify and put in order the data in this field. We have also analyzed the results of the systematic review, to show the trends in the software measurement field and the software processes on which the measurement efforts have focused. This has allowed us to discover which parts of the process are not sufficiently supported by measurements, and thus to motivate future research in those areas.
Download

Paper Nr: 247
Title:

TOWARDS ANCHORING SOFTWARE MEASURES ON ELEMENTS OF THE PROCESS MODEL

Authors:

Bernhard Daubner, Bernhard Westfechtel and Andreas Henrich

Abstract: It is widely accepted that software measurement should be automated by proper tool support whenever possible and reasonable. While many tools exist that support automated measurement, most of them lack the ability to reuse defined metrics and to conduct measurements in a standardized way. This article presents an approach to anchoring software measures on elements of the process model. This makes it possible to define the relevant software measures independently of a concrete project. At project runtime, the work breakdown structure is used to establish a link between the measurement anchor points within the process model and the project entities that actually have to be measured. Utilizing the project management tool Maven, a framework has been developed that makes it possible to automate the measurement process.
Download

Paper Nr: 253
Title:

UNIFIED DESCRIPTION AND DISCOVERY OF P2P SERVICES

Authors:

George Athanasopoulos, Aphrodite Tsalgatidou and Michael Pantazoglou

Abstract: Our era has been marked by the emergence of the service oriented computing (SOC) paradigm. This new trend has reshaped the way distributed applications are built and has influenced current computing paradigms, such as p2p and grid computing. SOC's main objective is to leverage interoperability among applications and systems; however, the emergence of various types of services such as web, grid and p2p services has raised several interoperability concerns among these services as well as within each of these service models. In order to overcome these incompatibilities, appropriate middleware and mechanisms need to be developed so as to provide the necessary layers of abstraction and a unified framework that will shield service users from the underlying details of each service platform. Yet, for the development of such middleware and mechanisms to be effective, appropriate conceptual models need to be constructed. Within this paper, we briefly present a generic service model which was constructed to facilitate the unified utilization of heterogeneous services, with emphasis on its properties for the modeling of p2p services. Moreover, we illustrate how this model was instantiated for the representation of JXTA services and present the service description and discovery mechanisms that were built upon it. We regard this generic service model as a first step in achieving interoperability between incompatible types of services.
Download

Paper Nr: 256
Title:

TOWARDS A LANGUAGE INDEPENDENT REFACTORING FRAMEWORK

Authors:

Carlos López, Raúl Marticorena, Yania Crespo and Francisco Javier Pérez

Abstract: Using metamodels to keep source code information is one of the current trends in refactoring tools. This representation makes it possible to detect refactoring opportunities and to execute refactorings on metamodel instances. This paper describes an approach to language-independent reuse in metamodel-based refactoring detection and execution. We use an experimental metamodel, MOON, and analyze the problems of migrating from MOON to the UML 2.0 metamodel or adapting from UML 2.0 to MOON. Some code refactorings can be detected and applied on basic UML abstractions. Nevertheless, other refactorings need information related to program instructions. The “Action” concept, included in UML 2.0, is a fundamental unit of behaviour specification that allows program instructions to be stored and certain information at this level of granularity to be obtained. Therefore, we compare the complexity of the UML 2.0 metamodel with the MOON metamodel as a solution for developing refactoring frameworks.
Download

Paper Nr: 258
Title:

A DYNAMIC ANALYSIS TOOL FOR EXTRACTING UML 2 SEQUENCE DIAGRAMS

Authors:

Paolo Falcarin and Marco Torchiano

Abstract: There is a wide range of formats and meta-models to represent the information extracted by reverse engineering tools. Currently, UML tools with reverse engineering capabilities are not truly interoperable due to differences in the interchange format and cannot extract complete and integrated models. The forthcoming UML 2.0 standard includes a complete meta-model and a well-defined interchange format (XMI). Since an implementation of the meta-model is available, using UML 2.0 as the modelling format for reverse-engineered models is a viable option. In this paper we propose a technique to automatically extract sequence diagrams from Java programs, compliant with the UML 2.0 specifications. The proposed approach takes advantage of the Eclipse platform and different plug-ins to provide an integrated solution: it relies on a new dynamic analysis technique, based on Aspect Oriented Programming, and it recovers the interactions between objects even in the presence of reflective calls and polymorphism.
Download
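
The paper's dynamic analysis is based on aspect-oriented programming; its actual pointcuts are not given in the abstract, but an AspectJ (annotation-style) aspect along these lines could record the caller thread, target object and message name from which a sequence diagram is later assembled. The package pattern is illustrative, and the aspect requires AspectJ weaving (ajc or load-time weaving) to take effect.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    // Illustrative tracing aspect: logs each method execution in the
    // application packages, excluding the tracer itself.
    @Aspect
    public class CallTracer {

        @Before("execution(* com.example..*.*(..)) && !within(CallTracer)")
        public void recordMessage(JoinPoint jp) {
            Object target = jp.getTarget();
            String callee = (target == null) ? "static" : target.getClass().getSimpleName();
            // Each record corresponds to one "message" arrow of the sequence diagram.
            System.out.println(Thread.currentThread().getName()
                    + " -> " + callee + "." + jp.getSignature().getName() + "()");
        }
    }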

Paper Nr: 269
Title:

BUILDING MAINTENANCE CHARTS AND EARLY WARNING ABOUT SCHEDULING PROBLEMS IN SOFTWARE PROJECTS

Authors:

Sergiu Gordea and Markus Zanker

Abstract: Imprecise effort estimations are a well-known problem of software project management that frequently leads to the setting of unrealistic deadlines. The estimations are even less precise when the development of new product releases is mixed with the maintenance of older versions of the system. Software engineering measurement should assess the development process and discover problems occurring in it. However, there is evidence indicating a low success rate for measurement programs, mainly because they are not able to extract knowledge and present it in a form that is easily understandable for developers and managers. They are also not able to suggest corrective actions based on the collected metric data. In our work we propose an approach for classifying time efforts into maintenance categories, and we propose the use of maintenance charts for controlling the development process and warning about scheduling problems. Identifying scheduling problems as soon as possible will allow managers to plan effective corrective actions and still meet the planned release deadlines even if unpredicted development problems occur.
Download

Paper Nr: 270
Title:

A FRAMEWORK FOR THE DEVELOPMENT OF MONITORING SYSTEMS SOFTWARE

Authors:

I. Martínez-Marchena, Llanos Mora-lópez and M. Sidrach de Cardona

Abstract: This paper describes a framework for the development of software for monitoring installations. Usually, the monitoring of systems is carried out by building a programme for each installation, with no reuse of previously developed programmes, or, alternatively, by using SCADA programmes (Supervisory Control And Data Acquisition), although these tools are basically for controlling rather than for monitoring; moreover, taking into account the low complexity of this type of installation, the use of a SCADA programme is not justified. The proposed framework solves the monitoring of an installation in an easy way. In this framework, the generation of a monitoring programme consists of three well-established steps. The first step is to model the system or installation using a set of generic description rules and the XML language. The second step is to describe the communications among the different devices. To do this, we have used the OPC technology (OLE for Process Control). With this OPC technology, we have established an abstraction layer that makes it possible to communicate with any device in a generic way. We have built an OPC server for each device that does not depend on the type of device. In the third step, the way in which the monitored data will be stored and displayed is defined. The framework also incorporates modules that make it possible to store and visualize all the data obtained from the different devices. We have used the proposed framework to build complete applications for monitoring three different solar energy installations.
Download
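
The first step of the framework describes each installation with generic rules in XML. The element and attribute names below are invented for illustration; the sketch only shows how such a device description could be read with the standard Java XML API before the OPC communication layer is set up.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class InstallationDescriptionDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical description of a small solar installation.
            String xml =
                "<installation name='plant-1'>" +
                "  <device id='inverter-1' type='inverter' opcServer='opc://host/Inverter'/>" +
                "  <device id='sensor-1' type='irradiance' opcServer='opc://host/Sensor'/>" +
                "</installation>";

            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

            NodeList devices = doc.getElementsByTagName("device");
            for (int i = 0; i < devices.getLength(); i++) {
                Element device = (Element) devices.item(i);
                // In the real framework each entry would be bound to an OPC server.
                System.out.println(device.getAttribute("id") + " ("
                        + device.getAttribute("type") + ") via "
                        + device.getAttribute("opcServer"));
            }
        }
    }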

Paper Nr: 275
Title:

WEB METRICS SELECTION THROUGH A PRACTITIONERS’ SURVEY

Authors:

Julian Ruiz, Coral Calero and Mario Piattini

Abstract: There are many web metrics proposals. However, most previous work does not include their practical application. The risk is that all the effort made remains just an academic exercise. In order to bridge this gap, as well as to be able to apply the work developed, it is necessary to involve the different stakeholders related to web technologies as an essential part of web metrics definition. So, it is crucial to know the perception they have of web metrics, especially those related to the development and maintenance of web sites and applications. In this paper, we present the work we have done to find out which web metrics are considered useful by web developers and maintainers. This study has been performed on the basis of the 385 web metrics classified in WQM, a Web Quality Model defined in a previous work, using as a validation tool a survey of professionals of web technologies. As a result, we have found that the most highly weighted metrics were related to usability. This means that web professionals give more importance to the user than to their own effort.
Download

Paper Nr: 287
Title:

MINING ANOMALIES IN OBJECT-ORIENTED IMPLEMENTATIONS THROUGH EXECUTION TRACES

Authors:

Paria Parsamanesh, Amir-abdollahi Foumani and Constantinos Constantinides

Abstract: In the context of a computer program, the term “anomaly” is used to refer to any phenomenon that can negatively affect software quality. Examples of anomalies in object-oriented programs include low cohesion of modular units, high coupling between modular units and the phenomenon of crosscutting. In this paper we discuss the theoretical component of a technique for identifying anomalies in object-oriented implementations based on the observation of patterns of messages (invoked operations). Our technique is based on capturing execution traces (paths) in a relational database in order to extract knowledge of anomalies in the system, focusing on potential crosscutting concerns (aspects). In order to resolve ambiguities between candidate aspects we deploy dynamic programming to identify optimal solutions.
Download
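
The abstract stores execution traces in a relational database and mines them for anomaly patterns such as crosscutting. The schema and query below are hypothetical (the paper does not give them) and assume an in-memory HSQLDB database on the classpath; they only illustrate the kind of aggregation that can flag operations invoked from many different classes, a common scattering symptom.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TraceMiningDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical trace table: trace(caller_class, callee_class, callee_method),
            // populated by an instrumented run of the program under analysis.
            Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:traces", "sa", "");
            Statement st = con.createStatement();
            st.execute("CREATE TABLE trace (caller_class VARCHAR(100), "
                     + "callee_class VARCHAR(100), callee_method VARCHAR(100))");
            st.execute("INSERT INTO trace VALUES ('Order', 'Logger', 'log')");
            st.execute("INSERT INTO trace VALUES ('Invoice', 'Logger', 'log')");
            st.execute("INSERT INTO trace VALUES ('Customer', 'Logger', 'log')");
            st.execute("INSERT INTO trace VALUES ('Order', 'Invoice', 'total')");

            // Operations called from many distinct classes are crosscutting candidates.
            ResultSet rs = st.executeQuery(
                "SELECT callee_class, callee_method, COUNT(DISTINCT caller_class) AS fanin "
              + "FROM trace GROUP BY callee_class, callee_method "
              + "HAVING COUNT(DISTINCT caller_class) > 2");
            while (rs.next()) {
                System.out.println("candidate aspect: " + rs.getString("callee_class")
                        + "." + rs.getString("callee_method")
                        + " (called from " + rs.getInt("fanin") + " classes)");
            }
            con.close();
        }
    }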

Paper Nr: 13
Title:

SYSML-BASED WEB ENGINEERING - A Successful Way to Design Web Applications

Authors:

Haroon Altarwneh

Abstract: This paper discusses the importance of a new modelling language, SysML (system modelling language), and shows how it differs from UML 2.0 (unified modelling language) in the development of web-based applications. The development of Web applications has become more complex and challenging than most of us think. In many ways, it is also different from and more complex than traditional software development, and there is a lack of a proven methodology that guides software engineers in building web-based applications. In this paper we recommend using SysML for designing and building web-based applications.
Download

Paper Nr: 60
Title:

SYSTEM TEST CASES FROM USE CASES

Authors:

Javier Jesus Gutierrez, María José Escalona Cuaresma, Manuel Mejias and Jesus Torres

Abstract: Use cases have become a widely used technique to define the functionality of a software system. This paper describes a new, formal and systematic approach for generating system test cases from use cases. This process has been designed specifically for testing the system from the point of view of the actors, through its graphical user interfaces.
Download

Paper Nr: 66
Title:

A PRIMITIVE EXECUTION MODEL FOR HETEROGENEOUS MODELING

Authors:

Frédéric Boulanger and Guy Vidal-naquet

Abstract: Heterogeneous modeling is modeling using several modeling methods. Since many different modeling methods are used in different crafts, heterogeneous modeling is necessary to build a heterogeneous model of a system that takes the modeling habits of the designers into account. A model of computation is a formal description of the behavioral aspect of a modeling method. It is the set of rules that allows the behavior of a system to be computed by composing the behaviors of its components. Heterogeneous modeling allows parts of the system to obey some rules while other parts obey other rules for the composition of their behaviors. Computing the behavior of a system which is modeled using several models of computation can be difficult if the meaning of each model of computation, and what happens at their boundary, is not well defined. We propose an execution model that provides a framework of primitive operations for expressing how a model of computation is interpreted in order to compute the behavior of a model of a system. When models of computation are “implemented” in this execution model, it becomes possible to specify exactly what the joint use of several models of computation in the model of a system means.
Download

Paper Nr: 96
Title:

VIEWPOINT FOR MAINTAINING UML MODELS AGAINST APPLICATION CHANGES

Authors:

Walter Cazzola, Ahmed Ghoneim and Gunter Saake

Abstract: The urgency that characterizes many requests for evolution forces system administrators/developers to adapt the system directly without first adapting its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that does not reflect the system's changes. Software developers spend a lot of time evolving the system and then updating the design information according to the evolution of the system. In this respect, we present an approach to automatically keep the design information (UML diagrams in our case) updated when the system evolves. The UML diagrams are bound to the application and all changes to it are reflected in the diagrams as well.
Download

Paper Nr: 99
Title:

INTRODUCTION TO CHARACTERIZATION OF MONITORS FOR TESTING SAFETY-CRITICAL SOFTWARE

Authors:

Christian Di Biagio, Guido Pennella, Anna Lomartire and Giovanni Cantone

Abstract: The goal of this paper is to characterize software technologies for testing hard real-time software by focusing on measurement of CPU and memory loads, performance monitoring of processes and their threads, intrusiveness, and some other key features and capabilities, in the context of the Italian branch of a multinational organization which works in the domain of safety-critical systems, from the points of view of the project managers of such an organization, on one side, and the applied researcher, on the other side. The paper first sketches the state of the art in the field of testing technologies for safety-critical systems, then presents a characterization model, which is based on the goals of the reference company, and finally applies that model to the major testing tools available.
Download

Paper Nr: 152
Title:

A SCENARIO GENERATION METHOD USING A DIFFERENTIAL SCENARIO

Authors:

Masayuki Makino and Atsushi Ohnishi

Abstract: A generation method of scenarios using differential information between normal scenarios is presented. Behaviours of normal scenarios belonging to the same problem domain are quite similar. We derive the differential information between them and apply the information to generate new scenarios. Our method will be illustrated with an example.
Download

Paper Nr: 153
Title:

REVERSE ENGINEERING ELECTRONIC SERVICES - From e-Forms to Knowledge

Authors:

Costas Vassilakis, George Lepouras and Akrivi Katifori

Abstract: On their route to e-governance, public administrations have developed e-services. Each e-service encompasses a significant amount of knowledge in the form of examples, help texts, legislation excerpts, validation checks etc. This knowledge has been offered by domain experts in the phases of service analysis, design and implementation; however, being bundled within the software, it cannot be readily retrieved and used in other organizational processes, including the development of new services. In this paper, we present an approach for reverse engineering e-services in order to formulate knowledge items at a high level of abstraction, which can be made available to the employees of the organizations. Moreover, the knowledge items formulated in the reverse engineering process are stored in a knowledge-based e-service development platform, making them readily available for use in the development of other services.
Download

Paper Nr: 171
Title:

MODELLING THE UNEXPECTED BEHAVIOURS OF EMBEDDED SOFTWARE USING UML SEQUENCE DIAGRAMS

Authors:

Hee-Jin Lee, In-gwon Song, Sang-Uk Jeon, Doo-Hwan Bae and Jang-Eui Hong

Abstract: Real-time and embedded systems may be left in unexpected states because the system's users can generate incident events under various conditions. Although UML 2.0 sequence diagrams have recently incorporated several modelling features for embedded software, they have difficulties in depicting unexpected behaviours of embedded software conveniently. In this paper, we propose some extensions to UML 2.0 sequence diagrams to model unexpected behaviours of embedded software. We introduce new notations to describe exceptions and interrupts. Our extensions make the sequence diagrams simple and easy to read when describing such unexpected behaviours. These features are explained and demonstrated with an example of the call-setup procedure of a CDMA mobile phone.
Download