ICSOFT 2007 Abstracts


Area 1 - Distributed and Paralled Systems

Full Papers
Paper Nr: 208
Title:

CONSTRUCTION OF BENCHMARKS FOR COMPARISON OF GRID RESOURCE PLANNING ALGORITHMS

Authors:

Wolfgang Süß, Alexander Quinte, Wilfried Jakob and Karl-uwe Stucky

Abstract: The present contribution will focus on the systematic construction of benchmarks used for the evaluation of resource planning systems. Two characteristics for assessing the complexity of the benchmarks were developed. These benchmarks were used to evaluate the resource management system GORBA and the optimization strategies for resource planning applied in this system. At first, major aspects of GORBA, in particular two-step resource planning, will be described briefly, before the different classes of benchmarks will be defined. With the help of these benchmarks, GORBA was evaluated. The evaluation results will be presented and conclusions drawn. The contribution shall be completed by an outlook on further activities.
Download

Paper Nr: 238
Title:

A MODEL BASED APPROACH FOR DEVELOPING ADAPTIVE MULTIMODAL INTERACTIVE SYSTEMS

Authors:

Waltenegus Dargie, Anja Strunk, Matthias Winkler, Bernd Mrohs, Sunil Thakar and Wilfried Enkelmann

Abstract: Currently available mobile devices lack the flexibility and simplicity their users require of them. To start with, most of them rigidly offer impoverished, traditional interactive mechanisms which are not handy for mobile users. Those which are multimodal lack the grace to adapt to the current task and context of their users. Some of the reasons for such inflexibility are the cost, duration and complexity of developing adaptive multimodal interactive systems. This paper motivates and introduces a modelling and development platform – the EMODE platform – which enables the rapid development and deployment of adaptive and multimodal mobile interactive systems.
Download

Paper Nr: 550
Title:

DISTRIBUTED PATH RESTORATION ALGORITHM FOR ANONYMITY IN P2P FILE SHARING SYSTEMS

Authors:

Pilar Manzanares-Lopez, Juan P. Muñoz-Gea, Josemaría Malgosa-sanahuja, Juan Carlos Sánchez-aarnoutse and Joan Garcia-haro

Abstract: In this paper, a new mechanism to achieve anonymity in peer-to-peer (P2P) file sharing systems is proposed. As usual, the anonymity is obtained by means of connecting the source and destination peers through a set of intermediate nodes, creating a multiple-hop path. The main paper contribution is a distributed algorithm able to guarantee the anonymity even when a node in a path fails (voluntarily or not). The algorithm takes into account the inherent costs associated with multiple-hop communications and tries to reach a well-balanced solution between the anonymity degree and its associated costs. Some parameters are obtained analytically but the main network performances are evaluated by simulation.
Download

Short Papers
Paper Nr: 121
Title:

WEB SERVICE TRANSACTION MANAGEMENT

Authors:

Frans Henskens

Abstract: This paper describes extension of the functionality of conventional web browsers to produce a new enhanced web browser. Each instance of this enhanced browser is part of a federation of browser instances that use a directed graph-based technique to provide transaction and hence concurrency control over access to web services. These ‘super browsers’ communicate with web-based services across the Internet, application code that may be obtained from the Internet but then executes as a local program, and with other browser instances.
Download

Paper Nr: 134
Title:

A HYPER-HEURISTIC FOR SCHEDULING INDEPENDENT JOBS IN COMPUTATIONAL GRIDS

Authors:

Juan Antonio Gonzalez, Maria Serna and Fatos Xhafa

Abstract: In this paper we present the design and implementation of an hyper-heuristic for efficiently scheduling independent jobs in Computational Grids. An efficient scheduling of jobs to Grid resources depends on many parameters, among others, the characteristics of the Grid infrastructure and job characteristics (such as computing capacity, consistency of computing, etc.). Existing ad hoc scheduling methods (batch and immediate mode) have shown their efficacy for certain types of Grids and job characteristics. However, as stand alone methods, they are not able to produce the best planning of jobs to resources for different types of Grid resources and job characteristics. In this work we have designed and implemented a hyper-heuristic that uses a set of ad hoc (immediate and batch mode) scheduling methods to provide the scheduling of jobs to Grid nodes according to the Grid and job characteristics. The hyper-heuristic is a high level algorithm, which examines the state and characteristics of the Grid system (jobs and resources), and selects and applies the ad hoc method that yields the best planning of jobs to Grid resources. The resulting hyper-heuristic based scheduler can be thus used to develop network-aware applications that need efficient planning of jobs to resources. The Hyper-heuristic has been tested and evaluated in a dynamic setting through a prototype of a Grid simulator. The experimental evaluation showed the usefulness of the hyper-heuristic in planning of jobs to resources as opposed to planning without knowledge of the Grid and jobs characteristics.
Download

Paper Nr: 246
Title:

PERFORMANCE ANALYSIS OF SCHEDULING-BASED LOAD BALANCING FOR DISTRIBUTED AND PARALLEL SYSTEMS USING VISUALSIM

Authors:

Abu Asaduzzaman, Manira Rani and Darryl Koivisto

Abstract: The concurrency in a distributed and parallel system can be used to improve the performance of that system by properly distributing the tasks among the processors. However, the advantage of parallelism may be offset by the increased complexity of load balancing techniques. Scheduling is proven to be an effective technique for load balancing in any distributed and parallel system. Studies indicate that for application-specific systems static scheduling may be the potential choice due to its simplicity. In this paper, we analyze the performance of load balancing by static scheduling for distributed and parallel systems. Using VisualSim, we develop a simulation program that models a system with three processors working simultaneously on a single problem. We obtain the response time and completion time for different scheduling algorithms and task groups. Simulation results show that load balancing by scheduling has significant impact on the performance of distributed and parallel systems.
Download

Paper Nr: 527
Title:

MULTI-CRITERION GENETIC PROGRAMMING WITH NEGATIVE SELECTION FOR FINDING PARETO SOLUTIONS

Authors:

Jerzy Balicki

Abstract: Multi-criterion genetic programming (MGP) is a relatively new approach for a decision making aid and it can be applied to determine the Pareto solutions. This purpose can be obtained by formulation of a multi-criterion optimization problem that can be solved by genetic programming. An improved negative selection procedure to handle constraints in the MGP has been proposed. In the test instance, both a workload of a bottleneck computer and the cost of system are minimized; in contrast, a reliability of the distributed system is maximized.
Download

Paper Nr: 544
Title:

ADDING UNDERLAY AWARE FAULT TOLERANCE TO HIERARCHICAL EVENT BROKER NETWORKS

Authors:

Madhu S. D., Umesh Bellur and Erusu. Kranthi Kiran

Abstract: Recent studies have shown that the quality of service of overlay topologies and routing algorithms for event broker networks can be improved by the use of underlying network information. Hierarchical topologies are widely used in recent event-based publish-subscribe systems for reduced message traffic. We hypothesize that the performance and fault tolerance of existing hierarchical topology based event broker networks can be improved by augmenting the construction of the overlay and subsequent routing with the underlay information. In this paper we present a linear time algorithm for constructing a fault tolerant overlay topology for event broker networks that can tolerate single node and link failures and improve the routing performance by balancing network load. We test the algorithm on the SIENA event based middleware which follows the hierarchical model for event brokers. We present simulation results that support the claim that the use of underlay information can significantly increase the robustness of the overlay topology and performance of the routing algorithm for hierarchical event broker networks.
Download

Paper Nr: 570
Title:

REGULATION MECHANISM FOR CACHING IN PORTAL APPLICATIONS

Authors:

Mehregan Mahdavi, John Shepherd and Boualem Benatallah

Abstract: Web portals are emerging Web-based applications that provide a single interface to access different data or service providers. Caching data from different providers at the portal can increase the performance of the system in terms of throughput and user-perceived delay. The portal and its providers can collaborate in order to determine the candidate caching objects. The providers allocate a caching score to each object sent to the portal. The decision for caching an object is made at the portal mainly based on these scores. However, the fact that it is up to providers to calculate such caching scores may lead to inconsistencies between them. The portal should detect these inconsistencies and regulate them in order to achieve a fair and effective caching strategy.
Download

Paper Nr: 630
Title:

LOCATION MANAGEMENT IN DISTRIBUTED, SERVICE-ORIENTED COMMAND AND CONTROL SYSTEMS

Authors:

Thomas Nitsche

Abstract: In this paper we propose an efficient location management scheme for large amounts of mobile users and other objects in distributed, service-oriented systems. To efficiently observe geographic areas of interest (AOI) in command and control information systems (C2IS), i.e. to compute the AOI within a C2IS, we introduce the concept of region services. These services contain all objects of a fixed geographic region. To handle in-homogenous distributions of objects we propose a combination of regular and hierarchical regions. A user-specific C2IS instance can now directly and efficiently establish subscription-relations to the relevant objects around its AOI in order to obtain information about the position, status and behaviour of these objects. If objects including the current user itself now dynamically change their position we merely have to update the information relations to those few objects that enter or leave a region within the AOI, instead of having to consider all objects within the global information grid. Region services thus do not only improve the efficiency for generating a static common operational picture but can also handle any dynamic changes of object positions.
Download

Area 2 - Information Systems and Data Management

Full Papers
Paper Nr: 78
Title:

METRICS FOR MEASURING DATA QUALITY - Foundations for an Economic Data Quality Management

Authors:

Bernd Heinrich, Marcus Kaiser and Mathias Klier

Abstract: The article develops metrics for an economic oriented management of data quality. Two data quality dimensions are focussed: consistency and timeliness. For deriving adequate metrics several requirements are stated (e. g. normalisation, cardinality, adaptivity, interpretability). Then the authors discuss existing approaches for measuring data quality and illustrate their weaknesses. Based upon these considerations, new metrics are developed for the data quality dimensions consistency and timeliness. These metrics are applied in practice and the results are illustrated in the case of a major German mobile services provider.
Download

Paper Nr: 107
Title:

INTEGRATING BUSINESS PROCESSES AND INFORMATION SYSTEMS

Authors:

Giorgio Bruno

Abstract: While the need for a better integration between business processes and enterprise information systems is widely acknowledged, current notations for business processes are inclined to emphasize control-flow issues and omit to provide adequate links with two fundamental aspects of enterprise information systems, i.e. the human tasks and the information flow among the tasks. This paper presents a notation for business processes whose purpose is to overcome the above-mentioned limitations. This notation, called tk-nets (task-oriented nets) supports four interaction patterns between process elements and human tasks. It is exemplified with the help of a case study concerning a web-based application intended to manage the handling of paper submissions to conferences.
Download

Paper Nr: 118
Title:

AN APPROXIMATION-AWARE ALGEBRA FOR XML FULL-TEXT QUERIES

Authors:

Giacomo Buratti and Danilo Montesi

Abstract: XQuery Full-Text is the proposed standard language for querying XML documents using either standard or full-text conditions; while full-text conditions can have a boolean or a ranked semantics, standard conditions must be satisfied for an element to be returned. This paper proposes a more general formal model that considers structural, value-based and full-text conditions as desiderata rather than mandatory constraints. The goal is achieved defining a set of relaxation operators that, given a path expression or a selection condition, return a set of relaxed path expressions or selection conditions. Algebraic approximated operators are defined for representing typical queries and returns either elements that perfectly respect the conditions and elements that answer to a relaxed version of the original query. A score reflecting the level of satisfaction of the original query is assigned to each result of the relaxed query.
Download

Paper Nr: 151
Title:

A PREDICTIVE AUTOMATIC TUNING SERVICE FOR OBJECT POOLING BASED ON DYNAMIC MARKOV MODELING

Authors:

Nima Sharifimehr and Samira Sadaoui

Abstract: One of the most challenging concerns in the development of enterprise software systems is how to manage effectively and efficiently available resources. Object pooling service as a resource management facility significantly improves the performance of application servers. However, tuning object pool services is a complicated task that we address here through a predictive automatic approach. Based on dynamic markov models, which capture high-order temporal dependencies and locally optimize the required length of memory, we find patterns across object invocations that can be used for prediction purposes. Subsequently, we propose an effective automatic tuning solution, with reasonable time costs, which takes advantage of past and future information about activities of object pool services. Afterwards, we present experimental results which demonstrate the scalability and effectiveness of our novel tuning solution, namely predictive automatic tuning service.
Download

Paper Nr: 160
Title:

THE TOP-TEN WIKIPEDIAS - A Quantitative Analysis Using WikiXRay

Authors:

Felipe Ortega, Jesus M. Gonzalez-Barahona and Gregorio Robles

Abstract: In a few years, Wikipedia has become one of the information systems with more public (both producers and consumers) of the Internet. Its system and information architecture is relatively simple, but has proven to be capable of supporting the largest and more diverse community of collaborative authorship worldwide. In this paper, we analyze in detail this community, and the contents it is producing. Using a quantitative methodology based on the analysis of the public Wikipedia databases, we describe the main characteristics of the 10 largest language editions, and the authors that work in them. The methodology (which is almost completely automated) is generic enough to be used on the rest of the editions, providing a convenient framework to develop a complete quantitative analysis of the Wikipedia. Among other parameters, we study the evolution of the number of contributions and articles, their size, and the differences in contributions by different authors, inferring some relationships between contribution patterns and content. These relationships reflect (and in part, explain) the evolution of the different language editions so far, as well as their future trends.
Download

Paper Nr: 196
Title:

ANALYSIS OF ONTOLOGICAL INSTANCES - A Data Warehouse for the Semantic Web

Authors:

Roxana Danger Mercaderes and Rafael Berlanga

Abstract: New data warehouse tools for Semantic Web are becoming more and more necessary. The present paper formalizes one such a tool considering, on the one hand, the semantics and theorical foundations of Description Logic and, on the other hand, the current developments of information data generalization. The presented model is constituted by dimensions and multidimensional schemata and spaces. An algorithm to retrieve interesting spaces according to the data distribution is also proposed. Some ideas from Data Mining techniques are incorporated in order to allow users to discover knowledge from the Semantic Web.
Download

Paper Nr: 562
Title:

ENABLING AN END-USER DRIVEN APPROACH FOR MANAGING EVOLVING USER INTERFACES IN BUSINESS WEB APPLICATIONS - A Web Application Architecture using Smart Business Object

Authors:

Xufeng D. Liang and Athula Ginige

Abstract: As web applications become the centre-stage of today’s businesses, they require a more manageable and holistic approach to handle the rapidity and diversity with which changes occur in their web user interfaces. Adopting the End User Development (EUD) paradigm, we advocate an end-user driven approach to maintain evolving user interfaces of business web applications. Such an approach, demands a complementary web application architecture to enable a flexible, managed, and fine-grained control over the web user interfaces. In this paper, we proposed a web application architecture that embraces a dedicated UI Definition Layer, enforced by a web UI Model, for describing changes in the user interfaces of business web applications. This empowers business users to use intuitive web-based tools to effortlessly manage and create web user interfaces. The proposed architecture is realised through the Smart Business Object (SBO) technology we previously developed. We have also created a toolkit based on our proposed architecture. A tailored version of the toolkit has been utilised in an enterprise level web-based workflow application.
Download

Paper Nr: 564
Title:

VERSION CONTROL FOR RDF TRIPLE STORES

Authors:

Steve Cassidy and James Ballantine

Abstract: RDF, the core data format for the Semantic Web, is increasingly being deployed both from automated sources and via human authoring either directly or through tools that generate RDF output. As individuals build up large amounts of RDF data and as groups begin to collaborate on authoring knowledge stores in RDF, the need for some kind of version management becomes apparent. While there are many version control systems available for program source code and even for XML data, the use of version control for RDF data is not a widely explored area. This paper examines an existing version control system for program source code, Darcs, which is grounded in a semi-formal theory of patches, and proposes an adaptation to directly manage versions of an RDF triple store.
Download

Paper Nr: 579
Title:

MODELING WEB INFORMATION SYSTEMS FOR CO-EVOLUTION

Authors:

Buddhima De Silva and Athula Ginige

Abstract: When an information system is introduced to an organisation it changes the original business environment thus changes the original requirements. This can lead to changes to processes that are supported by the information system. Also when users get familiar with the system they ask for more functionality. This gives rise to a cycle of changes known as co-evolution. One way to facilitate co-evolution is to empower end-users to make changes to the web application to accommodate the required changes while using that web application. This can be achieved through meta-design paradigm. We model web applications using high level abstract concepts such as user, hypertext, process, data and presentation. We use set of smart tools to generate the application based on this high-level specification. We developed a hierarchical meta- model where an instance represent a web application. High level aspects are used to populate the attribute values of a meta-model instance. End-user can create or change a web application by specifying or changing the high level concepts in the meta-model. This paper discusses these high level aspects of web information systems. We also conducted a study to find out how end-users conceptualise a web application using these aspects. We found that end-users think naturally in terms of some of the aspects but not all. Therefore, in meta-model approach we provide default values for the model attributes which users can overwrite. This approach based on meta-design paradigm will help to realise the end-user development to support co-evolution.
Download

Paper Nr: 590
Title:

OPTIMIZATION OF DISTRIBUTED OLAP CUBES WITH AN ADAPTIVE SIMULATED ANNEALING ALGORITHM

Authors:

Jorge Loureiro and Orlando Belo

Abstract: The materialization of multidimensional structures is a sine qua non condition of performance for OLAP systems. Several proposals have addressed the problem of selecting the optimal set of aggregations for the centralized OLAP approach. But the OLAP structures may also be distributed to capture the known advantages of distributed databases. However, this approach introduces another term into the optimizing equation: space, which generates new inter-node subcubes’ dependencies. The problem to solve is the selection of the most appropriate cubes, but also its correct allocation. The optimizing heuristics face now with extra complexity, hardening its searching for solutions. To address this extended problem, this paper proposes a simulated annealing heuristic, which includes an adaptive mechanism, concerning the size of each move of the hill climber. The results of the experimental simulation show that this algorithm is a good solution for this kind of problem, especially when it comes to its remarkable scalability.
Download

Paper Nr: 594
Title:

HYPERSET/WEB-LIKE DATABASES AND THE EXPERIMENTAL IMPLEMENTATION OF THE QUERY LANGUAGE DELTA - Current State of Affairs

Authors:

Richard Molyneux and Vladimir Sazonov

Abstract: The hyperset approach to WEB-like or semistructured databases is outlined. WDB is presented either (i) as a finite edge-labelled graph or, equivalently, (ii) as system of (hyper)set equations or (iii) in a special XML-WDB format convenient both for distributed WDB and for including arbitrary XML elements in this framework. The current state of affairs on experimental implementation of a query language  (Delta) to such databases—the main result of this paper—is described, with consideration of further implementation work to be done.
Download

Short Papers
Paper Nr: 39
Title:

ESTIMATE VALIDITY REGIONS FOR NEAREST NEIGHBOR QUERIES

Authors:

Xing Gao, Ali Hurson and Krishna Kavi

Abstract: Users’ queries for data or services in a mobile computing environment are highly relevant to their current locations. A nearest neighbor (NN) query finds the data object closest to the user’s location; and hence, NN query issued at different locations may lead to different results. The nearest neighbor validity region (NNVR) is the area where an NN query result remains valid. A cached NN result can be used to answer semantically equivalent NN queries issued in the same NNVR. Our analysis discovers that NNVRs carry useful information about neighboring objects’ locations. This paper proposes an algorithm data mining the hidden information in cached NNVRs to increase the proxy caching performance. The experimental results and analysis have demonstrated the effectiveness of the proposed algorithm in reducing query response time and workload on the database server.
Download

Paper Nr: 123
Title:

SARIPOD: TOWARDS A MULTIAGENT POSSIBILISTIC SYSTEM FOR WEB INFORMATION RETRIEVAL

Authors:

Bilel Elayeb, Fabrice Evrard, Montaceur Zaghdoud and Mohamed Ben Ahmed

Abstract: We describe in this paper a multiagent possibilistic system for web information retrieval, called SARIPOD. This system is based on Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). The first HSW consists in structuring the "Google" search results in dense zones of web pages which strongly depend on each other. We thus reveal dense clouds of pages which "speak" more or less about the same subject and which all strongly answer the user’s query. The goal of the second HSW consists in considering the query as multiple in the sense that we don’t seek only the keyword in the web pages but also its synonyms. The PN generates the mixing of these two HSW in order to organize the searched documents according to user’s preferences. Indeed, SARIPOD is a new approach for Information Retrieval Model based on possibility and necessity measures. This model encodes relationship dependencies existing between query terms and web documents through naïve possibilistic networks and quantifies these relationships by two measures: possibility and necessity. The retrieved documents are those which are necessarily or possibly relevant given a user's query. The search process restores the plausibly or necessarily relevant documents for a user need.

Paper Nr: 139
Title:

LARGE SCALE RDF STORAGE SOLUTIONS EVALUATION

Authors:

Bela Stantic, Juergen Bock and Irina Astrova

Abstract: The increasing popularity of the Semantic Web and Semantic Technologies require sophisticated ways to store huge amounts of semantic data. RDF together with the rule base RDF Schema have proved themselves as good candidates for storing semantic data due to the simplicity and high abstraction level. A number of large scale RDF data storage solutions have been proposed. Several typical representative have been discussed and compared in this work, namely Sesame, Kowari, YARS, Redland and Oracle’s RDF MATCH table function. We present a comparison of those approaches with respect to consideration of context information, supported access protocols, query languages, indexing methods, RDF Schema awareness, and implementation. We also identify applicability as well as discuss advantages and disadvantages of particular approach. Furthermore, an overview of storage requirements and performance tests has been presented. A summary of performance analysis and recommendations are given and discussed.
Download

Paper Nr: 220
Title:

TURNING CONCEPTS INTO REALITY - Bridging Requirement Engineering and Model-Driven Generation of Web Applications

Authors:

Xufeng D. Liang, Christian Kop, Athula Ginige and Heinrich C. Mayr

Abstract: Today web application development is under the pressure of evolving business needs, compressed timelines, and limited resources. These dynamics demand a streamlined set of tools that turns concepts into reality and minimises the gap between the original business requirements and the final physical implementation of the web application. This paper will demonstrate how this gap can be reduced by the integration of two techniques, KCPM (Klagenfurt Predesign Conceptual Model) and SBO (Smart Business Object), allowing fully functional web applications to be auto-generated from a set of glossaries.
Download

Paper Nr: 220
Title:

TURNING CONCEPTS INTO REALITY - Bridging Requirement Engineering and Model-Driven Generation of Web Applications

Authors:

Xufeng D. Liang, Christian Kop, Athula Ginige and Heinrich C. Mayr

Abstract: Today web application development is under the pressure of evolving business needs, compressed timelines, and limited resources. These dynamics demand a streamlined set of tools that turns concepts into reality and minimises the gap between the original business requirements and the final physical implementation of the web application. This paper will demonstrate how this gap can be reduced by the integration of two techniques, KCPM (Klagenfurt Predesign Conceptual Model) and SBO (Smart Business Object), allowing fully functional web applications to be auto-generated from a set of glossaries.
Download

Paper Nr: 244
Title:

A DATA-DRIVEN DESIGN FOR DERIVING USABILITY METRICS

Authors:

Tamara Babaian, Wendy Lucas and Heikki Topi

Abstract: The complexity of Enterprise Information Systems can be overwhelming to users, yet they are an often overlooked domain for usability research. To better understand the ways in which users interact with these systems, we have designed an infrastructure for input logging that is built upon a data model relating system components, user inputs, and tasks. This infrastructure is aware of user representations, task representations, and the history of user interactions. The interface components themselves log user inputs, so that timing data and action events are automatically aligned and are linked to specific tasks. The knowledge gained about user interactions at varying levels of granularity, ranging from keystroke analysis to higher-level task performance, is a valuable resource for both assessing and enhancing system usability.
Download

Paper Nr: 263
Title:

PARADIGM SHIFT IN INTER-ORGANISATIONAL COLLABORATION - A Framework for Web based Dynamic eCollaboration

Authors:

Ioakim Marmaridis and Athula Ginige

Abstract: The proliferation of the World Wide Web (web) offers new ways for organisations to do business and collaborate with others to gain competitive advantage. Dynamic eCollaboration has the characteristics to keep up with the fast changing business landscape. It requires however a framework for collaboration that can also keep up with rapid change. In this paper we present the Dynamic eCollaboration model that brings the concepts of P2P collaboration to organisations. It fills this gap and offers a new avenue for organisations of all sizes to embrace collaboration and benefit from it. We also present our technology framework built to support Dynamic eCollaboration. The framework is component-based and extensible with an architecture that can scale. It incorporates a flexible security subsystem, a lightweight workflow engine optimised for web applications and a novel method for bundling and sharing web based information called Bitlet.
Download

Paper Nr: 534
Title:

A NOVEL ROBUST SCHEME OF WATERMARKING DATABASE

Authors:

Jia-Jin Le, Qin Zhu and Ying Zhu

Abstract: A scheme for watermarking relational database is proposed in this paper. It is applied to protect the copyright of numeric data. The chaos binary sequences are generated under the control of the privacy key, and are utilized as the watermark signal and the control signal for watermark embedding. Both the privacy key and the primary key determine the watermarking position, and the watermark is embedded into the numeric data by changing the parity of their low order digits, thus avoids the syndrome phenomena caused by the usual Least Significant Bit (LSB) watermarking scheme. The embedment of the watermark meets the requirement of the synchronous dynamic updating for the database, and the detection of the watermark needs no original database. Both the theoretical analysis and the practical experiments prove that this scheme possesses fine efficiency, imperceptibility and security, and it is robust against common attacks towards the watermark.
Download

Paper Nr: 534
Title:

A NOVEL ROBUST SCHEME OF WATERMARKING DATABASE

Authors:

Jia-Jin Le, Qin Zhu and Ying Zhu

Abstract: A scheme for watermarking relational database is proposed in this paper. It is applied to protect the copyright of numeric data. The chaos binary sequences are generated under the control of the privacy key, and are utilized as the watermark signal and the control signal for watermark embedding. Both the privacy key and the primary key determine the watermarking position, and the watermark is embedded into the numeric data by changing the parity of their low order digits, thus avoids the syndrome phenomena caused by the usual Least Significant Bit (LSB) watermarking scheme. The embedment of the watermark meets the requirement of the synchronous dynamic updating for the database, and the detection of the watermark needs no original database. Both the theoretical analysis and the practical experiments prove that this scheme possesses fine efficiency, imperceptibility and security, and it is robust against common attacks towards the watermark.
Download

Paper Nr: 541
Title:

DATA QUALITY IN XML DATABASES - A Methodology for Semi-structured Database Design Supporting Data Quality Issues

Authors:

Eugenio Verbo, Ismael Caballero, Eduardo Fernández-Medina and Mario Piattini

Abstract: As the use of XML as a technology for data exchange has widely spread, the need of a new technology to store semi-structured data in a more efficient way has been emphasized. Consequently, XML DBs have been created in order to store a great amount of XML documents. However, like in previous data models as the relational model, data quality has been frequently left aside. Since data plays a key role in organization efficiency management, its quality should be managed. With the intention of providing a base for data quality management, our proposal address the adaptation of a XML DB development methodology focused on data quality. To do that we have based on some key area processes of a Data Quality Maturity reference model for information management process definition.
Download

Paper Nr: 545
Title:

AN EFFICIENT ALGORITHM TO COMPUTE MAX/MIN VALUES IN SLIDING WINDOW FOR DATA STREAMS

Authors:

Ying Sha and Jianlong Tan

Abstract: With the development of Internet, more and more data-stream based applications emerged, where calculation of aggregate functions plays an important role. Many studies were conducted on aggregation functions; however, an efficient algorithm to calculate Max/Min values remains an open problem. Here, we propose a novel, exact method to computer Max/Min values for the numerical input data. Employing an incrementally calculating strategy on sliding windows, this algorithm gains a high efficiency. We analyze the algorithm and prove the time-complexity and space-complexity in worst cases. Experimental results confirm its high performance on a testing dataset.
Download

Paper Nr: 560
Title:

A FRAMEWORK FOR THE DEVELOPMENT AND DEPLOYMENT OF EVOLVING APPLICATIONS - The Domain Model

Authors:

Georgios Voulalas and Georgios Evangelidis

Abstract: Software development is an R&D intensive activity, dominated by human creativity and diseconomies of scale. Model-driven architecture improves productivity, portability, interoperability, maintenance, and documentation by introducing formal models that can be understood by computers. However, the problem of evolving requirements, which is more prevalent within the context of business applications, additionally calls for efficient mechanisms that ensure consistency between models and code and enable seamless and rapid accommodation of changes, without interrupting severely the operation of the deployed application. Having presented a framework that supports rapid development and deployment of evolving web-based applications, this paper elaborates on the Domain Model that is the cornerstone of the overall infrastructure.
Download

Paper Nr: 567
Title:

REUSING PAST QUERIES TO FACILITATE INFORMATION RETRIEVAL

Authors:

Gilles Hubert and Josiane Mothe

Abstract: This paper introduces a new approach of query reuse in order to help the user to retrieve relevant information. Past search experiences are a source of information that can be useful for a user trying to find information answering his information need. For example, a user searching about a new subject can benefit from past search experiences carried out by previous users about the same subject. The approach presented in this paper is based on collecting the different search attempts submitted to a search engine by a user trying to fulfil an information need. This approach takes mainly advantage of implicit links that exist between the different search attempts that try to satisfy a single information need. Search experiences are modelled according to the concepts defined in the domain of version management. This modelling provides multiple possibilities to reuse past experiences notably to recommend terms for query reformulation or documents judged relevant by other users.
Download

Paper Nr: 578
Title:

MULTI OBJECTIVE ANALYSIS FOR TIMEBOXING MODELS OF SOFTWARE DEVELOPMENT

Authors:

Vassilis Gerogiannis and Pandelis Ipsilandis

Abstract: In iterative/incremental software development, software deliverables are built in iterations - each iteration providing parts of the required software functionality. To better manage and monitor resources, plan and deliverables, iterations are usually performed during specific time periods, so called “time boxes”. Each time box is further divided into a sequence of stages and a dedicated development team is assigned to each stage. Iterations can be performed in parallel to reduce the project completion time by exploiting a “pipelining” concept, that is, when a team completes the tasks of a stage, it hands over the intermediate deliverables to the team executing the next stage and then starts executing the same stage in the next iteration. In this paper, we address the problem of optimizing the schedule of a software project that follows an iterative, timeboxing process model. A multi objective linear programming technique is introduced to consider multiple parameters, such as the project duration, the work discontinuities of development teams in successive iterations and the release (delivery) time of software deliverables. The proposed model can be used to generate alternative project plans based on the relative importance of these parameters.
Download

Paper Nr: 578
Title:

MULTI OBJECTIVE ANALYSIS FOR TIMEBOXING MODELS OF SOFTWARE DEVELOPMENT

Authors:

Vassilis Gerogiannis and Pandelis Ipsilandis

Abstract: In iterative/incremental software development, software deliverables are built in iterations - each iteration providing parts of the required software functionality. To better manage and monitor resources, plan and deliverables, iterations are usually performed during specific time periods, so called “time boxes”. Each time box is further divided into a sequence of stages and a dedicated development team is assigned to each stage. Iterations can be performed in parallel to reduce the project completion time by exploiting a “pipelining” concept, that is, when a team completes the tasks of a stage, it hands over the intermediate deliverables to the team executing the next stage and then starts executing the same stage in the next iteration. In this paper, we address the problem of optimizing the schedule of a software project that follows an iterative, timeboxing process model. A multi objective linear programming technique is introduced to consider multiple parameters, such as the project duration, the work discontinuities of development teams in successive iterations and the release (delivery) time of software deliverables. The proposed model can be used to generate alternative project plans based on the relative importance of these parameters.
Download

Paper Nr: 592
Title:

EARLY PERFORMANCE ANALYSIS IN THE DESIGN OF SPATIAL DATABASES

Authors:

Vincenzo D. Fatto, Massimiliano Giordano, Giuseppe Polese, Monica Sebillo and Genny Tortora

Abstract: The construction of spatial databases often requires considerable computing and storage resources, due to the inherent complexity of spatial data and their manipulation. Thus, it would be desirable to devise methods enabling a designer to estimate performances of a spatial database since from its early design stages. We present a method for estimating both the size of data and the cost of operations based on the conceptual schema of the spatial database. We also show the application of the method to the design of a spatial database concerning botanic data.
Download

Paper Nr: 593
Title:

TOWARDS A HOLISTIC INTEGRATION OF SOFTWARE LIFECYCLE PROCESSES USING THE SEMANTIC WEB

Authors:

Roy Oberhauser and Rainer Schmidt

Abstract: For comprehensive software lifecycle processes, a trichotomy continues to subsist between the software development processes, enterprise IT processes, and the software runtime environment. Currently, integrating software lifecycle processes requires substantial effort, and the information needed for the execution of (semi-)automated software lifecycle workflows is not readily accessible and is typically scattered across semantically heterogeneous sources. Consequently, an interrupted flow of information ensues between the development/maintenance phases and operational phases in the software lifecycle, resulting in ignorance, inefficiencies, and suboptimal product quality and support levels. Furthermore, today’s abstract IT (e.g., ITIL) and software processes are often derived into concrete processes and workflows manually, causing errors, extensive effort, and limiting widespread adoption of best practices. This paper describes an approach for improving information flow throughout the software lifecycle via the (semi-)automated realization of abstract software lifecycle processes and workflows in combination with Semantic Web technologies.
Download

Paper Nr: 89
Title:

ARCHITECTURE-CENTRIC DATA MINING MIDDLEWARE SUPPORTING MULTIPLE DATA SOURCES AND MINING TECHNIQUES

Authors:

Sai P. Lee and Hen Lai Ee

Abstract: In today’s market place, information stored in a consumer database is the most valuable asset of an organization. It houses important hidden information that can be extracted to solve real-world problems in engineering, science, and business. The possibility to extract hidden information to solve real-world problems has led to increasing application of knowledge discovery in databases, and hence the emergence of a variety of data mining tools in the market. These tools offer different strengths and capabilities, helping decision makers to improve business decisions. In this paper, we provide a high-level overview of a proposed data mining middleware whose architecture provides great flexibility for a wide spectrum of data mining techniques to support decision makers in generating useful knowledge to help in decision making. We describe features that we consider important to be supported by the middleware such as providing a wide spectrum of data mining algorithms and reports through plugins. We also briefly explain both the high-level architecture of the middleware and technologies that will be used to develop it.
Download

Paper Nr: 104
Title:

DESIGN AND IMPLEMENTATION OF DATA STREAM PROCESSING APPLICATIONS

Authors:

Edwin Kwan, Janusz Getta and Ehsan Vossough

Abstract: Processing of data streams requires the continuous processing of end-user applications over the long and steadily increasing sequences of data items. This work considers the design and implementation of data stream processing applications in the domains where the limited computational resources, constraints imposed on the implementation techniques and specific properties of applications exclude the use of a general purpose data stream management system. The implementation techniques described in the paper include the representation of atomic application as sequences of operation in an XML based language and translation of XML specifications into the programs in an object-oriented programming language.
Download

Paper Nr: 180
Title:

WEB-BASED DATA MINING SERVICES - A Solution Proposal

Authors:

Serban Ghenea and Cornelia Oprean

Abstract: The paper presents the results obtained in building a web-based solution that provides to registered users, accessing a portal on the Internet, the possibility to perform complex business analysis tasks, using data mining algorithms and services implemented by Microsoft SQL Server 2005 Analysis Services. The database platform sustains the web operation of a complete ERP system, offering support for back-office management and establishing a B2B environment that automates collaborative business processes.
Download

Paper Nr: 581
Title:

COLOR IMAGE PROFILE COMPARISON AND COMPUTING

Authors:

Imad El-zakhem, Amine A. Younes, Isis Truck, Hanna Greige and Herman Akdag

Abstract: This paper describes a method that analyzes the content of images while building their colorimetric profile as perceived by the user. First, images are being processed relying on a standard or initial set of parameters using the fuzzy set theory and the HLS color space (Hue, Lightness, Saturation). These parameters permit to describe and qualify the colors and their properties. Each image is processed pixel by pixel and is affected to a detailed initial colorimetric profile. Secondly, we present a method that will recalculate the amount of colors in the image based on another set of parameters, so the colorimetric profile of the image is being modified accordingly. Avoiding the repetition of the process at the pixel level is the main target of this phase, because reprocessing each image is time consuming and turned to be not feasible. Finally we present the software that processes images and that recalculates their colorimetric profiles with some examples.
Download

Paper Nr: 602
Title:

ASYNCHRONOUS REPLICATION CONFLICT CLASSIFICATION, DETECTION AND RESOLUTION FOR HETEROGENEOUS DATA GRIDS

Authors:

Eva Kühn, Angelika Ruhdorfer and Vesna Sesum-Cavic

Abstract: Data replication is a well-known technique in distributed systems, which offers many advantages such as higher data availability, load balancing, fault-tolerance, etc. It can serve to implement data grids where large amounts of data are shared. Besides all advantages, it is necessary to point to the problems, called replication conflicts that arise due to the replication strategies. In this paper, we present an infrastructure how to cope with replication of heterogeneous data in general for conflict detection and resolution and we illustrate its usefulness by means of an industrial business case implementation for the domain of relational databases and show further extensions for more complex resolution strategies. The implementation deals with the special case of asynchronous database replication in a peer-to-peer (multi-master) scenario, the possible conflicts in this particular domain and their classification, the ways of conflict detection, and shows some possible solution methods.
Download

Paper Nr: 618
Title:

A CONTEXT-BASED APPROACH FOR LINGUISTIC MATCHING

Authors:

Youssef B. Idrissi and Julie Vachon

Abstract: As currently implemented by most data mapping systems, linguistic matching often boils down to string comparison or to a synonym look-up in a dictionary. But these solutions have proved to be inefficient for dealing with highly heterogeneous data sources. To cope with data source heterogeneity more efficiently, we introduce INDIGO, a system which computes semantic matching by taking into account data sources’ context. The distinctive feature of INDIGO consists in enriching data sources with semantic information extracted from their individual development artifacts before mapping them. As explained in this article, experiments conducted on two case studies proved the relevance of this approach.
Download

Paper Nr: 624
Title:

MODELLING OF SUSPENDED SEDIMENT - In Nile River using ANN

Authors:

Abdelazim M. Negm, M. M. Elfiky, T. M. Owais and M. H. Nassar

Abstract: Artificial neural network (ANN) prediction models can be considered as an efficient tool in predictions once they are trained from examples or patterns. These types of ANN models need large amount of data which should be at hand before thinking to develop such models. In this paper, the capability of ANN model to predict suspended sediment in 2-D flow field is investigated. The data used for training the network are generated from a pre-verified 2-D hydrodynamic and a 2-D suspended sediment models which were recently developed by the authors. About two-thirds of the data are used for training the network while the rest of the data are used for validating and testing the developed ANN model. Field data measured by hydraulic research Institute are used to compare the results of the ANN model. The conjugate gradient learning algorithm is adopted. The results of the developed ANN model proved that the technique is reliable in such field compared to both the results of the previously developed models and the field data provided that the trained network is used to generate prediction within the range of training data.
Download

Paper Nr: 639
Title:

MINING THE WEB FOR LEARNING THE ONTOLOGY

Authors:

Bassam Aoun and Marie Khair

Abstract: The Semantic Web is a network of information linked up in such a way as to be easily processed by machines, on a global scale. To reach semantic web, current web resources should be automatically translated into semantic web resources. This is usually performed through semantic web mining, which aims at combining the two fast-developing research areas, the Semantic Web and Web Mining. A major step to be performed is the ontology-learning phase, where rules are mined from unstructured text and used later on to fill the ontology. Making sure that all rules are found and no additional and inaccurate rules are inserted, remains a critical issue since it constitutes the basis for building the semantic web. The mostly used algorithm for this task is the Apriori algorithm, which is inherited from classical data mining. However, due to the nature of the semantic web, some important rules can be dropped. This paper presents an enhanced version of the Apriori algorithm, En_Apriori, which uses the Apriori algorithm in combination with the maximal association and the X2 test to generate association rules from web/textual documents. This provides a major refinement to the classical ontology learning approach.
Download

Rejecteds
Paper Nr: 262
Title:

CONCEPTUAL FRAMEWORK FOR XML SCHEMA MAPPING

Authors:

Amar Zerdazi and Mariano Sidrach-de-Cardona

Abstract: The interoperability of heterogeneous data sources is an important issue in many applications such as mediation systems or web-based systems. In these systems, each data source exports a schema and each application defines a target schema representing its needs. The way instances of the target schema are derived from the sources is described through mappings. Generating such mappings is a difficult task, especially when the schemas are semi structured. In this paper, we propose an approach for mapping generation for XML schema; the basic idea is to drive direct as well as complex matches with their associated transformation operations from the computed element similarities. The representation of a mapping element in a source-to-target mapping clearly declares both the semantic correspondences as well as the access paths to access and load data from source into a target schema.

Paper Nr: 557
Title:

AUTOMATIC TRANSFORMATION OF SQL RELATIONAL DATABASES TO OWL ONTOLOGIES

Authors:

Irina Astrova, Windri Saifudin and Windri Saifudin

Abstract: This paper proposes a novel approach to automatic transformation of relational databases to ontologies, where the quality of transformation is also considered. The proposed approach is implemented in a tool called QUALEG DB. This tool parses an SQL script and generates an OWL file that contains an ontology, including definitions (classes, properties and restrictions) and instances (values and individuals). The proposed approach can be used for making the vast amount of relational database information on the Web machine-processable.

Paper Nr: 636
Title:

A Strategic Analytics Methodology

Authors:

Marcel Van Rooyen and Simeon Simoff

Abstract: Abstract. The application of data mining and business analytics are becoming the “industry standard” for companies, which aim at gaining strategic advantage in the global marketplace. Consequently, there have been developed several methodologies enabling data mining as a process. Among them the Cross-Industry Standard Process for Data Mining (Chapman, Clinton et al., p.94) and SAS Data Mining Project Methodology (SDMPM) have been the most widely applied. However, as a result of the reflection on the application of these methodologies in business environments and on real world cases, we identified that both methodologies provide limited business support against specific evaluation criteria. In this paper we demonstrate the impact of these limitations on a Telco data mining project. As one of the solutions we introduce a data mining project methodology with improved business decision support – The Strategic Analytics Methodology (SAM). The advantage of the proposed methodology is demonstrated through its application in the same project.

Area 3 - Knowledge Engineering

Full Papers
Paper Nr: 40
Title:

A CASE-BASED DIALOGUE SYSTEM FOR INVESTIGATING THERAPY INEFFICACY

Authors:

Rainer Schmidt and Olga Vorobieva

Abstract: ISOR is a Case-Based Reasoning system for long-term therapy support in the endocrine domain and in psychiatry. ISOR performs typical therapeutic tasks, such as computing initial therapies, initial dose recommendations, and dose updates. ISOR deals especially with situations where therapies become ineffective. Causes for inefficacy have to be found and better therapy recommendations should be computed. In addition to former already solved cases, ISOR uses further knowledge forms, especially medical histories of query patients themselves and prototypes. Furthermore, the knowledge base consists of therapies, conflicts, instructions etc. So, different forms and steps of retrieval are performed, while adaptation occurs as an interactive dialog with the user.
Download

Paper Nr: 116
Title:

AGENTS THAT HELP TO DETECT TRUSTWORTHY KNOWLEDGE SOURCES IN KNOWLEDGE MANAGEMENT SYSTEMS

Authors:

Juan Pablo Soto, Aurora Vizcaíno, Javier Portillo and Mario Piattini

Abstract: Knowledge Management is a critical factor for companies worried about increasing their competitive advantage. Because of this companies are acquiring knowledge management tools that help them manage and reuse their knowledge. One of the mechanisms most commonly used with this goal is that of Knowledge Management Systems (KMS). However, sometimes KMS are not very used by the employees, who consider that the knowledge stored is not very valuable. In order to avoid it, in this paper we propose a three-level multi-agent architecture based on the concept of communities of practice with the idea of providing the most trustworthy knowledge to each person according to the reputation of the knowledge source. Moreover a prototype that demostrates the feasibility of our ideas is described.
Download

Paper Nr: 119
Title:

BUILDING AN ONTOLOGY THAT HELPS IDENTIFY CRIMINAL LAW ARTICLES THAT APPLY TO A CYBERCRIME CASE

Authors:

El Hassan Bezzazi

Abstract: We present in this paper a small formal cybercrime ontology by using concrete tools. The purpose is to show how law articles and legal cases could be defined so that the problem of case resolution is reduced to a classification problem as long as cases are seen as subclasses of articles. Secondly, we show how counterfactual reasoning may be held over it. Lastly, we investigate the implementation of an hybrid system which is based both on this ontology and on a non-monotonic rule based system which is used to execute, in a rule based way, an external ontology dealing with a technical domain in order to clarify some of the technical concepts.
Download

Paper Nr: 155
Title:

TOWARDS AUTOMATED INFERENCING OF EMOTIONAL STATE FROM FACE IMAGES

Authors:

Ioanna-Ourania Stathopoulou and George A. Tsihrintzis

Abstract: Automated facial expression classification is very important in the design of new human-computer interaction modes and multimedia interactive services and arises as a difficult, yet crucial, pattern recognition problem. Recently, we have been building such a system, called NEU-FACES, which processes multiple camera images of computer user faces with the ultimate goal of determining their affective state. In here, we present results from an empirical study we conducted on how humans classify facial expressions, corresponding error rates, and to which degree a face image can provide emotion recognition from the perspective of a human observer. This study lays related system design requirements, quantifies statistical expression recognition performance of humans, and identifies quantitative facial features of high expression discrimination and classification power.
Download

Paper Nr: 188
Title:

INCONSISTENCY-TOLERANT KNOWLEDGE ASSIMILATION

Authors:

Hendrik Decker

Abstract: A recently introduced notion of inconsistency tolerance for integrity checking is revisited. Two conditions that enable an easy verification or falsification of inconsistency tolerance are discussed. Based on a method-independent definition of inconsistency-tolerant updates, this notion is then extended to a family of knowledge assimilation tasks. These include integrity maintenance, view updating and repair of integrity violation. Many knowledge assimilation approaches turn out to be inconsistency-tolerant without needing any specific knowledge about the given status of integrity of the underlying database.
Download

Paper Nr: 537
Title:

A MULTI-OBJECTIVE GENETIC ALGORITHM FOR CUTTING-STOCK IN PLASTIC ROLLS INDUSTRY

Authors:

José R. Varela Arias, César Muñoz, María Rita Sierra Sánchez and Inés González

Abstract: In this paper, we confront a variant of the cutting-stock problem with multiple objectives. It is a real problem from an industry that manufactures plastic rolls under customers’ demands. The starting point is a solution calculated by a heuristic algorithm, termed SHRP, that aims mainly at optimizing the two main objectives, i.e. the number of cuts and the number of different patterns; the proposed multi-objective genetic algorithm then tries to optimize other secondary objectives such as changeovers, completion times of orders weighted by priorities, and open stacks. We report experimental results showing that the multi-objective genetic algorithm is able to improve the solutions obtained by SHRP on the secondary objectives, and also that it offers a number of non-dominated solutions, so that the expert can choose one of them according to his preferences at the time of cutting the orders of a set of customers.
Download
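
The non-dominated solution set mentioned above can be computed with a simple Pareto filter. The following sketch assumes minimization on every objective; the objective vectors are hypothetical stand-ins for values such as changeovers, weighted completion times and open stacks, not data from the paper.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on some."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (changeovers, weighted completion time, open stacks) vectors.
solutions = [(3, 120, 5), (2, 150, 4), (3, 110, 6), (4, 100, 5), (3, 130, 5)]
print(pareto_front(solutions))  # the expert then chooses among these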

Paper Nr: 554
Title:

IT-BASED PURPOSE-DRIVEN KNOWLEDGE VISUALIZATION

Authors:

Wladimir Bodrow and Vladimir Magalashvili

Abstract: Knowledge visualization is currently under investigation from different points of view especially because of its importance for Artificial Intelligence, Knowledge Management, Business Intelligence etc. The concepts and technology of knowledge visualization in the presented research are considered from a purpose perspective which focuses on the interdependencies between different knowledge elements. This way the influence of these elements on each other in every particular situation can be visualized. This is crucial e.g. for decision making.
Download

Paper Nr: 559
Title:

EMPIRICAL VALIDATION ON KNOWLEDGE PACKAGING SUPPORTING KNOWLEDGE TRANSFER

Authors:

Pasquale Ardimento, Maria T. Baldassarre, Marta Cimitile and Giuseppe Visaggio

Abstract: The transfer of research results, as well as technological innovation, within an enterprise is a key success factor. The introduction of research results aims to improve the efficacy and effectiveness of production processes with respect to business goals, and also to better adapt products to market needs. Nevertheless, it is often difficult to transfer research results into production systems because it is necessary, among other things, that knowledge be explicit and understandable by stakeholders. Such transfer is demanding, and many researchers have therefore been studying alternatives to classic approaches, such as books and papers, that favour knowledge acquisition by users. In this context, we propose the concept of a Knowledge Package (KP) with a specific structure as an alternative. We have carried out an experiment which compared the efficacy of the proposed approach with the classic ones, along with the comprehensibility of the information enclosed in a KP rather than in a set of papers. The experiment has pointed out that knowledge packages are more effective than traditional means for knowledge transfer.
Download

Paper Nr: 572
Title:

TOWARDS A GENERAL ONTOLOGY OF COMPUTER PROGRAMS

Authors:

Pascal Lando, Anne Lapujade, Gilles Kassel and Frederic Furst

Abstract: Over the past decade, ontology research has investigated the field of computer programs. This work has aimed at defining conceptual descriptions of the programs so as to master their design and use. Unfortunately, these efforts have only been partially successful. In this paper, we present the basis of a Core Ontology of Programs and Software (COPS) which integrates the field’s main concepts. But, above all, we emphasize the method used to build the ontology. In fact, COPS specializes not only the DOLCE foundational ontology (“Descriptive Ontology for Linguistic and Cognitive Engineering”, Masolo et al., 2003) but also core ontologies of domains (e.g. artefacts, documents) situated on a higher abstraction level. This approach enables us to take into account the “dual nature” of computer programs, which can be considered as both syntactic entities (well-formed expressions in a programming language) and artefacts whose function is to enable computers to process information.
Download

Short Papers
Paper Nr: 63
Title:

THEORETICAL FRAMEWORK FOR COOPERATION AND COMPETITION IN EVOLUTIONARY COMPUTATION

Authors:

Eugene Eberbach and Mark Burgin

Abstract: In this paper, a theoretical framework for the cooperation and competition of coevolved population members working toward a common goal is presented. We use a formal model of the Evolutionary Turing Machine and its extensions to justify that, in general, evolutionary algorithms belong to the class of super-recursive algorithms. Parallel and Parallel Weighted Evolutionary Turing Machine models are proposed to properly capture the cooperation and competition of the whole population, expressed as an instance of multiobjective optimization.
Download

Paper Nr: 71
Title:

KNOWLEDGE BASED CONCEPTS FOR DESIGN SUPPORT OF AN ARTIFICIAL ACCOMMODATION SYSTEM

Authors:

Klaus P. Scherer

Abstract: When conceiving medical information and diagnosis systems, knowledge-based systems are used to diagnose failures based on specific patient data. The knowledge is evaluated based on statistical data from the past, and the present information is derived by statistical approaches (Bayes’ theorem) and analogous cases to interpret the individual patient-related situation. A methodically analogous situation is the conceptualisation of a new technical system, where the system components with their properties are configured in such a manner that a target function is guaranteed under consideration of any constraints. In both situations, the system (human being, technical system) has to be described in a natural language and must be formalised. Based on these formalisations, logical conclusions can be drawn. Useful representations are formalised knowledge representation methods. For logical conclusions, first-order predicate calculus is used. For information access by both experts and users, comfortable natural-language-based concepts and the employment of graphical tools are very important to manage the complex knowledge.
Download

Paper Nr: 147
Title:

CHI SQUARE FEATURE EXTRACTION BASED SVMS ARABIC TEXT CATEGORIZATION SYSTEM

Authors:

Abdelwadood Mesleh

Abstract: This paper aims to implement a Support Vector Machines (SVMs) based text classification system for Arabic language articles. This classifier uses the chi-square method for feature selection in the pre-processing step of the text classification system design procedure. Compared to other classification methods, our classification system shows high classification effectiveness for Arabic articles, in terms of macro-averaged F1 = 88.11 and micro-averaged F1 = 90.57.
Download
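
As a rough illustration of chi-square feature scoring of the kind the abstract describes, the sketch below ranks terms by their chi-square statistic against a category. The tiny document collection is hypothetical, and the SVM training stage is not reproduced.

def chi_square(A, B, C, D):
    """Chi-square statistic from a 2x2 term/category contingency table.
    A: category docs containing the term, B: other docs containing it,
    C: category docs missing it,       D: other docs missing it."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return 0.0 if denom == 0 else N * (A * D - C * B) ** 2 / denom

def score_terms(docs, category):
    """Score every term by chi-square against one category label."""
    vocab = {t for words, _ in docs for t in words}
    n_cat = sum(1 for _, label in docs if label == category)
    scores = {}
    for t in vocab:
        A = sum(1 for words, label in docs if label == category and t in words)
        B = sum(1 for words, label in docs if label != category and t in words)
        C = n_cat - A
        D = len(docs) - n_cat - B
        scores[t] = chi_square(A, B, C, D)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical mini-corpus: (set of terms, category label).
docs = [({"goal", "match"}, "sport"), ({"vote", "law"}, "politics"),
        ({"match", "team"}, "sport"), ({"law", "court"}, "politics")]
print(score_terms(docs, "sport")[:3])  # keep only the top-ranked features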

Paper Nr: 106
Title:

ENTERPRISE ONTOLOGY AND FEATURE MODEL INTEGRATION - Approach and Experiences from an Industrial Case

Authors:

Kurt Sandkuhl, Christer Thörn and Wolfram Webers

Abstract: Based on an industrial application case from the automotive industry, this paper discusses the integration of an existing feature model into an existing enterprise ontology. Integration is discussed on the conceptual and on the implementation level. The main conclusion of the work is that while integrating enterprise ontologies and feature models is quite straightforward on a conceptual level, it causes various challenges when implementing the integration with Protégé. As ontologies have a clearly richer descriptive power than feature models, the mapping on a notation level poses no serious technical problems. The main difference between the implementation approaches presented is where to actually place a feature. The first approach follows the information modeling tradition by considering features as model entities with a certain meta-model. The second approach integrates all features and relations directly on the concept level, i.e. features are considered independent concepts.
Download

Paper Nr: 149
Title:

FORMAL METHOD FOR AUTOMATIC AND SEMANTIC MAPPING OF DISTRIBUTED SERVICE-ONTOLOGIES

Authors:

Nacima Mellal and Richard Dapoiny

Abstract: Many distributed heterogeneous systems exchange information with one another. Currently, most of them are described in terms of ontologies. When ontologies are distributed, the problem of achieving semantic interoperability arises. This is undertaken by a process which defines rules to relate these ontologies, called “Ontology Mapping”, in order to achieve a given goal. This paper describes a methodology for the automatic and semantic mapping of ontologies. Our main interest is focused on ontologies describing the services of systems, called “Service Ontologies”. We thus investigate an approach where the mapping of ontologies provides full semantic integration between distributed service ontologies using the Information Flow model.
Download

Paper Nr: 189
Title:

INCREASE PERFORMANCE BY COMBINING MODELS OF ANALYSIS ON REAL DATA

Authors:

Dumitru D. Burdescu and Cristian Mihaescu

Abstract: In this paper we investigate several state-of-the-art methods of combining models of analysis. The data are obtained from an e-Learning platform and represent users’ activities such as downloading course materials, taking tests and exams, communicating with professors and secretaries, and others. Combining multiple models of analysis may yield important information regarding the performance of the e-Learning platform, such as students’ learning performance or the capability of the platform to classify students according to accumulated knowledge. This information may be valuable in adjusting the platform’s structure, such as the number or difficulty of questions, to increase performance from the presented points of view.
Download

Paper Nr: 580
Title:

TOWARDS A MULTIMODELING APPROACH OF DYNAMIC SYSTEMS FOR DIAGNOSIS

Authors:

Marc Le Goc and Emilie Masse

Abstract: This paper presents the basis of a multimodeling methodology that uses a CommonKADS conceptual model to interpret the diagnosis knowledge, with the aim of representing the system with three models: a structural model describing the relations between the components of the system, a functional model describing the relations between the values the variables of the system can take (i.e. the functions), and a behavioural model describing the states of the system and the discrete events firing the state transitions. The relation between these models is made through the notion of a variable: a variable used in a function of the functional model is associated with an element of the structural model, and a discrete event is defined as the assignment of a value to a variable. This methodology is presented in this paper with a toy but pedagogical problem: the technical diagnosis of a car. The motivating idea is that using the same level of abstraction as the expert can facilitate problem-solving reasoning.
Download

Paper Nr: 606
Title:

OFF-LINE SIGNATURE VERIFICATION - Comparison of Stroke Extraction Methods

Authors:

Bence Kővári, Áron Horváth, Zsolt Kertész and Csaba Illés

Abstract: Stroke extraction is a necessary part of the majority of semantics-based off-line signature verification systems. This paper discusses some stroke extraction variants which can be efficiently used in such environments. First, the different aspects and problems of signature verification are discussed in conjunction with off-line analysis methods. It is shown that on-line analysis methods usually perform better than off-line methods because they can make use of temporal information (and thereby get a better perception of the semantics of the signature). To improve the accuracy of off-line signature verification methods, the extraction of semantic information is necessary. Three different approaches are introduced to reconstruct the original strokes of a signature: one based purely on simple image processing algorithms, one with some more intelligent processing, and one with a pen model. The methods are examined and compared with regard to their benefits and drawbacks for further signature processing.
Download

Paper Nr: 610
Title:

MATHEMATICAL FRAMEWORK FOR GENERALIZATION AND INSTANTIATION OF KNOWLEDGE

Authors:

Marek Reformat

Abstract: Templates, patterns, and blueprints are constructs that humans use to represent highly abstract knowledge. The quality of such processes as reasoning, speaking, running, and driving depends on people’s ability to process these constructs. Recently, they have been named protoforms. Concrete pieces of knowledge, on the other hand, can be seen as instances of the protoforms. A very important task is to find mechanisms able to organize and control protoforms and their instances. Such mechanisms would provide methods for defining properties of protoforms and their instances, describing their interactions, and controlling the ways they can be merged. The paper describes a concept of applying category theory to describe protoforms and their instances in a more formal way.
Download

Area 4 - Programming Languages

Full Papers
Paper Nr: 91
Title:

A LANGUAGE FOR SPECIFYING INFORMATIONAL GRAPHICS FROM FIRST PRINCIPLES

Authors:

Stuart Shieber and Wendy Lucas

Abstract: Information visualization tools, such as commercial charting packages, provide a standard set of visualizations for tabular data, including bar charts, scatter plots, pie charts, and the like. For some combinations of data and task, these are suitable visualizations. For others, however, novel visualizations over multiple variables would be preferred but are unavailable in the fixed list of standard options. To allow for these cases, we introduce a declarative language for specifying visualizations on the basis of the first principles on which (a subset of) informational graphics are built. The functionality we aim to provide with this language is presented by way of example, from simple scatter plots to versions of two quite famous visualizations: Minard’s depiction of troop strength during Napoleon’s march on Moscow and a map of the early ARPAnet from the ancient history of the Internet. Benefits of our approach include flexibility and expressiveness for specifying a range of visualizations that cannot be rendered with standard commercial systems.
Download

Paper Nr: 240
Title:

A SPACE-EFFICIENT ALGORITHM FOR PAGING UNBALANCED BINARY TREES

Authors:

Rui E. Tavares and Elias P. P. Duarte Jr

Abstract: This work presents a new approach for paging large unbalanced binary trees, which frequently appear in computational biology. The proposed algorithm aims at reducing the number of pages accessed for searching, decreasing the amount of unused space in each page, and reducing the total number of pages required to store a tree. The algorithm builds the best possible paging whenever possible and employs an efficient strategy based on bin packing for allocating trees that are not complete. The complexity of the algorithm is presented. Experimental results are reported and compared with other approaches, including balanced trees. The comparison shows that the proposed approach is the only one that presents an average number of page accesses for searching close to the optimal while, at the same time, keeping the page filling percentage close to the optimal.
Download
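
The bin-packing flavour of such a paging step can be illustrated with a first-fit-decreasing sketch in which fragments of incomplete subtrees, represented here only by their node counts, are packed into fixed-capacity pages. The fragment sizes and page capacity are invented for illustration; the paper's actual algorithm also decides how trees are cut into fragments.

def first_fit_decreasing(fragments, capacity):
    """Pack fragment sizes into pages of fixed capacity, largest first."""
    pages = []                       # each page is a list of fragment sizes
    for frag in sorted(fragments, reverse=True):
        for page in pages:
            if sum(page) + frag <= capacity:
                page.append(frag)    # fragment fits into an existing page
                break
        else:
            pages.append([frag])     # otherwise open a new page
    return pages

fragments = [7, 3, 3, 5, 2, 6, 1]    # hypothetical node counts of subtrees
for i, page in enumerate(first_fit_decreasing(fragments, capacity=8)):
    print(f"page {i}: fragments {page}, fill {sum(page)}/8")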

Paper Nr: 528
Title:

A PATTERN FOR STATIC REFLECTION ON FIELDS - Sharing Internal Representations in Indexed Family Containers

Authors:

Andreas Priesnitz and Sibylle Schupp

Abstract: Reflection allows defining generic operations in terms of the constituents of objects. These definitions incur overhead if reflection takes place at run time, which is the common case in popular languages. If performance matters, some compile-time means of reflection is desired to obviate that penalty. Furthermore, the information provided by static reflection can be utilised for class generation, e.g., to optimize internal representation. We demonstrate how to provide static reflection on class field properties by means of generic components in an OO language with static meta-programming facilities. Surprisingly, a major part of the solution is not specific to the particular task of providing reflection. We define the internal representation of classes by a reworked implementation of a generic container that models the concept of a statically indexed family. The proposed features of this implementation are also beneficial to its use as a common container.
Download

Paper Nr: 561
Title:

ITKBOARD: A VISUAL DATAFLOW LANGUAGE FOR BIOMEDICAL IMAGE PROCESSING

Authors:

Hoang Le, Rongxi Li, Sébastien Ourselin and John Potter

Abstract: Experimenters in biomedical image processing rely on software libraries to provide a large number of standard filtering and image handling algorithms. The Insight Toolkit (ITK) is an open-source library that provides a complete framework for a range of image processing tasks, and is specifically aimed at segmentation and registration tasks for both two and three dimensional images. This paper describes a visual dataflow language, ITKBoard, designed to simplify building, and more significantly, experimenting with ITK applications. The ease with which image processing experiments can be interactively modified and controlled is an important aspect of the design. The experimenter can focus on the image processing task at hand, rather than worry about the underlying software. ITKBoard incorporates composite and parameterised components, and control constructs, and relies on a novel hybrid dataflow model, combining aspects of both demand and data-driven execution.
Download

Paper Nr: 597
Title:

THE DEBUGGABLE INTERPRETER DESIGN PATTERN

Authors:

Jan Vrany and Alexandre Bergel

Abstract: The use of the Interpreter and Visitor design patterns has been widely adopted to implement programming language interpreters due to their expressive and simple design. However, no general approach to conceiving a debugger is commonly adopted. This paper presents the debuggable interpreter design pattern as a general approach to extend a language interpreter with debugging facilities such as step-over and step-into. Moreover, it enables multiple debuggers to coexist, and extends the Interpreter and Visitor design patterns with a few hooks and a debugging service. SmallJS, an interpreter for a JavaScript-like language, serves as an illustration.
Download
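
The general idea of extending a Visitor-based interpreter with a debugging service can be sketched as follows. This is a minimal Python illustration, not the paper's SmallJS implementation; the node classes and the hook API are assumptions made for exposition.

# A Visitor-style evaluator that calls a debugging service before each
# node is interpreted, so stepping and breakpoints need no changes to
# the evaluation methods themselves.

class Num:
    def __init__(self, value): self.value = value

class Add:
    def __init__(self, left, right): self.left, self.right = left, right

class Debugger:
    """Debugging service: decides at each hook whether to pause."""
    def __init__(self, breakpoints=()):
        self.breakpoints = set(breakpoints)

    def on_visit(self, node):
        if type(node).__name__ in self.breakpoints:
            print(f"paused at {type(node).__name__} node")  # step point

class Interpreter:
    def __init__(self, debugger=None):
        self.debugger = debugger

    def visit(self, node):
        if self.debugger:                     # the added debugging hook
            self.debugger.on_visit(node)
        return getattr(self, f"visit_{type(node).__name__}")(node)

    def visit_Num(self, node): return node.value
    def visit_Add(self, node): return self.visit(node.left) + self.visit(node.right)

tree = Add(Num(1), Add(Num(2), Num(3)))
print(Interpreter(Debugger(breakpoints={"Add"})).visit(tree))  # pauses twice, prints 6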

Short Papers
Paper Nr: 144
Title:

TEST COVERAGE ANALYSIS FOR OBJECT ORIENTED PROGRAMS - Structural Testing through Aspect Oriented Instrumentation

Authors:

Fabrizio Baldini, Giacomo Bucci, Leonardo Grassi and Enrico Vicario

Abstract: The introduction of Object Oriented Technologies into test-centered processes has emphasized the importance of finding new methods for software verification. Testing metrics and practices developed for structured programs have to be adapted in order to address the prerogatives of object-oriented programming. In this work, we introduce a new approach to structural coverage evaluation in the testing of OO software. The data flow paradigm is adopted and reinterpreted through the definition of a new type of structure, used to record def/use information for test-critical class member variables. In the final part of this paper, we present a testing tool that employs this structure for code-based coverage analysis of Java and C++ programs.
Download

Paper Nr: 526
Title:

ON DIGITAL SEARCH TREES - A Simple Method for Constructing Balanced Binary Trees

Authors:

Franjo Plavec, Zvonko Vranesic and Stephen Brown

Abstract: This paper presents digital search trees, a binary tree data structure that can produce well-balanced trees in the majority of cases. Digital search tree algorithms are reviewed, and a novel algorithm for building sorted trees is introduced. It was found that digital search trees are simple to implement because their code is similar to the code for ordinary binary search trees. Experimental evaluation was performed and the results are presented. It was found that digital search trees, in addition to being conceptually simpler, often outperform other popular balanced trees such as AVL or red-black trees. It was found that good performance of digital search trees is due to better exploitation of cache locality in modern computers.
Download
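
A digital search tree can be sketched in a few lines: each key is stored in the first free node reached by branching on successive key bits, so no rebalancing is required. The 16-bit key width below is an arbitrary choice for the sketch.

BITS = 16  # assumed key width; real implementations would parameterize this

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Place the key at the first empty child reached by its bits."""
    if root is None:
        return Node(key)
    node = root
    for level in range(BITS):
        if key == node.key:
            return root                       # key already present
        bit = (key >> (BITS - 1 - level)) & 1  # next bit decides the branch
        if bit == 0:
            if node.left is None:
                node.left = Node(key); return root
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key); return root
            node = node.right
    return root

def search(node, key):
    level = 0
    while node is not None and node.key != key:
        bit = (key >> (BITS - 1 - level)) & 1
        node = node.left if bit == 0 else node.right
        level += 1
    return node is not None

root = None
for k in [41, 7, 300, 12, 99]:
    root = insert(root, k)
print(search(root, 300), search(root, 8))  # True False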

Paper Nr: 640
Title:

ASPECT ORIENTATION VS. OBJECT ORIENTATION IN SOFTWARE PROGRAMMING - An Exploratory Case-study

Authors:

Anna Lomartire, Gianfranco Pesce and Giovanni Cantone

Abstract: Aspect orientation is a software paradigm that is claimed to be more effective and efficient than object orientation when software development and maintenance interventions are considered that transversally affect the application structure, namely aspects. In order to start providing evidence able to confirm or disconfirm that opinion in our context - the software processes that we enact and the products that we develop at our University Data Center - and before launching a controlled experiment, which would require a large investment of effort, we conducted a preliminary explorative investigation arranged as a case study. We started from a Web-based object-oriented application which engineering students in Informatics had constructed under our supervision. We specified new user needs whose realization was expected to impact many of the application’s classes and relationships. We then assigned a student to realize those extensive requirements by using both aspect orientation and object orientation. Results show that, on average, both the completion time and the size of the additional code significantly favour aspect orientation for maintenance interventions that are transversal to the application’s structure, with respect to the characteristics of the experimental object utilized, the specified enhancement maintenance requirements, and the subject involved in the role of programmer. Despite the exploratory nature of the study, the limited generality of the utilized application, and the fact that just one programmer was utilized as experimental subject, the results push us to verify the findings by conducting further investigations involving a wider set of programmers and applications with different characteristics.
Download

Area 5 - Software Engineering

Full Papers
Paper Nr: 114
Title:

MODERN CONCEPTS FOR HIGH-PERFORMANCE SCIENTIFIC COMPUTING - Library Centric Application Design

Authors:

Rene Heinzl, Philipp Schwaha and Siegfried Selberherr

Abstract: During the last decades, various high-performance libraries were developed in fairly low-level languages, like FORTRAN, with carefully specialized code to achieve the best performance. However, the objective of achieving reusable components has regularly eluded the software community ever since. The fundamental goal of our approach is to create a high-performance mathematical framework with reusable domain-specific abstractions which are close to the mathematical notations used to describe many problems in scientific computing. Interoperability driven by strong theoretical derivations of mathematical concepts is another important goal of our approach.
Download

Paper Nr: 132
Title:

AN AGILE MODEL DRIVEN ARCHITECTURE-BASED CONTRIBUTION TO WEB ENGINEERING

Authors:

Alejandro Gómez Cuesta, Juan Carlos Granja and Rory O'Connor

Abstract: The number and complexity of web applications are ever increasing. Web engineers need advanced development methods to build better systems and to maintain them in an easy way. Model-Driven Architecture (MDA) is an important trend in the software engineering field based on both models and their transformations to automatically generate code. This paper describes a methodology for web application development, providing a process based on MDA which offers an effective engineering approach to reduce effort. It consists of defining models from metamodels at platform-independent and platform-specific levels, from which source code is automatically generated.
Download

Paper Nr: 141
Title:

ROLE-BASED CLUSTERING OF SOFTWARE MODULES - An Industrial Size Experiment

Authors:

Philippe Dugerdil and Sebastien Jossi

Abstract: Legacy software system reverse engineering has been a hot topic for more than a decade. One of the key problems is to recover the architecture of the system, i.e. its components and the communications between them. Generally, the code alone does not provide many clues about the structure of the system. To recover this architecture, we propose to use the artefacts and activities of the Unified Process to guide the search. In our approach we first recover the high-level specification of the program. Then we instrument the code and “run” the use cases. Next we analyse the execution trace and rebuild the run-time architecture of the program. This is done by clustering the modules based on the supported use cases and their roles in the software. In this paper we present an industrial validation of this reverse-engineering process. First we give a summary of our methodology. Then we show a step-by-step application of this technique to real-world business software and the results we obtained. Finally we present the workflow of the tools we used and implemented to perform this experiment. We conclude by giving the future directions of this research.
Download

Paper Nr: 159
Title:

AN INTEGRATED TOOL FOR SUPPORTING ONTOLOGY DRIVEN REQUIREMENTS ELICITATION

Authors:

Motohiro Kitamura, Ryo Hasegawa, Haruhiko Kaiya and Motoshi Saeki

Abstract: Since requirements analysts do not have sufficient knowledge of a problem domain, i.e. domain knowledge, techniques for making up for this lack of domain knowledge are a key issue. This paper proposes the usage of a domain ontology as domain knowledge during requirements elicitation processes, and a technique to create a domain ontology for a certain problem domain by using text-mining techniques.
Download

Paper Nr: 198
Title:

DETECTING PATTERNS IN OBJECT-ORIENTED SOURCE CODE – A CASE STUDY

Authors:

Andreas Wierda, Eric Dortmans and Lou Somers

Abstract: Pattern detection methods discover recurring solutions in a system’s implementation, for example design patterns in object-oriented source code. Usually this is done with a pattern library. This has the disadvantage that the precise implementation of the patterns must be known in advance. The method used in our case study does not have this disadvantage. It uses a mathematical technique called Formal Concept Analysis and is applied to find structural patterns in two subsystems of a printer controller. The case study shows that it is possible to detect frequently used structural design constructs without upfront knowledge. However, even the detection of relatively simple patterns in relatively small pieces of software takes a lot of computing time. Since this is due to the complexity of the applied algorithms, applying the method to large software systems like the complete controller is not practical. They can be applied to its subsystems though, which are about five to ten percent of its size.
Download

Paper Nr: 241
Title:

AUTO-COLLEAGUE - A Collaborative Learning Environment for UML

Authors:

Maria Virvou and Kalliopi Tourtoglou

Abstract: In this paper we present AUTO-COLLEAGUE, a collaborative learning environment for UML. AUTO-COLLEAGUE is a Computer-Supported Collaborative Learning (CSCL) system. It is based on a multi-dimensional User-Modeller component that describes user characteristics related to the UML domain knowledge, the performance types, the personality and the needs of the learner. The system constantly monitors, records and reasons about each learner’s actions. As a result of this process, AUTO-COLLEAGUE provides adaptive advice and help to users so that they may use UML more efficiently and collaborate with other members of their team more constructively.
Download

Paper Nr: 525
Title:

USING MBIUI LIFE-CYCLE FRAMEWORK FOR AN AFFECTIVE BI-MODAL USER INTERFACE

Authors:

Katerina Kabassi, Maria Virvou and Efthimios Alepis

Abstract: Decision making theories seem very promising for improving human-computer interaction. However, the actual process of incorporating multi-criteria analysis into an intelligent user interface involves several development steps that are not trivial. Therefore, we have employed and tested the effectiveness of a unifying life-cycle framework that may be used for the application of many different multi-criteria decision making theories. The life-cycle framework is called MBIUI, and in this paper we show how we have used it for employing a multi-criteria decision making theory, called Simple Additive Weighting, in an affective bi-modal educational system. More specifically, we describe the experimental studies for designing, implementing and testing the decision making theory. The decision making theory has been adapted in the user interface for combining evidence from two different modes and providing affective interaction.
Download
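
Simple Additive Weighting itself, the decision theory named above, reduces to a weighted sum of normalized criterion values. The sketch below assumes benefit criteria and uses hypothetical weights and hypotheses; it does not reproduce the MBIUI framework.

def saw_rank(alternatives, weights):
    """Simple Additive Weighting: alternatives is {name: {criterion: value}}
    with benefit criteria; weights is {criterion: weight}, summing to 1."""
    maxima = {c: max(a[c] for a in alternatives.values()) for c in weights}
    scores = {
        name: sum(w * vals[c] / maxima[c] for c, w in weights.items())
        for name, vals in alternatives.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical criteria and candidate hypotheses about the user's state.
weights = {"certainty": 0.5, "severity": 0.3, "frequency": 0.2}
hypotheses = {"bored": {"certainty": 0.9, "severity": 0.2, "frequency": 0.7},
              "confused": {"certainty": 0.6, "severity": 0.8, "frequency": 0.4}}
print(saw_rank(hypotheses, weights))  # pick the highest-utility hypothesis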

Paper Nr: 553
Title:

HOW “DEVELOPER STORIES” IMPROVES ARCHITECTURE - Facilitating Knowledge Sharing and Embodiment, and Making Architectural Changes Visible

Authors:

Rolf N. Jensen, Niels Platz and Gitte Tjørnehøj

Abstract: Within the field of Software Engineering, the emergence of agile methods has been a hot topic since the late 90s. eXtreme Programming (XP) (Beck, 1999) was one of the first agile methods and is one of the most well known. However, research has pointed to weaknesses in XP regarding support for the development of viable architectures. To strengthen XP in this regard, a new practice, Developer Stories (Jensen et al., 2006), was introduced last year, mainly on the basis of theoretical argumentation. This paper reports on extensive experimentation with, and elaboration of, the new practice. Results from this experimentation show that using Developer Stories increases the likelihood of developing a viable architecture through a series of deliberate choices, by creating disciplined and recurring activities that 1) facilitate sharing and embodying knowledge about architectural issues, and 2) heighten the visibility of refactorings for both customers and developers.
Download

Paper Nr: 566
Title:

AN ONTOLOGICAL SW ARCHITECTURE FOR THE DEVELOPMENT OF COOPERATIVE WEB PORTALS

Authors:

Giacomo Bucci, Valeriano Sandrucci, Enrico Vicario and Saverio Mecca

Abstract: Ontological technologies comprise a rich framework of languages and components off the shelf, which devise a paradigm for the organization of SW architectures with high degree of interoperability, maintainability and adaptability. In particular, this fits the needs for the development of semantic web portals, where pages are organized as a generic graph, and navigation is driven by the inherent semantics of contents. We report on a pattern-oriented executable SW architecture for the construction of portals enabling semantic access, querying, and contribution of conceptual models and concrete elements of information. By relying on the automated configuration of an Object Oriented domain layer, the architecture reduces the creation of a cooperative portal to the definition of an ontological domain model.
Download

Paper Nr: 586
Title:

SPECIFICATION AND PROOF OF LIVENESS PROPERTIES IN B EVENT SYSTEMS

Authors:

Olfa Mosbahi and Jacques Jaray

Abstract: In this paper, we give a framework for defining an extension to the event B method. The event B method allows us to state only invariance properties, but in some applications, such as automated or distributed systems, fairness and eventuality properties must also be considered. We first extend the expressiveness of the event B method to deal with the specification of these properties. Then, we give a semantics of this extended syntax over traces, in the same spirit as the temporal logic of actions TLA. Finally, we give verification rules for these properties. We denote by temporal B model the B model extended with liveness properties. We illustrate our method on a case study related to an automated system.
Download

Paper Nr: 591
Title:

DIFFERENCING AND MERGING OF SOFTWARE DIAGRAMS - State of the Art and Challenges

Authors:

Sabrina Förtsch and Bernhard Westfechtel

Abstract: For a long time, fine-grained version control for software documents has been severely neglected. Typically, software configuration management systems support the management of text or binary files. Unfortunately, text-based tools for fine-grained version control are not adequate for software documents produced in earlier phases of the software life cycle. Frequently, these documents have a graphical syntax; therefore we will call them software diagrams. This paper discusses the current state of the art in fine-grained version control (differencing and merging) for software diagrams, with an emphasis on UML diagrams.
Download

Paper Nr: 608
Title:

VCODEX: A DATA COMPRESSION PLATFORM

Authors:

Kiem-phong Vo

Abstract: Vcodex is a platform to compress and transform data. A standard interface, data transform, is defined to represent any algorithm or technique to encode data. Although primarily geared toward data compression, a data transform can perform any type of processing including encryption, portability encoding and others. Vcodex provides a core set of data transforms implementing a wide variety of compression algorithms ranging from general purpose ones such as Huffman or Lempel-Ziv to structure-driven ones such as reordering fields and columns in relational data tables. Such transforms can be reused and composed together to build more complex compressors. An overview of the software and data architecture of Vcodex will be presented. Examples and experimental results show how compression performance beyond traditional approaches can be achieved by customizing transform compositions based on data semantics.
Download
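
The composition idea behind such a platform can be sketched generically: each transform exposes encode and decode operations, and a pipeline composes them, decoding in reverse order. The run-length transform below is an illustrative stand-in, not Vcodex's actual interface.

class RunLength:
    """A toy transform: run-length encodes bytes as (count, value) pairs."""
    def encode(self, data: bytes) -> bytes:
        out, i = bytearray(), 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i] and j - i < 255:
                j += 1
            out += bytes([j - i, data[i]])
            i = j
        return bytes(out)

    def decode(self, data: bytes) -> bytes:
        out = bytearray()
        for k in range(0, len(data), 2):
            out += bytes([data[k + 1]]) * data[k]
        return bytes(out)

class Pipeline:
    """Compose transforms: encode left-to-right, decode right-to-left."""
    def __init__(self, *transforms): self.transforms = transforms
    def encode(self, data):
        for t in self.transforms: data = t.encode(data)
        return data
    def decode(self, data):
        for t in reversed(self.transforms): data = t.decode(data)
        return data

codec = Pipeline(RunLength())            # more transforms could be chained here
blob = b"aaaabbbbbbcc"
assert codec.decode(codec.encode(blob)) == blob
print(len(codec.encode(blob)), "bytes after encoding")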

Short Papers
Paper Nr: 41
Title:

GOAL-ORIENTED AUTOMATIC TEST CASE GENERATORS FOR MC/DC COMPLIANCY

Authors:

Emine G. Aydal, Jim Woodcock and Ana Cavalcanti

Abstract: Testing is a crucial phase of the software development process. Certification standards such as DO-178B impose certain steps to be accomplished in the testing phase and certain testing coverage criteria to be met in order to certify software as Level A software. Modified Condition/Decision Coverage (MC/DC), listed as one of these requirements in DO-178B, is one of the most difficult targets for testers and software developers to achieve. This paper presents the state-of-the-art goal-oriented automatic test case generators and evaluates them in the context of MC/DC satisfaction. It also aims to guide the production of MC/DC-compliant test case generators by pointing out the strengths and weaknesses of the current tools and by highlighting further expectations.
Download
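
MC/DC itself can be made concrete with a small sketch: for each condition, a pair of test cases is sought that differ only in that condition and flip the decision outcome, demonstrating the condition's independent effect. The decision below is a hypothetical example, not one taken from DO-178B.

from itertools import product

def mcdc_pairs(decision, n_conditions):
    """Return, per condition, the input pairs proving its independent effect."""
    pairs = {i: [] for i in range(n_conditions)}
    for inputs in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            flipped = list(inputs)
            flipped[i] = not flipped[i]       # toggle only condition i
            if decision(*inputs) != decision(*flipped):
                pairs[i].append((inputs, tuple(flipped)))
    return pairs

decision = lambda a, b, c: a and (b or c)    # hypothetical decision
for cond, pairs in mcdc_pairs(decision, 3).items():
    print(f"condition {cond}: {pairs[0]}")   # one independence pair suffices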

Paper Nr: 57
Title:

SOFTWARE PROCESS CONVERSION RULES IN IMPPROS - Quality Models Conversion for a Software Process Implementation Environment

Authors:

Sandro Oliveira, Alexandre Vasconcelos and Tiago Soares

Abstract: Software process conversion is a technique based on mapping the relationships between the contents of quality norms/models. The basic premise of conversion is to adapt software processes without the effort needed to specify new models, while guaranteeing uniqueness and consistency. For a company to reach a given market, its software process has to be guided by the patterns defined by a norm, and if it envisages penetrating other markets, it may need to follow other, different norms. This paper presents a process to convert software processes using quality models/norms, and a discussion of some rules used to support the execution of this process in a software development context. This process is part of a software process implementation environment, called ImPProS, developed at CIn/UFPE – Center of Informatics/Federal University of Pernambuco.
Download

Paper Nr: 75
Title:

LINKING SOFTWARE QUALITY TO SOFTWARE ENGINEERING ACTIVITIES, RESULTS FROM A CASE-STUDY

Authors:

Joseph Trienekens, Rob Kusters and Dennis C. Brussel

Abstract: The specification of software quality characteristics, such as reliability and usability, is an important aspect of software development. However, of equal importance is the implementation of quality during the design and construction of the software. This paper links software quality specification to software quality implementation using a multi-criteria decision analysis technique. The approach is validated in a case study at the Royal Navy in the Netherlands.
Download

Paper Nr: 100
Title:

REFORMULATING COMPONENT IDENTIFICATION AS DOCUMENT ANALYSIS PROBLEM - Towards Automated Component Procurement

Authors:

Hans-Gerhard Gross, Marco Lormans and Jun Zhou

Abstract: One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can reformulate component identification as a text analysis problem and apply latent semantic analysis for automatically identifying suitable existing components in large repositories, based on the descriptions of required component features. In this article, we motivate our choice of this technique for feature identification, describe how it can be applied to feature tracing problems, and discuss the results that we achieved with the application of this technique in a number of case studies.
Download
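
Latent semantic analysis as used for this kind of matching can be sketched with a truncated SVD of a term-by-description matrix, against which a requirement is compared by cosine similarity. The toy corpus below is hypothetical; a real repository would at least add proper tokenization and TF-IDF weighting.

import numpy as np

# Hypothetical component descriptions and a required-feature query.
components = ["parse xml document tree", "encrypt message with key",
              "render xml tree view"]
query = "xml parsing"

vocab = sorted({w for d in components for w in d.split()})
A = np.array([[d.split().count(w) for d in components] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # keep k latent dimensions
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]

q = np.array([query.split().count(w) for w in vocab], float)
q_lsa = np.linalg.inv(np.diag(sk)) @ Uk.T @ q    # fold query into LSA space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

for name, doc_vec in zip(components, Vk.T):      # rank components by similarity
    print(f"{cos(q_lsa, doc_vec):.2f}  {name}")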

Paper Nr: 133
Title:

ASSL SPECIFICATION OF RELIABILITY SELF-ASSESSMENT IN THE AS-TRM

Authors:

E. Vassev, Olga Ormandjieva and J. Paquet

Abstract: This article is an introduction to our research towards a formal framework for tackling reliability in reactive autonomic systems with self-monitoring functionality. The Autonomic System Specification Language (ASSL) is a framework for formally specifying and generating autonomic systems. With ASSL, we can specify high-level behavior policies, which makes it a very appropriate language for specifying reliability models as part of overall system behavior. In this paper, we show how ASSL can be used to specify reliability self-assessment in the Autonomic System Timed Reactive Model (AS-TRM). The reliability self-assessment is performed at two levels: autonomic element (local) and system (global). It depends on the configuration of the system and is concerned with the uncertainty analysis of the AS-TRM as it evolves. An appropriate architecture for supporting reliability self-assessment, along with a communication mechanism to implement the reactive and autonomic behavior, is specified with ASSL.
Download

Paper Nr: 171
Title:

SCMM-TOOL - Tool for Computer Automation of the Information Security Management Systems

Authors:

Luis Enrique Sánchez Crespo, Daniel Villafranca Alberca, Eduardo Fernández-Medina and Mario Piattini

Abstract: For enterprises to be able to use information and communication technologies with guarantees, it is necessary to have an adequate security management system and tools which allow them to manage it. In addition, for a security management system to be feasible in small and medium-sized enterprises (from here on referred to as SMEs), its implementation and maintenance costs must be highly reduced. In this paper, we show the tool we have developed using our model for the development, implementation and maintenance of a security management system, adapted to the needs and resources of an SME. Furthermore, we state how this tool lets enterprises with limited resources manage their security system very efficiently. This approach is being directly applied to real cases, thus obtaining a constant improvement in its application.
Download

Paper Nr: 173
Title:

TEST FRAMEWORKS FOR ELUSIVE BUG TESTING

Authors:

William Howden and Cliff Rhyne

Abstract: Elusive bugs can be particularly expensive because they often survive testing and are released in a deployed system. They are characterized as involving a combination of properties. One approach to their detection is bounded exhaustive testing (BET). This paper describes how to implement BET using a variation of JUnit, called BETUnit. The idea of a BET pattern is also introduced. BET patterns describe how to solve certain problems in the application of BETUnit. Classes of patterns include BET test generation and BET oracle design. Examples are given of each.
Download
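
Bounded exhaustive testing can be illustrated independently of BETUnit: every input up to a small size bound is generated and checked against an oracle. The function under test, its deliberately planted bug, and the bound are all invented for this sketch.

from itertools import product

def insert_sorted(xs, v):                # code under test (deliberately buggy)
    for i, x in enumerate(xs):
        if v < x:
            return xs[:i] + [v] + xs[i:]
    return xs                            # bug: drops v when it is the largest

def bet(bound=4, values=range(3)):
    """Exhaustively test all sorted lists up to the size bound."""
    for n in range(bound + 1):           # all list sizes up to the bound
        for xs in product(values, repeat=n):
            xs = sorted(xs)
            for v in values:
                got = insert_sorted(list(xs), v)
                if got != sorted(list(xs) + [v]):   # oracle check
                    return f"counterexample: insert {v} into {xs} -> {got}"
    return "no failures within bound"

print(bet())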

Paper Nr: 207
Title:

RESOURCE SUBSTITUTION FOR THE REALIZATION OF MOBILE INFORMATION SYSTEMS

Authors:

Hagen Höpfner and Christian Bunse

Abstract: Recent advances in wireless technology have led to mobile computing, a new dimension in data communication and processing. Market observers predict an emerging market with millions of mobile users carrying small, battery-powered terminals equipped with wireless connections, and, as a result, a change in the way people use information resources. However, the realization of mobile information systems (mIS) is affected by the users’ need to handle complex data sets as well as by the restrictions of the devices and networks used. Hence, software engineering has to bridge the gap between both worlds and thus has to balance the given resources. Extensive wireless data transmission, which is expensive, slow, and energy-intensive, can, for example, be reduced if mobile clients cache received data locally. In this short paper we discuss which resources are substitutable, and how, in order to enable more complex, more reliable and more efficient mIS. To this end, we analyze the resources used for data management on mobile devices and show how they can be taken into account by software development approaches in order to implement mIS.
Download

Paper Nr: 209
Title:

INTEGRATING A DISTRIBUTED INSPECTION TOOL WITHIN AN ARTEFACT MANAGEMENT SYSTEM

Authors:

Andrea De Lucia, Fausto Fasano, Genny Tortora and Giuseppe Scanniello

Abstract: We propose a web-based inspection tool addressing the problem of software inspection within a distributed development environment. This tool implements an inspection method that tries to minimise the synchronous collaboration among team members, using an asynchronous discussion to resolve conflicts before the traditional synchronous meeting. The tool also provides automatic merging and conflict-highlighting functionalities to support the reviewers during the pre-meeting refinement phase. Information about the inspection progress, which can be a valuable support for making inspection-process-related decisions, is also provided. The inspection tool has been integrated within an artefact management system, thus allowing the planning, scheduling, and enactment of the inspection within the development process and integrating the review phase within the overall artefact lifecycle.
Download

Paper Nr: 522
Title:

A STATISTICAL NEURAL NETWORK FRAMEWORK FOR RISK MANAGEMENT PROCESS - From the Proposal to its Preliminary Validation for Efficiency

Authors:

Salvatore A. Sarcià, Giovanni Cantone and Victor R. Basili

Abstract: This paper enhances currently available formal risk management models and related frameworks by providing an independent mechanism for checking their results. It provides a way to compare the historical data on the risks identified by similar projects to the risks found by each framework. Based on direct queries to stakeholders, existing approaches provide a mechanism for estimating the probability of achieving software project objectives before the project starts (prior probability). However, they do not estimate the probability that objectives have actually been achieved when risk events have occurred during project development. This involves calculating the posterior probability that a project missed its objectives or, on the contrary, the probability that the project has succeeded. This paper provides existing frameworks with a way to calculate both prior and posterior probabilities. The overall risk evaluation, calculated from those two probabilities, can be compared to the evaluations that each framework has produced within its own process. Therefore, the comparison is performed between what those frameworks assumed and what the historical data suggested both before and during the project. This is a control mechanism because, if those comparisons do not agree, further investigations can be carried out. A case study is presented that provides an efficient way to deal with those issues by using Artificial Neural Networks (ANN) as a statistical tool (e.g., regression and probability estimation). That is, we show that ANNs can automatically derive both prior and posterior probability estimates from historical data. This paper shows the verification by simulation of the proposed approach.
Download

Paper Nr: 531
Title:

A COMPARISON OF STRUCTURED ANALYSIS AND OBJECT ORIENTED ANALYSIS - An Experimental Study

Authors:

Davide Falessi, Giovanni Cantone and Claudio Grande

Abstract: Despite the fact that the object-oriented paradigm is now widely adopted for software analysis, design, and implementation, there are still a large number of companies that continue to utilize the structured approach for software analysis and design. The fact is that the current worldwide agreement on object orientation is not supported by enough empirical evidence on the advantages and disadvantages of object orientation vs. other paradigms in different phases of the software development process. In this work we describe an empirical study focused on comparing the time required for analyzing a data management system using both object orientation and a structured technique. We chose the approach indicated by the Rational Unified Process and the Structured Analysis and Design Technique as instances of object-oriented and structured analysis techniques, respectively. The empirical study that we present considers both an uncontrolled and a controlled experiment with Master students. Its aim is to analyze the effects of those techniques on software analysis, both for software development from scratch and for enhancement maintenance. Results show no significant difference in the time required for developing or maintaining a software application with those two techniques, whatever the order of their application. However, we found two major tendencies regarding object orientation: 1) it is more sensitive to subjects’ peculiarities, and 2) it is able to provide some reusability advantages already at the analysis level. Since this result concerns a one-hour enhancement maintenance task, we expect significant benefits from using object orientation in the case of real-size extensions.
Download

Paper Nr: 538
Title:

A STABILITY AND EFFICIENCY ORIENTED RESCHEDULING APPROACH FOR SOFTWARE PROJECT MANAGEMENT

Authors:

Yujia Ge and Lijun Bai

Abstract: Rescheduling has gained more attention in recent years from researchers who focus their study on scheduling problems under uncertain situations, but it has not been widely explored in software engineering contexts. In this paper we propose a GA-based approach for rescheduling that applies a multi-objective fitness function considering both efficiency and stability to produce new schedules after managers take control actions to catch up with their initial schedules. We also conducted case studies using simulation data. The results show the effectiveness of the rescheduling method in supporting decision making in a dynamic environment.
Download

Paper Nr: 540
Title:

INCLUDING IMPROVEMENT OF THE EXECUTION TIME IN A SOFTWARE ARCHITECTURE OF LIBRARIES WITH SELF-OPTIMISATION

Authors:

Luis-Pedro García González, Javier Cuenca Muñoz and Domingo Giménez Cánovas

Abstract: The design of hierarchies of libraries helps to obtain modular and efficient sets of routines to solve problems of specific fields. An example is ScaLAPACK’s hierarchy in the field of parallel linear algebra. To facilitate the efficient execution of these routines, the inclusion of self-optimization techniques in the hierarchy has been analysed. The routines at a level of the hierarchy use information generated by routines from lower levels. But sometimes, the information generated at one level is not accurate enough to be used satisfactorily at higher levels, and a remodelling of the routines is necessary. A remodelling phase is proposed and analysed with a Strassen matrix multiplication.
Download
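
The Strassen multiplication mentioned above is itself a good example of a routine with a tunable, platform-dependent parameter: below some cutoff size, the classical product is faster, and that cutoff is exactly the kind of value a self-optimising library would remodel per platform. The sketch assumes square power-of-two matrices, and the cutoff value is arbitrary.

import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen matrix product with a classical-product base case."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                          # classical product below cutoff
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)   # the seven Strassen products
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)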

Paper Nr: 542
Title:

ON GENERATING TILE SYSTEM FOR A SOFTWARE ARCHITECTURE CASE OF A COLLABORATIVE APPLICATION SESSION

Authors:

Chafia Bouanaka, Aicha Choutri and Faiza Belala

Abstract: Tile logic, an extension of rewriting logic in which synchronization, coordination and interaction can be naturally expressed, is shown to be an appropriate formal semantic framework for software architecture specification. Based on this logic, we define a notion of dynamic connection between software components. Individual components are then viewed as entirely independent elements, free from any static interconnection constraints. We also complete the usual component description, expressed in terms of provided/required services, with a specification of the functionalities of such services. Starting from UML state/transition diagrams representing the requirements of the underlying distributed system, our objective is to offer a common semantic framework for the architectural description as well as the behavioural specification of that system. Direct consequences of the proposed approach are that dynamic reconfiguration and component mobility become straightforward aspects. A simple but comprehensive case study, a collaborative application session, is used to illustrate all stages of our proposed approach.
Download

Paper Nr: 547
Title:

A METHOD TO MODEL GUIDELINES FOR DEVELOPING RAILWAY SAFETY-CRITICAL SYSTEMS WITH UML

Authors:

Dieu O. Ossami, Jean-marc Mota, L. Thiry, J.-M. Perronne, J.-L. Boulanger and G. Mariano

Abstract: There is today an abundance of standards concerned with the development and certification of railway safety-critical systems. They recommend the use of different techniques to describe system requirements and to pursue safety strategies. One problem shared by these standards is that they only prescribe what should be done or used, but provide no guidance on how their recommendations can be fulfilled. The purpose of this paper is to investigate a methodology to model guidelines for building certifiable UML models that cater for the needs and recommendations of railway standards. The paper explores some of the major tasks that are typical of development guidelines and illustrates practical steps for achieving these tasks.
Download

Paper Nr: 549
Title:

COMPONENT BASED METHODOLOGY FOR QOS-AWARE NETWORK DESIGN

Authors:

Cédric Teyssié, David Espès and Zoubir Mammeri

Abstract: New services (such as VoIP) and their quality requirements have dramatically increased the complexity of the underlying networks. Quality of Service support is a challenge for next generation networks. Design methods and modeling languages can help reduce the complexity of the integration of QoS. UML is successfully used in several domains. In this paper, we propose a QoS component oriented methodology based on UML. This methodology reduces network-design complexity by separating design considerations into functional and non-functional parts. It also provides a design cycle and proposes abstraction means where QoS is integrated. As UML is not adapted for modeling non-functional elements, we combine UML strengths and a QoS specification language (QSL).
Download

Paper Nr: 551
Title:

AUTOMATIC TEST MANAGEMENT OF SAFETY-CRITICAL SYSTEMS: THE COMMON CORE - Behavioural Emulation of Hard-soft Components

Authors:

Antonio Grillo, Giovanni Cantone, Christian Di Biagio and Guido Pennella

Abstract: In order to solve the problems caused by a human-managed test process, the reference company for this paper - the Italian branch of a multinational organization working in the domain of large safety-critical systems - evaluated the opportunity, as offered by the major technologies on the market, of using automatic test management. That technology proved not sufficiently featured for the company’s quality and productivity improvement goals, and we were charged with investigating in depth and eventually satisfying the company’s test-management automation needs. Once we had transformed those goals into technical requirements and determined that they could conveniently be realized in a software system, we proceeded to analyze, construct, and eventually evaluate in the field the “Automatic Test Management” system, ATM. This paper is concerned with the ATM subsystem’s Common Core, CC. This allows the behavioral emulation of hard-soft components - as part of a distributed real-components scenario placed under one or more Unix-standard operating systems - once those behaviors are described using the Unified Modeling Language. This paper reports on the ATM-CC’s distinctive characteristics and gives an architecture overview. Results from a case study show that, in order to enact a given suite of tests with the ATM-CC, the amount of time required is more or less the same for the first test run, but becomes around ten times less for the following test runs, compared to the time required for managing the execution of those tests by hand.
Download

Paper Nr: 552
Title:

SIMULATION METHODOLOGIES FOR SCIENTIFIC COMPUTING - Modern Application Design

Authors:

Philipp Schwaha, Markus Schwaha, Rene Heinzl, Enzo Ungersboeck and Siegfried Selberherr

Abstract: We discuss methodologies to obtain solutions to complex mathematical problems derived from physical models. We present an approach based on series expansion, using discretization and averaging, and a stochastic approach. Various forms based on the Boltzmann equation are used as model problems. Each of the methodologies comes with its own strengths and weaknesses, which are briefly outlined. We also provide short code snippets to demonstrate implementations of key parts, that make use of our generic scientific simulation environment, which combines high expressiveness with high runtime performance.
Download

Paper Nr: 555
Title:

SECURE REFACTORING - Improving the Security Level of Existing Code

Authors:

Katsuhisa Maruyama

Abstract: Software security is an increasingly serious issue; nevertheless, a large number of software programs remain defenseless against malicious attacks. This paper proposes a new class of refactoring, called secure refactoring. Unlike traditional refactoring, it is not intended to improve the maintainability of existing code; instead, it helps programmers increase the protection level of sensitive information stored in the code without changing its observable behavior. In this paper, four secure refactorings of Java source code, and their respective mechanics based on static analysis, are presented. All transformations of the proposed refactorings can be automated in our refactoring browser, which also supports the application of traditional refactorings.
Download
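The paper's four refactorings target Java and are not reproduced here; as a hedged, language-neutral sketch of the kind of behaviour-preserving, security-improving transformation the term denotes, consider replacing an immutable string that retains a secret in memory with a mutable buffer that is wiped after use (all names below are hypothetical):

```python
import hashlib

# Before: the secret lingers in memory as an immutable str that the
# program cannot erase.
def check_password_before(password: str, stored_digest: bytes) -> bool:
    return hashlib.sha256(password.encode()).digest() == stored_digest

# After a "secure refactoring": same externally visible result, but the
# secret is held in a mutable bytearray and zeroed once it has been used.
def check_password_after(password: bytearray, stored_digest: bytes) -> bool:
    try:
        return hashlib.sha256(password).digest() == stored_digest
    finally:
        for i in range(len(password)):  # wipe the sensitive buffer
            password[i] = 0
```

A timing-safe comparison (e.g., hmac.compare_digest) would be a further rewrite in the same security-motivated, behaviour-preserving spirit.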

Paper Nr: 558
Title:

SOFTWARE DEFECT PREDICTION: HEURISTICS FOR WEIGHTED NAÏVE BAYES

Authors:

Burak Turhan and Ayşe Başar Bener

Abstract: Defect prediction is an important topic in software quality research. Statistical models for defect prediction can be built on project repositories, which store software metrics and defect information; this information is then matched with software modules. Naïve Bayes is a well-known, simple statistical technique that assumes the 'independence' and 'equal importance' of features, assumptions that do not hold in many problems. Nevertheless, Naïve Bayes achieves high performance on a wide spectrum of prediction problems. This paper addresses the 'equal importance' assumption of Naïve Bayes. We propose that, by means of heuristics, we can assign weights to features according to their importance and thereby improve defect prediction performance. We compare the performance of weighted Naïve Bayes and standard Naïve Bayes predictors on publicly available datasets. Our experimental results indicate that assigning weights to software metrics increases prediction performance significantly.
Download
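The weighting idea is easy to state concretely: in standard Naïve Bayes every feature contributes its log-likelihood equally, whereas the weighted variant scales each feature's log-likelihood by a weight. A minimal sketch assuming Gaussian likelihoods and placeholder weights (the paper's contribution is the heuristics that choose such weights, which are not reproduced here):

```python
import math

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fit(X, y):
    """Per-class priors and per-feature Gaussian parameters."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9
            stats.append((mean, var))
        model[c] = (prior, stats)
    return model

def predict(model, x, weights):
    """Weighted Naive Bayes: scale each feature's log-likelihood by w_j."""
    def score(c):
        prior, stats = model[c]
        return math.log(prior) + sum(
            w * gaussian_logpdf(v, m, s)
            for w, v, (m, s) in zip(weights, x, stats))
    return max(model, key=score)

# Toy data: two software metrics per module (e.g. size, complexity),
# with a defect label; the weights below are placeholders.
X = [[10, 1], [12, 2], [300, 9], [280, 11]]
y = [0, 0, 1, 1]
model = fit(X, y)
print(predict(model, [290, 10], weights=[1.0, 2.0]))  # -> 1
```

Setting every weight to 1.0 recovers standard Naïve Bayes, which makes the weighted predictor a strict generalization.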

Paper Nr: 565
Title:

NEW DESIGN TECHNIQUES FOR ENHANCING FAULT TOLERANT COTS SOFTWARE WRAPPERS

Authors:

Luping Chen and John May

Abstract: Component-based systems can be built by assembling components developed independently of the systems. Middleware code that connects the components is usually needed to assemble them into a system. The ordinary role of the middleware is simple glue code, but there is an opportunity to design it as a safety wrapper to control the integration of the components to help assure system dependability. This paper investigates some architectural designs for the safety wrappers using a nuclear protection system example. It integrates new fault-tolerant techniques based on diagnostic assertions and diverse redundancy into the middleware designs. This is an attractive option where complete trust in component reliability is impossible or costly to achieve.
Download

Paper Nr: 568
Title:

A FORMAL APPROACH TO DEPLOY HETEROGENEOUS SOFTWARE COMPONENTS IN A PLC

Authors:

Mohamed Khalgui and Emanuele Carpanzano

Abstract: This paper deals with an industrial control application built from components following different component-based technologies. This application, considered as a network of heterogeneous components, has to be deployed in a multi-tasking PLC and, as usual, has to respect the temporal constraints given in its specification. To deploy the components in feasible OS tasks of the controller, we define a formal component model allowing their homogeneous design. In particular, we enrich this model to unify well-known technologies, so that the application can then be considered as a network of homogeneous components. We propose to transform this network into a real-time task system with precedence constraints, in order to exploit previous results on real-time deployment.
Download
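The abstract stops at the transformation into a real-time task system; for a flavour of the feasibility question such a deployment then poses, here is the classic Liu and Layland rate-monotonic utilization test, a sufficient condition that ignores precedence constraints and so only approximates the paper's setting (the task parameters are invented):

```python
def rm_schedulable(tasks):
    """Sufficient rate-monotonic test: U <= n * (2**(1/n) - 1).

    tasks: list of (wcet, period) pairs, one per OS task.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

# Hypothetical tasks derived from three components (WCET, period in ms).
ok, u, bound = rm_schedulable([(2, 10), (3, 20), (5, 50)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```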

Paper Nr: 577
Title:

EVOLUTION STYLES IN PRACTICE - Refactoring Revisited as Evolution Style

Authors:

Olivier Le Goaer, Mourad Oussalah, Dalila Tamzalit and Abdelhak Djamel Seriai

Abstract: The evolution of pure software systems remains a time-consuming and error-prone activity. But whatever the domain considered, recurring practices can be captured and reused to alleviate the subsequent effort. In this paper we propose to treat domain-specific problem-solution pairs as first-class entities called “evolution styles”. As such, an evolution style is endowed with an instantiation mechanism and can be considered at different conceptual levels: applied to an arbitrary domain, an evolution style is intended to evolve a family of applications, whereas its instances evolve given applications. The evolution style's format is a component triple in which each component is highly reusable. In this way, evolution styles are scalable knowledge fragments able to support large and complex evolutions, readily available to be played and replayed.
Download

Paper Nr: 582
Title:

MACRO IMPACT ANALYSIS USING MACRO SLICING

Authors:

László Vidács, Árpád Beszédes and Rudolf Ferenc

Abstract: The expressiveness of the C/C++ preprocessing facility enables the development of highly configurable source code. However, the usage of language constructs like macros also bears the potential of producing highly incomprehensible and unmaintainable code, due to the flexibility and the “cryptic” nature of the preprocessor language. This could be overcome if suitable analysis tools were available for preprocessor-related issues; however, this is not the case (for instance, none of the modern Integrated Development Environments provides features to efficiently analyze and browse macro usage). A conspicuous problem in software maintenance is the correct (safe and efficient) management of change. In particular, for the aforementioned reasons, efficiently determining the impact of a change to a specific macro definition is not yet possible. In this paper, we describe a method for the impact analysis of macro definitions which differs significantly from previous approaches: we reveal and analyze the dependencies among macro-related program points using so-called macro slices.
Download
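The paper computes macro slices on a dedicated preprocessor representation; as a deliberately simplified illustration of the core idea, impact as reachability over dependencies between macro definitions, consider a toy '#define'-level dependency table (all macro names are hypothetical):

```python
from collections import defaultdict

def impact_set(uses, changed):
    """Forward slice: every macro that directly or transitively
    expands the changed macro is impacted by a change to it."""
    used_by = defaultdict(set)            # invert "X uses Y" edges
    for macro, deps in uses.items():
        for dep in deps:
            used_by[dep].add(macro)
    impacted, stack = set(), [changed]
    while stack:                          # depth-first reachability
        m = stack.pop()
        for user in used_by[m]:
            if user not in impacted:
                impacted.add(user)
                stack.append(user)
    return impacted

# Toy table: macro -> macros appearing in its replacement text.
uses = {"BUF_SIZE": {"PAGE_SIZE"},
        "RING_SIZE": {"BUF_SIZE"},
        "LOG_FMT": set()}
print(impact_set(uses, "PAGE_SIZE"))  # {'BUF_SIZE', 'RING_SIZE'}
```

Real macro slicing must additionally handle conditional compilation, undefinition and redefinition, which is exactly where a dedicated representation is needed.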

Paper Nr: 587
Title:

A FORMAL APPROACH FOR THE DEVELOPMENT OF AUTOMATED SYSTEMS

Authors:

Olfa Mosbahi, Leila Jemni Ben Ayed and Jacques Jaray

Abstract: This paper deals with the combined use of two verification approaches: theorem proving and model checking. We focus on the event B method, using its associated theorem-proving tool (Click'n Prove), and on the language TLA+, using its model checker TLC. Since the event B method is limited to invariance properties, we propose to apply the language TLA+ to verify liveness properties of a software behavior. We first extend the expressiveness of a B model (called a temporal B model) to deal with the specification of fairness and eventuality properties. Second, we give transformation rules from a temporal B model into a TLA+ module. In particular, we present our prototype system, called B2TLA+, which we have developed to support this transformation. Finally, we verify these properties with the TLC model checker.
Download

Paper Nr: 611
Title:

A PRODUCT LINE OF SOFTWARE REUSE COST MODELS

Authors:

Mustafa Korkmaz and Ali Mili

Abstract: In past work, we proposed a software reuse cost model that combines the relevant stakes and stakeholders in an integrated ROI-based model. In this paper we extend our earlier work in two directions: conceptually, by capturing aspects of the model that were heretofore unaccounted for; and practically, by proposing a product line that supports a wide range of cost modeling applications.
Download

Paper Nr: 613
Title:

A MODEL-DRIVEN ENGINEERING APPROACH TO REQUIREMENTS ENGINEERING - How These Disciplines May Benefit Each Other

Authors:

Begoña Moros, Cristina Vicente-Chicote and Ambrosio Toval

Abstract: The integration of Model Driven Engineering (MDE) principles into Requirements Engineering (RE) could be beneficial to both MDE approaches and RE. On the one hand, the definition of a requirements metamodel would allow requirements engineers to integrate all RE concepts in the same model and to know which elements are part of the RE process and how they are related. Besides, this requirements metamodel could be used as a common conceptual model for the requirements management tools supporting the RE process. On the other hand, this requirements metamodel could be related to other metamodels describing analysis and design artefacts. This would align requirements with models and, as a consequence, requirements could be more easily integrated into current MDE approaches. To achieve this, the traditional RE process, focused on a document-based requirements specification, should be changed into a requirements modelling process. Thus, in this paper we propose a requirements modelling language (metamodel) aimed at easing the integration of requirements into an MDE approach. This metamodel, called REMM, is the basis of a graphical requirements modelling tool also implemented as part of this work. This tool allows requirements engineers to depict all the elements involved in the RE process and to trace relationships between them.
Download

Paper Nr: 614
Title:

INTEGRATING SOFTWARE ARCHITECTURE CONCEPTS INTO THE MDA PLATFORM

Authors:

Alti Adel, Tahar Khammaci, Adel Smeda and Bennouar Djamal

Abstract: Architecture Description Languages (ADLs) provide an abstract representation of software systems. Achieving a concrete mapping of such a representation into an implementation is one of the principal concerns of MDA (Model Driven Architecture). Integrating ADLs within MDA confers on the MDA platform a higher level of abstraction and a degree of reuse of ADLs. However, the two have significantly different platform metamodels, which makes the definition of mapping rules complex; this complexity is clearly noticeable when some software architecture concepts cannot be easily mapped to the MDA platform. In this paper, we propose to integrate software architecture within MDA. We also define a strategy for direct transformation using a UML profile: both the software architecture model (PIM) and the MDA platform model (PSM) are represented in the UML meta-model, and transformation rules are then elaborated between the resulting UML meta-models. The goal is to automate the process of deriving the implementation platform from software architecture concepts.
Download

Paper Nr: 615
Title:

A SOFTWARE TOOL FOR REQUIREMENTS SPECIFICATION - On using the STORM Environment to Create SRS Documents

Authors:

Sergiu Dascalu, Eric Fritzinger, Kendra Cooper and Narayan Debnath

Abstract: STORM, presented in this paper, is a UML-based software engineering tool designed for the purpose of automating as much of the requirements specification phase as possible. The main idea of the STORM approach is to combine adequate requirements writing with robust use case modelling in order to expedite the process leading up to the actual design of the software. This paper presents a description of our approach to software requirements specification as well as an overview of STORM’s design concepts, organizing principles, and modes of operation. Also included are examples of the tool’s use, a comparison between STORM and similar CASE tools, and a discussion of needed features for software environments that support text aspects of requirements and use case modelling.
Download

Paper Nr: 632
Title:

ADDRESSING SECURITY REQUIREMENTS THROUGH MULTI-FORMALISM MODELLING AND MODEL TRANSFORMATION

Authors:

Miriam Zia, Ernesto Posse and Hans Vangheluwe

Abstract: Model-based approaches are increasingly used in all stages of complex systems design. In this paper, we use multi-formalism modelling and model transformation to address security requirements. Our methodology supports the verification of security properties using the model checker FDR2 on CSP (Communicating Sequential Processes) models. This low-level constraint checking is performed through model refinements, from a behavioural description of a system in the Statecharts formalism. The contribution of this paper lies in the combination of various formalisms and transformations between them. In particular, mapping Statecharts onto CSP models allows for combination of the deterministic system model with non-deterministic models of a system’s environment (including, for example, possible user attacks). The combination of system and environment models is used for model checking. To bridge the gap between these Statechart and CSP models, we introduce kiltera, an intermediate language that defines the system in terms of interacting processes. kiltera allows for simulation, real-time execution, as well as translation into CSP models. An e-Health application is used to demonstrate our approach.
Download

Paper Nr: 634
Title:

A CASE STUDY ON THE APPLICABILITY OF SOFTWARE RELIABILITY MODELS TO TELECOMMUNICATION SOFTWARE

Authors:

Hassan Artail, Fuad Mrad and Mohamad Mortada

Abstract: Faults can be inserted into software during development or maintenance, and some of these faults may persist even after integration testing. Our concern is quality assurance that evaluates the reliability and availability of the software system through analysis of failure data. These efforts involve estimation and prediction of the next time to failure, the mean time between failures, and other reliability-related parameters. The aim of this paper is to empirically apply a variety of software reliability growth models (SRGMs) found in the CASRE (Computer Aided Software Reliability Estimation) tool to real field failure data collected after the deployment of popular billing software used in the telecom industry. The obtained results are assessed and conclusions are drawn concerning the applicability of the different models to modeling faults encountered in such environments after the software has been deployed.
Download
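To make the model-fitting step concrete: one classic SRGM of the kind CASRE supports is the Goel-Okumoto model, whose mean value function is m(t) = a(1 - e^(-bt)), where a estimates the total fault content and b the fault detection rate. A sketch fitting it with SciPy; the failure counts below are invented, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean cumulative failures by time t: m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

# Made-up field data: weeks since deployment, cumulative failures.
t = np.array([1, 2, 4, 6, 9, 13, 18, 24], dtype=float)
m = np.array([4, 7, 12, 15, 19, 22, 24, 25], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, m, p0=(30.0, 0.1))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.3f}")
print(f"predicted failures by week 36: {goel_okumoto(36, a, b):.1f}")
```

Comparing how well several such models fit, and predict beyond, the observed data is essentially the assessment the paper performs across the CASRE model suite.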

Paper Nr: 28
Title:

UNDERSTANDING PRODUCT LINES THROUGH DESIGN PATTERNS

Authors:

Daniel Cabrero Moreno, Javier Garzás and Mario Piattini

Abstract: Many proposals concerning the design and implementation of Software Product Lines have been studied in the last few years. This work points out how and why different Design Patterns are used in the context of Product Lines. This is achieved by reviewing, over a given set of sources, how often those patterns appear in proposed solutions and research papers on Product Lines. This information helps us identify which specific problems need to be solved in the context of Product Lines. In addition, we discuss how this information can be useful for identifying gaps in new research.
Download

Paper Nr: 35
Title:

REQUIREMENTS DEFINITIONS OF REAL-TIME SYSTEM USING THE BEHAVIORAL PATTERNS ANALYSIS (BPA) APPROACH - The Elevator Control System

Authors:

Assem El-Ansary

Abstract: This paper presents a new event-oriented modeling approach, Behavioral Pattern Analysis (BPA). In BPA, events are considered the primary objects of the world model and are more effective alternatives to use cases in modeling and understanding functional requirements. The Event defined in BPA is a real-life conceptual entity that is unrelated to any implementation. BPA Behavioral Patterns are temporally ordered according to the sequence of real-world events. The major contributions of this research are: (i) the Behavioral Pattern Analysis (BPA) modeling approach itself, and (ii) validation of the hypothesis that BPA is a more effective alternative to Use Case Analysis (UCA) in modeling the functional requirements of human-machine safety-critical real-time systems.
Download

Paper Nr: 38
Title:

IMPLEMENTING A VALUE-BASED APPROACH TO SOFTWARE PROCESS AND PRODUCT ASSESSMENT

Authors:

Pasi Ojala

Abstract: Recently, more and more attention has been focused on the costs of SPI, as well as on the cost-effectiveness and productivity of software development. This study outlines the main concepts and principles of a value-based approach and presents an industrial case in which value assessment based on this approach has been used in practice. The results of the industrial case show that, even though there is still much to do in making the economics-driven view of software engineering complete, the value-based approach outlines a way towards a more comprehensive understanding of it. For companies, value assessment offers useful help when struggling with cost-effectiveness and productivity-related problems.
Download

Paper Nr: 44
Title:

HARDWARE PROJECT MANAGEMENT - What Can We Learn from the Software Development Process for Hardware Design?

Authors:

Rolf Drechsler and Andreas Breiter

Abstract: Nowadays the hardware development process is increasingly software-oriented. Hardware description languages (HDLs), like VHDL or Verilog, are used to describe the hardware at the register-transfer level (RTL) or at even higher levels of abstraction. Considering ASICs of more than 10 million gates and an HDL-to-gate ratio of approximately 1:10 to 1:100, i.e. one line of HDL code at the RTL corresponds to 10 to 100 gates in the netlist, the HDL description consists of several hundred thousand lines of code. While classical hardware design focuses purely on the development of efficient tools to support the designer, in industrial work processes the development cycle becomes more and more important. In this paper we discuss an approach in which known concepts from software engineering and project management are studied and transferred to the hardware domain. Several aspects are pointed out that should ensure high-quality designs, and the paper thereby presents a way of working towards a more robust design process through a tight integration of hardware design and project management. The intention of this work is not to provide an exhaustive discussion, but it addresses many points that, with increasing circuit complexity, will become more and more important for successful ASIC design.
Download

Paper Nr: 51
Title:

TOWARDS A UNIFIED SECURITY/SAFETY FRAMEWORK - A Design Approach to Embedded System Applications

Authors:

Miroslav Sveda and Radimir Vrba

Abstract: This paper presents a safety- and security-based approach to networked embedded system design that offers reusable design patterns for various domain-dedicated applications. After introducing the proper terminology, it deals with development support for industrial, sensor-based applications built from distributed components interconnected by the wired Internet and/or wireless sensor networks. The paper presents a dependability-driven approach to embedded network design for a class of Internet-based applications. It discusses an abstract framework stemming from embedded system networking technologies using wired and wireless LANs, and from the IEEE 1451.1 smart transducer interface standard, which supports client-server and publish-subscribe communication patterns with group messaging based on IP multicast, and which mediates safe and secure access to smart sensors through the Internet and ZigBee. A case study demonstrates how clients can effectively access groups of wireless smart pressure and temperature sensors and safety valves through the Internet using the developed system architecture, which respects the prescribed application-dependent safety and security requirements.
Download

Paper Nr: 62
Title:

TOWARDS A NEW CODE-BASED SOFTWARE DEVELOPMENT CONCEPT ENABLING CODE PATTERNS

Authors:

Klaus Meffert and Ilka Philippow

Abstract: Modern software development is driven by many critical forces, among them fast deployment requirements and easy-to-maintain code. These forces are contradicted by, among other things, the rising complexity of the technological landscape. We introduce a concept that helps mitigate these negative effects in code-based software development. The central elements of our work are explicit semantics in source code and newly introduced code pattern templates, which enable code transformations. Throughout this paper, the term code pattern covers architectural patterns, design patterns, and refactoring operations; enabling automated transformations means providing the means to execute such, possibly premature, transformations.
Download

Paper Nr: 68
Title:

TOWARDS A KNOWLEDGE BASE TO IMPROVE THE REUSABILITY OF DESIGN PATTERNS

Authors:

Cédric Bouhours, Hervé Leblanc and Christian Percebois

Abstract: In this paper, we propose to directly take the knowledge of experts into account during a design review activity. Such an activity requires the ability to analyze and transform models, in particular to inject design patterns. Our approach consists in identifying model fragments which can be replaced by design patterns. We name these fragments “alternative models” because they solve the same problem as the pattern, but with a more complex or different structure than the pattern. In order to classify and explain the design defects in this base of alternative models, we propose the concept of the strong point: a key design feature which permits the pattern to resolve a problem most efficiently.
Download

Paper Nr: 83
Title:

THE MISSING LAYER - Deficiencies in Current Rich Client Architectures, and their Remedies

Authors:

Brendan Lawlor and Jeanne Stynes

Abstract: There is an architectural deficit in most rich client applications currently being developed: in n-tier applications the presentation layer is represented as a single layer. This fits badly with business layers that are increasingly organized along Service Oriented Architecture lines. In n-tier systems in general, and SOA systems in particular, the client's role is to combine a number of services into a single application. Low-level patterns, mostly based on MVC, can support the design of individual components, each one communicating with a particular back-end service; however, no commonly understood pattern is currently evident that would allow these components to be combined into a loosely coupled application. This paper outlines a rich client architecture that addresses this gap by adding a client application layer.
Download

Paper Nr: 85
Title:

SOFTWARE ENGINEERING LESSONS LEARNED FROM DEVELOPING AND MAINTAINING WEBSITES

Authors:

Tammy H. Chan and Zhen Hua Liu

Abstract: Developing, maintaining and enhancing software features and functions for production websites are challenging software engineering activities. Many aspects of software engineering practice and methodology differ between developing software features and systems for 24x7 production websites and developing classical standalone software systems or client-server systems. This experience paper describes the software engineering lessons we have learned from developing, enhancing and maintaining software features for production websites, and summarizes the key software engineering principles and practices that are essential for delivering successful 24x7 e-commerce production websites.
Download

Paper Nr: 101
Title:

CLOSING THE BUSINESS-APPLICATION GAP IN SOA - Challenges and Solution Directions

Authors:

Boris Shishkov, Jan G. Dietz and Marten van Sinderen

Abstract: Adequately resolving the business-software gap in the context of SOA (Service-Oriented Architecture) appears to be a non-trivial task, mainly because of the dual essence of the service concept: (i) services are inevitably business-restricted because they operate in real-life environments; (ii) services are also technology-restricted because the software components realizing them have to obey the restrictions of their complex technology-driven environments. Hence, the existence of these two restriction directions makes the (SOA-driven) business-software alignment challenging – here current business-software mapping mechanisms can only play a limited role. With regard to this, the contribution of the current paper is two-fold: 1. it analyzes SOA and its actual challenges, from a business-software-alignment perspective, deriving essential SOA application desirable properties; 2. it proposes software services specification directions, particularly concerning the (SOA-driven) business-software mapping. This contribution is expected to be useful in current software development.
Download

Paper Nr: 105
Title:

A STUDY ON SOFTWARE PROJECT COACHING MODEL USING TSP IN SAMSUNG

Authors:

Taehee Gwak and Yoonjung Jang

Abstract: Reasonable planning of the project, monitoring of the project status, and controlling of the project with appropriate corrective actions are key factors for successful software project management. If the project leader gets help and support from an expert who has software project management know-how, instead of depending only on his or her own discretion, these activities can be conducted much more effectively. The TSP (Team Software Process) is a process framework developed to provide guidelines on software development and management activities for teams. Samsung Electronics introduced the PSP/TSP technology in 2003 to meet visualization and high-efficiency needs in software development. In this paper, we propose a software project coaching model based on our TSP/PSP experience to support the project leader and the team members efficiently, and we analyze the results and effects of applying it.
Download

Paper Nr: 163
Title:

A METHODOLOGY TO FINALIZE THE REQUIREMENTS FOR A PROJECT WITH MULTIPLE STAKEHOLDERS - Presenting Software Engineering Workshop as a Solution

Authors:

Ashutosh Parashar and Selvakumaran Mannappan

Abstract: Implementing software projects for large corporations more often than not involves a large number of stakeholders, each with their own set of requirements, which makes requirements finalization very difficult. The authors propose the Solution Envisioning Workshop (SEW) as a solution and present the practice in the context of a large project executed for a European banking giant. The project had a very large and diverse set of stakeholders: around 300 member banks as the client organizations, interfacing requirements with around ten separate systems/projects, and the active involvement of the organization's central departments. The paper elaborates on the approach taken towards implementing the SEW, the preparatory and follow-up activities, the benefits, the limitations and the lessons learnt. The authors conclude that the SEW approach creates better understanding and much faster requirements finalization. Quantitative and qualitative inputs are provided to corroborate the findings.
Download

Paper Nr: 164
Title:

SCHEME FOR COMPARING RESULTS OF DIVERSE SOFTWARE VERSIONS

Authors:

Viktor Mashkov and Jaroslav Pokorny

Abstract: The paper presents a scheme for comparing the results produced by diversely designed SW versions in order to select and deliver a presumably correct result. The scheme also makes it possible to determine all faulty versions of the SW and all faulty comparators. Compared to the majority voting scheme, it requires fewer result comparisons and is able, in most situations, to deliver presumably correct service even if the number of faulty SW versions is greater than the number of correct ones. The scheme is based on system-level diagnosis techniques, particularly on the comparison-based testing model. The proposed scheme can be used for designing fault-tolerant diverse servers and for improving the adjudicator in the N-version programming technique.
Download
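The paper's distributed scheme itself is not reproduced here; the sketch below is only a simplified comparison-based adjudicator in the same spirit: compare version outputs pairwise, group versions whose outputs agree, and deliver the result backed by the largest agreeing group, which can be correct even when faulty versions outnumber correct ones, provided the faulty versions disagree among themselves:

```python
def adjudicate(outputs, agree=lambda a, b: a == b):
    """Group versions by pairwise agreement of their outputs and
    return the result backed by the largest agreeing group."""
    groups = []                      # list of (result, [version ids])
    for vid, out in enumerate(outputs):
        for result, members in groups:
            if agree(out, result):
                members.append(vid)
                break
        else:
            groups.append((out, [vid]))
    result, members = max(groups, key=lambda g: len(g[1]))
    return result, members

# Five diverse versions: two correct (42), three faulty with
# disagreeing wrong answers. No strict majority exists, yet the
# correct pair still forms the largest agreeing group.
outputs = [42, 41, 42, 17, 99]
print(adjudicate(outputs))  # (42, [0, 2])
```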

Paper Nr: 165
Title:

PRIORITIZATION OF PROCESSES FOR SOFTWARE PROCESS IMPROVEMENT IN SMALL SOFTWARE ENTERPRISES

Authors:

Francisco Pino, Félix García Rubio and Mario Piattini

Abstract: In this article, a set of processes which are considered to be high-priority when initiating the implementation of a Software Process Improvement (SPI) project in Very Small Software Enterprises (VSEs) is presented. The objective is to present VSEs with a strategy for dealing with the first processes that must be considered when they undertake an SPI project. The processes proposed in this article are fundamentally based on the analysis and contrast of several pieces of research carried out by the COMPETISOFT project. The fundamental principle of the proposal is that process improvement must be connected with the other software process management responsibilities.
Download

Paper Nr: 184
Title:

E-LEARNING FOR HEALTH ISSUES BASED ON RULE-BASED REASONING AND MULTI-CRITERIA DECISION MAKING

Authors:

Katerina Kabassi, Maria Virvou and George Tsihrintzis

Abstract: The paper presents an e-learning system called INTATU, which provides education on Atheromatosis. Atheromatosis is a disease that is of interest not only to doctors, but also to common users without any medical background. For this purpose, the system maintains and processes information about the users’ interests and background knowledge and provides individualized learning for the domain of Atheromatosis. More specifically, the reasoning mechanism in INTATU uses a novel combination of rule-based reasoning and a multi-criteria decision making theory called SAW for selecting the theory topics that appear to be most appropriate for a particular user with respect to his/her background knowledge and interest.
Download
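SAW (Simple Additive Weighting) itself is standard: normalize each criterion, multiply by its weight, and sum to score each alternative. A minimal sketch with invented topic-selection criteria; INTATU's actual criteria, weights and rules come from its user model and are not reproduced here:

```python
def saw_rank(alternatives, weights):
    """Score = sum_i w_i * normalized criterion value (benefit criteria)."""
    n_criteria = len(weights)
    maxima = [max(a[i] for a in alternatives.values())
              for i in range(n_criteria)]
    scores = {
        name: sum(w * (vals[i] / maxima[i])
                  for i, w in enumerate(weights))
        for name, vals in alternatives.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical criteria per topic: (match to interests, fit to background).
topics = {"risk factors": (0.9, 0.4),
          "diagnosis":    (0.5, 0.9),
          "treatment":    (0.8, 0.6)}
print(saw_rank(topics, weights=(0.6, 0.4)))
# -> treatment, then risk factors, then diagnosis
```

In the system described, the rule-based component would first restrict the candidate topics and criteria before a SAW-style ranking is applied.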

Paper Nr: 205
Title:

MODEL-DRIVEN DEVELOPMENT OF GRAPHICAL TOOLS - Fujaba Meets GMF

Authors:

Thomas Buchmann, Alexander Dotor and Bernhard Westfechtel

Abstract: In this paper we describe and evaluate our combination of the Fujaba CASE tool with the Graphical Modeling Framework (GMF) of the Eclipse IDE. We created an operational model with Fujaba and used it as input for a GMF editor generation process. This allows us to introduce a new approach for generating fully operational models, including graphical editors for model representation and transformation. By making our development process explicit, this paper also acts as a guide for applying the approach to other projects.
Download

Paper Nr: 532
Title:

AN IMPROVEMENT TO THE MIXED MDA-SOFTWARE FACTORY APPROACH: A REAL CASE

Authors:

Gustavo Muñoz and Juan Carlos Granja

Abstract: In this article, we offer an improvement to the mixed MDA–software factory model, based on work by Gary Chastek's team, which enables the requirements of a product line to be satisfied more fully by applying the transformations required to generate the three components necessary for creating product families under the mixed approach. In order to validate the chosen representation and transformations, we focus on a real case which appeared in a previous article by Muñoz et al. (2006). An interesting option is to explore in greater depth the requirements of the family of programs that we want to create, and to obtain the product line, framework and specific language from them. For this purpose, we use the representation system of Chastek et al. (2001), which allows us to represent the requirements using three CIM models and a dictionary of specific terms. The mixed MDA–software factory approach (Muñoz, J., Pelechano, V.) enables the advantages of both approaches to be enjoyed, using the PIM models as a starting point.
Download

Paper Nr: 533
Title:

AN EXPERIMENTAL EVALUATION OF SOFTWARE PERFORMANCE MODELING AND ANALYSIS TECHNIQUES

Authors:

Julie Street and Robert Pettit IV

Abstract: In many software development projects, performance requirements are not addressed until after the application is developed or deployed, resulting in costly changes to the software or the acquisition of expensive high-performance hardware. Many techniques exist for conducting performance modeling and analysis during the design phase; however, there is little information on their effectiveness. This paper presents an experiment that compared, for a sample implementation, the performance predictions obtained from the UML Profile for Schedulability, Performance, and Time (SPT) paired with statistical simulation, and from coloured Petri nets (CPNs). We then discuss the results of applying these techniques.
Download
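As a toy instance of the activity being evaluated, predicting performance from a model before the system is built, the sketch below compares the analytic M/M/1 mean response time, 1/(μ - λ), with a quick statistical simulation of the same single-server FIFO queue; the arrival and service rates are invented and unrelated to the experiment's sample implementation:

```python
import random

LAM, MU = 8.0, 10.0           # arrival and service rates per second (invented)

# Analytic prediction from the M/M/1 queueing model.
print(f"analytic mean response time: {1.0 / (MU - LAM):.3f} s")

# Statistical simulation of the same single-server FIFO queue.
random.seed(1)
clock = server_free = 0.0
total_response, n = 0.0, 200_000
for _ in range(n):
    clock += random.expovariate(LAM)       # next arrival instant
    start = max(clock, server_free)        # wait if the server is busy
    server_free = start + random.expovariate(MU)
    total_response += server_free - clock  # response = waiting + service
print(f"simulated mean response time: {total_response / n:.3f} s")
```

Both numbers converge to about 0.5 s here; the interesting question the paper studies is how well such design-time predictions, whether analytic or simulated, track the measured behaviour of an actual implementation.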

Paper Nr: 548
Title:

COSA: AN ARCHITECTURAL DESCRIPTION META-MODEL

Authors:

Sylvain Maillard, Adel Smeda and Mourad Oussalah

Abstract: As software systems grow, their complexity increases dramatically; consequently, understanding and evolving them becomes a difficult task. To cope with this complexity, sophisticated approaches to describing the architecture of these systems are needed. Architecture description is becoming ever more visible as an important and explicit analysis and design activity in software development. The architecture of a software system can be described using either an architecture description language (ADL) or an object-oriented modeling language. In this article, we present a hybrid model, based on the two approaches, for describing the architecture of software systems. The principal contribution of this approach is, on the one hand, to extend ADLs with object-oriented concepts and mechanisms and, on the other hand, to describe connectors as first-class entities that can handle the complex dependencies among components.
Download
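To make "connectors as first-class entities" concrete, a minimal sketch with an invented API (not the COSA meta-model): the connector owns its interaction logic, here a simple glue transformation, rather than being a mere reference between components:

```python
class Component:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)

class Connector:
    """First-class connector: carries the interaction ('glue') logic
    itself, so the dependency between components can be modeled,
    inspected and refined like any other design entity."""
    def __init__(self, source, target, glue=lambda m: m):
        self.source, self.target, self.glue = source, target, glue

    def transmit(self, message):
        self.target.receive(self.glue(message))

sensor, logger = Component("sensor"), Component("logger")
link = Connector(sensor, logger, glue=lambda m: f"[{sensor.name}] {m}")
link.transmit("temperature=21C")
print(logger.inbox)  # ['[sensor] temperature=21C']
```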

Paper Nr: 563
Title:

A COMPUTERIZED TUTOR FOR ARCHITECTING SOFTWARE - Supporting the Creative Aspects of Software Development

Authors:

Jose L. Fernandez-Sanchez and Javier Carracedo-Pais

Abstract: CASE tools must be more user-oriented and support the creative problem-solving aspects of software engineering, as well as rigorous modelling based on standard notations such as UML. Knowledge-based systems, and particularly intelligent agents, provide the technology to implement user-oriented CASE tools. Here we present an intelligent agent implemented as a CASE tool module. The agent guides the software architect through the architecting process, suggesting the actions to be performed and the methodology rules that apply in the current problem context.
Download

Paper Nr: 571
Title:

A CASE STUDY OF DISTRIBUTED AND EVOLVING APPLICATIONS USING SEPARATION OF CONCERNS

Authors:

Hamid Mcheick, Hafedh Mili and Rakan Mcheik

Abstract: Researchers and practitioners have noted that the most difficult task is not developing software in the first place, but changing it afterwards: the software's requirements change, the software needs to execute more efficiently, and so on. For instance, changing the architecture of an application from stand-alone to distributed is still an issue. Generally speaking, distribution logic should be encapsulated in components along the boundaries drawn by aspect-oriented techniques (separation of concerns), where an aspect is defined as a software artefact that addresses a concern; such aspects may, however, be offered by a single object that changes its behaviour during its lifetime. Through a case study we investigate, firstly, which modifications are needed to transform a local application into a distributed one using a number of target platforms (RMI, EJBs, etc.), and secondly, which aspect-oriented development technique best accommodates the changes requested when integrating a new requirement such as distribution.
Download

Paper Nr: 575
Title:

DETECTING ASPECTUAL BEHAVIOR IN UML INTERACTION DIAGRAMS

Authors:

Amir-abdollahi Foumani and Constantinos Constantinides

Abstract: In this paper we discuss an approach to detect potential aspectual behavior in UML interaction diagrams. We use a case study to demonstrate how our approach can be realized: We adopt a production system to represent the static and dynamic behavior of a design model. Derivation sentences generated from the production representation of the dynamic model allow us to apply certain strategies in order to detect aspectual behavior which we categorize into “horizontal” and “vertical.” Our approach can aid developers by providing indications over their designs where restructuring may be desired.
Download

Paper Nr: 600
Title:

RE-USING EXPERIENCE IN INFORMATION SYSTEMS DEVELOPMENT

Authors:

Paulo Tomé, Ernesto Costa and Luís Amaral

Abstract: Information Systems Development (ISD) is an important organizational activity that generally involves the development of models. This paper describes a framework, supported by the Case-Based Reasoning (CBR) method, that enables experience to be reused in model development within the ISD process.
Download
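A minimal sketch of the CBR retrieve step such a framework relies on, using nearest-neighbour retrieval over invented ISD case descriptions; a real model-development case base would need a far richer similarity measure than numeric feature vectors:

```python
def retrieve(case_base, query, k=1):
    """Nearest-neighbour retrieval over numeric feature vectors."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(case_base, key=lambda case: distance(case[0], query))
    return ranked[:k]

# Hypothetical past projects: (team size, #entities, realtime?) -> modelling
# approach that worked; the new project is matched against them.
case_base = [((5, 40, 0), "ER + DFD models"),
             ((3, 10, 1), "statechart-centric models"),
             ((20, 200, 0), "enterprise data model")]
print(retrieve(case_base, query=(4, 35, 0)))  # closest past project
```

Retrieval is then followed by the reuse, revise and retain steps of the classic CBR cycle, which is where the framework's support for model development comes in.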

Paper Nr: 609
Title:

V3STUDIO: A COMPONENT-BASED ARCHITECTURE DESCRIPTION META-MODEL - Extensions to Model Component Behaviour Variability

Authors:

Cristina Vicente-Chicote, Diego Alonso and Franck Chauvel

Abstract: This paper presents a Model-Driven Engineering approach to component-based architecture description, which provides designers with two variability modelling mechanisms, both of them regarding component behaviour. The first one deals with how components perform their activities (the algorithm they follow), and the second one deals with how these activities are implemented, for instance, using different Commercial Off-The-Shelf (COTS) products. To achieve this, the basic V3 Studio meta-model, which allows designers to model both the structure and behaviour of component-based software systems, is presented. V3 Studio takes many of its elements from the UML 2.0 meta-model and offers three loosely coupled views of the system under development, namely: a structural view (component diagrams), a coordination view (state-machine diagrams), and a data-flow view (activity diagrams). The last two of them, concerning component behaviour, are then extended in this paper to incorporate the two variability mechanisms previously mentioned.
Download