ICSOFT 2023 Abstracts


Area 1 - Foundational and Trigger Technologies

Full Papers
Paper Nr: 33
Title:

Inspect-GPU: A Software to Evaluate Performance Characteristics of CUDA Kernels Using Microbenchmarks and Regression Models

Authors:

Gargi Alavani and Santonu Sarkar

Abstract: While GPUs are popular for High-Performance Computing (HPC) applications, the available literature is inadequate for understanding the architectural characteristics and quantifying the performance parameters of NVIDIA GPUs. This paper proposes “Inspect-GPU”, a software tool that uses a set of novel, architecture-agnostic microbenchmarks and a set of architecture-specific regression models to quantify the instruction latency, peak warps, and throughput of a CUDA kernel for a particular NVIDIA GPU architecture. Although memory access is critical for GPU performance, details of memory instruction execution, such as its runtime throughput, are not disclosed. We have developed a memory throughput model that provides crucial, previously unpublished insights; Inspect-GPU builds this throughput model for a particular GPU architecture. Inspect-GPU has been tested on multiple GPU architectures: Kepler, Maxwell, Pascal, and Volta. We have demonstrated the efficacy of our approach by comparing it with two popular performance analysis models. Using the results from Inspect-GPU, developers can analyze their CUDA applications, apply optimizations, and model a GPU architecture and its performance.

Short Papers
Paper Nr: 83
Title:

Study on Adversarial Attacks Techniques, Learning Methods and Countermeasures: Application to Anomaly Detection

Authors:

Anis Bouaziz, Manh-Dung Nguyen, Valeria Valdés, Ana R. Cavalli and Wissam Mallouli

Abstract: Adversarial attacks on AI systems are designed to exploit vulnerabilities in AI algorithms that can be used to manipulate the output of the system, resulting in incorrect or harmful behavior. They can take many forms, including manipulating input data, exploiting weaknesses in the AI model, and poisoning the training samples used to develop the AI model. In this paper, we study different types of adversarial attacks, including evasion, poisoning, and inference attacks, and their impact on AI-based systems from different fields. A particular emphasis is placed on cybersecurity applications, such as Intrusion Detection Systems (IDS) and anomaly detection. We also describe different learning methods that allow us to understand how adversarial attacks work using eXplainable AI (XAI). In addition, we discuss the current state-of-the-art techniques for detecting and defending against adversarial attacks, including adversarial training, input sanitization, and anomaly detection. Furthermore, we present a comprehensive analysis of the effectiveness of different defense mechanisms against different types of adversarial attacks. Overall, this study provides a comprehensive overview of the challenges and opportunities in the field of adversarial machine learning, and serves as a valuable resource for researchers, practitioners, and policymakers working on AI security and robustness. An application for anomaly detection, in particular malware detection, is presented to illustrate several concepts discussed in the paper.

Paper Nr: 66
Title:

TERA-Scaler for a Proactive Auto-Scaling of e-Business Microservices

Authors:

Souheir Merkouche and Chafia Bouanaka

Abstract: In this research work, we present a novel multicriteria auto-scaling strategy aimed at reducing the operational costs of microservice-based e-business systems in the cloud. Our proposed solution, TERA-Scaler, is designed to be dependency-aware and to minimize resource consumption while maximizing system performance. To achieve these objectives, we adopt a proactive formal approach that leverages predictive techniques to anticipate the future state of the system components, enabling the system to scale earlier to handle future loads. We implement the proposed auto-scaling process for e-business microservices using Kubernetes and conduct experiments to evaluate the performance of our approach. The results show that TERA-Scaler outperforms the Kubernetes horizontal pod autoscaler, achieving a 39.5% reduction in response time and demonstrating the effectiveness of our proposed strategy.
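The proactive scaling decision described in the abstract (forecast the load, then size the deployment for the forecast rather than the current demand) can be sketched as follows. The linear-trend forecast and per-pod capacity below are illustrative stand-ins, not TERA-Scaler's actual predictive model:

```python
import math

def predicted_load(history, horizon=1):
    """Naive linear-trend forecast of requests/s; a stand-in for the
    predictive component (the paper's actual predictor is not shown here)."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    trend = history[-1] - history[-2]
    return max(0.0, history[-1] + horizon * trend)

def target_replicas(history, capacity_per_pod, min_pods=1, max_pods=10):
    """Scale ahead of demand: provision for the forecast load so pods
    are ready before the spike arrives, unlike reactive autoscaling."""
    load = predicted_load(history)
    return min(max_pods, max(min_pods, math.ceil(load / capacity_per_pod)))

# Load rising 100 -> 140 req/s suggests ~180 req/s next step.
print(target_replicas([100.0, 140.0], capacity_per_pod=50.0))  # -> 4
```

A reactive autoscaler sizing for the current 140 req/s would provision only 3 pods and scale up after the spike hits; the forecast-based decision provisions the fourth pod one step earlier.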

Paper Nr: 120
Title:

A Process Mining Methodology for People Analytics: With a Case Study on Recruitment Analysis

Authors:

Emiel Caron and Nijs Niessen

Abstract: In today’s competitive business environment, acquiring data about Human Resource (HR) processes and optimizing operational excellence are among the main objectives. Process mining, as a specific form of HR Analytics, aims at translating data stored in an organization’s HR information systems into insights about its HR processes. These insights are then translated into improvements to HR processes and into better compliance with rules and regulations. However, so far there is no concrete and comprehensive framework for the execution of process mining analyses that HR professionals can use in practice. In this paper, we develop a methodology for HR Analytics based on PM²: a step-by-step method with best practices and approaches for process mining projects in the HR domain. Additionally, we establish definitions, such as the HR event log and the business process, that capture the specifics of the HR domain. Finally, we demonstrate the effectiveness of the proposed methodology by applying it to a case study on the recruitment process. The results show that the methodology successfully identifies areas for improvement and provides insights that can enhance the overall HR recruitment process.

Paper Nr: 141
Title:

A Blockchain Self-Sovereign Solution for Secure Generation, Exchange and Management of User Identity Data

Authors:

Nicolae Ghibu, Augustin Jianu, Alexandru Lupascu and Ştefan Popescu

Abstract: In this day and age, the need for a secure and easy-to-use communication system between individuals and service providers has grown past the point of being a matter of comfort and has become a necessity. The traditional paradigm for such a system used to be a centralized digital model, where the platform owner was the main trusted third party and maintained a log for each individual. This manner of providing and exchanging identity data, through centralized digital entities, is inefficient in terms of duplication at best, and presents considerable risks for the users at worst. The proposed system aims to be a practical solution to these problems: a self-sovereign identity scheme that can be used alike by governments, businesses, and users who need to verify a claim.

Area 2 - Software Engineering and Systems Development

Full Papers
Paper Nr: 15
Title:

Towards Resolving Security Smells in Microservices, Model-Driven

Authors:

Philip Wizenty, Francisco Ponce, Florian Rademacher, Jacopo Soldani, Hernán Astudillo, Antonio Brogi and Sabine Sachweh

Abstract: Resolving security issues in microservice applications is crucial, as many IT companies rely on microservices to deliver their core business. Security smells denote possible symptoms of such security issues. However, detecting security smells and reasoning about how to resolve them through refactoring is complex and costly, mainly because of the intrinsic complexity of microservice architectures. This paper presents a first step towards supporting the model-driven resolution of microservices’ security smells. The proposed method relies on LEMMA to model microservice applications, suitably extending LEMMA itself to enable the modeling of microservices’ security aspects. The method then processes LEMMA models to automatically detect security smells in the modeled microservice applications and to recommend the refactorings known to resolve the identified smells. To assess the feasibility of the proposed method, this paper also introduces a proof-of-concept implementation of the LEMMA-based, automated detection and refactoring of microservices’ security smells.

Paper Nr: 39
Title:

A 3D Descriptive Model for Designing Multimodal Feedbacks in any Virtual Environment for Gesture Learning

Authors:

Djadja D. Djadja, Ludovic Hamon and Sébastien George

Abstract: This paper addresses the problem of creating and reusing pedagogical feedback, in Virtual Learning Environments (VLE), adapted to the needs of teachers for gesture learning. One of the main strengths of VLEs is their ability to provide multimodal (i.e. visual, haptic, audio, etc.) feedback that helps learners evaluate their skills, the task's progress, or its correct execution. The feedback design strongly depends on the VLE and the pedagogical strategy. In addition, past studies mainly focus on the impact of the feedback modality on the learning situation, without considering other design elements (e.g. triggering rules, features of the motion to learn, etc.). However, most existing gesture-based VLEs are not editable without IT knowledge and therefore fail to accommodate the evolution of pedagogical strategies. Consequently, this paper presents the GEstural FEedback EDitor (GEFEED), which allows non-IT teachers to create multimodal pedagogical feedback in any VLE developed under Unity3D. This editor operationalises a three-dimensional descriptive model (i.e. the feedback's virtual representation, its triggering rules, and the 3D objects involved) of a pedagogical feedback dedicated to gesture learning. Five types of feedback are proposed (i.e. visual color or text, audio from a file or from text, and haptic vibration), and each can be associated with four kinds of triggers (i.e. time, contact between objects, static spatial configuration, motion metric). In the context of a dilution task in biology, an experimental study was conducted in which teachers generated feedback according to pre-defined or freely chosen pedagogical objectives. The results mainly show: (a) the acceptance of GEFEED and the underlying model, (b) the most used types of modalities (i.e. visual color, vibration, audio from text) and triggering rules (i.e. motion metric, spatial configuration, and contact), and (c) the teachers' satisfaction in reaching their pedagogical objectives.

Paper Nr: 50
Title:

Indentation in Source Code: A Randomized Control Trial on the Readability of Control Flows in Java Code with Large Effects

Authors:

Johannes Morzeck, Stefan Hanenberg, Ole Werger and Volker Gruhn

Abstract: Indentation is a well-known principle for writing code. It is taught to programmers and applied in software projects. The typical argument for indentation is that it makes code more readable. However, a look into the literature reveals that the scientific foundation for indentation is rather weak. The present work introduces a four-factor experiment focused on indentation in control flows. In the experiment, 20 participants (10 students and 10 professional developers) were asked to determine the output of given Java code consisting of if-statements and printouts. The time required to answer the question correctly was measured. The experiment reveals that indentation has a strong (p < .001) and large (ηp² = .832) positive effect on readability in terms of answering time. On average, participants required 179% more time on non-indented code to answer the question (with the different treatment combinations varying on average between 142% and 269%). Additionally, participants were asked about their subjective impressions of the tasks using the standardized NASA TLX questionnaire (using the categories mental demand, performance, effort, and frustration). It turned out that participants subjectively perceived non-indented code more negatively with respect to all categories (p < .001, .4 < ηp² < .79).
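The experimental treatment (the same Java tokens presented with and without indentation) can be reproduced mechanically. The stimulus below is an illustrative snippet in the spirit of the study's tasks, not one of its actual materials:

```python
# A Java-like stimulus of nested if-statements and printouts, similar in
# spirit to the experiment's tasks (the actual stimuli are not reproduced here).
indented = """\
if (a > 0) {
    if (b > 0) {
        System.out.println("both");
    } else {
        System.out.println("only a");
    }
}"""

# The non-indented treatment: identical tokens, all leading whitespace removed.
non_indented = "\n".join(line.lstrip() for line in indented.splitlines())

print(non_indented)
```

Keeping every token identical and varying only the leading whitespace is what lets the experiment attribute differences in answering time to indentation alone.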

Paper Nr: 59
Title:

Towards Integration of Sustainable User Experience Aspects in Systems Design: A Human-Centered Framework

Authors:

Urooj Fatima and Katrien De Moor

Abstract: The usage of software systems has immense impacts on human psychological well-being (a primary user experience outcome). The World Health Organization refers to well-being as “a positive state”, encompassing, e.g., a good quality of life. Studies have shown that human well-being depends on the satisfaction of certain psychological needs. However, software system development processes normally capture the requirements needed to fulfill the purpose of the system itself, not the psychological requirements of the humans who use and interact with these systems on an everyday basis. To address this challenge, this paper contributes the Sustainable User eXperiences Enabled Human-centered (SUXEH) framework, which addresses human psychological needs explicitly as one of the main modelling efforts in the early stages of the development process. The framework does so in a way that eases the integration of sustainable user experience aspects (mediated by human needs) into the systems design phase of the overall development process. The framework is illustrated using a case study, the Taxi System.

Paper Nr: 84
Title:

Multi-Output Learning for Predicting Evaluation and Reopening of GitHub Pull Requests on Open-Source Projects

Authors:

Peerachai Banyongrakkul and Suronapee Phoomvuthisarn

Abstract: GitHub’s pull-based development model is widely used by software development teams to manage software complexity. Contributors create pull requests for merging changes into the main codebase, and integrators review these requests to maintain quality and stability. However, a high volume of pull requests can overwhelm integrators, causing feedback delays. Previous studies have built predictive models using traditional machine learning techniques with tabular data, but these may lose meaningful information. Additionally, relying solely on acceptance and latency predictions may not be sufficient for integrators, as reopened pull requests add maintenance costs and burden already-busy developers. This paper proposes a novel multi-output deep learning-based approach that predicts the acceptance, latency, and reopening of pull requests at an early stage, effectively handling various data sources, including tabular and textual data. Our approach also applies SMOTE and VAE techniques to address the highly imbalanced nature of pull request reopening. We evaluate our approach on 143,886 pull requests from 54 open-source projects across four well-known programming languages. The experimental results show that our approach significantly outperforms the randomized baseline. Moreover, over the existing approach, it improves accuracy by 8.68%, precision by 1.01%, recall by 11.49%, and F1-score by 6.77% in acceptance prediction; MMAE by 6.07% in latency prediction; and balanced accuracy by 9.43%, AUC by 9.37%, and TPR by 30.07% in reopening prediction.

Paper Nr: 100
Title:

Source-Code Embedding-Based Software Defect Prediction

Authors:

Diana-Lucia Miholca and Zsuzsanna Oneţ-Marian

Abstract: Software defect prediction is an essential software development activity, a highly researched topic, and yet still a difficult problem. One of the difficulties is that the most prevalent software metrics are insufficiently relevant for predicting defects. In this paper we propose the use of Graph2Vec embeddings, learnt from the source code in an unsupervised manner, as the basis for defect prediction. The reliability of the Graph2Vec embeddings is compared to that of alternative embeddings based on Doc2Vec and LSI through a study performed on 16 versions of Calcite, using three classification models: FastAI, as a deep learning model; Multilayer Perceptron, as an untuned conventional model; and Random Forests with hyperparameter tuning, as a tuned conventional model. The experimental results suggest a complementarity of the Graph2Vec-, Doc2Vec- and LSI-based embeddings, with their combination leading to the best performance for most software versions. When comparing the three classifiers, the empirical results highlight the superiority of the tuned Random Forests over FastAI and the Multilayer Perceptron, which confirms the power of hyperparameter optimization.

Short Papers
Paper Nr: 16
Title:

Incremental Reliability Assessment of Large-Scale Software via Theoretical Structure Reduction

Authors:

Wenjing Liu, Zhiwei Xu, Limin Liu and Yunzhan Gong

Abstract: Quality assurance and behavior prediction for large-scale software systems are highly important, as software systems are becoming prevalent in almost all areas of human activity and typically include a large number of modules. To continuously offer significant changes or major improvements over an existing system, software upgrades are inevitable, which makes it harder to assess the reliability and guarantee the quality of the large-scale system. Existing reliability assessment methods cannot continuously yet effectively assess software reliability because the program structure of the software is not taken into account to drive the assessment process. It is therefore highly desirable to estimate software reliability in an incremental way. This paper incorporates theoretical sequentialization and reduction of the program structure into sampling-based software reliability evaluation. Specifically, we leverage importance sampling to evaluate the reliability rates of sequence, branch, and loop structures in the software, as well as the transition probabilities among these structures. In addition, we sequentialize program structures to support the aggregation of reliability assessment results corresponding to different structures. Finally, a real-world case study is provided as a practical application of the proposed incremental assessment model.
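The importance-sampling idea the abstract leverages can be sketched on a single Bernoulli failure model: rare failures are drawn from a proposal distribution that makes them frequent, and each failure draw is reweighted by the likelihood ratio. The failure rates below are illustrative, not values from the paper:

```python
import random

def is_failure_prob(p_true, q_proposal, n=100_000, seed=42):
    """Importance-sampling estimate of a rare failure probability:
    sample from a proposal q that makes failures frequent, and weight
    each observed failure by the likelihood ratio p/q. Successes
    contribute zero to the failure indicator, so only failures count."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < q_proposal:            # failure under the proposal
            total += p_true / q_proposal         # indicator * importance weight
    return total / n

# True failure rate 0.001 sampled via a proposal with rate 0.1:
# ~10,000 weighted failure observations instead of ~100 naive ones.
print(round(is_failure_prob(0.001, 0.1), 4))
```

With naive sampling at p = 0.001, most runs of 100,000 trials see only about a hundred failures; the proposal concentrates samples on the rare event and the weights keep the estimator unbiased, which is the same rationale for applying it to rarely exercised branch and loop structures.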

Paper Nr: 18
Title:

Execution Patterns for Quantum Applications

Authors:

Daniel Georg, Johanna Barzen, Martin Beisel, Frank Leymann, Julian Obst, Daniel Vietz, Benjamin Weder and Vladimir Yussupov

Abstract: Continuously evolving quantum service offerings vary in the development and deployment requirements they impose on quantum application developers. Further, since quantum applications often require classical pre- and post-processing steps, expertise in cloud service models, integration, and deployment automation is needed in addition to quantum computing knowledge. Thus, to reduce the required complexity and management overhead, applications often need to be tailored to the quantum offerings suited for the desired execution scenario. However, clear guidelines that facilitate deciding between diverse quantum offerings are currently missing. In this work, we bridge this gap by (i) documenting five patterns that capture different execution semantics for quantum applications. Furthermore, we (ii) analyze existing quantum offerings and document their support for the captured patterns to facilitate the decision-making process when implementing quantum applications.

Paper Nr: 22
Title:

Semantic Coverage: Measuring Test Suite Effectiveness

Authors:

Samia Al Blwi, Amani Ayad, Besma Khaireddine, Imen Marsit and Ali Mili

Abstract: Several syntactic measures have been defined in the past to assess the effectiveness of a test suite: statement coverage, condition coverage, branch coverage, path coverage, etc. There is ample analytical and empirical evidence that these are imperfect measures: exercising all of a program’s syntactic features is neither necessary nor sufficient to ensure test suite adequacy; not to mention that it may be impossible to exercise all the syntactic features of a program (e.g., unreachable code). Mutation scores are often used as reliable measures of test suite effectiveness, but they have issues of their own: some mutants may survive because they are equivalent to the base program, not because the test suite is inadequate; the same mutation score may mean vastly different things depending on whether the killed mutants are distinct from each other or equivalent; and the same test suite and the same program may yield different mutation scores depending on the mutation operators that we use. Fundamentally, whether a test suite T is adequate for a program P depends on the semantics of the program, the specification that the program is tested against, and the property of correctness that the program is tested for (total correctness, partial correctness). In this paper we present a formula for the effectiveness of a test suite T which depends exactly on the semantics of P, the correctness property that we are testing P for, and the specification against which this correctness property is tested; it depends neither on the syntax of P nor on any mutation experiment we may run. We refer to this formula as the semantic coverage of the test suite, and we investigate its properties.

Paper Nr: 26
Title:

Can ChatGPT Generate Code Tasks? An Empirical Study on Using ChatGPT for Generating Tasks for SQL Queries

Authors:

Ole Werger, Stefan Hanenberg, Ole Meyer, Nils Schwenzfeier and Volker Gruhn

Abstract: It is now widely accepted that ML models can solve tasks that deal with the generation of source code. It is therefore interesting to know whether the corresponding tasks can be generated as well. In this paper, we evaluate how well ChatGPT can generate tasks that ask for simple SQL statements. To do this, ChatGPT generated tasks for 10 different database schemas at three difficulty levels (easy, medium, hard). The generated tasks were then evaluated for suitability and difficulty by raters experienced in exam correction. With substantial inter-rater agreement (α=.731), 90.67% of the tasks were considered appropriate (p<.001). However, while raters agreed that tasks ChatGPT considers more difficult are actually more difficult (p<.001), there is in general no agreement between ChatGPT’s task difficulty and the rated difficulty (α=.310). Additionally, we checked in an N-of-1 experiment whether the use of ChatGPT helped in the design of exams. It turned out that ChatGPT increased the time required to design an exam by 40% (p=.036; d=-1.014). Altogether, the present study rather raises doubts about whether ChatGPT, in its current version, is a practical tool for the design of source code tasks.
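A hypothetical instance of what such a generated task might look like, with a reference solution executed against an in-memory SQLite database. The schema, data, and task text below are invented for illustration; the study's actual schemas and tasks are not reproduced here:

```python
import sqlite3

# Hypothetical schema and sample rows, illustrative of the kind of
# databases the study had ChatGPT generate tasks for.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
    INSERT INTO employees VALUES (1, 'Ada', 5200), (2, 'Linus', 4300),
                                 (3, 'Grace', 6100);
""")

# An "easy"-level task as the study's raters might receive it, plus a
# reference solution an exam corrector could check answers against.
task = "Easy: list the names of all employees earning more than 5000."
reference_solution = (
    "SELECT name FROM employees WHERE salary > 5000 ORDER BY name;"
)

rows = conn.execute(reference_solution).fetchall()
print(task)
print(rows)  # -> [('Ada',), ('Grace',)]
```

Having an executable reference solution per task is one way raters can verify that a generated task is actually solvable against its schema, which is part of what "suitability" captures.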

Paper Nr: 28
Title:

Exploiting Relations, Sojourn-Times, and Joint Conditional Probabilities for Automated Commit Classification

Authors:

Sebastian Hönel

Abstract: The automatic classification of commits can be exploited for numerous applications, such as fault prediction or determining maintenance activities. Additional properties, such as parent-child relations or sojourn times between commits, were not previously considered for this task. However, such data cannot be leveraged well by traditional machine learning models, such as Random Forests. Suitable models are, e.g., Conditional Random Fields or recurrent neural networks. We reason about the Markovian nature of the problem and propose models to address it. The first model is a generalized dependent mixture model, facilitating the Forward algorithm for 1st- and 2nd-order processes, using maximum likelihood estimation. We then propose a second, non-parametric model that uses Bayesian segmentation and kernel density estimation and can be effortlessly adapted to work with nth-order processes. Using an existing dataset with labeled commits as ground truth, we extend this dataset with relations between and sojourn times of commits, first re-engineering the labeling rules and reaching a high agreement between labelers. We show the strengths and weaknesses of each kind of model and demonstrate their ability to outperform the state of the art in automated commit classification.
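The Forward algorithm for a 1st-order process, the kind of Markovian structure the abstract refers to, can be sketched over hidden maintenance activities with an observable proxy per commit. All probabilities below are toy numbers for illustration, not parameters estimated in the paper:

```python
# Hidden maintenance activities and toy model parameters (illustrative only).
states = ("adaptive", "corrective", "perfective")
init  = {"adaptive": 0.5, "corrective": 0.3, "perfective": 0.2}
trans = {"adaptive":   {"adaptive": 0.6, "corrective": 0.2, "perfective": 0.2},
         "corrective": {"adaptive": 0.3, "corrective": 0.5, "perfective": 0.2},
         "perfective": {"adaptive": 0.3, "corrective": 0.2, "perfective": 0.5}}
# Observable proxy: whether a commit's diff is "small" or "large".
emit  = {"adaptive":   {"small": 0.3, "large": 0.7},
         "corrective": {"small": 0.8, "large": 0.2},
         "perfective": {"small": 0.5, "large": 0.5}}

def forward(observations):
    """Probability of the observation sequence, summed over all
    hidden-state paths (the classic alpha recursion)."""
    alpha = {s: init[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

print(round(forward(["small"]), 2))  # -> 0.49
```

The recursion is what a flat classifier such as a Random Forest cannot express: the probability of each commit's activity depends on the inferred activity of its predecessor, not only on the commit's own features.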

Paper Nr: 35
Title:

Understanding Compiler Effects on Clone Detection Process

Authors:

Lerina Aversano, Mario L. Bernardi, Marta Cimitile, Martina Iammarino and Debora Montano

Abstract: Copying and pasting code snippets, intentionally or not, is a very common activity in software development. This practice has both positive and negative aspects: it saves time, but it increases software maintenance costs. Moreover, the copied code often changes due to bug fixes or refactorings, which could affect clone detection. In this regard, this study investigates whether the transformations performed by the compiler on the code can lead to the appearance of a set of previously undetectable clones. The proposed approach involves extracting software quality metrics on both decompiled and source code to bring to light any differences due to the presence of clones undetectable in the source code. Experiments were conducted on five open-source Java software systems. The results show that compiler optimizations indeed lead to the appearance of a set of previously undetected clones, which can be called logical clones. In Java this phenomenon appears to be marginal, amounting to 5% more clones than normal, a statistically negligible result in small projects; in the future, however, it would be interesting to extend the study to other programming languages to evaluate possibly different behavior.

Paper Nr: 36
Title:

Differentiated Monitor Generation for Real-Time Systems

Authors:

Behnaz Rezvani and Cameron Patterson

Abstract: Safety-critical real-time systems require correctness to be validated beyond the design phase. In these systems, response time is as critical as correct functionality. Runtime verification is a promising approach for validating the correctness of system behaviors during runtime using monitors derived from formal system specifications. However, practitioners often lack formal method backgrounds, and no standard notation exists to capture system properties that serve their needs. To encourage the adoption of formal methods in industry, we present GROOT, a runtime monitoring tool for real-time systems that automatically generates efficient monitors from structured English statements. GROOT is designed with two branches, one for functional requirements and one for specifications with metric time constraints, which use appropriate formalisms to synthesize monitors. This paper introduces TIMESPEC, a structured English dialect for specifying timing requirements. Our tool also automates formal analysis to certify the C monitors’ construction. We apply GROOT to timing specifications from an industrial component and a simulated autonomous system in Simulink.
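A monitor for a metric-time requirement such as "every request is answered within 100 ms" can be sketched as follows. GROOT itself synthesizes C monitors from TIMESPEC statements, so this Python version only illustrates the checking logic over a timestamped event trace, with invented event names:

```python
def check_response_bound(trace, bound_ms=100):
    """Check 'every request is followed by a response within bound_ms'
    over a trace of (time_ms, kind, id) events. A sketch of the kind of
    checker such tools synthesize, not GROOT's generated C code."""
    pending = {}            # request id -> timestamp of the request
    violations = []
    for t, kind, rid in trace:
        if kind == "req":
            pending[rid] = t
        elif kind == "resp" and rid in pending:
            if t - pending.pop(rid) > bound_ms:
                violations.append(rid)  # answered, but too late
    violations.extend(pending)          # requests never answered at all
    return violations

trace = [(0, "req", 1), (40, "resp", 1), (50, "req", 2), (200, "resp", 2)]
print(check_response_bound(trace))  # -> [2]
```

Request 1 is answered after 40 ms and passes; request 2 is answered after 150 ms and is reported. Treating unanswered requests as violations reflects that in a real-time system a missing response is as serious as a late one.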

Paper Nr: 62
Title:

Effectiveness of Data Augmentation and Ensembling Using Transformer-Based Models for Sentiment Analysis: Software Engineering Perspective

Authors:

Zubair R. Tusar, Sadat B. Sharfuddin, Muhtasim Abid, Md. N. Haque and Md. I. Mostafa

Abstract: Sentiment analysis for software engineering has been the subject of numerous studies aiming to efficiently develop tools and approaches for Software Engineering (SE) artifacts. State-of-the-art tools achieve better performance using transformer-based models such as BERT and RoBERTa to classify sentiment polarity. However, existing tools overlook the data imbalance problem and do not consider the efficiency of ensembling multiple pre-trained models on SE-specific datasets. To overcome those limitations, we used context-specific data augmentation based on SE-specific vocabularies and ensembled multiple models to classify sentiment polarity. Using four gold-standard SE-specific datasets, we trained our ensembled models and evaluated their performance. Our approach achieved improvements ranging from 1% to 26% in weighted-average and macro-average F1 scores. Our findings demonstrate that the ensemble models outperform the pre-trained models on the original datasets and that data augmentation further improves the performance of all the previous approaches.
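The ensembling step can be illustrated with simple hard voting over the polarity outputs of several fine-tuned models. The model names and predictions below are hypothetical, and the paper's actual combination strategy may differ:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each model casts one label per instance and
    the most common label wins (Counter breaks ties by first-seen order)."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical polarity outputs of three fine-tuned models on four sentences.
bert_preds     = ["pos", "neg", "neu", "neg"]
roberta_preds  = ["pos", "neg", "neg", "pos"]
third_model    = ["neg", "neg", "neu", "neg"]

print(majority_vote([bert_preds, roberta_preds, third_model]))
# -> ['pos', 'neg', 'neu', 'neg']
```

Voting lets models with different error profiles correct each other: on the fourth sentence, the two "neg" votes outweigh the lone "pos", which is the intuition behind ensembling outperforming each individual pre-trained model.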

Paper Nr: 77
Title:

ADM: An Agile Template for Requirements Documentation

Authors:

Hind Kalfat, Mourad Oussalah and Azeddine Chikh

Abstract: Requirements documentation is one of the main activities conducted during software requirements engineering and contributes to the success of a project if done effectively. In agile settings, teams tend to produce minimal documentation because they are much more focused on software development, but this is also due to the lack of clear guidelines on what needs to be documented and how it should be done. This paper proposes an approach based on three key axes: documentation, agility, and metrics. Our design uses a metamodel to provide various document templates tailored to specific user needs. These templates can be adapted to different contexts, such as traditional or agile development. To address the issue of requirements documentation in an agile context, we propose a custom document template to help agile teams create software requirements documentation.

Paper Nr: 92
Title:

Risk Management in IT Project in the Framework of Agile Development

Authors:

Aneta Poniszewska-Marańda and Justyna Kuna

Abstract: IT projects are increasingly complex and, as a result, prone to failure. Most of them miss deadlines, fail to meet user requirements, and run over budget. Risk management in IT projects is a crucial process that is often understated. Agile methodologies do not give detailed guidelines for risk management, and risks are not sufficiently considered in a proactive way. As a result, there is a need to look for methods and practices that can be implemented in Agile to improve the chances of success; some authors suggest applying traditional practices within agile approaches. The paper introduces risk management in agile methodologies by proposing the Scrum Risk Management process. It presents the roles, events, and artefacts of the method, together with an application tool that complements the risk management process. The proposed method was also applied in practice in real IT projects.

Paper Nr: 111
Title:

Beyond Traditional Web Technologies for Locally Web-Services Migration

Authors:

Clay Palmeira da Silva and Nizar Messai

Abstract: COVID-19 raised our dependency on computers and mobile devices to perform daily tasks, and users now face the challenge of managing environments with multiple services and devices. This multi-service, multi-device scenario becomes significant while we still face a lack of interoperability between operating systems, services, and applications on devices. Big tech companies are pursuing new web technologies for their services, which require ever more resources from devices to keep running. Beyond that, we had to learn and adapt to a new set of server-side web meeting services, such as Microsoft Teams, Zoom, Google Meet, and Webex. However, we did not see any proposition on the user side for the same web services that could continuously and without disruption provide a similar user experience of web-service migration in a multiple-device scenario, regardless of the operating system. This paper therefore focuses on web services based on REST, RESTful, or GraphQL to analyze their performance using the System Usability Scale. Additionally, we focus on a real-life experiment on multiple-device environments for synchronizing web services’ user instances without continuously depending on a cloud-based system. We present results from 35 users, for whom we measured various metrics over the previously mentioned web services across three different devices. Moreover, we revisited the CUBE architecture, enhancing features that allow us to obtain new results. The results demonstrate that when users used the CUBE, they had a better QoS experience, lower latency, and better response time. Moreover, the CUBE provides computational performance up to ≈69% faster than the traditional cloud-based synchronization procedure.
Download

Paper Nr: 118
Title:

Integration of Heterogeneous Components for Co-Simulation

Authors:

Jawher Jerray, Rabea Ameur-Boulifa and Ludovic Apvrille

Abstract: Because of their complexity, embedded systems are designed with sub-systems or components handled by different development teams or entities, using different modeling frameworks and simulation tools depending on the characteristics of each component. Unfortunately, this diversity of tools and semantics makes the integration of these heterogeneous components difficult. Thus, to evaluate their integration before their hardware or software is available, one solution would be to merge them into a common modeling framework. Yet, such a holistic environment supporting many computation and communication semantics seems hard to establish. Another solution, which we investigate in this paper, is to generically link their respective simulation environments in order to keep the strengths and semantics of each component’s environment. The paper presents a method to simulate heterogeneous components of embedded systems in real time. These components can be described at any abstraction level. Our main contribution is a generic glue that can analyze in real time the state of different simulation environments and accordingly enforce the correct communication semantics between components.
Download

Paper Nr: 130
Title:

Towards Readability-Aware Recommendations of Source Code Snippets

Authors:

Athanasios Michailoudis, Themistoklis Diamantopoulos and Andreas Symeonidis

Abstract: Nowadays, developers search online for reusable solutions to their problems in the form of source code snippets. As this paradigm can greatly reduce the time and effort required for software development, several systems have been proposed to automate the process of finding reusable snippets. However, contemporary systems also have certain limitations; several of them do not support queries in natural language and/or they only output API calls, thus limiting their ease of use. Moreover, the retrieved snippets are often not grouped according to the APIs/libraries used, and they are only assessed for their functionality, disregarding their readability. In this work, we design a snippet mining methodology that receives queries in natural language and retrieves snippets, which are assessed not only for their functionality but also for their readability. The snippets are grouped according to their used API calls (libraries), thus enabling developers to determine which solution is best suited to their own source code and making sure that it will be easily integrated and maintained. Upon providing a preliminary evaluation of our methodology on a set of different programming queries, we conclude that it can be effective in providing reusable and readable source code snippets.
Download
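The grouping-and-ranking idea described in the abstract above can be illustrated with a deliberately simplified sketch. This is not the authors’ implementation: the function name, the regex-based API extraction, and the average-line-length readability proxy are all hypothetical stand-ins for the paper’s actual components.

```python
import re
from collections import defaultdict

def group_and_rank(snippets):
    """Group snippets by the receiver names of their API calls,
    then rank each group by a crude readability proxy
    (shorter average line length reads more easily)."""
    groups = defaultdict(list)
    for code in snippets:
        # Collect names appearing before a method call, e.g. "json" in "json.loads(".
        libs = frozenset(m.group(1) for m in re.finditer(r"\b(\w+)\.\w+\(", code))
        groups[libs].append(code)

    def readability(code):
        lines = [l for l in code.splitlines() if l.strip()]
        return sum(len(l) for l in lines) / len(lines)

    return {libs: sorted(codes, key=readability) for libs, codes in groups.items()}

snippets = [
    "data = json.loads(raw)\nprint(data)",
    "parsed = json.loads(open(path).read()); result = [x for x in parsed if x]",
    "r = requests.get(url)\nr.raise_for_status()",
]
ranked = group_and_rank(snippets)
print(len(ranked))  # two groups: json-based and requests-based
```

A real system would resolve imports rather than pattern-match receiver names, and would use a learned readability model instead of line length, but the post-processing shape is the same.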

Paper Nr: 132
Title:

Towards Interpretable Monitoring and Assignment of Jira Issues

Authors:

Dimitrios-Nikitas Nastos, Themistoklis Diamantopoulos and Andreas Symeonidis

Abstract: Lately, online issue tracking systems like Jira are used extensively for monitoring open-source software projects. Using these systems, different contributors can collaborate towards planning features and resolving issues that may arise during the software development process. In this context, several approaches have been proposed to extract knowledge from these systems in order to automate issue assignment. Though effective under certain scenarios, these approaches also have limitations; most of them are based mainly on textual features and they may use techniques that do not extract the underlying semantics and/or the expertise of the different contributors. Furthermore, they typically provide black-box recommendations, thus not helping the developers to interpret the issue assignments. In this work, we present an issue mining system that extracts semantic topics from issues and provides interpretable recommendations for issue assignments. Our system employs a dataset of Jira issues and extracts information not only from the textual features of issues but also from their components and their labels. These features, along with the extracted semantic topics, produce an aggregated model that outputs interpretable recommendations and useful statistics to support issue assignment. The results of our evaluation indicate that our system can be effective, leaving room for future research.
Download

Paper Nr: 143
Title:

Software Development Life Cycle for Engineering AI Planning Systems

Authors:

Ilche Georgievski

Abstract: AI planning is concerned with the automated generation of plans in terms of actions that need to be executed to achieve a given user goal. Considering the central role of this ability in AI and the prominence of AI planning in research and industry, the development of AI planning software and its integration into production architectures are becoming important. However, building and managing AI planning systems is a complex process with its own peculiarities, and requires expertise. On the one hand, significant engineering challenges exist that relate to the design of planning domain models and system architectures, deployment, integration, and system performance. On the other hand, no life cycle or methodology currently exists that encompasses all phases relevant to the development process to ensure AI planning systems have high quality and industrial strength. In this paper, we propose a software development life cycle for engineering AI planning systems. It consists of ten phases, each described in terms of purpose and available tools and approaches for its execution. We also discuss several open research and development challenges pertaining to the life cycle and its phases.
Download

Paper Nr: 24
Title:

Automatic Generation of an Informative Marketing Technological Platform

Authors:

Francesco Pilotti, Daniele Di Valerio, Martina Marinelli, Gaetanino Paolone, Samanta Vellante and Daniela D’Alessandro

Abstract: In the current global context, Small- and Medium-sized Enterprises (SMEs) must face challenges in order to reach and maintain a fitting competitive level, and improve their performance. Today, information is one of the most important resources for them. Being able to correctly manage information is a key factor, also with reference to digital marketing activities. The paper presents InfoMkBuilder, a tool able to automatically generate Informative Marketing (IM) technological platforms, i.e., digital platforms for SMEs, in order for them to carry out IM campaigns and strategies. IM is a novel digital marketing model based on valuable information definition and delivery. InfoMkBuilder supports SMEs in overcoming and/or sidestepping their limitations and barriers in adopting digital technologies. The tool implements Model-Driven Architecture (MDA) transformations and a Unified Process (UP) methodological approach to generate the IM technological platforms, and is able to deploy them on the Cloud.
Download

Paper Nr: 30
Title:

Model-Based Documentation of Architectures for Cloud-Based Systems

Authors:

Marvin Wagner and Maritta Heisel

Abstract: In recent years, the importance of cloud-based systems has increased considerably. Users can access these systems remotely, e.g. for sharing data with others. Furthermore, complete applications can be realized directly in the web browser. Designing such systems is a challenging task for software architects, which can be supported by following a model-based approach. The structure of an architectural model can be defined in a metamodel, thus providing an unambiguous system description. The resulting model can be used not only in the subsequent steps of software development, e.g. during implementation, but also for further analysis of privacy and security issues. In this context, we provide three contributions in this paper. First, we define a metamodel that captures the semantics of a cloud-based system; we derived its elements from our experience in industrial projects. Second, we offer a step-wise method to model a cloud-based system, using as input a pattern that describes the system’s context. Third, we provide a graphical editor as tool support to assist cloud architects in applying our approach.
Download

Paper Nr: 42
Title:

A Java Testing Framework Without Reflection

Authors:

Lorenzo Bettini

Abstract: Java reflection allows a program to inspect the structure of objects at run-time and provides a powerful mechanism to achieve many interesting dynamic features in several Java frameworks. However, reflection breaks the static type safety properties of Java programs and introduces a run-time overhead; thus, it might be better to avoid reflection when possible. In this paper, we present a novel Java testing framework where reflection is never used: we implement the framework only with the Object-Oriented and functional programming mechanisms provided by Java. We will show that implementing and using such a framework is easy, and we avoid the run-time overhead of reflection. Our framework can be used with existing testing libraries and is meant to be extendable.
Download

Paper Nr: 45
Title:

Enhancing Game Usability: A Framework for Small-to-Medium-Sized Game Development Businesses

Authors:

Sayeda R. Akthar, Muhammad R. Islam, Nabila Islam, Farzana Sadia and Mahady Hasan

Abstract: The field of Human-Computer Interaction (HCI) research has developed numerous procedures and techniques to ensure high usability when developing software. Usability in video games is also considered to be important, so it is crucial to ensure that game systems comply with usability standards. However, usability research has largely overlooked the game development stage of the software process. Additionally, large gaming firms and game developers tend to keep their procedures and techniques secret, and professional reports have revealed the "ugly face" of the gaming industry. Bangladesh’s slowly expanding game development sector comprises a significant number of small-to-medium-sized businesses and start-ups. These businesses require practical, developer-centric solutions to guarantee usability, as they may lack the funds to engage usability specialists to oversee it. This article discusses the concept of game usability, including current research on usability techniques for game production. The study involved conducting polls and heuristic scoring studies, as well as interviews with several game developers. Twelve usability heuristics were created based on the Nielsen usability technique and the stages of game software development, which help in preventing typical usability problems in games. A preliminary analysis of the heuristics indicates that they can assist in identifying game-specific usability problems that may otherwise go unnoticed.
Download

Paper Nr: 58
Title:

An Exploratory Study on the Evidence of Hackathons' Role in Solving OSS Newcomers' Challenges

Authors:

Ahmed I. Mahmoud, Alexander Nolte and Dietmar Pfahl

Abstract: Background: OSS projects face various challenges. One major challenge is to onboard and integrate newcomers into the project. Aim: We aim to understand and discuss the challenges newcomers face when joining an OSS project and present evidence on how hackathons can mitigate those challenges. Method: We conducted two searches on digital libraries to (1) explore challenges faced by newcomers joining OSS projects, and (2) collect evidence on how hackathons were used to address them. We defined three evidence categories (positive, inconclusive, and no evidence) to classify how hackathons address these challenges. In addition, we investigated whether a hackathon event was related to an OSS project or not. Result: We identified a range of newcomer challenges that were successfully addressed using hackathons. However, not all of the solutions we identified were applied in the context of OSS. Conclusion: There seems to be potential in using hackathons to overcome newcomers’ challenges in OSS projects and allow them to integrate faster into the project.
Download

Paper Nr: 71
Title:

Quality Engineering Framework for Functional Safety Automotive Projects

Authors:

Mădălin-Dorin Pop and Dianora Igna

Abstract: Safety evaluations represent an important aspect in the development of functional safety (FuSa) automotive projects. The present paper aims to propose a quality engineering framework for automotive projects that uses as input the Failure Mode Effects Analysis (FMEA) and Fault Tree Analysis (FTA) and further applies the Dependent Failure Analysis (DFA) method. Particular attention is also directed toward the impact analysis step during the development phase of a project and beyond. By combining these goals, and with the help of the APIS IQ-RM interface, the paper presents a case study on how to improve an existing system. Furthermore, the proposed approach ensures complete traceability within the project by adding links between the APIS model and the representative test cases for each component.
Download

Paper Nr: 85
Title:

The Rise of Remote Project Management - A New Norm?: A Survey on IT Organizations in Bangladesh

Authors:

Azaz Ahamed, Touseef A. Khan, Nafiz Sadman, Mahfuz I. Hannan, Nujhat Nahar and Mahady Hasan

Abstract: The rise of remote work has brought about a significant shift in the way software development projects are managed. With teams spread out across different locations and time zones, project managers must adapt to new challenges to ensure the success of their projects. These challenges include difficulties in communication, coordination, and motivation. Project managers are using a range of tools and tactics, including agile methodologies, online communication tools, and best practices for remote work, to address these issues. Other strategies may be required to successfully handle remote software development projects, as conventional ones are not always sufficient. In this paper, an in-depth and exploratory survey has been conducted on a sample of 250 employees from various IT organizations in Bangladesh. The results are analyzed to understand the benefits and challenges that come with the Work from Anywhere (WFX) approach to software development projects. The survey data is compared and analyzed against an extensive list of research papers in a similar field and categorized in three dimensions: tools and productivity, work-life balance, and career growth. The results support a strong correlation between WFX and both increased productivity and better health.
Download

Paper Nr: 88
Title:

A Hybrid Approach to Overcome Requirements Challenges in the Software Industry

Authors:

Md. T. Hasan, Nabil A. Bakar, Nujhat Nahar, Mahady Hasan and M. Rokonuzzaman

Abstract: This research paper presents a hybrid approach to overcome the challenges related to inadequate or insufficient client involvement and understanding during the software requirements phase. The aim of this study is to investigate the factors that contribute to this challenge and propose a solution that combines traditional and agile methodologies. To accomplish this, a survey was conducted to collect responses from industry professionals in the software development sector. The survey results showed that inadequate or insufficient client involvement and understanding is a common issue that leads to delays and misunderstandings in software development projects. To address this challenge, the proposed hybrid approach combines the traditional requirements engineering process with agile techniques such as user stories, prototypes, and continuous feedback loops. The hybrid approach aims to improve communication and collaboration between the client and the development team, ensuring that the software’s requirements are well-understood and documented. The results of this study indicate that the proposed hybrid approach is effective in overcoming the challenges related to inadequate or insufficient client involvement and understanding. The findings of this research have practical implications for software development organizations, highlighting the importance of adopting a hybrid approach to ensure successful software development projects.
Download

Paper Nr: 97
Title:

Source Code Implied Language Structure Abstraction through Backward Taint Analysis

Authors:

Zihao Wang, Pei Wang, Qinkun Bao and Dinghao Wu

Abstract: This paper presents a novel approach for inferring the language implied by a program’s source code, without requiring the use of explicit grammars or input/output corpora. Our technique is based on backward taint analysis, which tracks the flow of data in a program from certain sink functions back to the source functions. By analyzing the data flow of programs that generate structured output, such as compilers and formatters, we can infer the syntax and structure of the language being expressed in the code. Our approach is particularly effective for domain-specific languages, where the language implied by the code is often unique to a particular problem domain and may not be expressible by a standard context-free grammar. To test the effectiveness of our technique, we applied it to libxml2. Our experiments show that our approach can accurately infer the implied language of some complex programs. Using our inferred language models, we can generate high-quality corpora for testing and validation. Our approach offers a new way to understand and reason about the language implied by source code, and has potential applications in software testing, reverse engineering, and program comprehension.
Download
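The backward taint analysis described in the abstract above can be illustrated with a deliberately simplified sketch. This is not the authors’ implementation: the toy three-address program, the variable names, and the `backward_taint` function are all hypothetical, and a real analysis would operate on an actual program representation with control flow.

```python
def backward_taint(assignments, sink):
    """Walk a toy straight-line program (list of (target, operands)
    assignments) in reverse, propagating taint from `sink` back to
    the variables that flow into it."""
    tainted = {sink}
    for target, operands in reversed(assignments):
        if target in tainted:
            tainted.update(operands)
    return tainted

# Toy output-generating program, in source order:
prog = [
    ("header", ["lt", "name"]),      # header = lt + name
    ("body",   ["text"]),            # body = text
    ("fmt",    ["header", "body"]),  # fmt = header + body
    ("out",    ["fmt"]),             # out = emit(fmt)  <- sink function argument
]
print(sorted(backward_taint(prog, "out")))
```

Starting from the sink argument `out`, the walk discovers every variable whose value reaches the structured output, which is the data-flow information the inference in the paper builds on.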

Paper Nr: 101
Title:

CI/CD Process Development for Blockchain Environment Based on Hyperledger Infrastructure

Authors:

Maciej Kopa, Michał Pawlak, Aneta Poniszewska-Marańda, Tomasz Krym, Łukasz Chomatek, Joanna Ochelska-Mierzejewska, Bożena Borowska, Adam Czyżewski and Krzysztof Stepień

Abstract: Blockchain is a promising and quickly developing technology with the potential to disrupt numerous industries. Hyperledger Fabric, an open-source blockchain platform, can be used in conjunction with continuous integration and continuous delivery (CI/CD) pipelines. The paper presents a well-organized blockchain development environment for software engineers, its configuration, and its use in the form of a CI/CD pipeline integrated with Hyperledger Fabric. A case study illustrates the implementation of these technologies in a real-world scenario to provide a comprehensive understanding of how Hyperledger Fabric and CI/CD pipelines can be leveraged to improve the efficiency of the private blockchain network development process.
Download

Paper Nr: 104
Title:

Towards Good Practices for Collaborative Development of ML-Based Systems

Authors:

Cristiana Moroz-Dubenco, Bogdan-Eduard-Mădălin Mursa and Mátyás Kuti-Kreszács

Abstract: The field of Artificial Intelligence (AI) has rapidly transformed from a buzzword technology to a fundamental aspect of numerous industrial software applications. However, this quick transition has not allowed for the development of robust best practices for designing and implementing processes related to data engineering, machine learning (ML)-based model training, deployment, monitoring, and maintenance. Additionally, the shift from academic experiments to industrial applications has resulted in collaborative development between AI engineers and software engineers who have reduced expertise in established practices for creating highly scalable and easily maintainable processes related to ML models. In this paper, we propose a series of good practices that have been developed as the result of the collaboration between our team of academic researchers in AI and a company specializing in industrial software engineering. We outline the challenges faced and describe the solutions we designed and implemented by surveying the literature and deriving new practices based on our experience.
Download

Paper Nr: 105
Title:

Adopting the Actor Model for Antifragile Serverless Architectures

Authors:

Marcel Mraz, Hind Bangui, Bruno Rossi and Barbora Buhnova

Abstract: Antifragility is a novel concept focusing on letting software systems learn and improve over time based on sustained adverse events such as failures. The actor model has been proposed to deal with concurrent computation and has recently been adopted in several serverless platforms. In this paper, we propose a new idea for supporting the adoption of supervision strategies in serverless systems to improve the antifragility properties of such systems. We define a predictive strategy based on the concept of stressors (e.g., injecting failures), in which actors or a hierarchy of actors can be impacted and analyzed for systems’ improvement. The proposed solution can improve the system’s resiliency in exchange for higher complexity but goes in the direction of building antifragile systems.
Download

Paper Nr: 140
Title:

Quality Measurement of Functional Requirements

Authors:

David Šenkýř and Petr Kroha

Abstract: In this contribution, we propose a metric to measure the quality of textual functional requirements specifications. Since the main problems of such requirements specifications are ambiguity, incompleteness, and inconsistency, we developed textual patterns to reveal shortcomings in these properties. As a component of our analysis, we use not only the text of the requirements but also the UML model that we construct during the text analysis. Combining the results of part-of-speech tagging of the text and the modeled properties, we are able to identify a number of irregularities concerning the properties named above. Then, the text needs human intervention to correct or remove the suspicious formulations. As a measure of the requirements specification quality, we use the number of necessary human interventions. We implemented a tool called TEMOS that can test ambiguity, incompleteness, and inconsistency, and we use its results to evaluate the quality of textual requirements. In this paper, we summarize our project results.
Download
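The metric described in the abstract above (quality measured as the number of necessary human interventions) can be sketched with a toy pattern checker. This is not TEMOS: the word list, the function, and the sample requirements are purely illustrative, and the real tool combines part-of-speech tagging with a UML model rather than a single regex.

```python
import re

# Illustrative patterns for vague wording that would force a human intervention:
VAGUE = re.compile(r"\b(some|appropriate|fast|user-friendly|etc|several|as needed)\b",
                   re.IGNORECASE)

def interventions_needed(requirements):
    """Count requirement sentences flagged as ambiguous; each flagged
    sentence requires one human intervention, and that count serves
    as the (lower-is-better) quality measure."""
    return sum(1 for r in requirements if VAGUE.search(r))

reqs = [
    "The system shall respond within 2 seconds.",
    "The UI shall be user-friendly.",
    "Some reports shall be exported as needed.",
]
print(interventions_needed(reqs))  # flags the last two sentences
```

The first requirement is measurable and passes; the other two contain vague terms and would each need a human to sharpen them, giving a quality score of 2 interventions for this specification.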

Area 3 - Software Systems and Applications

Full Papers
Paper Nr: 19
Title:

Conversational Agents for Simulation Applications and Video Games

Authors:

Ciprian Paduraru, Marina Cernat and Alin Stefanescu

Abstract: Natural language processing (NLP) applications are becoming increasingly popular today, largely due to recent advances in theory (machine learning and knowledge representation) and the computational power required to train and store large language models and data. Since NLP applications such as Alexa, Google Assistant, Cortana, Siri, and ChatGPT are widely used today, we assume that video games and simulation applications can successfully integrate NLP components into various use cases. The main goal of this paper is to show that natural language processing solutions can be used to improve user experience and make simulations more enjoyable. In this paper, we propose a set of methods, along with a proven, implemented framework, that uses a hierarchical NLP model to create virtual characters (visible or invisible) in the environment that respond to and collaborate with the user to improve their experience. Our motivation stems from the observation that in many situations, feedback from a human user during the simulation can be used efficiently to help the user solve puzzles in real time, make suggestions, and adjust things like difficulty or even performance-related settings. Our implementation is open source, reusable, and built as a plugin in a publicly available game engine, the Unreal Engine. Our evaluation and demos, as well as feedback from industry partners, suggest that the proposed methods could be useful to the game development industry.
Download

Paper Nr: 23
Title:

Robotic Process Automation for the Gaming Industry

Authors:

Ciprian Paduraru, Adelina-Nicoleta Staicu and Alin Stefanescu

Abstract: Robotic Process Automation has recently been used in many fields to automate business-oriented processes. Industries such as finance, transportation, and retail report significant return on investment (ROI) after replacing redundant, repetitive, and error-prone work performed by human workers with RPA software agents. In our research, we found that there is a great opportunity to use RPA to automate processes in the game development and support industry. In this paper, we identify some of these opportunities and propose automation domains, examples, and high-level blueprints that may be implemented and extended by both academia and the game development industry. The requirements, missing gaps, development ideas, and prototyping work were done in collaboration with local game development partners. Our empirical evaluation shows that the identified automation capabilities can play an important role in automating various processes needed by the game development industry in the future.
Download

Paper Nr: 25
Title:

Acadela: A Domain-Specific Language for Modeling Clinical Pathways

Authors:

Tri Huynh, Selin Erdem, Felix Eckert and Florian Matthes

Abstract: e-Health systems leverage clinical pathway (CP) models as standardized and optimized procedures to execute and manage medical treatments. To model CPs in decision support e-Health systems, our study develops Acadela, a low-tech-oriented, text-based Domain Specific Language (DSL) with visualization capability. Acadela declares a grammar that enforces textual syntax for modeling workflow, control flow, responsibility, medical data visualization, and communications with external systems. Furthermore, Acadela provides a model visualization to preview the CP and assist communication between medical and technical experts. To explore the DSL’s expressiveness and usability, we conducted two separate descriptive user studies with six medical professionals and eight technical adepts. First, we modeled five CPs used by medical professionals in their daily routines. Through semi-structured interviews, we collected feedback regarding the language’s expressiveness. Next, we invited the technical adepts to model a hypertension CP and debug a faulty model written in Acadela. Overall, the medical experts considered that the modeled CPs accurately reflect their treatment procedures, and the technical adepts considered the language easy to use and applicable for modeling CPs. The results imply the DSL’s potential to model CPs with various degrees of complexity in different medical fields while being user-friendly to modelers.
Download

Paper Nr: 38
Title:

A Tool-Supported Approach for Modeling and Verifying Hybrid Systems using EVENT-B and the Differential Equation Solver SAGEMATH

Authors:

Meryem Afendi, Amel Mammar and Régine Laleau

Abstract: The common mathematical model for cyber-physical systems is that of hybrid systems, which combine discrete behaviors with continuous behaviors represented by differential equations. In this paper, we introduce a formal approach, using EVENT-B and its refinement strategy, for specifying and verifying cyber-physical systems whose behavior is described by ordinary differential equations. To deal with the resolution of ordinary differential equations in EVENT-B, the approach is based on interfacing the differential equation solver SAGEMATH (System for Algebra and Geometry Experimentation) with the RODIN tool, a platform for the development of EVENT-B projects. For this purpose, we modeled and implemented the interface to the solver in EVENT-B using a RODIN plugin. This makes it possible to reason about the EVENT-B specification and prove safety properties. The proposed approach was successfully applied to a frequently used cyber-physical system case study.
Download

Paper Nr: 40
Title:

Towards a Novel Approach for Smart Agriculture Predictability

Authors:

Rima Grati, Myriam Aloulou and Khouloud Boukadi

Abstract: The practice of growing crops and raising cattle is the traditional method of agriculture, a primary source of livelihood. The introduction of advanced technologies and tools provides solutions to predict and avoid soil erosion, over-irrigation, and bacterial infection of crops. Machine learning and deep learning solutions are achieving strong results in precision farming. The most challenging factors for the research community are identifying the water need, analyzing soil conditions, suggesting the best crops to cultivate, and predicting fertilizer amounts to prevent bacteria. Grouping similar features helps with accurate prediction and classification. Considering this, we introduce an integrated model, Group Organize Forecast (GOF), using Machine Learning (ML) and Deep Learning (DL) techniques to balance the requirements and improve automatic irrigation. GOF analyzes the irrigation requirement of a field using sensed ground parameters such as soil moisture, temperature, weather forecast, radiation levels, the humidity of the crop field, and other environmental conditions. We use a real-time unsupervised dataset to analyze and test the model. GOF clusters the data using a Self-Organizing Map (SOM), organizes the classes using Cascading Forward Back Propagation (CFBP), and finally predicts the water requirement and a solution to control bacteria in the near future.
Download
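The clustering stage described in the abstract above (grouping similar sensor readings with a Self-Organizing Map before classification) can be illustrated with a deliberately minimal sketch. This is not the GOF model: the tiny 1-D SOM below, the parameters, and the toy (soil moisture, temperature) readings are hypothetical, and the real pipeline also includes the CFBP classification step.

```python
import random
import math

def train_som(data, grid=3, iters=200, seed=42):
    """Tiny 1-D Self-Organizing Map: `grid` units, each holding a weight
    vector; training pulls the best-matching unit (and, more weakly, its
    neighbors) toward each sample so similar readings end up on the same unit."""
    rnd = random.Random(seed)
    dim = len(data[0])
    w = [[rnd.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        x = data[rnd.randrange(len(data))]
        lr = 0.5 * (1 - t / iters)  # decaying learning rate
        b = min(range(grid), key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
        for i in range(grid):
            h = math.exp(-abs(i - b))  # neighborhood falloff around the BMU
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

def bmu(w, x):
    """Index of the best-matching unit for a reading."""
    return min(range(len(w)), key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(len(x))))

# Toy (soil moisture, temperature) readings, scaled to [0, 1]:
readings = [(0.1, 0.9), (0.12, 0.88), (0.8, 0.2), (0.82, 0.25)]
weights = train_som(readings)
# Similar readings should map to the same unit:
print(bmu(weights, (0.11, 0.89)) == bmu(weights, (0.1, 0.9)))
print(bmu(weights, (0.81, 0.22)) == bmu(weights, (0.8, 0.2)))
```

After training, each SOM unit sits near one cluster of readings, so downstream classification (CFBP in the paper) can work on a small number of organized groups instead of raw sensor data.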

Paper Nr: 43
Title:

Impact of Deep Learning Libraries on Online Adaptive Lightweight Time Series Anomaly Detection

Authors:

Ming-Chang Lee and Jia-Chun Lin

Abstract: Providing online adaptive lightweight time series anomaly detection without human intervention and domain knowledge is highly valuable. Several such anomaly detection approaches have been introduced in the past years, but all of them were only implemented in one deep learning library. With the development of deep learning libraries, it is unclear how different deep learning libraries impact these anomaly detection approaches since there is no such evaluation available. Randomly choosing a deep learning library to implement an anomaly detection approach might not be able to show the true performance of the approach. It might also mislead users in believing one approach is better than another. Therefore, in this paper, we investigate the impact of deep learning libraries on online adaptive lightweight time series anomaly detection by implementing two state-of-the-art anomaly detection approaches in three well-known deep learning libraries and evaluating how these two approaches are individually affected by the three deep learning libraries. A series of experiments based on four real-world open-source time series datasets were conducted. The results provide a good reference to select an appropriate deep learning library for online adaptive lightweight anomaly detection.
Download

Paper Nr: 73
Title:

IBE.js: A Framework for Instrumenting Browser Extensions

Authors:

Elvira Moreno-Sanchez and Pablo Picazo-Sanchez

Abstract: Millions of people use web browsers daily, and extensions can enhance their basic functions. As the use and development of browser extensions grow, ensuring adequate code coverage is essential for delivering high-quality, reliable, and secure software. This paper introduces IBE.js, a framework to monitor and assess the coverage of browser extensions. IBE.js analyzes the main JavaScript files (background pages and content scripts) of 4,495 browser extensions from the Chrome Web Store. Using a blank HTML file, we found that, on average, more than 33% of the lines in these scripts are executed automatically. This coverage represents the number of lines executed by default, without any influence from user interaction or web content. Notably, IBE.js is a versatile framework that can be utilized across various platforms, ensuring compatibility with extensions from other web stores such as Firefox, Opera, and Microsoft. This enables comprehensive coverage analysis and monitoring of extensions beyond a single browser ecosystem.
Download

Paper Nr: 76
Title:

Automatic Fuzz Testing and Tuning Tools for Software Blueprints

Authors:

Ciprian Paduraru, Rares Cristea and Alin Stefanescu

Abstract: The increasingly popular no- or low-code paradigm is based on functional blocks connected on a graphical interface that is accessible to many stakeholders in an application. Areas such as machine learning, DevOps, digital twins, simulations, and video games use this technique to facilitate communication between stakeholders regarding the business logic. However, testing methods for such interfaces, which connect blocks of code through visual programming, are not well studied. In this paper, we address this research gap by taking an example from a niche domain that nevertheless generalizes fully to other types of applications. Our open-source tool and proposed methods reuse existing software testing techniques, mainly those based on fuzzing, and show how they can be applied to test applications defined as visual interaction blocks. Specifically for simulation applications, but not limited to them, the automated fuzz testing processes can serve two main purposes: (a) automatically generate tests triggered by new stakeholder changes and (b) support tuning of different parameters with shorter processing times. We present a comprehensive motivation plan and high-level methods that could help industry reduce the cost of testing, designing, and tuning parameters, as well as a preliminary evaluation.
Download

Paper Nr: 93
Title:

An Analysis of Energy Consumption of JavaScript Interpreters with Evolutionary Algorithm Workloads

Authors:

Juan J. Merelo-Guervós, Mario García-Valdez and Pedro A. Castillo

Abstract: What is known as energy-aware computing involves taking many different variables and parameters into account when designing an application, which makes it necessary to focus on a single one to obtain meaningful results. In this paper, we look at the energy consumption of three different JavaScript interpreters: bun, node and deno; given their different conceptual designs, we should expect different energy budgets for running (roughly) the same workload, namely operations related to evolutionary algorithms (EA), a family of population-based stochastic optimization algorithms. We first test different tools that measure per-process energy consumption precisely, trying to find the one that gives the most accurate estimation; after choosing the tool through different experiments on a workload similar to that of an EA, we focus on EA-specific functions and operators and measure how much energy they consume for different problem sizes. From this, we draw a conclusion on which JavaScript interpreter should be used for this kind of workload when energy (or the related expense) is limited.
Download

Paper Nr: 106
Title:

A Web Scraping Algorithm to Improve the Computation of the Maximum Common Subgraph

Authors:

Andrea Calabrese, Lorenzo Cardone, Salvatore Licata, Marco Porro and Stefano Quer

Abstract: The Maximum Common Subgraph, a generalization of subgraph isomorphism, is a well-known problem in computer science. Albeit NP-complete, finding Maximum Common Subgraphs has countless practical applications, and researchers are continuously exploring scalable heuristic approaches. One of the state-of-the-art algorithms for this problem is a recursive branch-and-bound procedure called McSplit. The algorithm exploits an intelligent invariant to pair vertices with the same label and adopts an effective bound prediction to prune the search space. However, the original version of McSplit uses a simple heuristic to pair vertices and build larger subgraphs. As a consequence, a few researchers have already focused on improving the sorting heuristics to converge faster. This paper concentrates on these aspects and presents a collection of heuristics to improve McSplit and its state-of-the-art variants. We present a sorting strategy based on the well-known PageRank algorithm, and then mix it with other approaches. We compare all the heuristics with the original McSplit procedure and against each other. In particular, we distinguish heuristics based on node degree from novel ones based on the PageRank algorithm. Our experimental section shows that PageRank can significantly improve both McSplit and its variants in terms of convergence speed and solution size.
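The PageRank-based vertex ordering described above can be illustrated with a minimal sketch. This is not the paper's implementation: the graph, damping factor, and iteration count are illustrative assumptions, and the real McSplit variants interleave ordering with branch-and-bound search.

```python
# Hedged sketch: rank vertices by PageRank score (plain power iteration),
# then sort them so a branch-and-bound MCS search could try high-rank
# vertices first. All concrete values here are illustrative assumptions.

def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency-list dict {vertex: [neighbours]}."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in adj}
        for v, neighbours in adj.items():
            if neighbours:
                share = damping * rank[v] / len(neighbours)
                for u in neighbours:
                    new[u] += share
            else:  # dangling vertex: spread its rank uniformly
                for u in adj:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Toy undirected graph as a symmetric adjacency list; vertex 1 is the hub.
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
scores = pagerank(g)
order = sorted(g, key=lambda v: scores[v], reverse=True)  # hub first
```

On this toy graph the hub vertex 1 ends up first in the ordering; a degree-based heuristic would agree here, and the abstract's point is that the two orderings diverge (in PageRank's favour) on larger graphs.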
Download

Short Papers
Paper Nr: 6
Title:

A Systematic Mapping Study on Security in Configurable Safety-Critical Systems Based on Product-Line Concepts

Authors:

Richard May, Jyoti Gautam, Chetan Sharma, Christian Biermann and Thomas Leich

Abstract: Safety-critical systems are becoming increasingly configurable. However, as the number of features and configurations grows, the systems’ complexity also increases, making cyber attacks more likely. Nevertheless, an overview of security in configurable safety-critical systems based on product-line engineering is still missing. Thus, we conducted a systematic mapping study in which we analyzed 44 papers (2008–2022) to discuss relevant properties and identify 8 research opportunities. Our key finding is that security in the context of variability and safety-critical systems needs more consideration and research. We emphasize that safety-critical systems, especially those with networking capabilities, cannot be safe if they do not provide techniques to ensure security and do not consider the systems’ configurability. Our study aims to guide both researchers and practitioners in understanding the importance of security for configurable safety-critical systems, relevant properties, and open issues.
Download

Paper Nr: 17
Title:

Uncovering Behavioural Patterns of One- and Binary-Class SVM-Based Software Defect Predictors

Authors:

George Ciubotariu, Gabriela Czibula, Istvan G. Czibula and Ioana-Gabriela Chelaru

Abstract: Software defect prediction (SDP) is a relevant task that gains increasing interest as the programming industry expands. However, one of its difficulties lies in overcoming class imbalance, since most open-source software projects annotated using bug tracking systems contain relatively few defects. The rarity of bugs may therefore cause machine learning models to dramatically underperform, even when diverse data augmentation or selection methods are applied. As a result, our focus shifts towards one-class classification, a family of outlier detection algorithms designed to be trained on data instances of a single label. Following this approach, we adapt the traditional Support Vector Machine model to perform outlier detection. Experiments are performed on 16 versions of an open-source medium-sized software system, Apache Calcite. We carry out an extensive assessment of the ability of one-class classifiers trained on software defects to effectively discriminate between defective and non-defective software entities. The main findings of our study uncover several trends in the behaviour of one- and binary-class support vector machine-based models when solving SDP problems.
Download

Paper Nr: 20
Title:

RPA Testing Using Symbolic Execution

Authors:

Ciprian Paduraru, Marina Cernat and Adelina-Nicoleta Staicu

Abstract: The goal of Robotic Process Automation (RPA) technology is to identify patterns in repetitive processes that can be automated in enterprise workflows, and to create intelligent agents that can repeat those processes contextually and without human effort. However, as the technology has evolved considerably in terms of model complexity, data inputs, and output dimensionality, preserving the quality of the building blocks of the operations is a difficult task. We identified that there is a gap in testing methods and tools capable of efficiently testing RPA workflows. In this paper, we therefore propose to address this gap by using symbolic execution as a starting point. We focus on both the methodology and algorithms required to transfer existing research in symbolic execution to the RPA domain, propose a tool that can be used by researchers and industry, and present our current evaluation results for various use cases along with best practices.
Download

Paper Nr: 21
Title:

Practitioners’ Experiences on Developing Graphical Modeling Editors: A Survey

Authors:

Mert Ozkaya, Kamran Musayev and Mehmet A. Kose

Abstract: Graphical modeling editors used for modeling and processing any information can be developed using either programming technologies (e.g., software libraries and frameworks) or meta-modeling technologies. However, the existing literature does not make clear which technique is more popular and what motivates or demotivates practitioners to use each. In this paper, we conducted a survey among 76 practitioners (with 52 acceptable responses) to understand their experiences in developing graphical modeling editors. The survey led to interesting results. The top motivation for developing editors is support for model-driven engineering and model transformation. 62% of the participants use meta-modeling technologies for developing editors, while the rest use programming languages. Sirius is the top-used meta-modeling technology, while C# and Python are the top-used programming languages. The participants using programming languages emphasized the reduced learning curve of programming and the advanced development platforms for building portable editors; many of them have no knowledge of meta-modeling. The participants using meta-modeling technologies reported large time and effort savings from no-code editor development. Also, enhanced maintenance of editors by changing just the meta-model, without writing code, is considered important. However, those practitioners report challenges with the meta-modeling technologies’ support for extensibility and customisation, their developer communities, and the complexity of meta-modeling.
Download

Paper Nr: 27
Title:

RoLA: A Real-Time Online Lightweight Anomaly Detection System for Multivariate Time Series

Authors:

Ming-Chang Lee and Jia-Chun Lin

Abstract: A multivariate time series refers to observations of two or more variables taken from a device or a system simultaneously over time. There is an increasing need to monitor multivariate time series and detect anomalies in real time to ensure proper system operation and good service quality. It is also highly desirable to have a lightweight anomaly detection system that considers correlations between different variables, adapts to changes in the pattern of the multivariate time series, offers immediate responses, and provides supportive information regarding detection results based on unsupervised learning and online model training. In the past decade, many multivariate time series anomaly detection approaches have been introduced. However, they are unable to offer all the above-mentioned features. In this paper, we propose RoLA, a real-time online lightweight anomaly detection system for multivariate time series based on a divide-and-conquer strategy, parallel processing, and the majority rule. RoLA employs multiple lightweight anomaly detectors to monitor multivariate time series in parallel, determine the correlations between variables dynamically on the fly, and then jointly detect anomalies based on the majority rule in real time. To demonstrate the performance of RoLA, we conducted an experiment based on a public dataset provided by the FerryBox of the One Ocean Expedition. The results show that RoLA provides satisfactory detection accuracy and lightweight performance.
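The majority-rule step of the abstract can be sketched as follows. This is a toy illustration, not RoLA's actual detectors: the threshold detectors, readings, and limit are invented stand-ins for the paper's lightweight per-variable anomaly detectors.

```python
# Hedged sketch: each variable's detector emits a boolean verdict per time
# point; a point is flagged anomalous only when a majority of detectors agree.

def majority_vote(verdicts):
    """verdicts: per-detector booleans for one time point."""
    return sum(verdicts) > len(verdicts) / 2

def threshold_detector(series, limit):
    """Toy stand-in detector: flags values whose magnitude exceeds a limit."""
    return [abs(x) > limit for x in series]

# Three correlated variables observed over five time steps (illustrative data).
readings = [
    [0.1, 0.2, 9.5, 0.1, 0.3],   # variable A
    [0.2, 0.1, 8.7, 0.2, 6.0],   # variable B
    [0.0, 0.3, 7.9, 0.1, 0.2],   # variable C
]
per_detector = [threshold_detector(r, limit=5.0) for r in readings]
anomalies = [majority_vote(list(col)) for col in zip(*per_detector)]
```

Only time step 2 is flagged, where all three detectors agree; the spike in variable B alone at step 4 is outvoted, which is the point of combining per-variable verdicts under a majority rule.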
Download

Paper Nr: 37
Title:

Goal-Modeling Privacy-by-Design Patterns for Supporting GDPR Compliance

Authors:

Mohammed G. Al-Obeidallah, Luca Piras, Onyinye Iloanugo, Haralambos Mouratidis, Duaa Alkubaisy and Daniele Dellagiacoma

Abstract: The introduction of the European General Data Protection Regulation (GDPR) has imposed obligations on organisations collecting data in the EU. This has been beneficial to citizens due to the rights reinforcement achieved as data subjects. However, the obligations heavily affected organisations, and their privacy requirements analysts, who have had issues interpreting and implementing GDPR principles. This paper proposes visual GDPR Patterns supporting analysts through Privacy-by-Design (PbD) and GDPR compliance analysis. To achieve that, we extended a requirements modeling tool, SecTro, which is used to assist analysts in creating visual requirements models. Specifically, we extended SecTro with novel visual GDPR patterns representing GDPR principles. We evaluated the patterns in a healthcare case study. The evaluation results suggest that the GDPR patterns can help analysts in PbD modeling analysis by representing GDPR principles and presenting relevant ready-to-use alternatives, towards achieving GDPR compliance.
Download

Paper Nr: 44
Title:

The Power of Scrum Mastery: An Analysis of Agile Team Performance and Scrum Master Influence

Authors:

Hannes Salin, Felix Albrecht and John Skov

Abstract: We investigate the level of influence the scrum master role has on a team’s success, in terms of four leading indicators: feature lead time, defect leakage, predictability and velocity. We use statistical analysis on a large data set of agile metrics collected by the target company, a large corporation in the Nordics. Our study showed that the presence of a scrum master correlates well with team success primarily for feature lead time. The remaining three indicators did not show strong statistical correlation with the inclusion or exclusion of a scrum master. However, these indicators should not be excluded from data-driven decision making, and further research is needed to identify obstructive or external factors that may influence either the scrum master’s presence or the leading indicators’ reliability.
Download

Paper Nr: 46
Title:

Conceptual Framework for Adaptive Safety in Autonomous Ecosystems

Authors:

David Halasz and Barbora Buhnova

Abstract: The dynamic collaboration among hyper-connected Autonomous Systems promotes their evolution towards Autonomous Ecosystems. To maintain the safety of such structures, it is essential to ensure a certain level of understanding of the present and future behavior of the individual systems in these ecosystems. Adaptive Safety is a promising direction for controlling access to features between cooperating systems; however, it requires information about the collaborators within the environment. Digital Twins can be used to predict the possible future behavior of a system. This paper introduces a conceptual framework for Adaptive Safety triggered by a trust score computed from the predictive simulation of Digital Twins, which we suggest using in Autonomous Ecosystems to load and safely execute third-party Smart Agents. By quantifying trust towards the agent and combining it with a decision tree, we leverage the trust score as the deciding factor for concealing or exposing certain features among collaborating systems.
Download

Paper Nr: 57
Title:

AIM-RL: A New Framework Supporting Reinforcement Learning Experiments

Authors:

Ionuţ-Cristian Pistol and Andrei Arusoaie

Abstract: This paper describes a new framework developed to facilitate implementing new problems and associated models and to use reinforcement learning (RL) to perform experiments employing these models to find solutions for those problems. The framework is designed to be as transparent and flexible as possible, optimising and streamlining the RL core implementation and allowing users to describe problems, provide models and customise the execution. To show how AIM-RL can help with the implementation and testing of new models, we selected three classic problems: 8-puzzle, Frozen Lake and Mountain Car. The objective results of these experiments, as well as some subjective observations, are included in the latter part of this paper. Considerations are made with regard to using such frameworks both as didactic support and as tools adding RL support to new systems.
Download

Paper Nr: 61
Title:

An Empirical Study on the Possible Positive Effect of Imperative Constructs in Declarative Languages: The Case with SQL

Authors:

Seyfullah Davulcu, Stefan Hanenberg, Ole Werger and Volker Gruhn

Abstract: Today, imperative programming languages are often equipped with declarative constructs (such as lambda expressions in Java or C++). The underlying assumption (which is partly confirmed by experiments) is that imperative languages benefit from such constructs. This gives the impression that declarative programming languages are better suited for programming than imperative languages. However, the question is whether this statement holds vice versa as well, i.e., whether declarative languages benefit from imperative constructs. The present paper introduces a crossover trial where 24 students were equipped with an SQL extension that gives the illusion of imperative assignments. It turned out with high confidence (p<.001) that this construct -- although in principle already contained in a declarative fashion in SQL -- lets students solve a given task in only 52% of the time in comparison to the time required in standard SQL.
Download

Paper Nr: 74
Title:

Can ChatGPT Fix My Code?

Authors:

Viktor Csuvik, Tibor Gyimóthy and László Vidács

Abstract: ChatGPT, a large language model (LLM) developed by OpenAI, fine-tuned on a massive dataset of text and source code, has recently gained significant attention on the internet. The model, built using the Transformer architecture, is capable of generating human-like text in a variety of tasks. In this paper, we explore the use of ChatGPT for Automated Program Repair (APR); that is, we ask the model to generate repair suggestions for instances of buggy code. We evaluate the effectiveness of our approach by comparing the repair suggestions to those made by human developers. Our results show that ChatGPT is able to generate fixes that are on par with those made by humans. Choosing the right prompt is a key aspect: on average, it was able to propose corrections in 19% of cases, but choosing the wrong input format can drop the performance to as low as 6%. By sampling real-world bugs from seminal APR datasets, generating 1000 input examples for the model, and evaluating the output manually, our study demonstrates the potential of language models for Automated Program Repair and highlights the need for further research in this area.
Download

Paper Nr: 75
Title:

A Methodology Based on Quality Gates for Certifiable AI in Medicine: Towards a Reliable Application of Metrics in Machine Learning

Authors:

Miriam Elia and Bernhard Bauer

Abstract: Intelligent technologies are currently experiencing rapid growth. For a reliable adoption of these new and powerful systems into day-to-day life, especially in high-risk settings such as medicine, technical means to realize legal requirements correctly are indispensable. Our proposed methodology comprises an approach to translate such partly abstract concepts into concrete instructions. It is based on Quality Gates along the intelligent system’s complete life cycle, which are composed of use-case-adapted Criteria that need to be addressed with respect to certification. The underlying philosophy regarding stakeholder inclusion, domain embedding and risk analysis is also illustrated. In the present paper, the Quality Gate Metrics is outlined for the application of machine learning performance metrics, focused on binary classification.
Download

Paper Nr: 90
Title:

Trust Management and Attribute-Based Access Control Framework for Protecting Maritime Cyber Infrastructure

Authors:

Yunpeng Zhang, Izzat Alsmadi, Yi Qi and Zhixia Li

Abstract: The modern world depends on maritime supply chains to sustain flows of international commercial, industrial, and economic activities. As the maritime supply chain heavily utilizes cyber operations to manage communications and physical processes, maritime cybersecurity emerges as an imperative aspect of maritime safety and security. The maritime cyber infrastructure displays distributed, heterogeneous, networked, and volatile characteristics. This article first studies the effectiveness of traditional access control mechanisms for maritime cyber infrastructure. The research also examines the shortcomings of existing network-wide access control and identity theories in order to develop solutions. To devise a method suitable for the maritime context, the paper presents an Attribute-Based Access Control (ABAC) framework that is adaptable and highly scalable for the complex maritime cyberspace. The analytical results show that implementing the new framework can enhance the access control of the maritime cyber infrastructure.
Download

Paper Nr: 94
Title:

Scientometric Analysis of Fake News Detection and Machine Learning Based on VOSviewer

Authors:

Lumbardha Hasimi and Aneta Poniszewska-Marańda

Abstract: This study presents a comprehensive analysis of recent research patterns and progress in the field of fake news detection and machine learning. By examining 2209 publications from 2015 to 2022, the study aims to identify the most frequently developed topics and explore the involvement of publications, authors, and institutions. Using the network visualizing tool VOSviewer, a quantitative analysis is performed to investigate research productivity, patterns, and keyword distribution. This study contributes to the understanding of the current state of research in fake news detection and machine learning, and offers valuable insights for researchers, policymakers, and technology developers seeking to address the challenges posed by fake news and disinformation. The findings indicate that fake news detection research is still in its early stages and primarily focuses on social media and social contexts. There is a growing interest in the subject, as evidenced by increasing attention from the research community, while the network of interconnected research clusters highlights the multidisciplinary nature of fake news detection.
Download

Paper Nr: 98
Title:

Automatic Test-Based Assessment of Assembly Programs

Authors:

Luís Tavares, Bruno Lima and António J. Araújo

Abstract: As computer science and engineering programs continue to grow in enrollment, automatic assessment tools have become prevalent. Manual assessment of programming exercises can be time-consuming and resource-intensive, creating a need for such tools. In response, this paper proposes a tool to assess assembly exercises, specifically ARM64 programs, and provide real-time feedback to students. The tool includes features for evaluating, analyzing, and detecting plagiarism in student submissions. After two years of intensive usage in a higher education environment, the results and analysis show a positive impact and potential benefits for teachers and students. Furthermore, the tool’s source code is publicly available, making it a valuable contribution to building more effective and efficient automatic assessment tools for computer science and engineering schools.
Download

Paper Nr: 102
Title:

Verifying Data Integrity for Multi-Threaded Programs

Authors:

Imran Pinjari, Michael Shin and Pushkar Ogale

Abstract: This paper describes Integrity Breach Conditions (IBCs) for identifying security spots that might contain malicious code in message communications in multi-threaded programs. An attacker can inject malicious code into a program so that the code tampers with sensitive data handled by the program. The IBCs indicate which functions might encapsulate malicious code if the defined conditions hold true. This paper describes the IBCs for multi-threaded synchronous and asynchronous message communications in which two threads communicate via message queues or message buffers. A prototype tool was developed by implementing the IBCs to identify security spots in multi-threaded programs. An online shopping system was implemented to validate the IBCs using the prototype tool.
Download

Paper Nr: 121
Title:

Towards Incremental Model-Driven Software Modernisation: Feedback from an Industrial Proof of Concept in Railways

Authors:

Robert Darimont, Valery Ramon, Christophe Ponsard, Fati Azmali, Michel Thauvoye and Henri Bingen

Abstract: Many industrial sectors depend on software-based systems in the long run, and many software systems tend to live much longer than initially expected. Beyond update and maintenance activities, such software systems require a long-term modernization process to avoid turning into problematic legacy applications. Conducting a modernization process remains difficult in many respects, such as defining the scope, devising a strategy for progressive refactoring, testing, and transitioning to a modernized system while ensuring critical properties such as availability and reliability. In this paper, we propose a modernization process able to cope with such constraints, based on an MBSE approach applied to recover system requirements, refactor the architecture, and support the redevelopment of specific subsystems. We report on lessons learned during a proof of concept conducted in the railway domain using a chain of models composed of a goal model, a SysML model and a Simulink model.
Download

Paper Nr: 122
Title:

Enhancing ICS Security Diagnostics with Pseudo-Greybox Fuzzing During Maintenance Testing

Authors:

Kazutaka Matsuzaki and Shinichi Honiden

Abstract: This paper presents a novel Pseudo-Greybox Fuzzer (pseudo-GBF) methodology designed to improve the security diagnosis of Industrial Control Systems (ICS) during maintenance testing. The proposed method combines stateful protocol fuzzing, network fuzzing, and ICS monitoring to optimize the coverage of state transitions in the system under test (SUT) while operating within the constraints of on-site maintenance testing. Pseudo-GBF enhances security testing by utilizing replayable seeds to trigger specific state transitions, enabling efficient and practical testing. By incorporating Pseudo-Greybox Fuzzing during maintenance testing, the methodology addresses the challenges faced in ICS security diagnostics, leading to improved security and resilience of critical infrastructure systems. This paper provides a comprehensive overview of the system design, including integrating stateful protocol fuzzing, network fuzzing, and ICS monitoring, demonstrating its potential to advance ICS security testing.
Download

Paper Nr: 126
Title:

Agile Quality Requirements Elaboration: A Proposal and Evaluation

Authors:

Wasim Alsaqaf, Maya Daneva and Roel Wieringa

Abstract: The increasing success and user satisfaction of agile methods in their original context (e.g., small co-located teams) motivated large organizations to utilize agile methods to deal with rapidly changing markets and a distributed global workforce. Several studies have reported a variety of quality requirements (QRs) challenges in the large-scale distributed agile (LSDA) context, and a recent empirical study has identified 15 QRs challenges in LSDA projects. This paper proposes an approach based on the concept of goal documentation to deal with the 15 QRs challenges reported previously. Our proposal, the Agile Quality Requirements Elaboration (AQRE) approach, introduces a new organizational role and a two-step process to elaborate high-level goal(s) into epics and user stories alongside QRs. The fitness and usefulness of AQRE were evaluated using a focus group with eight practitioners in the IT department of a large Dutch government organization. The evaluation indicated that 12 of the 15 QRs challenges could be mitigated by AQRE. Our main contribution is twofold: (i) we proposed a solution approach to deal with QRs challenges in the LSDA context, and (ii) our evaluation provided empirical evidence about its usefulness in a real-world context.
Download

Paper Nr: 128
Title:

A Knowledge-Based Proactive Intelligent System for Buildings Occupancy Monitoring

Authors:

Marie U. Baerentzen, Jalil Boudjadar, Saif U. Islam and Carl L. Schultz

Abstract: Occupancy monitoring for buildings is a key component in enabling cost-effective allocation of spaces and efficient resource utilization. Occupancy monitoring systems rely on networks of sensors and cameras to achieve high accuracy; however, the main challenges are privacy concerns and computation cost. This paper proposes the design of an intelligent energy-efficient and privacy-aware system to track, monitor and analyze building occupancy. The core idea is that, rather than collecting large amounts of sensor data to perform occupancy analysis post hoc, our proposal adopts a top-down approach: using knowledge about the activity expected to be taking place, it proactively identifies the minimal data relevant to the actual state following the semantics of the expected activity. It thus switches sensors on and off in accordance with this dynamics, reducing both the amount of data to collect and the computation cost. The proposed system has been built using a domain-specific language, implemented in C++, and tested on a building case study. Our experimental results show that, while achieving a considerable reduction in computation cost (up to 35%) and energy consumption (up to 31%), our system maintains high accuracy for occupancy tracking compared to state-of-the-art solutions.
Download

Paper Nr: 135
Title:

Blockchain for Artificial Intelligence: An Industry and Literature Survey

Authors:

Ciprian Paduraru, Augustin Jianu and Alin Stefanescu

Abstract: The requirements of today’s applications and their users set high demands and expectations. AI is part of these and has played an important role recently. However, the credibility of AI methods is controversial in many cases, as are data security and user privacy. On the other hand, Blockchain is a trending technology that offers the security and privacy required by many enterprise applications. The paper provides an overview of how AI and Blockchain can be integrated for mutual benefit: (a) using Blockchains to make AI systems more trustworthy and private data secure, and (b) using AI to improve Blockchain-related operations and internal algorithms. The paper includes examples from the literature, established research in the field, and practical examples from industry.
Download

Paper Nr: 137
Title:

Closer: A Tool Support for Efficient Learning Integrating Alexa and ChatGPT

Authors:

Dan-Cristian Alb and Camelia Șerban

Abstract: In a society in continuous change due to the astonishing speed at which science and technology develop, the educational system must keep up and provide students with effective learning methods integrated into intelligent platforms. In this respect, the paper proposes an e-learning platform whose underlying educational framework is built upon three well-studied principles that lead to efficient learning: elaboration, retrieval practice and feedback. The core functionalities of the proposed tool consist of a question-proposal system and a quiz-taking system, both shaped by active and collaborative learning. The proposed tool’s learning authenticity is further ensured by a dedicated voice assistant powered by Alexa and the integration with ChatGPT, an OpenAI product. As for the validity of the proposed solution, an ongoing study is in place. The study consists of integrating the proposed tool into the didactic activity of the Data Structures course taken by 1st-year students enrolled in the Mathematics and Computer Science undergraduate program offered by Babeş-Bolyai University (Cluj-Napoca, Romania).
Download

Paper Nr: 2
Title:

Design Patterns for Monitoring and Prediction Machine Learning Systems: Systematic Literature Review and Cluster Analysis

Authors:

Richard May, Tobias Niemand, Paul Scholz and Thomas Leich

Abstract: Although machine learning methods for industrial maintenance systems have already been well described in recent years, their practical implementation is only slowly taking place. One of the reasons is a lack of comparable analyses of machine learning systems. To address this gap, we first conducted a systematic literature review (2012–2021) of 104 monitoring and prediction systems. Second, we extracted 5 design patterns (i.e., high-level construction manuals) based on a k-means cluster analysis. Our results show that monitoring and prediction systems mainly differ in their choice of operations. However, they usually share similar learning strategies (i.e., supervised learning) and tasks (i.e., classification, regression). With our work, we aim to help researchers and practitioners to understand common characteristics, contexts, and trends.
Download
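The k-means step behind the pattern extraction can be sketched in plain Python. The feature encoding below (learning strategy, task, and operation type as binary features) is a hypothetical stand-in for the review's actual coding scheme, not the paper's data:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical encoding of surveyed systems as
# (supervised?, classification?, monitoring?) binary features.
systems = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1),   # monitoring-leaning systems
    (1, 1, 0), (1, 0, 0), (1, 0, 0),   # prediction-leaning systems
]
centroids, clusters = kmeans(systems, k=2)
```

In practice the number of clusters (here fixed at 2; the paper extracted 5 patterns) would be chosen with a criterion such as the elbow method or silhouette scores.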

Paper Nr: 10
Title:

ODRL-Based Resource Definition in Business Processes

Authors:

Zakaria Maamar, Amel Benna, Minglin Li, Huiru Huang and Yang Xu

Abstract: Despite the popularity of the Resource-as-a-Service (RaaS) model, it falls short of providing low-level, flexible control over resources in terms of which categories of users can use them, when and where they can use them, and how much they need to pay for them. To provide such control, this paper presents two intermediary models between resources and services, referred to as Resource-as-an-Asset (RaaA) and Asset-as-a-Service (AaaS), respectively. First, resource-related constructs are mapped onto asset-related constructs; then, the same asset-related constructs are mapped onto service-related constructs. A projection operator is developed to support these construct mappings based on a set of pre-defined rules. To illustrate asset-related constructs, the paper resorts to the Open Digital Rights Language (ODRL), which defines what is permitted, forbidden, or obliged over an asset. A system implementing the construct mappings is presented in the paper as well.
Download
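A rough flavour of mapping resource-related constructs onto ODRL-style asset constructs can be given in Python. The construct fields, rule set, and example resource below are illustrative assumptions, not the paper's actual projection operator:

```python
# Illustrative sketch: project a resource description onto an ODRL-like
# policy with permission and prohibition rules. Field names and mapping
# rules are assumptions for the example, not the paper's definitions.

def project_resource_to_asset(resource):
    """Map resource-level fields onto ODRL-style permission/prohibition rules."""
    policy = {
        "@type": "Policy",
        "target": resource["name"],
        "permission": [],
        "prohibition": [],
    }
    for category in resource.get("allowed_users", []):
        policy["permission"].append({
            "action": "use",
            "assignee": category,
            # Usage is permitted only before the resource's expiry date.
            "constraint": {"leftOperand": "dateTime",
                           "operator": "lt",
                           "rightOperand": resource.get("available_until")},
        })
    for category in resource.get("blocked_users", []):
        policy["prohibition"].append({"action": "use", "assignee": category})
    return policy

printer = {
    "name": "3d-printer-01",
    "allowed_users": ["staff"],
    "blocked_users": ["guest"],
    "available_until": "2023-12-31T23:59:59Z",
}
policy = project_resource_to_asset(printer)
```

The resulting dictionary follows the shape of an ODRL policy in its JSON-LD serialization: a target asset with permission and prohibition rules, each naming an action, an assignee, and optional constraints.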

Paper Nr: 11
Title:

How to Make IoT Sensitive to Privacy? An Approach Based on ODRL and Illustrated With WoT TD

Authors:

Zakaria Maamar, Amel Benna, Yang Xu, Mohamed A. Serhani, Minglin Li, Huiru Huang, Wassim Benadjel and Nacereddine Sitouah

Abstract: Despite the multiple advantages of the Internet-of-Things (IoT), many users remain skeptical, considering IoT yet another disruptive information and communication technology that is “silently” invading their privacy, with the risk of having their habits, preferences, and choices exposed publicly. To mitigate this silent invasion, this paper looks into innovative ways of making IoT sensitive to privacy by, first, allowing users to explicitly express what is permitted, forbidden, and obliged over their personal data using the Open Digital Rights Language (ODRL) and, second, adjusting things’ specifications, such as the Web-of-Things Thing Description (WoT TD), so that things act according to these users’ permissions, prohibitions, and obligations. A system implementing and demonstrating the blend of ODRL with WoT TD is presented, based on a case study capturing privacy concerns in a center for elderly people.
Download

Paper Nr: 48
Title:

Nagare Media Engine: Towards an Open-Source Cloud- and Edge-Native NBMP Implementation

Authors:

Matthias Neugebauer

Abstract: Making efficient use of cloud and edge computing resources in multimedia workflows that span multiple providers poses a significant challenge. Recently, MPEG published ISO/IEC 23090-8 Network-Based Media Processing (NBMP), which defines APIs and data models for network-distributed multimedia workflows. This standardized way of describing workflows over various cloud providers, computing models and environments will benefit researchers and practitioners alike. A wide adoption of this standard would enable users to easily optimize the placement of tasks that are part of the multimedia workflow, potentially leading to an increase in the quality of experience (QoE). As a first step towards a modern open-source cloud- and edge-native NBMP implementation, we have developed the NBMP workflow manager Nagare Media Engine based on the Kubernetes platform. We describe its components in detail and discuss the advantages and challenges involved with our approach. We evaluate Nagare Media Engine in a test scenario and show its scalability.
Download

Paper Nr: 70
Title:

A Topic Modelling Method for Automated Text Analysis of the Adoption of Enterprise Risk Management

Authors:

Hao Lu, Xiaoyu Liu and Hai Wang

Abstract: This paper presents a topic modelling method for automated text analysis of the adoption of enterprise risk management by publicly traded firms. The topic modelling method applies the Latent Dirichlet Allocation algorithm to corporate annual financial disclosures to identify whether firms have adopted enterprise risk management. The preliminary results indicate that the firms that have adopted enterprise risk management show a smaller reduction in daily abnormal returns during the recession period of the COVID-19 financial market shock in 2020 (the first quarter of 2020, when the stock market crashed) and a larger increase in daily abnormal returns during the recovery period (the second and third quarters of 2020, when the stock market recovered). Moreover, there is no evidence that the adoption of enterprise risk management reduces the volatility of stock returns of publicly traded firms during the COVID-19 financial market shock in 2020.
Download
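The LDA step can be sketched with scikit-learn on toy disclosure snippets. The corpus, vocabulary, and topic count here are illustrative, and the library choice is an assumption — the abstract does not state the authors' toolchain:

```python
# Minimal LDA sketch on toy "disclosure" snippets. The paper's actual
# corpus is corporate annual financial disclosures; this is a stand-in.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

disclosures = [
    "enterprise risk management framework oversees risk appetite and exposure",
    "the chief risk officer leads enterprise risk management integration",
    "revenue grew due to strong product sales in the retail segment",
    "quarterly revenue and product margins improved across segments",
]

# Bag-of-words term counts, then a 2-topic LDA fit.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(disclosures)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)   # one topic-mixture row per document
```

In the paper's setting, a firm's disclosure would then be flagged as ERM-adopting when its topic mixture loads heavily on a topic whose top words concern enterprise risk management.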

Paper Nr: 72
Title:

An Analysis of Improving Bug Fixing in Software Development

Authors:

Daniel Caliman, Valentina David and Alexandra Băicoianu

Abstract: When a defect arises during a software development process, architects and programmers spend significant time trying to identify whether any similar defects were identified during past assignments. To efficiently address a software issue, the developer must understand the context within which a software defect is reproducible and how it manifests itself. Another important aspect is how many other issues related to the same functionality were reported in the past and how they were solved. The current approach suggests using unsupervised machine learning models for natural language processing to identify past defects similar in textual content to newly reported ones. One of this study’s main benefits is ensuring a valuable knowledge transfer process that reduces the average time spent on bug fixing and enables better task distribution across team members. The innovative aspect of this research is gaining an increased ability to automate specific steps required for solving software reports.
Download
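One common unsupervised way to match a new defect report against past ones is TF-IDF weighting with cosine similarity. The sketch below is a generic illustration of that idea, not the paper's specific model, and the toy reports are invented:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights per document for a whitespace-tokenized corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

past_reports = [
    "login page crashes when password field is empty",
    "report export times out for large date ranges",
    "profile picture upload fails with oversized images",
]
new_report = "application crashes on login with empty password field"

vectors = tfidf_vectors(past_reports + [new_report])
query = vectors[-1]
scores = [cosine(query, v) for v in vectors[:-1]]
most_similar = scores.index(max(scores))   # index of the closest past report
```

Here the new crash report correctly matches the past login-crash defect; a production system would add stemming, stop-word removal, and a similarity threshold before surfacing candidates to developers.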

Paper Nr: 82
Title:

Child Vaccination in Peru: How to Support it Using Cloud Computing and Blockchain

Authors:

Arthur Valladares-Nole, Yohan Yi-Chung and Daniel Burga-Durango

Abstract: Childhood vaccination is an important pillar of a country’s public health system. However, in Peru there was no adequate tool or technology that parents or guardians of children could use to monitor or control the process. In this work, a technological solution based on Cloud Computing and Blockchain, implemented in Peru’s childhood vaccination system, is presented. It builds on a range of previous studies that analysed the processes involved in vaccination, the system used, and the main actors, in order to provide the best possible solution. The presented solution seeks to support an increase in the childhood vaccination rate in the national healthcare system. We developed a system with functionalities that address the weak points in the current process; it is deployed in the cloud to operate and connect with various services, and it takes advantage of Blockchain technology to protect the information managed in the process. The results obtained in the validation tests were satisfactory, with a minimum compliance percentage of 99%. In this way, the system with the Blockchain network complies with the security pillars of confidentiality, integrity, and availability. Non-technical aspects of safeguarding the system’s security, concerning the processes and actors involved in vaccination, were also considered. With this, the fundamental security pillars (“CIA”) can be ensured, providing a solution suited to the protection of transactional and medical data in Peru.
Download

Paper Nr: 96
Title:

Multimodal Biometric Recognition Systems based on Physiological Traits: A Systematic Mapping Study

Authors:

Hind Es-Sobbahi, Mohamed Radouane and Khalid Nafil

Abstract: Context: Biometric systems are fundamental to protecting against identity theft and illegitimate access. However, most of them are unimodal and suffer from several drawbacks, such as noisy data, intra-class variation, inter-class similarity, non-universality, and spoofing attacks. Hence, multimodal biometric recognition systems (MBRS) are increasingly in demand to overcome these limitations. Objective: This work aims to aggregate and synthesize the available studies and provide a historical and geographical classification in order to guide researchers in their choice of biometric trait (BT) combinations and image processing (IP) techniques. To this end, we conducted a Systematic Mapping Study (SMS). Method: We analysed 247 relevant articles to answer the research questions according to inclusion and exclusion criteria, namely: country, source and year of publication, BT combinations, and IP techniques employed. Results: According to our results, India tops the list; iris, fingerprint, and face are the traits most requested by researchers. Concerning the IP techniques used, the PCA algorithm leads (24%), followed equally (14%) by LBP and Deep CNN. Conclusion: This SMS was produced to guide stakeholders in choosing the most relevant combination of BT and IP methods when designing an MBRS. The findings are interesting as they provide a detailed overview of the aspects that can impact a system’s performance.
Download
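PCA, the IP technique the mapping found most used, can be sketched with NumPy. The random feature matrix below is a stand-in for real biometric features (e.g., flattened face or iris images); the sample and component counts are arbitrary:

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via SVD of the
    mean-centered data matrix."""
    centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    # Rows of Vt are principal directions, ordered by singular value.
    return centered @ Vt[:n_components].T, S

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 6))   # 20 samples, 6 raw features each
reduced, singular_values = pca(features, n_components=2)
```

In a multimodal pipeline, such a projection is typically applied per modality before fusing the reduced feature vectors for matching.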

Paper Nr: 103
Title:

Architectural Evolution Style Representation Model

Authors:

Kadidiatou Djibo, Mourad C. Oussalah and Jacqueline Konate

Abstract: In this paper, we present a representation model for software architecture evolution processes, following the principles of meta-modeling. The architecture evolution process is modeled as a software architecture evolution style. We then introduce an evolution style representation model in square. The model in square represents the evolution process through four main dimensions: the evolution actor, the evolving architectural element, the evolution time, and the evolution operation signature. Finally, we define a simplified formalism to express these architectural evolutions more conveniently.
Download

Paper Nr: 117
Title:

DESKED: An Approach for MoDel-Driven ProcESs Aware KnowledgE Discovery

Authors:

Sonya Ouali, Mohamed Mhiri and Faiez Gargouri

Abstract: The modeling of the business processes aims essentially at providing meaningful and explanatory graphical representations. Indeed, both domain knowledge and mastery of modeling techniques are necessary to identify and represent a particular business domain’s tasks, activities, and procedures. The main purpose of this paper is to introduce an MDA-based approach called DESKED to capture different dimensions of business knowledge so as to ensure comprehensive and semantically robust business modeling.

Paper Nr: 134
Title:

ACPS: Adaptive Cyber-Physical Systems in Industry 4.0

Authors:

Sebastien Ducos and Ernesto Exposito

Abstract: Nowadays, with the rapid growth of connected objects and produced data involved in industrial processes, it is increasingly difficult to design and implement efficient cyber-physical systems (CPS) meeting business needs. As a consequence, CPS architectures have to be able to integrate different heterogeneous actors (people, objects, data, services) coordinated by autonomous and self-adaptive processes capable of implementing the different business missions of a company. Moreover, with the emergence of Industry 4.0, interest in elastic services provided by cloud architectures is booming. Indeed, these architectures allow the smooth and scalable interconnection of interdependent systems in order to provide efficient solutions that facilitate the management of industrial processes. In this paper, we propose a generic architecture for Integration Platforms as a Service (iPaaS). This architecture offers key functionalities, namely integration and interoperability, as well as self-decision support. An implementation based on open-source solutions and illustrating the benefits of this proposal in the Agriculture 4.0 domain is also presented.
Download

Paper Nr: 139
Title:

Towards Computer Assisted Compliance Assessment in the Development of Software as a Medical Device

Authors:

Sadra Farshid, Bruno Lima and João P. Faria

Abstract: Medical devices (MDs) and Software as a Medical Device (SaMD) are essential for e-Health applications, but they must comply with strict standards and regulations to ensure their safety and effectiveness. However, there is a lack of tools to assist in conducting appraisals for compliance assessment and managing appraisal information. In this paper, after reviewing the most relevant standards and regulations for MD and SaMD certification, we propose a web platform to help technology companies that lack expertise in developing SaMD to create compliant and high-quality products for the e-Health market. The platform provides users with custom checklists or questionnaires depending on the selected regulations, standards, risk classes, and product parameters. Supporting a secure, incremental, and collaborative approach to completing the assessment process, the platform enables the attachment of notes, evidence, and improvement suggestions. It facilitates repeated assessments over time for data reuse and comparative analysis, enhancing the assessment process’s efficiency and effectiveness.
Download