ICSOFT 2024 Abstracts


Area 1 - Foundational and Trigger Technologies

Short Papers
Paper Nr: 55
Title:

Smart Blockchain-Based Information Flow Control for SOA

Authors:

Khaoula Braiki and Olfa Dallel

Abstract: The Internet of Things (IoT) integrates smart devices that collect real-time data from the environment. These data are leveraged to propose innovative services that transform individuals' lives in particular contexts such as smart homes. The Service-Oriented Architecture (SOA) is adopted to support the composition of services. However, service composition faces a security problem: data can illegitimately be shared with unauthorized services. This problem is called interference. The key challenge is to ensure end-to-end security that guarantees the confidentiality and integrity of data. In this paper, we secure the service binding in a blockchain-based SOA architecture and propose an approach based on information flow control to protect data confidentiality. Service providers can express service-binding requirements by considering the service provider, the domain, the trust degree, and the type of operation to perform, in order to secure the service composition. Moreover, we propose to apply one of two binding modes: a rule-based binding mode or a smart binding mode based on a machine learning decision tree model. To avoid the interference issue, we integrate a non-interference verification mechanism by assigning a security level to each service. Our smart blockchain-based information flow control approach guarantees the confidentiality and integrity of information in IoT systems.
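The ideas of a rule-based binding mode and a non-interference check via security levels can be illustrated with a hypothetical sketch. All names, rules, and thresholds below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of a rule-based binding mode and a
# non-interference check over security levels. The specific
# rules and thresholds are assumptions for illustration only.

ALLOW, DENY = "allow", "deny"

def rule_based_binding(provider, domain, trust_degree, operation,
                       trusted_domains, min_trust=0.7):
    """Evaluate a service-binding request against provider rules."""
    if domain not in trusted_domains:
        return DENY
    if trust_degree < min_trust:
        return DENY
    if operation == "write" and trust_degree < 0.9:
        return DENY  # write operations require a higher trust degree
    return ALLOW

def non_interference(source_level, target_level):
    """Data may only flow to services with an equal or higher
    security level (no high-to-low interference)."""
    return target_level >= source_level

# Usage
print(rule_based_binding("sensorA", "smart-home", 0.8,
                         "read", {"smart-home"}))   # allow
print(non_interference(source_level=2, target_level=1))  # False
```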

Paper Nr: 94
Title:

Efficiently Computing Maximum Clique of Sparse Graphs with Many-Core Graphical Processing Units

Authors:

Lorenzo Cardone, Salvatore Di Martino and Stefano Quer

Abstract: The Maximum Clique is a fundamental problem in graph theory and has numerous applications in various domains. The problem is known to be NP-hard, and even the most efficient algorithms require significant computational resources when applied to medium or large graphs. To obtain substantial acceleration and improve scalability, we enable highly parallel computations by proposing a many-core graphical processing unit implementation targeting large and sparse real-world graphs. We developed our algorithm from CPU-based solvers, redesigned the graph preprocessing step, introduced an alternative parallelization scheme, and implemented block-level and warp-level parallelism. We show that the latter performs better when the number of threads in a block cannot be fully exploited. We analyze the advantages and disadvantages of the proposed strategy and its behavior on different graph topologies. Our approach, applied to sparse real-world graph instances, shows a geometric mean speed-up of 9x, an average speed-up of over 19x, and a peak speed-up of over 70x, compared to a parallel implementation of the BBMCSP algorithm. It also obtains a geometric mean speed-up of 1.21x and an average speed-up of over 2.0x on the same graph instances compared to the parallel implementation of the LMC algorithm.
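The abstract reports both geometric mean and average speed-ups, which can diverge sharply on skewed distributions. A minimal sketch of the two summaries (the speed-up values below are invented, not the paper's measurements):

```python
import math

def geometric_mean(xs):
    """nth root of the product, computed in log space for stability."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-graph speed-ups; one large outlier is enough
# to pull the arithmetic mean far above the geometric mean,
# which is why papers often report both.
speedups = [2.0, 4.0, 8.0, 70.0]
print(round(geometric_mean(speedups), 2))   # 8.18
print(round(arithmetic_mean(speedups), 2))  # 21.0
```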

Area 2 - Software Engineering and Systems Development

Full Papers
Paper Nr: 11
Title:

A Literature Survey on Pitfalls of Open-Source Dependency Management in Enterprise

Authors:

Andrey Kharitonov, Amro Abdalla, Abdulrahman Nahhas, Daniel G. Staegemann, Christian Haertel, Christian Daase and Klaus Turowski

Abstract: Open-source dependencies are an integral part of the modern enterprise software development process for numerous technology stacks. Often, these dependencies are distributed through public repositories located outside of the secure corporate environment, which introduces numerous challenges in ensuring the security, compliance, and maintainability of the developed software. In this work, we conduct a systematic literature review focused on the pitfalls of relying on open-source dependencies. We identified 23 relevant publications between 2016 and the beginning of 2024, which point out that supply chain attacks, outdated or abandoned dependencies, licensing issues, security vulnerabilities, as well as reliance on trivial packages and complex dependency trees, are significant challenges. To tackle these, the literature commonly suggests using scanning tools to ensure security, consciously selecting dependencies, and documenting and keeping track of the open-source dependencies used in software projects. Maintaining up-to-date dependencies and actively contributing to the development of the open-source projects is also encouraged.

Paper Nr: 31
Title:

Feature Extraction, Learning and Selection in Support of Patch Correctness Assessment

Authors:

Viktor Csuvik, Dániel Horváth and László Vidács

Abstract: Automated Program Repair (APR) strives to minimize the expense associated with manual bug fixing by developing methods where patches are generated automatically and then validated against an oracle, such as a test suite. However, due to the potential imperfections in the oracle, patches validated by it may still be incorrect. A significant portion of the literature on APR focuses on this issue, usually referred to as Patch Correctness Check (PCC). Several approaches have been proposed that use a variety of information from the project under repair, such as diverse manually designed heuristics or learned embedding vectors. In this study, we explore various features obtained from previous studies and assess their effectiveness in identifying incorrect patches. We also evaluate the potential for accurately classifying correct patches by combining and selecting learned embeddings with engineered features, using various Machine Learning (ML) models. Our experiments demonstrate that not all features are equally important, and that selecting the right ML model also has a huge impact on overall performance. For instance, using all 490 features with a decision tree classifier achieves a mean F1 value of 64% over 10 independent trainings, while after an in-depth feature and model selection, the MLP classifier with the selected 43 features produces a better performance of 81% F1. The empirical evaluation shows that this model is able to correctly classify samples on a dataset containing 903 labeled patches with 100% precision and 97% recall at its peak, which is complementary performance compared to state-of-the-art methods. We also show that independent trainings can exhibit varying outcomes, and we propose how to improve the stability of model trainings.
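The peak figures quoted above (100% precision, 97% recall) combine into a single F1 score via the standard formula. A small sketch of that computation; the raw counts below are illustrative, chosen only to match those percentages:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts matching the reported peak:
# 100% precision (no false positives), 97% recall.
p, r, f1 = precision_recall_f1(tp=97, fp=0, fn=3)
print(p, r, round(f1, 4))  # 1.0 0.97 0.9848
```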

Paper Nr: 48
Title:

AsmDocGen: Generating Functional Natural Language Descriptions for Assembly Code

Authors:

Jesia Q. Yuki, Mohammadhossein Amouei, Benjamin M. Fung, Philippe Charland and Andrew Walenstein

Abstract: This study explores the field of software reverse engineering through the lens of code summarization, which involves generating informative and concise summaries of code functionality. A significant aspect of this research is the application of assembly code summarization in malware analysis, highlighting its critical role in understanding and mitigating potential security threats. Although there have been recent efforts to develop code summarization techniques for high-level programming languages, to the best of our knowledge, this study is the first attempt to generate comments for assembly code. For this purpose, we first built a carefully curated dataset of assembly function-comment pairs. We then focused on automatic assembly code summarization using transfer learning with pre-trained natural language processing (NLP) models, including BERT, DistilBERT, RoBERTa, and CodeBERT. The results of our experiments show a notable advantage of CodeBERT: despite its initial training on high-level programming languages alone, it excels in learning assembly language, outperforming the other pre-trained NLP models.

Paper Nr: 68
Title:

Towards an Approach for Integrating Risk Management Practices into Agile Contexts

Authors:

Fernando V. Garcia, Jean R. Hauck, Fernanda R. Hahn and Raul S. Wazlawick

Abstract: Risk management has been perceived as important to assist in the delivery of software products that meet the expected scope, schedule, and costs. In traditional plan-driven software development, risk management includes identifying, analyzing, planning responses to, monitoring, and controlling risks. In agile methods, however, risks are typically managed implicitly, through practices and values that tend to keep risks under control. In many contexts, however, such as software development for highly regulated domains like health and finance, this implicit risk management may not be enough. This paper presents action research on the introduction of explicit risk management practices in a healthcare software development organization that uses agile methods. Based on identified risk management practices used in agile contexts, the research intervention is planned and applied in two teams during three iterations, and the collected data are evaluated and interpreted. The results of the study provide evidence that the agile risk management practices adopted were effective in identifying and mitigating risks and improving project results, without negatively impacting the team’s productivity or the agile values adopted by the organization. As a further result of the study, a possible approach to introduce risk management practices in agile contexts emerges.

Paper Nr: 90
Title:

The Game of One More Idea: Gamification of Managerial Group Decision-Making in Software Engineering

Authors:

Hannes Salin

Abstract: Gamification is widely used and explored for improving learning and increasing motivation. However, it has not been further explored in the context of software engineering managerial decision making, as a supportive tool for engaging both the individual and the team when making managerial group decisions. By applying a framework for game and study design, we develop a card-based game for managerial group decision making, specifically for software engineering management teams. Our study is an industrial exploratory case study at the Swedish Transport Administration, where the decision-making game is tested on a management team. The aim of the study was to evaluate the perceived effects on the team’s engagement and overall decision-making process. Our experiments showed that the perceived engagement and confidence in making group decisions using the game improved. Although it is difficult to draw general conclusions, the results indicate that the decision-making process could benefit from using the card game.

Paper Nr: 92
Title:

Detecting and Resolving Bad Organisational Smells for Microservices

Authors:

Michele Agostini, Jacopo Soldani and Antonio Brogi

Abstract: The development and maintenance of microservices should be decentralised. The microservices in an application should be partitioned among DevOps teams so as to reduce cross-team interactions, which are costly and slow the delivery of updates. To this end, this paper identifies three bad organisational smells for microservices, which may denote decentralisation lapses in DevOps team assignments for microservice applications, together with the organisational refactorings that allow resolving them. We then introduce a model-driven method to automatically detect and resolve bad organisational smells in a microservice application. The proposed method is based on extending µTOSCA, an existing metamodel for specifying microservice applications, to also support modelling the DevOps team assignment of microservices. Finally, we illustrate the feasibility and usefulness of the proposed model-driven method by providing its prototype implementation and reporting on a controlled experiment, respectively.

Paper Nr: 104
Title:

Smart Widening for the Polyhedral Analysis

Authors:

Yassamine Seladji

Abstract: The polyhedron abstract domain is a cornerstone of static program analysis, providing a powerful mathematical framework for reasoning about the numerical properties of programs. It can express a large number of properties, which makes it complex and time-consuming. In polyhedral analysis, the widening operator with thresholds offers a good compromise between precision and execution time. However, ensuring both termination and precision in the analysis process remains a challenge, particularly when dealing with complex programs and high-dimensional state spaces. This paper proposes a novel approach to improve polyhedral analysis by dynamically computing relevant widening thresholds, thereby improving both the termination and precision of the analysis. We demonstrate the effectiveness of our approach through an experimental evaluation on a variety of benchmark programs. Our results show significant improvements in both analysis termination and precision.
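Widening with thresholds can be sketched on the simpler interval domain, a standard stand-in for the polyhedral operator. The threshold list here is fixed and illustrative; the paper's contribution is precisely computing such thresholds dynamically:

```python
# Classic widening-with-thresholds on the interval domain.
# Unstable bounds jump to the nearest threshold instead of
# straight to infinity, trading some speed for precision.

def widen(old, new, thresholds):
    """Widen interval `old` by `new`: stable bounds are kept,
    unstable bounds are relaxed to the nearest enclosing threshold
    (or to +/- infinity when no threshold remains)."""
    lo = old[0] if new[0] >= old[0] else max(
        (t for t in thresholds if t <= new[0]), default=float("-inf"))
    hi = old[1] if new[1] <= old[1] else min(
        (t for t in thresholds if t >= new[1]), default=float("inf"))
    return (lo, hi)

# A loop counter growing 0..1, then 0..2, stabilises at the
# threshold 10 instead of losing all upper-bound information.
print(widen((0, 1), (0, 2), thresholds=[0, 10, 100]))  # (0, 10)
```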

Short Papers
Paper Nr: 37
Title:

An Evaluation of Risk Management Standards and Frameworks for Assuring Data Security of Medical Device Software AI Models

Authors:

Buddhika Jayaneththi, Fergal M. Caffery and Gilbert Regan

Abstract: Data is the backbone of Artificial Intelligence (AI) applications, including Medical Device Software (MDS) AI models, which rely on sensitive health data. Assuring the security of this sensitive health data is a key requirement for MDS AI models, and there should be a structured way to manage the risk caused by data security compromises. Implementing a security risk management standard/framework is an effective way to develop a solid baseline for managing security risks, measuring the effectiveness of security controls, and meeting compliance requirements. In this paper, nine risk management standards/frameworks in the data/information security, AI, Medical Devices (MDs), and AI-enabled MDs domains are evaluated to identify their gaps and implementation challenges when applying them to assure data security of MDS AI models. The results show that currently no standard/framework specifically addresses data security risk management of MDS AI models, and that existing standards/frameworks have several gaps, such as the complexity of the implementation process; a lack of detailed threat and vulnerability catalogues; the lack of a proper method for risk calculation/estimation; and a lack of risk controls and control implementation details. These gaps call for the development of a new data security risk management framework for MDS AI models.

Paper Nr: 46
Title:

An Empirical Examination of the Technical Aspects of Data Sovereignty

Authors:

Julia Pampus and Maritta Heisel

Abstract: Self-determination and autonomy in data sharing, referred to in recent research as data sovereignty, has attracted increasing interest in the context of industrial ecosystems. Its practical implementation involves organisational, regulatory, legal, and particularly technical aspects. Previous work has not yet focused on a structured analysis of the technical characteristics of systems used in data sharing with respect to data sovereignty. In this paper, we therefore elicit which system requirements support the data sovereignty of a data sharing participant, starting from privacy protection goals, the FAIR principles, and ISO/IEC 25010:2011. To address this, we conducted a qualitative study in the form of an online questionnaire. We asked 18 domain experts to evaluate selected system criteria for their impact and relevance to the implementation of data sovereignty. Our work has resulted in a set of 22 functional requirements that can be used for designing data sharing systems. Subsequently, we discuss our findings, compare them with related work, and address further research.

Paper Nr: 54
Title:

Six-Layer Industrial Architecture Applied to Predictive Maintenance

Authors:

Fernanda P. Guidotti, Jesimar S. Arantes, Márcio S. Arantes, Albeiro E. Bedoya and Claudio M. Toledo

Abstract: In the context of the digital transformation defining Industry 4.0, the integration of Industrial Artificial Intelligence (I-AI) emerges as a transformative element, promoting the development, validation, and deployment of machine learning algorithms in industrial applications. As sensor technologies advance, reducing costs and expanding the capability for direct data collection from machines, there arises a need for system architectures that not only support but also optimize these data collection and analysis processes. This paper introduces a novel reference architecture for I-AI, which advances beyond the traditional 5-layer (5C) framework by adding a sixth layer, named "Consciousness". This new layer is designed to feed the acquired knowledge back to the previous layers, significantly enhancing control and optimization through AI systems. The proposed architecture, termed 6C, comprises the layers of Connection, Conversion, Cyber-Physical, Cognition, Configuration, and, finally, Consciousness. The introduction of the Consciousness layer marks a significant innovation in the literature, offering a mechanism by which the architecture can autonomously perceive the state and needs of the industrial system. Validated in an industrial case study, the 6C architecture demonstrated improved performance when incorporating the Consciousness layer, highlighting its effectiveness in enhancing operational efficiency and decision-making within complex industrial contexts.

Paper Nr: 61
Title:

The End of Mobile Software Engineering (As We Know It)

Authors:

Robin Nunkesser

Abstract: Mobile Software Engineering as a separate research field has been relevant to speed up Software Engineering research in the area of mobile devices and applications. However, there are strong signs that the need for separate research in Mobile Software Engineering is coming to an end. This is not due to a decline in the importance of mobile devices and applications, but due to the impact of mobile on Software Engineering in its entirety. Existing mobile-specific research may be reconsidered for its implications on Software Engineering in general. Future research can benefit from considering mobile and non-mobile devices and applications wherever possible.

Paper Nr: 64
Title:

Towards an Ethereum Smart Contract Fuzz Testing Tool

Authors:

Mariam Lahami, Moez Krichen, Mohamed A. Mnassar, Racem Mrabet and Mohamed Ben Rhouma

Abstract: Ethereum is a widespread and well-known blockchain platform that makes use of smart contracts. The key feature of these computer programs is that, once deployed, they can no longer be updated. Therefore, it is highly necessary to test smart contracts efficiently before their deployment. This paper presents a Web-based testing tool called LeoKai that makes it easy to automatically generate test inputs as well as unit test templates to detect bugs and vulnerabilities in Ethereum smart contracts. It helps developers perform manual UI tests by invoking smart contracts deployed on the Ganache blockchain. Furthermore, it supports black-box fuzz testing and randomly generates test inputs. Blockchain developers may use the unit test template generator to generate unit tests. The tool also includes a code coverage module that assesses function, branch, and line coverage to highlight the tests' efficiency. Finally, the prototype and its implementation details are illustrated.
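The black-box fuzzing idea, generating random inputs from a function's parameter types, can be sketched as follows. This is a generic illustration, not LeoKai's implementation; the supported type names and value ranges are assumptions:

```python
import random

# Minimal black-box fuzz sketch: random argument generation
# from a list of (assumed) Solidity-style parameter types.

def random_input(sol_type, rng):
    if sol_type == "uint256":
        return rng.randrange(0, 2**256)
    if sol_type == "bool":
        return rng.choice([True, False])
    if sol_type == "address":
        return "0x" + "".join(rng.choice("0123456789abcdef")
                              for _ in range(40))
    raise ValueError(f"unsupported type: {sol_type}")

def fuzz(signature, n, seed=0):
    """Generate n random argument tuples for a function whose
    signature is given as a list of type names. Seeding makes
    the generated test inputs reproducible."""
    rng = random.Random(seed)
    return [tuple(random_input(t, rng) for t in signature)
            for _ in range(n)]

for args in fuzz(["uint256", "bool"], n=3):
    print(args)
```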

Paper Nr: 78
Title:

A Measures-Driven Decision Support System for Managing Requirement Change in Scrum: An Empirical Evaluation

Authors:

Hela Hakim, Asma Sellami and Hanêne Ben-Abdallah

Abstract: In Scrum-based projects, precise assessments of requirement changes are crucial for effective management. A Decision Support System (DSS) can streamline managing these changes, improve collaboration, and enhance decision-making. This paper proposes a Measure-Driven Decision Support System (MD-DSS) for managing requirement changes at both functional and structural levels, using COSMIC FSM (ISO 19761) and an extended Structural Size Measurement method. The MD-DSS benefits all Scrum stakeholders, including Product Owners, Scrum Masters, Development Teams, and managers. Its performance was evaluated quantitatively and qualitatively across 15 software development projects.

Paper Nr: 101
Title:

Design of Adaptable and Secure Connectors for Software Architectures

Authors:

Juan Marcelo Gutierrez Carballo, Michael Shin and Hassan Gomaa

Abstract: This paper describes the design of adaptable and secure (AS) connectors that encapsulate security concerns and their adaptation concerns in the interaction between application components in secure software architectures. The security concerns in software architectures need to be dynamically adaptable to changing security requirements so that the architectures respond to evolving security risks. The AS connectors dynamically adapt security concerns to changing security risks and can reduce the complexity of adaptation by keeping it separate from application concerns. To validate this research, we designed and implemented AS connectors for asynchronous message communication (AMC), which adapt the security concerns of secure software architectures to changing security risks.

Paper Nr: 43
Title:

Automated Generation of Web Application Front-end Components from User Interface Mockups

Authors:

Oksana Nikiforova, Kristaps Babris and Farshad Mahmoudifar

Abstract: In modern web development, the design-to-code process is a critical bottleneck, often characterized by inefficiencies and inconsistencies between initial design concepts and implemented user interfaces. Bridging this gap demands innovative solutions that streamline the translation of design artefacts, such as wireframes and mockups, into functional front-end components. The generation of user interface (UI) elements has long been in the spotlight of researchers. Several solutions support the automation of different transformation processes and elements, but the fact that common methods and tools for generating UIs are still not widely used shows that this problem has not yet been solved. In this paper, we propose a model-driven approach to automate the generation of front-end source code from wireframe representations, thereby facilitating rapid prototyping and enhancing collaboration between designers and developers. The paper offers a conceptual solution for generating front-end components from mockups developed in UI design tools where a formal definition of the UI elements' source code is available. The offered solution is validated on an abstract practical example, where mockups designed in the draw.io tool are used as a source for generating the front-end components of a web application deployed in the AngularJS framework.

Paper Nr: 65
Title:

Asserting Frame Properties

Authors:

Yoonsik Cheon, Bozhen Liu and Carlos Rubio-Medrano

Abstract: Frame axioms and properties are crucial for ensuring the correctness of operations by defining which parts of a program’s state may change during operation execution. Despite their significance, there has been no known method for asserting frame properties of operations for runtime checks. This paper introduces a practical approach that utilizes abstract models and executable assertions to effectively check frame properties at runtime. By defining abstract models that capture relevant state variables and their relationships, programmers can specify abstractly the parts of an object’s state that may change during operation execution. These frame properties, specified in terms of abstract models and embedded as executable assertions within the code, enforce behavioral constraints and improve the readability, maintainability, and reusability of the assertion code. Additionally, the approach supports the concept of observable side effects.

Paper Nr: 74
Title:

Overcoming Obstacles in Model-Driven Engineering: Lessons from the Software Industry

Authors:

Sayeda R. Akthar, Muhammad R. Islam, Marzan B. Hasan, Mahpara S. Siddiqua, Shadat I. Haque, Jamil A. Saad, Farzana Sadia and Mahady Hasan

Abstract: Software modeling, as used in Model-Driven Engineering (MDE), is the process of abstracting software systems using formal or informal notations to help with communication, analysis, and design. This study looks into the difficulties Bangladesh’s software industry faces when using model-based engineering, or system modeling. A survey of several companies in Bangladesh was carried out to gather opinions from professionals such as software developers, project managers, test engineers, and architects. The data-gathering instrument was Google Forms, and the questionnaire’s design was informed by the extant literature. Results show that about 75.1% of respondents use system modeling in their projects, and 56.3% of them say it helps to streamline project development. A high Cronbach’s alpha value (0.93007) indicates strong internal consistency among survey items pertaining to system modeling methodologies and suggests growing adoption among software enterprises in Bangladesh. Response analysis revealed trends and patterns that were represented through data quantification. Based on these results, the paper offers suggestions for resolving the identified problems and advancing more comprehensive system modeling methodologies. The steps proposed to address the challenges in MDE include conducting in-depth research to determine the underlying causes of the problems, putting scalability techniques into practice, maintaining documentation standards, and setting up training sessions, seminars, and workshops. Such measures have the potential to make the resolution of these problems more effective.
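The internal-consistency figure quoted above is computed with the standard Cronbach's alpha formula. A small sketch of that computation; the survey scores below are invented for illustration, not the study's data (which reported alpha = 0.93007):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one score list per survey item, same respondents in
    the same order. Standard formula:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Illustrative 4-item, 5-respondent Likert-style survey.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 1],
         [4, 5, 3, 4, 2]]
print(round(cronbach_alpha(items), 3))  # 0.952
```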

Paper Nr: 87
Title:

Towards Building a Reputation-Based Microservices Trust Model Using Similarity Domains

Authors:

Zhongyi Lu, Declan T. Delaney, Tong Li and David Lillis

Abstract: Microservices have been seen as a solution for open systems, within which microservices can behave arbitrarily. This requires the system to have strong trust management. However, existing microservices trust models cannot fully support open systems. In this paper, we propose a reputation-based trust model designed for open microservices that groups similar microservices within the same “similarity domain” and includes a trust bootstrapping process and a comprehensive trust computation method. Our proposal introduces a new concept called “trust balancing” to ensure that all microservices can be fairly incorporated into the operation of the system. We also introduce the design of an evaluation plan to demonstrate the suitability of the proposed model for open microservice systems.

Paper Nr: 100
Title:

Synthesizers: A Meta-Framework for Generating and Evaluating High-Fidelity Tabular Synthetic Data

Authors:

Peter Schneider-Kamp, Anton D. Lautrup and Tobias Hyrup

Abstract: Synthetic data is expected by many to have a significant impact on data science by enhancing data privacy, reducing biases in datasets, and enabling the scaling of datasets beyond their original size. However, the current landscape of tabular synthetic data generation is fragmented, with numerous frameworks available, only some of which have integrated evaluation modules. synthesizers is a meta-framework that simplifies the process of generating and evaluating tabular synthetic data. It provides a unified platform that allows users to select generative models and evaluation tools from open-source implementations in the research field and apply them to datasets of any format. The aim of synthesizers is to consolidate the diverse efforts in tabular synthetic data research, making it more accessible to researchers from different sub-domains, including those with less technical expertise, such as health researchers. This could foster collaboration and increase the use of synthetic data tools, ultimately leading to more effective research outcomes.

Paper Nr: 109
Title:

A Reference Architecture for Dynamic IoT Environments Using Collaborative Computing Paradigms (CCP-IoT-RA)

Authors:

Prashant G. Joshi and Bharat M. Deshpande

Abstract: Collaborative Computing Paradigms (CCP) have shown the potential to overcome challenges in dynamic IoT environments. CCP’s features are interconnection and interplay, dynamic distribution of data processing, fluidity of computing across paradigms, storage and data management across participating paradigms, and scalability and extendability of the system’s software architecture. Reference architectures and models are known to provide a blueprint that can be applied across application domains and can thus potentially accelerate the development and deployment of systems software. Using the features of CCP, this paper proposes a reference architecture for dynamic IoT environments using collaborative computing paradigms (CCP-IoT-RA). The proposed CCP-IoT-RA has been applied to commercial and telematics applications such as building automation and vehicle and driver behaviour analysis, demonstrating its versatility and effectiveness.

Area 3 - Software Systems and Applications

Full Papers
Paper Nr: 22
Title:

Improving Robustness of Satellite Image Processing Using Principal Component Analysis for Explainability

Authors:

Ulrike Witteck, Jan Stambke, Denis Grießbach and Paula Herber

Abstract: Finding test cases that cause mission-critical behavior is crucial to increase the robustness of satellite on-board image processing. Using genetic algorithms, we are able to automatically search for test cases that provoke such mission-critical behavior in a large input domain. However, since genetic algorithms generate new test cases using random mutations and crossovers in each generation, they do not provide an explanation of why certain test cases are chosen. In this paper, we present an approach to increase the explainability of genetic test generation algorithms using principal component analysis together with visualizations of its results. The analysis gives deep insights into both the system under test and the test generation. With that, robustness can be significantly increased because we 1) better understand the system under test as well as the selection of certain test cases, and 2) can compare the generated explanations with the expectations of domain experts to identify cases with unexpected behavior and, thus, errors in the implementation. We demonstrate the applicability of our approach with a satellite on-board image processing application.
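The principal component analysis step can be sketched generically: project generated test cases (rows = test cases, columns = input parameters) onto their principal components to see which parameters dominate the variance. The random data and dimensions below are placeholders, not the paper's test cases:

```python
import numpy as np

# Generic PCA via the eigendecomposition of the covariance matrix.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 6))          # 50 test cases, 6 parameters
Xc = X - X.mean(axis=0)               # centre each parameter
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # largest variance first
explained = eigvals[order] / eigvals.sum()
components = eigvecs[:, order]
scores = Xc @ components[:, :2]       # 2-D view for visualisation
print(explained.round(3))
print(scores.shape)                   # (50, 2)
```

Plotting `scores` coloured by fitness, as the visualization step suggests, would show which regions of the input space the genetic search concentrates on.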

Paper Nr: 28
Title:

Towards Accurate Cervical Cancer Detection: Leveraging Two-Stage CNNs for Pap Smear Analysis

Authors:

Franklin C. Paucar, Carlos M. Bojorque, Iván Reyes-Chacón, Paulina Vizcaino-Imacaña and Manuel E. Morocho-Cayamcela

Abstract: Cervical cancer is a type of cancer that occurs in the cervix. It is caused by the abnormal growth of cells in the cervix and is often triggered by the human papillomavirus. Symptoms can include abnormal vaginal bleeding and pelvic pain, among others. It is usually diagnosed with a pelvic exam, a biopsy, and a Papanicolaou (pap) test. During the test, a small sample of cells is taken from the cervix and examined under a microscope to look for abnormal cells. The test is usually done during a pelvic exam in a doctor’s office or clinic, where human error can lead to a deficit of service or a misdiagnosis. In Ecuador in particular, cervical cancer has the second-highest incidence and mortality among cancers. One of the obstacles in Latin America to increasing the number of cervical cancer screenings is the time needed to deliver results. This paper proposes a pre-trained artificial neural network and a much larger database than the baseline work, allowing us to obtain better results and a network with more accurate predictions of where malignant cells that could lead to cervical cancer may be located. The process is similar to the original one, where the analysis of the Papanicolaou tests is carried out in two stages: the first finds the coordinates of the anomalous cells observed within each image of our dataset, and the second obtains a much higher-resolution image for each of these coordinates, thus enabling a much more reliable diagnosis for each patient.
Download

Paper Nr: 44
Title:

A Webcam Artificial Intelligence-Based Gaze-Tracking Algorithm

Authors:

Saul Figueroa, Israel Pineda, Paulina Vizcaíno, Iván Reyes-Chacón and Manuel E. Morocho-Cayamcela

Abstract: Nowadays, technological advancements supporting human-computer interaction have had a big impact. However, most of those technologies are expensive. For that reason, building a webcam gaze-tracking system represents a computationally cost-effective approach. The gaze-tracking technique focuses on tracking the gaze direction and estimating its coordinates on a computer screen to follow the user's visual attention. This research presents a gaze estimation approach to predict the user's gaze direction using a webcam artificial intelligence-based gaze-tracking algorithm. The purpose of this paper is to train a convolutional neural network model capable of predicting a 3D gaze vector and then estimating the 2D gaze position coordinates on a computer screen. To perform this task, three steps are followed: 1) pre-process the input by cropping facial and eye images from the MPIIFaceGaze dataset; 2) train a customized network, based on a ResNet-50 pre-trained on ImageNet, for gaze vector prediction; 3) convert the 3D gaze vectors to 2D points of gaze on the screen. The results demonstrate that our model outperforms the state-of-the-art VGG-16 model on the same dataset by up to ~33%.
Download
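The final step of the pipeline, converting a predicted 3D gaze vector into a 2D screen coordinate, amounts to a ray-plane intersection. A minimal sketch under assumed conventions (the coordinate frame, pixel density, and function name are illustrative, not taken from the paper):

```python
def gaze_to_screen(eye_mm, gaze_dir, px_per_mm=3.78, screen_w=1920, screen_h=1080):
    """Intersect a 3-D gaze ray with the screen plane z = 0 and convert the
    hit point from millimetres to pixel coordinates.
    eye_mm: eye position in mm (z axis pointing from the screen toward the user).
    gaze_dir: predicted 3-D gaze vector; must point toward the screen (dz < 0)."""
    ox, oy, oz = eye_mm
    dx, dy, dz = gaze_dir
    t = -oz / dz                      # ray parameter where the ray hits z = 0
    x_mm, y_mm = ox + t * dx, oy + t * dy
    # map the physical hit point to pixels, origin at the screen centre
    return (screen_w / 2 + x_mm * px_per_mm,
            screen_h / 2 - y_mm * px_per_mm)

# an eye 600 mm in front of the screen centre, looking straight ahead
point = gaze_to_screen((0.0, 0.0, 600.0), (0.0, 0.0, -1.0))  # -> (960.0, 540.0)
```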

Paper Nr: 70
Title:

From IoT Servitization to IoT Assetization

Authors:

Zakaria Maamar, Amel Benna, Vanilson Buregio and David Alves

Abstract: This paper discusses the stages and techniques that guide the conversion of IoT-compliant things into assets and then services. While existing initiatives focus on the conversion of things into services as part of the servitization process, there are no initiatives that focus on the conversion of things into assets as part of the assetization process. Assetization permits modeling things from a management perspective in terms of depreciation over time, transferability across locations, disposability after use, and convertibility across platforms. Thanks to assetization, things would provide economic, informational, operational, and regulatory benefits to their owners (whether moral or juridical). A system demonstrating the technical feasibility of thing assetization is also discussed in the paper.
Download

Paper Nr: 77
Title:

Optimisation of Ceramic Kiln Loading Problem Using Multi-Objective Genetic Algorithm

Authors:

Derya Deliktaş and Ayşe Kaygısız

Abstract: Efficient resource utilisation is paramount for boosting productivity and competitiveness within industrial contexts. In ceramic manufacturing, the Ceramic Kiln Loading Problem is critical, wherein the optimal arrangement of ceramic products within kilns significantly influences production efficiency. This study aims to enhance efficiency by maximising the utilisation of the oven vehicle through optimal loading of the ordered products. To achieve this objective, the Genetic Algorithm has been integrated with weighted sum and conic scalarisation methods, and the results obtained from each method have been compared. Additionally, since the algorithm’s parameters can significantly influence its performance, parameter tuning has been conducted using the irace method. The findings corroborate the superiority of results obtained by integrating the Genetic Algorithm with weighted sum scalarisation.
Download
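The weighted sum scalarisation that the abstract reports as superior collapses the multiple loading objectives into a single fitness value that the Genetic Algorithm can maximise. A minimal sketch under stated assumptions (the two normalised objectives, the weights, and the function name are illustrative, not the paper's):

```python
def weighted_sum_fitness(objectives, weights):
    """Scalarise a multi-objective kiln-loading evaluation into one fitness
    value: f(x) = sum_i w_i * f_i(x), with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, objectives))

# hypothetical normalised objectives: kiln-space utilisation and order priority
loading_a = (0.92, 0.60)
loading_b = (0.75, 0.95)
w = (0.7, 0.3)
# the GA's selection step would prefer the loading with the higher scalar fitness
best = max([loading_a, loading_b], key=lambda obj: weighted_sum_fitness(obj, w))
```

Conic scalarisation, the alternative the study compares against, replaces this linear combination with a weighted sum augmented by a penalty on the deviations, which lets it reach non-convex parts of the Pareto front.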

Paper Nr: 84
Title:

Evaluating the Performance of LLM-Generated Code for ChatGPT-4 and AutoGen Along with Top-Rated Human Solutions

Authors:

Ashraf Elnashar, Max Moundas, Douglas C. Schmidt, Jesse Spencer-Smith and Jules White

Abstract: In the domain of software development, making informed decisions about the utilization of large language models (LLMs) requires a thorough examination of their advantages, disadvantages, and associated risks. This paper provides several contributions to such analyses. It first conducts a comparative analysis, pitting the best-performing code solutions selected from a pool of 100 generated by ChatGPT-4 against the highest-rated human-produced code on Stack Overflow. Our findings reveal that, across a spectrum of problems we examined, choosing from ChatGPT-4's top 100 solutions proves competitive with or superior to the best human solutions on Stack Overflow. We next delve into the AutoGen framework, which harnesses multiple LLM-based agents that collaborate to tackle tasks. We employ prompt engineering to dynamically generate test cases for 50 common computer science problems, both evaluating the solution robustness of AutoGen vs ChatGPT-4 and showcasing AutoGen's effectiveness in challenging tasks and ChatGPT-4's proficiency in basic scenarios. Our findings demonstrate the suitability of generative AI in computer science education and underscore the subtleties of their problem-solving capabilities and their potential impact on the evolution of educational technology and pedagogical practices.
Download

Paper Nr: 85
Title:

A Lightweight, Computation-Efficient CNN Framework for an Optimization-Driven Detection of Maize Crop Disease

Authors:

Shahinza Manzoor, Muhammad R. Mughal, Syed A. Irtaza, Saif U. Islam and Jalil Boudjadar

Abstract: Detecting and mitigating crop diseases can prevent significant yield losses and economic damage. However, most state-of-the-art solutions can be computationally expensive. This paper presents an efficient, lightweight multi-layer convolutional neural network (ML-CNN) to identify maize crop diseases. The proposed model aims to improve disease identification accuracy and reduce computational costs. The model was optimized by adjusting parameters, setting convolutional layers, changing the combinations of pooling layers, and adding dropout layers. By optimizing the model architecture, we create a software tool that can be deployed in resource-limited environments, an ideal choice for embedded platforms. The PlantVillage dataset, including images of healthy leaves and leaves affected by two diseases, was used to train and test the model implementation. The performance of the proposed model was compared with pre-trained models such as InceptionV3, VGG16, VGG19, and ResNet50. The analysis results show that the proposed model improved identification accuracy by 16.32%, 1.48%, 1.28%, and 2.26%, respectively. Additionally, the proposed model achieved identification accuracy of 99.60% on the training data and 98.16% on the testing data while also converging in fewer iterations and reducing computational costs.
Download

Paper Nr: 86
Title:

Design of a New Digital Cognitive Screening Tool on Tablet: AlzVR Project

Authors:

Florian Maronnat, Guillaume Loup, Jonathan Degand, Frédéric Davesne and Samir Otmane

Abstract: Alzheimer’s disease is the leading cause of dementia worldwide and currently has no curative treatment. It represents a public health challenge with an increasing prevalence and associated costs. Usual diagnostic methods rely on extended interviews and paper tests administered by an external examiner. We aim to create a novel, quick cognitive-screening tool on a digital tablet. This program, built and edited with Unity®, runs on Android® on the Samsung Galaxy Tab S7 FE®. Composed of seven tasks inspired by the Mini-Mental Status Examination and the Montréal Cognitive Assessment, it covers several cognitive functions. The application works seamlessly offline, ensures the uniqueness of data aggregated from multiple sites, and places particular emphasis on safeguarding the confidentiality of patient information in the healthcare domain. It also allows individual site managers to access and review their specific datasets, supporting their operational efficiency and decision-making. We performed a preliminary usability assessment among 24 healthy participants with a final F-SUS score of "excellent". Participants perceived the tool as simple to use and completed the test in a mean time of 142 seconds.
Download

Paper Nr: 98
Title:

Enhancing User Experience in Games with Large Language Models

Authors:

Ciprian Paduraru, Marina Cernat and Alin Stefanescu

Abstract: This paper explores the application of state-of-the-art natural language processing (NLP) technologies to improve the user experience in games. Our motivation stems from the realization that a virtual assistant’s input during games or simulation applications can significantly assist the user in real-time problem solving, suggestion generation, and dynamic adjustments. We propose a novel framework that seamlessly integrates large-scale language models (LLMs) into game environments and enables intelligent assistants to take the form of physical 3D characters or virtual background entities within the player narrative. Our evaluation considers computational requirements, latency and quality of results using techniques such as synthetic dataset generation, fine-tuning, Retrieval Augmented Generation (RAG) and security mechanisms. Quantitative and qualitative evaluations, including real user feedback, confirm the effectiveness of our approach. The framework is implemented as an open-source plugin for the Unreal Engine and has already been successfully used in a game demo. The presented methods can be extended to simulation applications and serious games in general.
Download

Short Papers
Paper Nr: 14
Title:

Determining the Progress of a Business Object Based on its Object Instances: An Empirical Study

Authors:

Lisa Arnold, Marius Breitmayer and Manfred Reichert

Abstract: A fundamental task of any business process monitoring component is to continuously determine the progress of the running processes of an enterprise. This is particularly challenging when facing dynamic processes undergoing changes during run-time, which most likely affect the progress of the respective processes as well. This paper considers object-centric business processes, which consist of business objects and their relations. During run-time, these business objects may be instantiated multiple times to form object instances. The run-time behaviour of these object instances is manifested in terms of object lifecycles that interact with each other. For monitoring a single business object, five alternative methods are introduced, which determine the progress based on average calculations, information about the semantic object relations (hierarchical order, minimal and maximal cardinality), or event logs (if available). All methods leverage the precalculated progress of individual object instances. To evaluate the different methods, an empirical study with 65 participants was conducted. As a key observation, the majority of the participants who are experienced with process modelling and monitoring tools prefer deriving the progress of a business object from event logs. The results of this paper are fundamental for determining the progress of a holistic object-centric business process.
Download

Paper Nr: 17
Title:

Product-Line Engineering for Smart Manufacturing: A Systematic Mapping Study on Security Concepts

Authors:

Richard May, Alen J. Alex, Rakky Suresh and Thomas Leich

Abstract: The increasing configurability of smart manufacturing software systems introduces multiple attack vectors, mainly due to the systems' growing complexity. Although there is an ever-increasing risk of vulnerabilities caused by configuration-related issues being exploited in this context, there is currently no overview of the state of research on security within smart manufacturing software systems, especially those based on product-line engineering. To address this gap, we conducted a systematic mapping study of 43 publications at the intersection of security, smart manufacturing, and product-line engineering (2014–2023). Besides an overview of which properties have been researched, we identified nine literature gaps to create awareness of the current state of research and to guide future research. Overall, we highlight that there is a need for more research on (configurable) security concepts in PLE. Current approaches often address security as a separate (non-functional) requirement rather than integrating it into the PLE framework or mapping it to the unique properties of smart manufacturing. Concrete vulnerabilities, threats, risks, countermeasures, and patterns are typically hardly described, which may have fatal consequences, especially in the context of configuration-related issues in cyber-physical systems.
Download

Paper Nr: 26
Title:

Multimodal Approach Based on Autistic Child Behavior Analysis for Meltdown Crisis Detection

Authors:

Marwa Masmoudi, Salma K. Jarraya and Mohamed Hammami

Abstract: This paper presents an innovative method for addressing the challenge of recognizing and responding to meltdown crises in autistic children. It focuses on integrating information from emotional and physical modalities, employing multimodal fusion with an emphasis on the early fusion technique. Existing literature outlines three fusion techniques – early, late, and hybrid fusion, each with unique advantages. Due to the distinct nature of datasets representing emotions and physical activities, late and hybrid fusion were considered impractical. Therefore, the paper adopts the early fusion method and introduces a Multi-modal CNN model architecture for efficient meltdown crisis recognition. The architecture comprises three Convolution layers, Max-pooling Layers, a Fully Connected (FC) layer, and Softmax activation for classification. The decision to opt for early fusion is driven by the inconsistent detection of children’s faces in all video frames, resulting in two different output sizes for emotion and physical activity systems. The presented pseudo-code outlines the architecture development steps. The proposed model’s efficiency is highlighted by its outstanding recognition rate and speed, making it the preferred choice for the time-sensitive Smart-AMD (Smart-Autistic Meltdown Detector) System. Beyond technical aspects, the model aims to enhance the well-being of autistic children by promptly recognizing and alerting caregivers to abnormal behaviors during a meltdown crisis. This paper introduces a comprehensive system that integrates advanced technology and a profound understanding of autism, offering timely and effective support to those in need.
Download
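Early (feature-level) fusion as described above means concatenating the per-frame feature vectors of the two modalities before the Multi-modal CNN sees them. A minimal sketch of that idea (the vector sizes are illustrative, and zero-padding frames without a detected face is our own assumption, not necessarily the paper's handling of the inconsistent face detection it mentions):

```python
EMOTION_DIM = 4  # illustrative size of the per-frame emotion feature vector

def early_fuse(emotion_features, activity_features):
    """Early fusion: concatenate the two modalities' per-frame feature vectors
    into a single vector before classification. Frames where no face was
    detected get a zero-filled emotion vector (illustrative choice)."""
    if emotion_features is None:  # no face detected in this frame
        emotion_features = [0.0] * EMOTION_DIM
    return list(emotion_features) + list(activity_features)

fused = early_fuse([0.1, 0.7, 0.1, 0.1], [0.3, 0.5])
missing_face = early_fuse(None, [0.3, 0.5])
```

Late fusion, by contrast, would require combining the two classifiers' outputs, which is what the differing output sizes of the emotion and activity systems made impractical here.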

Paper Nr: 34
Title:

RLHR: A Framework for Driving Dynamically Adaptable Questionnaires and Profiling People Using Reinforcement Learning

Authors:

Ciprian Paduraru, Catalina C. Patilea and Alin Stefanescu

Abstract: In today’s corporate landscape, the creation of questionnaires, surveys or evaluation forms for employees is a widespread practice. These tools are regularly used to check various aspects such as motivation, opportunities for improvement, satisfaction levels and even potential cybersecurity risks. A common limitation lies in their generic nature: they often lack personalization and rely on predetermined questions. Our research focuses on improving this process by introducing AI agents based on reinforcement learning. These agents dynamically adapt the content of surveys to each person based on their unique personality traits. Our framework is open source and can be seamlessly integrated into various use cases in different industries or academic research. To evaluate the effectiveness of the approach, we tackle a real-life scenario: the detection of potentially inappropriate behavior in the workplace. In this context, the reinforcement learning-based AI agents function like human recruiters and create personalized surveys. The results are encouraging, as they show that our decision algorithms for content selection are very similar to those of recruiters. The open-source framework also includes tools for detailed post-analysis for further decision making and explanation of the results.
Download
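At its core, an RL agent that adapts a questionnaire picks the next question from its learned action values for the current respondent. A minimal epsilon-greedy sketch of that selection step (the question names, values, and function are hypothetical, not the framework's API):

```python
import random

def next_question(q_values, asked, epsilon=0.1, rng=random):
    """Pick the next survey question epsilon-greedily: with probability
    epsilon explore a random unasked question, otherwise exploit the
    question with the highest learned value for this respondent."""
    candidates = [q for q in q_values if q not in asked]
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return max(candidates, key=q_values.get)

# hypothetical action values learned for one respondent's profile
q = {"workload": 0.2, "team_conflict": 0.9, "satisfaction": 0.5}
pick = next_question(q, asked={"satisfaction"}, epsilon=0.0)  # -> "team_conflict"
```

In the actual framework, the values would be updated from the respondent's answers so that the survey content adapts to the individual's traits, mimicking a human recruiter.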

Paper Nr: 35
Title:

Optimizing Intensive Database Tasks Through Caching Proxy Mechanisms

Authors:

Ionuț-Alex Moise and Alexandra Băicoianu

Abstract: Web caching is essential for the World Wide Web, saving processing power and bandwidth and reducing latency. Many proxy caching solutions focus on buffering data from the main server, neglecting cacheable information meant for server writes. Existing systems addressing this issue are often intrusive, requiring modifications to the main application for integration. We identify opportunities for enhancement in conventional caching proxies. This paper explores, designs, and implements a prototype for such an application. Our focus is on harnessing a faster bulk-data-write approach, compared to single-data-write, in the context of relational databases. If an (upload) request matches a specified cacheable URL, the data is extracted and buffered on the local disk for a later bulk write. In contrast, an already existing caching proxy such as Squid would simply redirect a similar uploading request, forgoing potential gains such as reduced processing power, lower server load, and bandwidth savings. After prototyping and testing the suggested application against Squid with data uploads of 1, 100, 1,000, ..., 100,000 requests, we consistently observed query execution improvements ranging from 5 to 9 times, achieved through buffering and bulk-writing the data, with the extent depending on the specific test conditions.
Download
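The speed-up the abstract reports comes from replacing one round-trip per row with a single batched insert. A minimal sketch of the bulk-write flush using the standard library's sqlite3 (the table and function names are illustrative; the paper's prototype is a proxy in front of a relational database, not necessarily SQLite):

```python
import sqlite3

def bulk_write(conn, rows):
    """Flush buffered upload rows in one bulk INSERT: executemany inside a
    single transaction instead of one statement and commit per row."""
    with conn:  # one transaction for the whole batch
        conn.executemany("INSERT INTO uploads(payload) VALUES (?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uploads(payload TEXT)")
# rows previously buffered on local disk by the caching proxy
bulk_write(conn, [(f"req-{i}",) for i in range(1000)])
count = conn.execute("SELECT COUNT(*) FROM uploads").fetchone()[0]
```

Batching amortises per-statement parsing and commit overhead, which is where the 5x to 9x improvements over per-request writes plausibly originate.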

Paper Nr: 39
Title:

A Systematic Mapping Study on Impact Analysis

Authors:

Emanuele Gentili, Jonida Çarka and Davide Falessi

Abstract: Context: Software change impact analysis (CIA) concerns managing the consequences of the changes in software artefacts. CIA solutions support software developers in making informed decisions regarding the impact of changes and ensure the overall stability and reliability of software systems. Aim: This paper presents a mapping study on CIA solutions. Our analysis focuses on two main dimensions: 1) the source of the change and 2) the target of the change. Method: We analyse 258 primary studies. Results: We show that in more than one-third of the cases, the Target and Source artefacts mentioned are in the general category. The second most analyzed artefact is Code. In contrast, the least mentioned source artefact is test, while the least mentioned target artefact is requirement. Conclusions: The identified research gaps offer opportunities to expand the knowledge and understanding of CIA techniques, ultimately benefiting practitioners and software development processes as a whole.
Download

Paper Nr: 40
Title:

HybridCRS-TMS: Integrating Collaborative Recommender System and TOPSIS for Optimal Transport Mode Selection

Authors:

Mouna Rekik, Rima Grati, Ichrak Benmohamed and Khouloud Boukadi

Abstract: The pervasive influence of smartphones and mobile internet has revolutionized journey planning, particularly transportation. With navigation applications delivering real-time information, recommender systems have emerged as crucial tools for enhancing the travel experience. This paper introduces HybridCRS-TMS, a unique Hybrid Collaborative Recommender System for Transport Mode Selection, leveraging a dataset of 260 passengers. Through advanced data mining techniques, specifically k-Nearest Neighbors (k-NN) for collaborative recommendations and TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) analysis for objective evaluation, the system provides personalized transportation mode recommendations. The model not only demonstrates exceptional performance but also showcases the synergy between collaborative and objective decision-making approaches, contributing to efficient, personalized, and well-informed travel solutions. This study underscores the system’s versatility, illustrating its ability to optimize travel choices through a hybrid recommendation framework that integrates both collaborative and objective criteria.
Download
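The objective half of the hybrid system ranks transport modes with TOPSIS: normalise the decision matrix, weight it, locate the ideal and anti-ideal solutions, and score each alternative by its relative closeness to the ideal. A minimal sketch (criteria, weights, and values are illustrative; for simplicity all criteria are treated as benefit criteria):

```python
import math

def topsis(matrix, weights):
    """TOPSIS ranking: vector-normalise each criterion column, apply weights,
    find the ideal/anti-ideal solutions, and score alternatives by
    d_neg / (d_pos + d_neg), their relative closeness to the ideal."""
    ncrit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# hypothetical transport modes scored on comfort and punctuality (both benefit)
modes = [[0.9, 0.8],   # dominates on both criteria -> closeness 1.0
         [0.4, 0.5],
         [0.2, 0.3]]
scores = topsis(modes, weights=[0.6, 0.4])
```

In the hybrid system, such objective scores would be combined with the k-NN collaborative recommendations derived from the 260-passenger dataset.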

Paper Nr: 51
Title:

Artificial Intelligence-Based Detection and Prediction of Giant African Snail (Lissachatina Fulica) Infestation in the Galápagos Islands

Authors:

Jonathan Loor, Ariana Jiménez, Juan M. Aguirre, Grace Rodríguez, Iván Reyes, Paulina Vizcaino-Imacaña and Manuel E. Morocho-Cayamcela

Abstract: The Galápagos Archipelago is confronting a significant threat from invasive species, notably L. fulica, which disrupts the delicate balance of its natural ecosystem. An innovative solution is proposed, employing mobile application technology and artificial intelligence (AI) to streamline the collection, analysis, and prediction of L. fulica movements. The mobile application facilitates efficient recording of L. fulica sightings by field teams, including Global Positioning System (GPS) coordinates, type, condition, and quantity. Collected data are transmitted to a cloud-based server for storage and analysis, where machine learning algorithms process time-series data to generate predictive models of L. fulica movement patterns. Results underscore the effectiveness of AI in enhancing the efficiency and accuracy of Giant African Snail (GAS) detection and movement estimation, facilitating informed decision-making by administrators and managers. By safeguarding the native flora and fauna of the archipelago, this solution represents a significant stride towards mitigating the impact of invasive species and preserving the unique biodiversity of the Galápagos Islands.
Download

Paper Nr: 56
Title:

Logging Hypercalls to Learn About the Behavior of Hyper-V

Authors:

Lukas Beierlieb, Nicolas Bellmann, Lukas Iffländer and Samuel Kounev

Abstract: Hypervisors such as Xen, VMware ESXi, or Microsoft Hyper-V provide virtual machines used in data centers and cloud computing, making them a popular attack target. One potential attack vector is the hypercall interface, which exposes privileged operations as hypercalls. We present a hypercall logger for the Hyper-V hypercall interface that logs the inputs, outputs, and sequence of hypercalls. The logs should improve the testability of the hypercall interface by helping to construct test cases for the hypercall handlers. Related works in hypercall monitoring analyze less detailed hypercall invocation data with intrusion detection systems. Our logger extends the WinDbg debugger with additional commands that set software breakpoints on the hypercall handler entry and exit within a debugging session with the Hyper-V hypervisor. The evaluation confirmed that the logs are correct and that the breakpoints slow hypercall execution by a factor of 100,000 to 200,000. A case study in which the hypercall logger helps create test cases for evaluation demonstrates the logger's suitability.
Download

Paper Nr: 63
Title:

Adopting Delta Maintainability Model for Just in Time Bug Prediction

Authors:

Lerina Aversano, Martina Iammarino, Antonella Madau, Debora Montano and Chiara Verdone

Abstract: A flaw that leads to a software malfunction is called a bug. Preventing bugs from the beginning reduces the need to address complex problems in later stages of development or after software release. Therefore, bug prevention helps create more stable and robust code because bug-free software is easier to maintain, update, and expand over time. In this regard, we propose a pipeline for the prevention of bugs in the source code, consisting of a machine learning model capable of predicting just in time whether a new commit inserted into the repository should be classified as "good" or "bad". This is a critical issue as it directly affects the quality of the code. The approach is based on a set of features containing commit-level process software metrics, some of which relate to the impact of changes. The proposed method was validated on data obtained from three open-source systems, for which the entire evolutionary history was considered, focusing mainly on the parts affected by bugs. The results are satisfactory and show not only the effectiveness of the proposed pipeline, which is capable of working in continuous integration, but also the ability of the approach to work cross-project, thus generalizing the results obtained.
Download

Paper Nr: 66
Title:

Enhancing Holonic Architecture with Natural Language Processing for System of Systems

Authors:

Muhammad Ashfaq, Ahmed R. Sadik, Tommi Mikkonen, Muhammad Waseem and Niko Mäkitalo

Abstract: The ever-growing complexity and dynamic nature of modern System of Systems (SoS) necessitate efficient communication mechanisms to ensure interoperability and collaborative functioning among constituent systems (CS), referred to as holons in the holonic architecture of SoS. This paper proposes a novel approach to enhance human-to-holon and holon-to-holon communication within the holonic architecture through the integration of Natural Language Processing (NLP) techniques. Our proposed framework utilizes advancements in NLP, specifically Large Language Models (LLMs), enabling holons to understand and act on natural language instructions. This enables more intuitive holon-to-holon and human-to-holon interactions, leading to better coordination among diverse systems. The framework’s practical application is demonstrated through an Unmanned Vehicle Fleet (UVF) case study, showcasing its potential in enhancing communication and coordination in complex SoS. Additionally, we propose evaluation strategies to assess the efficiency and effectiveness of this framework, and identify areas for improvement. This work sets the stage for future exploration and prototype implementation, paving the way for further advancements in SoS communication and collaboration.
Download

Paper Nr: 93
Title:

A Decision-Making Approach Combining Process Mining, Data Mining and Business Intelligence

Authors:

Olfa Haj Ayed and Sonia A. Ghannouchi

Abstract: In the era of Big Data, Process Mining (PM), Data Mining (DM) and Business Intelligence (BI) are essential analytical tools for companies. By intelligently exploiting big data, these approaches make it possible to extract valuable information. Although each has its own orientation, concepts, techniques and modes of visualization, these three disciplines converge towards a common goal: improving decision-making. This work proposes an innovative approach that combines the strengths of PM, DM and BI within a powerful global dashboard. This centralized dashboard brings together visualizations from all three domains, providing a holistic and interactive overview of key business data. By providing decision-makers with these information-rich visualizations, the study aims to facilitate and accelerate the decision-making process, thus allowing informed and responsive strategic choices.
Download

Paper Nr: 96
Title:

Evaluating the Relative Importance of Product Line Features Using Centrality Metrics

Authors:

Fathiya Mohammed, Mike Mannion, Hermann Kaindl and James Paterson

Abstract: A software product line is a set of products that share a set of software features and assets, which satisfy the specific needs of one or more target markets. One common artefact of software product line engineering is a feature model, usually represented as a directed acyclic graph, which shows the product line as a set of structural feature relationships. We argue that there are benefits to considering a feature model as both a directed graph and an undirected graph. One element of managing the impact of a change to these models, as they increase in complexity, is to evaluate the relative importance of the features. This paper explores the application of centrality metrics from social network analysis for the identification of the relative importance of features in feature models. The metrics considered are degree centrality, closeness centrality, eccentricity centrality, eigenvector centrality and betweenness centrality. To illustrate, a product feature model is constructed from a real-world GSMA AI-mobile phone product line requirements specification.
Download
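Degree centrality, the simplest of the metrics listed, already illustrates how a feature model viewed as an undirected graph can expose structurally important features. A minimal sketch (the feature names form a hypothetical phone product-line fragment, not the GSMA model from the paper):

```python
def degree_centrality(edges):
    """Degree centrality on an undirected feature-model graph, using the usual
    social-network normalisation: deg(v) / (n - 1) for n nodes."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# hypothetical fragment: a root feature linked to three sub-features
c = degree_centrality([("phone", "camera"), ("phone", "ai_assistant"),
                       ("phone", "display"), ("camera", "night_mode")])
```

A change to the feature with the highest centrality here touches the most direct structural relationships, which is the intuition behind using such metrics for change-impact assessment.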

Paper Nr: 97
Title:

Integrating SysML and Timed Reo for Modeling Interactions in Cyber-Physical Systems Components

Authors:

Perla Tannoury and Ahmed Hammad

Abstract: Modeling Cyber-Physical Systems (CPS) with timing constraints remains a challenge due to the complex behaviors of their interconnected components that operate in a physical environment. In this paper, we introduce “Timed SysReo”, a novel incremental design methodology that integrates SysML and Timed Reo for modeling CPS architectures and timed interaction protocols during the design phase. We first define the meta-models that formalize the CPS model architecture and detail the timed connections between its components. Then, we refine the meta-model with the Object Constraint Language (OCL), which imposes rules that must be respected to ensure consistency between timed models in our incremental design approach. Finally, we demonstrate our approach through an example of the Smart Medical Bed (SMB) system.
Download

Paper Nr: 99
Title:

From Low Fidelity to High Fidelity Prototypes: How to Integrate Two Eye Tracking Studies into an Iterative Design Process for a Better User Experience

Authors:

Gilbert Drzyzga and Thorleif Harder

Abstract: The aim of this study is to investigate the effectiveness of an iterative design and evaluation process using low-fidelity prototypes (LFPs) and high-fidelity prototypes (HFPs) for a learner dashboard (LD) to improve user experience (UX), within an eye-tracking study with think-aloud protocol. The LD itself is designed to support online students in their learning process and self-regulation. Two studies were conducted: Study 1 focused on the LFP and Study 2 on the HFP version of the prototype. The participants (n=22) from different semesters provided different perspectives and emphasized the importance of considering heterogeneous user groups in the evaluations. Key findings included fewer adjustments required for the HFP, highlighting the value of early evaluation and iterative design processes in optimizing UX. This iterative approach allowed for continuous improvement based on real-time feedback, resulting in an optimized final prototype that better met functional and cognitive requirements. Comparison of key concepts across both studies revealed positive effects of the methodological improvements, demonstrating the effectiveness of combining early evaluations with refined approaches for improved UX design in learning environments.
Download

Paper Nr: 102
Title:

Readability of Domain-Specific Languages: A Controlled Experiment Comparing (Declarative) Inference Rules with (Imperative) Java Source Code in Programming Language Design

Authors:

Kai Klanten, Stefan Hanenberg, Stefan Gries and Volker Gruhn

Abstract: .Domain-specific languages (DSLs) play a large role in computer science: languages from formal grammars up to SQL are integral part of education as well as industrial applications. However, to what extent such languages have a positive impact on the resulting code, respectively on the activity of writing and reading code, mostly remains unclear. The focus of the present work is on the notation of inference rules as they are applied in programming language education and research. A controlled experiment is introduced where given type rules are either defined using a corresponding DSL or the general-purpose language Java. Thereto, a repeated Nof-1 experiment was executed on 12 undergraduate students in computer science, where the participants had to select for a randomly generated typing rule and a randomly generated term from a list of possible types the correct one. Although the formal notation of inference rules is typically considered as non-trivial (in comparison to code in general–purpose languages), it turned out that the students were able to detect the type of a given expression significantly faster than using Java (p < .001, η2 p = .439): on average, the response times using Java were almost twice as much as the response times using inference rules ( MJava Min f erence = 1.914). Furthermore, the participants did less errors using inference rules (p = .023). We conclude from that the use of inference rules in programming language design also improves the readability of such rules.
Download

Paper Nr: 10
Title:

Machine Learning for a Better Agriculture Calendar

Authors:

Pascal F. Faye, Jeanne A. Faye and Mariane Senghor

Abstract: In Senegal, agriculture is subsistence-based, low-input, and significantly less mechanized than in many other African nations, and is also highly dependent on soil, climate, and water. Food crops take up to 46% of the total land, make up 15% of the Gross Domestic Product (GDP), and ensure between 70% and 75% of employment. In this work, we provide a set of mechanisms that uses a trusted database of agro-climatic parameters and a set of artificial intelligence algorithms to build an agricultural calendar for a good distribution of the farm's activities over time and to find the relationships between crops. Our results show the effectiveness of our solution in overcoming the abandonment of agricultural perimeters and an agriculture dependent on the rainy season. Taking these data into account makes it possible to understand crop dependencies and anticipate the agroecological phenomena, crop diseases, and pests that impact the planning of production facilities and the variations in agricultural yields.
Download

Paper Nr: 21
Title:

Diagnosis Automation Using Similarity Analysis: Application to Industrial Systems

Authors:

Ivan Orefice, Wissam Mallouli, Ana R. Cavalli, Filip Sebek and Alberto Lizarduy

Abstract: The paper introduces the MMT-RCA framework, an automated incident diagnosis system crucial for maintaining security and reliability in complex systems such as ABB’s Load Position Sensor (LPS) and FAGOR’s remote manufacturing machinery access. Traditional incident response methods often involve time-consuming and error-prone manual analysis, hindered by limited human expertise. MMT-RCA addresses this challenge by leveraging similarity analysis techniques. It utilizes historical incident data to create a comprehensive repository, capturing characteristics and outcomes of past incidents. By employing sophisticated algorithms, the MMT-RCA framework identifies patterns and correlations among incidents, facilitating the swift identification of similar problems and their root causes. To validate its efficacy, the framework underwent real-world experiments with industrial data from both companies. The results demonstrate the framework’s ability to accurately diagnose incidents and identify root causes.
Download

Paper Nr: 33
Title:

Accurate Recommendation of EV Charging Stations Driven by Availability Status Prediction

Authors:

Meriem Manai, Bassem Sellami and Sadok Ben Yahia

Abstract: The electric vehicle (EV) market is experiencing substantial growth, and EVs are anticipated to play a major role as a replacement for fossil fuel-powered vehicles in transportation automation systems. Nevertheless, EVs depend on electric charging, where appropriate usage, charging, and energy management are vital requirements. A review of prior work motivated and informed the design of a system that forecasts the real-time availability of electric vehicle charging stations, using a scalable prediction engine built into a multi-user, server-side software application. The implementation process involved scraping data from various sources, creating datasets, and applying feature engineering to the data model. We then applied fundamental machine learning models to the pre-processed dataset, and subsequently constructed and trained an artificial neural network model as the prediction engine. Notably, the results of our research demonstrate that, in terms of precision, recall, and F1-scores, our approach surpasses existing solutions in the literature. These findings underscore the significance of our approach in enhancing the efficiency and usability of EVs, thereby significantly contributing to the acceleration of their adoption in the transportation sector.
Download

Paper Nr: 50
Title:

Integrating a LLaMa-based Chatbot with Augmented Retrieval Generation as a Complementary Educational Tool for High School and College Students

Authors:

Darío S. Cabezas, Rigoberto Fonseca-Delgado, Iván Reyes-Chacón, Paulina Vizcaino-Imacaña and Manuel E. Morocho-Cayamcela

Abstract: In the current educational landscape, the transition from traditional paradigms to more interactive and personalized learning experiences has been accelerated by technological advancements, particularly in artificial intelligence (AI). This paper explores integrating large language models (LLMs) with retrieval augmented generation techniques (RAG) to develop an educational chatbot to enhance students’ learning experiences. Leveraging the LLaMa 7B-chat model and RAG technique, our system incorporates a structured mathematical database supplemented with relevant audiovisual resources. Furthermore, leveraging the Pinecone API enhances information retrieval efficiency through cosine similarity. This capability empowers the chatbot to deliver precise and relevant responses to students’ inquiries by accessing documents from Pinecone. Moreover, incorporating system prompts and memory functionality contributes to a more personalized learning experience, enriching student interaction with the educational assistant. The findings suggest these assistants can enhance student engagement and facilitate knowledge acquisition.
Download

Paper Nr: 67
Title:

Egyptian Hieroglyphs Localisation Through Object Detection

Authors:

Lucia Lombardi, Francesco Mercaldo and Antonella Santone

Abstract: The ancient Egyptians used the hieroglyphic script to record their findings in medicine, engineering, and the sciences, their achievements, and their religious views, besides facts from their daily life. Thus, it is fundamentally important to understand and digitally store these scripts for anyone who wants to understand Egyptian history and learn more about this great civilization. The interpretation of Egyptian hieroglyphs is a reasonably broad and highly complex problem, but hieroglyphs have always fascinated with their stories and their ability to be read in several ways rather than one, which is in itself a challenge for translation into modern languages. In this paper, we adopt the YOLOv8 model, which revolutionized object detection with its one-stage deep learning approach. YOLO is designed to classify images and accurately determine the positions of detected objects within them. Using this DL approach, we were able to significantly reduce the time required to investigate the interpretation of hieroglyphs. To ensure the reproducibility of our results, we opted to utilize a publicly available dataset. All the metrics demonstrate the anticipated patterns: precision, recall, mAP 0.5, and mAP 0.5:0.95 increase as the number of epochs progresses, indicative of the model effectively learning to detect objects in Egyptian hieroglyph images.
Download

Paper Nr: 79
Title:

CyberGuardian: An Interactive Assistant for Cybersecurity Specialists Using Large Language Models

Authors:

Ciprian Paduraru, Catalina C. Patilea and Alin Stefanescu

Abstract: Cybersecurity plays an important role in protecting people and critical infrastructure. Sectors such as energy, defense, and healthcare are increasingly at risk from cyber threats. To address these challenges, dedicated Security Operations Centers (SOCs) continuously monitor threats and respond to critical issues. Our research focuses on the use of Large Language Models (LLMs) to optimize the tasks of SOCs and to support security professionals. In this work, we propose a framework, which we call CyberGuardian, whose main goal is to provide a chatbot application along with a set of tools to support SOC analysts in real-time cybersecurity tasks. We use state-of-the-art LLM techniques, starting from a Llama 2 model and fine-tuning the base model on a new dataset containing mainly cybersecurity topics. The CyberGuardian framework has a plugin architecture that integrates processes such as Retrieval Augmented Generation (RAG), safeguard methods for the interaction between the human user and the chatbot, and integration with tools for tasks such as database interactions, code generation and execution, and plotting graphs simply by specifying the task in natural language. The work, along with the dataset we collected and reusable code to update or customize, is made available to the cybersecurity community as open source.
Download

Paper Nr: 95
Title:

Cracking the Code: Web 3.0 Software Development Challenges and Guidelines

Authors:

Surya B. Kathayat

Abstract: In recent years, as Web 3.0 continues to shape the digital landscape, developers face many challenges in navigating this dynamic terrain. This paper, informed by an extensive literature review and insights from the development of multiple Web 3.0 applications, examines the primary hurdles faced in Web 3.0 software development. These challenges span from scalability and interoperability to security and user experience. Furthermore, the paper presents essential architectural guidelines and best practices aimed at assisting developers in overcoming these obstacles and constructing robust and future-ready Web 3.0 applications. By addressing these observed challenges and adhering to the proposed guidelines, developers can effectively harness the full potential of Web 3.0 and facilitate its widespread acceptance.
Download

Paper Nr: 107
Title:

A Tool-Supported Approach for Modelling and Verifying MapReduce Workflow Using Event B and BPMN2.0

Authors:

Mayssa Bessifi, Ahlem Ben Younes and Leila Ben Ayed

Abstract: Big data techniques are increasingly applied in critical applications (such as healthcare, marketing, nuclear research, and aeronautics), so it is desirable to develop a systematic method to ensure the correctness of these applications. As an aid to designers and developers, we propose a model-driven approach for the specification and formal verification of MapReduce workflow applications, using the semi-formal language BPMN2.0 to represent MapReduce workflows and the Event B method for analysis. Our approach starts with the graphical modelling of the MapReduce application as a chain of MapReduce design patterns using an adapted BPMN2.0 notation. The model is then transformed into an equivalent Event B project, composed of a set of contexts and machines linked by refinement, that can be enriched with a variety of desirable properties. The approach has been automated using a set of mapping rules implemented in a first prototype tool. We illustrate our approach with a case study, "Fireware", and we verify data quality properties such as data non-conflict and data completeness.
Download

Paper Nr: 108
Title:

Multi-Criteria Decision-Making Approach for an Efficient Postproduction Test of Reconfigurable Hardware System

Authors:

Asma Ben Ahmed, Fadwa Oukhay and Olfa Mosbahi

Abstract: This paper proposes a multi-criteria decision-making approach for guiding the postproduction test of reconfigurable hardware systems (RHS). An RHS is a hardware device whose hardware resources can be changed at runtime in order to modify the system functions and dynamically adapt the system to its environment. The optimization of the RHS postproduction test process is a matter of concern for manufacturers, since the testing activities have a significant impact on achieving manufacturing objectives in relation to quality, cost, and time control. Given that testing all potential faults is infeasible in practice, the testing process needs to be optimized by prioritizing faults according to a well-defined set of criteria. Accordingly, multi-criteria decision-making tools prove their effectiveness in selecting the faults to be tested. The proposed method consists of targeting a limited number of faults that require more attention during the testing process. Two strategies are investigated, based on the analytic hierarchy process and the Choquet integral. This study helps to determine the most critical faults, those with the highest risk priority score. A case study is provided to illustrate the application of the proposed approach and to support the discussion.
Download

Paper Nr: 110
Title:

Conceptual Model Detection in Business Web Applications Within the RPA Context

Authors:

Dora Moraru, Adrian Sterca, Virginia Niculescu, Maria-Camelia Chisăliţă-Creţu and Cristina-Claudia Osman

Abstract: Robotic Process Automation (RPA) platforms target the automation of repetitive tasks belonging to business processes that are performed by human users. Web automation is the part of RPA that deals with the automation of web applications. In our previous work, we introduced a web automation tool in the form of a browser plugin that is able to discover simple processes on a target business web application and later execute these simple processes automatically without human intervention. Our web automation tool relies on a description of the data model used by the target web application, which previously had to be specified manually by the human user in advance. In our current work, we propose a new method for discovering this data model automatically from the target web application itself. Experiments reported in the paper show that our method is viable.
Download