ICSOFT 2025 Abstracts


Area 1 - Foundational and Trigger Technologies

Full Papers
Paper Nr: 22
Title:

Verifying LLM-Generated Code in the Context of Software Verification with Ada/SPARK

Authors:

Marcos Cramer and Lucian McIntyre

Abstract: Large language models (LLMs) have demonstrated remarkable code generation capabilities, but the correctness of the generated code cannot be inherently trusted. This paper explores the feasibility of using formal software verification, specifically the SPARK framework for Ada, to ensure the reliability of LLM-generated code. We present Marmaragan, a tool that leverages an LLM to generate SPARK annotations for existing programs, enabling formal verification of the code. The tool is benchmarked on a curated set of SPARK programs, with annotations selectively removed to test specific capabilities. The performance of Marmaragan with GPT-4o on the benchmark is promising, with correct annotations generated for 50.7% of the benchmark cases. The results establish a foundation for future work on combining the power of LLMs with the reliability of formal software verification.

Area 2 - Software Engineering and Systems Development

Full Papers
Paper Nr: 24
Title:

Efficient Hit-Spectrum-Guided Fast Gradient Sign Method: An Adjustable Approach with Memory and Runtime Optimizations

Authors:

Daniel Rashedi and Sibylle Schupp

Abstract: The Fast Gradient Sign Method (FGSM) is an effective method for generating adversarial inputs for neural networks, but it is memory-intensive. DeepFault reduces the memory costs of FGSM by transferring Spectrum-Based Fault Localization (SBFL) to neural networks. SBFL is a technique that traditionally uses the execution trace of a program to identify suspicious code locations that are likely to contain faults. DeepFault employs SBFL to identify neurons in a neural network that are likely to be responsible for misclassifications, and uses these neurons to guide FGSM. We propose an adjustable hit-spectrum-guided FGSM approach that applies a sub-model strategy to avoid gradient ascent evaluation over the entire model. Additionally, we alter DeepFault’s hit-spectrum computation to be vector-based to allow parallelization, and we modify the hit spectrum to depend on a specific class to allow targeted adversarial input generation. We conduct an experimental evaluation on image classification models showing how our approach allows trading off the effectiveness of adversarial input generation against reduced runtimes while remaining scalable to larger models, with maximum runtimes on the order of tens of seconds. For larger sample sizes, our approach reduces runtimes by a factor of 300 or more compared to DeepFault. When processing larger models, it requires only one-third of FGSM’s memory usage.
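The FGSM update the abstract builds on is a single-step perturbation in the direction of the sign of the loss gradient. A minimal NumPy sketch of that step (function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def fgsm_step(x, grad, eps=0.1, lo=0.0, hi=1.0):
    """One FGSM step: move each input component by eps in the direction
    of the sign of the loss gradient, then clip to the valid input range
    (e.g. pixel intensities in [0, 1])."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, lo, hi)

# Toy example: two "pixels" perturbed in opposite gradient directions.
x = np.array([0.5, 0.5])
grad = np.array([1.0, -2.0])
print(fgsm_step(x, grad, eps=0.1))  # -> [0.6 0.4]
```

The memory pressure the abstract refers to comes from computing `grad` via backpropagation through the full model, which is what the paper's sub-model strategy avoids.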

Paper Nr: 27
Title:

Bridging IFML and Elm Applications via a Normalized Systems Expander

Authors:

Jan Slifka and Robert Pergl

Abstract: Web front-end applications are essential for delivering smooth user experiences across a multitude of platforms and devices. However, these applications often face difficulties maintaining long-term evolvability as user demands and stakeholder expectations continue to shift. In this paper, we propose using the Interaction Flow Modeling Language (IFML) to design applications and then generating source code in Elm, a statically typed, pure functional programming language tailored for web frontends. By applying Normalized Systems Theory, we aim to ensure long-lasting stability in two key ways: first, by defining how the resulting source code should align with the theory’s principles; second, by employing expanders to generate code and incorporating a harvesting mechanism that allows custom modifications to the generated source without losing the connection to the original model. We demonstrate the practical application of our approach by designing an application using IFML models, introducing custom code, and regenerating the application from an updated model while preserving those customizations. Our contribution is a novel methodology that integrates IFML, Elm, and Normalized Systems Theory to improve the stability and maintainability of web front-end applications.

Paper Nr: 28
Title:

Integrating Security into the Product-Line-Engineering Framework: A Security-Engineering Extension

Authors:

Christian Biermann, Richard May and Thomas Leich

Abstract: Modern software systems are becoming increasingly configurable, often relying on Product-Line Engineering (PLE) to efficiently develop variant-rich systems while ensuring reusability. However, security considerations in existing PLE research are typically insufficient as security is often (partly) neglected or not integrated into the overall development process. To address this gap, we developed an additional layer of the PLE framework: security engineering — positioned between domain engineering and application engineering. Our results are based on a systematic review of 49 secure PLE frameworks and workflows, synthesizing their insights and our expertise in compliance with the ISO/IEC 27000 series. By following six processes and 12 activities, our iterative approach ensures that security is systematically embedded in the PLE process. We particularly highlight the importance of reusable security artifacts, secure business-process modeling, and standard compliance, aiming to facilitate the transfer of theoretical solutions into secure business practice.

Paper Nr: 35
Title:

Scenario-Based Testing of Online Learning Programs

Authors:

Maxence Demougeot, Sylvie Trouilhet, Jean-Paul Arcangeli and Françoise Adreit

Abstract: Testing is a solution for the verification and validation of systems based on Machine Learning (ML). This paper focuses on testing the functional requirements of programs that learn online. Online learning programs build and update ML models throughout their execution. Testing allows domain experts to measure how well such programs work, identify favorable or unfavorable use cases, compare different versions or settings, or reveal defects. Testing programs that learn online has particular features; to deal with them, a scenario-based approach and a testing process are defined. This solution is implemented and automates test execution and quality measurements. It is applied to a program that learns the end-user’s preferences online in an ambient environment, confirming the viability of the approach.

Paper Nr: 57
Title:

Approaches Adopted in the Implementation of Maturity Models Using Agile Initiatives in Public Bodies: A Systematic Literature Review

Authors:

Alfredo Gabriel de Sousa Oliveira and Sandro Ronaldo Bezerra Oliveira

Abstract: The implementation of maturity models is essential to ensure the competitiveness and quality of the services provided by public bodies. By structuring their processes in a more flexible and adaptable way, organizations can respond more effectively to the demands of a society that is increasingly dynamic and demanding. However, adopting agile methodologies requires planning and care. The wide variety of agile methodologies available, such as Scrum, SAFe, and Kanban, can generate confusion and make it difficult to choose the most appropriate approach for each context. A poorly planned implementation can result in process overload, team resistance, and, consequently, failure to achieve the expected results. To avoid these challenges, it is crucial that public bodies invest in a gradual and personalized implementation process, as well as in researching the models/processes adopted by other bodies. The choice of agile methodology must take into account the size of the team, the complexity of the project, the organizational culture, and the strategic objectives. In addition, it is essential to have the support of senior management and the engagement of all employees involved in the process. By adopting a gradual and personalized approach, public bodies increase their chances of success in implementing maturity models using agile methodologies. This paper presents a Systematic Literature Review (SLR) to identify the most effective approaches for implementing maturity models in public bodies. The SLR selected 13 primary studies that identified practices, recommendations, standards, implementation strategies, benefits, difficulties, and points of attention found in the process of implementing such models. Furthermore, shared characteristics were found among the bodies regarding the implementation processes reported in the studies, which allows us to infer that other public bodies can use the results as a basis for adopting similar methodologies. This paper contributes by presenting, in a consolidated way, recommendations that can facilitate the process of implementing maturity models. Ultimately, these recommendations allow managers of public bodies and/or stakeholders to outline a clearer plan for implementing maturity models, thus ensuring a smoother process.

Paper Nr: 58
Title:

COTTAGE: Supporting Threat Analysis for Security Novices with Auto-Generated Attack Defense Trees

Authors:

Keita Yamamoto, Masaki Oya, Masaki Hashimoto, Haruhiko Kaiya and Takao Okubo

Abstract: In software and system development, threat analysis, especially scenario analysis of the steps of an attack, requires a high degree of expertise and long hours of work, which is increasingly difficult under short development cycles such as those of Agile and DevOps. In addition, the increased demand for systems has created the need for people without sufficient security expertise to perform threat analysis. In this paper, we developed COTTAGE, a tool that automatically generates Attack Defense Trees (ADTrees) from the CAPEC and CWE knowledge bases and supports analysts in creating ADTrees for their own analyses. Our evaluation with six security novices demonstrated that COTTAGE enabled participants to perform threat analysis comparable to expert analysis within 30 minutes, whereas experts typically required approximately two days. A case study in a DevOps environment further confirmed COTTAGE’s effectiveness in supporting iterative security analysis through automatically generated reference trees.

Paper Nr: 62
Title:

Metrics in Low-Code Agile Software Development: A Systematic Literature Review

Authors:

Renato Domingues, Iury Monte and Marcelo Marinho

Abstract: Low-code development has gained traction, yet the use of metrics in this context remains unclear. This study conducts a systematic literature review to identify which metrics are most used in low-code development. Analyzing 17 studies, we found a strong focus on Development Metrics, while Usability Metrics were underexplored. Most studies adopted quantitative approaches and fell into the Lessons Learned category (58.8%), suggesting an exploratory phase with little metric standardization. Future work should focus on standardizing metrics and incorporating qualitative insights for a more comprehensive evaluation.

Paper Nr: 81
Title:

Grammarinator Meets LibFuzzer: A Structure-Aware In-Process Approach

Authors:

Renáta Hodován and Ákos Kiss

Abstract: Fuzzing involves generating a large number of inputs and running them through a target application to detect unusual behavior. Modern general-purpose guided fuzzers are effective at testing various programs, but their lack of structure awareness makes it difficult for them to induce unexpected behavior beyond the parser. Conversely, structure-aware fuzzers can generate well-formed inputs but are often unguided, preventing them from leveraging feedback mechanisms. In this paper, we introduce a guided structure-aware fuzzer that integrates Grammarinator, a structure-aware but unguided fuzzer, with LibFuzzer, a guided but structure-unaware fuzzer. Our approach enables effective testing of applications with minimal setup, requiring only an input format description in the form of a grammar. Our evaluation on a JavaScript engine demonstrates that the proposed fuzzer achieves higher code coverage and discovers more unique bugs compared to its two predecessors.
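The "structure-aware" half of this combination boils down to deriving inputs from a grammar instead of mutating raw bytes, so every generated input gets past the parser. A toy sketch of grammar-driven generation, assuming a hand-written grammar table (this is not Grammarinator's or LibFuzzer's API):

```python
import random

# Tiny expression grammar: each nonterminal maps to a list of productions.
GRAMMAR = {
    "expr": [["num"], ["expr", "+", "expr"], ["(", "expr", ")"]],
    "num": [["1"], ["2"], ["3"]],
}

def generate(symbol, depth=0, max_depth=6):
    """Expand a symbol recursively; fall back to the first (shortest)
    production once the depth budget is spent, so recursion terminates."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    productions = GRAMMAR[symbol]
    rule = productions[0] if depth >= max_depth else random.choice(productions)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

random.seed(0)
print(generate("expr"))  # always a syntactically valid expression
```

The paper's contribution is coupling this kind of generator to a coverage-guided loop, so that structurally valid inputs are also selected by feedback.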

Short Papers
Paper Nr: 19
Title:

Enhancing Data Serialization Efficiency in REST Services: Migrating from JSON to Protocol Buffers

Authors:

Anas Shatnawi, Adem Bahri, Boubou Thiam Niang and Benoit Verhaeghe

Abstract: Data serialization efficiency is crucial for optimizing web application performance. JSON is widely used due to its compatibility with REST services, but its text-based format often introduces performance limitations. As web applications grow more complex and distributed, the need for more efficient serialization methods becomes evident. Protocol Buffers (Protobuf) has demonstrated significant improvements in reducing payload size and enhancing serialization/deserialization speed compared to JSON. To improve the performance and optimize resource utilization of existing web applications, the JSON data serialization approach of their REST services should be migrated to Protobuf. Existing migration approaches emphasize manual processes, which can be time-consuming and error-prone. In this paper, we propose a semi-automated approach to migrating the data serialization of existing REST services from JSON to Protobuf. Our approach refactors existing REST codebases to use Protobuf. It is evaluated on two web applications. The results show a reduction in payload size by 60% to 80%, leading to an 80% improvement in response time, a 17% decrease in CPU utilization, and an 18% reduction in energy consumption, all with no additional memory overhead.
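The payload-size gap reported above comes from text versus binary encoding. A rough standard-library illustration (`struct` stands in for Protobuf here; Protobuf's tag/varint wire format differs, but the flavor of the saving is the same):

```python
import json
import struct

reading = {"sensor_id": 42, "temperature": 21.5, "humidity": 0.63}

# Text encoding: field names and punctuation travel with every message.
json_bytes = json.dumps(reading).encode("utf-8")

# Binary encoding: fixed layout; field names live in the schema, not the payload.
bin_bytes = struct.pack("<Idd", reading["sensor_id"],
                        reading["temperature"], reading["humidity"])

print(len(json_bytes), len(bin_bytes))  # binary is roughly a third of the size
```

Smaller payloads then cascade into the response-time, CPU, and energy improvements the abstract measures.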

Paper Nr: 30
Title:

Back to the Model: UML Miner and the Power of Process Mining

Authors:

Pasquale Ardimento, Mario Luca Bernardi, Marta Cimitile and Michele Scalera

Abstract: Comprehension of the Unified Modeling Language is essential for learners in the context of software modeling. However, current UML learning tools provide minimal guidance to novice modelers, as they are insufficient at analyzing the modeling behavior adopted during the diagram creation process. To address this gap, we present an enhanced version of UML Miner, a plugin for Visual Paradigm that systematically records and analyzes UML modeling activities using Process Mining techniques. UML Miner tracks all modeling events, producing event logs that enable conformance checking against expert modeling practices. The tool establishes flexible yet structured learning pathways through Declarative Process Mining, supporting trace-based and event-based filtering, customized violation reports, and integration with external process mining tools. This work emphasizes the potential of process mining in computing education, demonstrating how conformance checking can strengthen UML modeling proficiency.

Paper Nr: 34
Title:

Automated Quality Model Management Using Semantic Technologies

Authors:

Reinhold Plösch, Florian Ernst and Matthias Saft

Abstract: The starting point for this paper, and for our service for the query-based generation of quality models, was the requirement to manage software quality models dynamically, a task that usually requires detailed domain knowledge. We present new approaches to the query-based generation of software quality models and the creation of profiles for quality analyses using the code quality tool SonarQube. Furthermore, our support for the automatic assignment of software quality rules to entries of a hierarchical quality model simplifies the maintenance of the models with the help of machine learning models and large language models (HMCN and SciBERT in our case). The resulting findings were evaluated for their practical suitability using expert interviews. The results are promising and show that semantic management of quality models could help spread the use of quality models, as it considerably reduces the maintenance effort.

Paper Nr: 48
Title:

MMSIA: Towards AI Systems Maturity Assessment

Authors:

Rubén Márquez Villalta, Javier Verdugo Lara, Moisés Rodríguez Monje and Mario Piattini Velthuis

Abstract: The emergence of artificial intelligence (AI) has caused a technological revolution in society in recent years, and a growing number of companies are implementing or creating systems that use this technology across a range of industries. This rise creates an immediate demand for quality standards that guarantee quality procedures for these systems. Based on a variety of international standards, including ISO/IEC 5338 as a reference model for AI processes and the ISO/IEC 33000 family of standards for establishing a software process assessment and maturity model, this paper presents an Artificial Intelligence Software Maturity Model (MMSIA). The main objective of the MMSIA model is to give businesses creating AI systems a framework for evaluating and continuously improving the software processes used in the creation of such systems, thereby raising the quality of AI applications.

Paper Nr: 60
Title:

To Model, to Prompt, or to Code? The Choice Is Yours: A Multi-Paradigmatic Approach to Software Development

Authors:

Thomas Buchmann, Felix Schwägerl and René Peinl

Abstract: This paper considers three fundamental approaches to software development, namely manual coding, model-driven software engineering, and code generation by large language models. All of these approaches have their individual pros and cons, motivating the desire for an integrated approach. We present MoProCo, a technical solution to integrate the three approaches into a single tool chain, allowing the developer to split a software engineering task into modeling, prompting or coding sub-tasks. From a single input file consisting of static model structure, natural language prompts and/or source code fragments, Java source code is generated using a two-stage approach. A case study demonstrates that the MoProCo approach combines the desirable properties of the three development approaches by offering the appropriate level of abstraction, determinism, and dynamism for each specific software engineering sub-task.

Paper Nr: 77
Title:

Enhancing AI-Generated Code Accuracy: Leveraging Model-Based Reverse Engineering for Prompt Context Enrichment

Authors:

Boubou Thiam Niang, Ilyes Alili, Benoit Verhaeghe, Nicolas Hlad and Anas Shatnawi

Abstract: Large Language Models (LLMs) have shown considerable promise in automating software development tasks such as code completion, understanding, and generation. However, producing high-quality, contextually relevant code remains a challenge, particularly for complex or domain-specific applications. This paper presents an approach to enhance LLM-based code generation by integrating model-driven reverse engineering to provide richer contextual information. Our findings indicate that incorporating unit tests and method dependencies significantly improves the accuracy and reliability of generated code in industrial projects. In contrast, simpler strategies based on method signatures perform similarly in open-source projects, suggesting that additional context is less critical in such environments. These results underscore the importance of structured input in improving LLM-generated code, particularly for industrial applications.

Paper Nr: 88
Title:

CloReCo: Benchmarking Platform for Code Clone Detection

Authors:

Franz Burock, Wolfram Amme, Thomas S. Heinze and Elisabeth Ostryanin

Abstract: In this paper, we present the Clone Recognition Comparison (CloReCo) platform, which supports a uniform performance analysis of code clone detectors. While various benchmarks for code clone detection exist, each benchmark on its own has limitations, which motivates the use of multiple benchmarks. Such a more comprehensive evaluation, however, requires a benchmarking platform that integrates the different benchmarks and tools. The CloReCo platform addresses this challenge by implementing a container infrastructure, providing consistent environments for multiple benchmarks and clone detectors. CloReCo’s web interface and command line interface then allow for conducting performance experiments, adding new clone detectors or benchmarks, and managing and analyzing experimental results, thereby facilitating the reproducibility of performance analyses by researchers and practitioners in the area of code clone detection.

Paper Nr: 89
Title:

Combining SysML V2 and BIP to Model and Verify CPS Interactions

Authors:

Adel Khelifati, Ahmed Hammad and Malika Boukala-Ioualalen

Abstract: Cyber-physical systems (CPS) require precise interaction modeling and rigorous verification to guarantee reliability and correctness, particularly in safety-critical systems, where interaction errors such as deadlocks may lead to critical failures. Although SysML v2 provides expressive modeling capabilities, it lacks explicit execution semantics for structured interactions. To address this limitation, we propose a structured subset of SysML v2 to specify interactions at the structural level. These interactions are then mapped to the Behavior, Interaction, Priority (BIP) framework, which defines their execution semantics and enables formal analysis. Specifically, we introduce Rendez-vous and Broadcast connectors to enforce synchronization and one-to-many communication, respectively, ensuring that interactions are explicitly represented and amenable to formal analysis. BIP provides precise execution semantics, facilitating rigorous verification and streamlining the process by eliminating the need for external verification models. We validate our approach through a case study on swarm drone coordination, demonstrating structured execution, the ability to detect and resolve critical deadlocks, and the correctness and robustness of interactions.

Paper Nr: 92
Title:

Fuzzy Requirements Verification in SysML v2: Direct Modeling and Scenario-Based Analysis for Cyber-Physical Systems

Authors:

Adel Khelifati, Malika Boukala-Ioualalen and Ahmed Hammad

Abstract: Cyber-physical systems (CPS) often involve vague or qualitative requirements such as comfort or energy efficiency, which are difficult to verify using the crisp Boolean logic traditionally employed in Systems Modeling Language v2 (SysML v2). This paper introduces an approach for modeling and verifying fuzzy requirements directly within SysML v2 without modifying its metamodel or relying on external tools. Fuzzy semantics are encoded using native constructs such as calculation definitions, requirements, and constraints, with satisfaction degrees computed via trapezoidal membership functions and evaluated using the native expression evaluation mechanism provided by the modeling environment. We illustrate the effectiveness and feasibility of our expression-based fuzzy verification approach using a smart building Heating, Ventilation, and Air Conditioning (HVAC) system example and clearly show how the modeling is achieved in standard SysML v2 notation. Furthermore, to extend verification capability under variability and uncertainty, we introduce a complementary external transformation of expression-based model elements into Python scripts to perform scenario-based evaluation. A batch-based exploration method is then presented to systematically analyze fuzzy requirement satisfaction under different runtime conditions, offering insights into system robustness and design-space analysis.
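The trapezoidal membership functions mentioned above map a crisp measurement to a satisfaction degree in [0, 1]. A minimal sketch, with illustrative comfort-temperature breakpoints not taken from the paper:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c],
    with linear ramps on [a, b] and [c, d]. Assumes a < b <= c < d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Degree to which a room temperature satisfies a "comfortable" requirement.
for t in (17.0, 19.0, 22.0, 25.0, 27.0):
    print(t, trapezoid(t, 18.0, 20.0, 24.0, 26.0))
```

A fuzzy requirement then asserts a threshold on this degree (e.g. satisfaction at least 0.8) instead of a crisp Boolean pass/fail.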

Paper Nr: 45
Title:

Towards Quality Assessment of AI Systems: A Case Study

Authors:

Jesús Oviedo Lama, Jared David Tadeo Guerrero Sosa, Moisés Rodríguez Monje, Francisco Pascual Romero Chicharo and Mario Piattini Velthuis

Abstract: Artificial Intelligence has become a lever of change at all levels of society: in public administration, in companies and organizations, and even in the daily activities of individuals. Therefore, it is necessary, as with software in general, that Artificial Intelligence Systems deliver the results expected by their users; to this end, their functionality must be controlled and their quality assured. This article presents the results of a functional suitability evaluation of a real Artificial Intelligence System, applying an evaluation environment based on the ISO/IEC 25059 and ISO/IEC 25040 standards.

Paper Nr: 55
Title:

A Compliance Analysis of Agile PBB Method Practices with the Expected Results of the Requirements Engineering Process of the MPS.BR Maturity Model

Authors:

Jamilli Ynglid Carmo da Cunha, Sandro Ronaldo Bezerra Oliveira and Fábio Aguiar

Abstract: The Brazilian software market has shown significant growth, with a 7.9% increase in 2022, as indicated by the Brazilian Association of Software Companies. To meet the growing demand for high-quality technological solutions, many companies have adopted agile methodologies. However, the adoption of these methodologies, especially in requirements engineering, presents significant challenges, since this is a fundamental area for the final quality of software. This paper proposes an analysis of adherence between the agile Product Backlog Building (PBB) method and the Expected Results of the Requirements Engineering Process of the MPS.BR (Brazilian Software Process Improvement) model. The objective is to evaluate how the PBB meets the rigorous criteria established by the MPS.BR to obtain quality certification. The analysis revealed that, of the seven Expected Results, four were partially met and three were not met, highlighting the need for adjustments to the PBB method so that it can fully satisfy the MPS.BR requirements and allow companies to achieve certification without compromising the agility of their processes.

Paper Nr: 69
Title:

Containerizing the PowerAPI Architecture to Estimate Energy Consumption of Software Applications

Authors:

Daniel Guamán, Alejandra Barco-Blanca, Vanessa Rodríguez-Horcajo and Jennifer Pérez

Abstract: The widespread adoption of cloud architectures and the use of information technologies have a significant impact on software sustainability, particularly in terms of energy consumption. PowerAPI is a toolkit designed to estimate the energy consumption of software applications. It integrates hardware performance counters (HWPC) and SmartWatts formulas to analyze energy usage at different abstraction levels, providing sufficiently accurate estimation metrics to drive energy-efficient software design. However, its configuration and deployment may be complex. In this work, we aim to extend its use by facilitating its deployment. To that end, we present a study that explores the containerization of PowerAPI in two different measurement contexts. From the results of this study, a middleware solution to estimate the energy consumption of software applications, called PowerAPIDocker-Cloud, has been constructed. PowerAPIDocker-Cloud implements a scalable and reproducible energy consumption monitoring process in two different contexts: (i) Java Model-View-Controller (MVC) desktop monolithic applications and (ii) containerized microservice MVC applications written in different programming languages. The experimentation carried out during the study demonstrates the feasible measurement of 29 applications in the first context and 4 applications in the second. The set of experiments shows that PowerAPIDocker-Cloud is a reusable mechanism to easily and effectively estimate the energy consumption of MVC software applications using PowerAPI. In addition, the experiments contribute insights into how to design energy-efficient architectures and identify resource-efficient programming techniques that can help reduce the environmental impact of MVC software applications in containerized environments.

Paper Nr: 73
Title:

Generation of IT Project Documentation Elements from a Model Transformation Chain

Authors:

Oksana Nikiforova, Megija Krista Miļūne, Kristaps Babris and Oscar Pastor

Abstract: Documentation plays a crucial role in IT project management, particularly in the early stages, where it helps define requirements, establish goals, and mitigate risks. Ensuring documentation quality and consistency remains a challenge, necessitating adherence to established standards such as IEEE 830 and ISO/IEC 12207. In our previous research, we proposed a model transformation chain to facilitate the generation of IT project artefacts, a solution also put forward in scientific papers of the last five years. This paper expands on that work by examining how documentation elements can be systematically extracted and structured from a more detailed representation of the transformation chain. We discuss the documentation elements that can be derived and the mapping components for extracting relevant information from artefacts already developed in the project. Our findings highlight the potential to improve documentation quality and reduce manual effort. A case study demonstrates the practical application of our framework to the documentation of a small-scale IT project.

Paper Nr: 82
Title:

A Framework for System Design Using Collaborative Computing Paradigms (CCP) for IoT Systems

Authors:

Prashant G. Joshi and Bharat M. Deshpande

Abstract: The rising complexity of IoT systems demands a shift from traditional architectures to frameworks that are adaptable, scalable, and computationally efficient. Through research and practical experimentation, Collaborative Computing Paradigms (CCP) and the CCP IoT Reference Architecture (CCP-IoT-RA) have been validated as effective for seamless workload distribution, real-time processing, and dynamic resource allocation. Building on these findings, this paper presents a structured, implementation-refined framework to integrate CCP into IoT system design. Anchored on three principles—computational efficiency, data-centric operation, and long-term adaptability—it promotes dynamic workload distribution across computing paradigms to enhance system responsiveness. A use case-driven approach aligns architecture with real-world applications, while leveraging advances in high-performance embedded systems and edge platforms. Emphasizing standardization ensures interoperability across heterogeneous environments. Validated through experiments, the proposed CCP-based framework is recommended as a foundational methodology for next-generation IoT solutions.

Paper Nr: 84
Title:

LLMs as Code Generators for Model-Driven Development

Authors:

Yoonsik Cheon

Abstract: Model-Driven Development (MDD) aims to automate code generation from high-level models but traditionally relies on platform-specific tools. This study explores the feasibility of using large language models (LLMs) for MDD by evaluating ChatGPT’s ability to generate Dart/Flutter code from UML class diagrams and OCL constraints. Our findings show that LLM-generated code accurately implements OCL constraints, automates repetitive scaffolding, and maintains structural consistency with human-written code. However, challenges remain in verifying correctness, optimizing performance, and improving modular abstraction. While LLMs show strong potential to reduce development effort and better enforce model constraints, further work is needed to strengthen verification, boost efficiency, and enhance contextual awareness for broader adoption in MDD workflows.

Paper Nr: 93
Title:

Thoth: A Lightweight Framework for End-to-End Consumer IoT Rapid Testing

Authors:

Salma Roshdy Aly, Sherif Saad and Mohammad Mamun

Abstract: The rapid expansion of consumer IoT devices has increased the need for scalable, automated testing solutions. Manual methods are often slow, error-prone, and inadequate for capturing real-world IoT complexities. Existing frameworks typically lack comprehensiveness, quantifiable metrics, and support for cascading failure scenarios. This paper introduces Thoth, a lightweight, end-to-end IoT testing framework that addresses these limitations. Thoth enables holistic evaluation through integrated support for performance, reliability, recovery, security, and load testing. It also incorporates standardized metrics and real-time failure simulations, including cascading faults. We evaluated Thoth using eight test cases in a real-world health-monitoring setup involving a smartwatch, edge gateway, and cloud infrastructure. Key metrics—such as fault detection time, recovery speed, data loss, and energy usage—were logged and analyzed. Results show that Thoth detects faults in as little as 2.5 seconds, recovers in under 1 second, limits data loss to a few points, and maintains sub-1% energy overhead. These findings highlight its effectiveness for low-intrusion testing in resource-constrained environments. By combining scenario-driven design with reproducible, metrics-based evaluation, Thoth fills key gaps in IoT testing.
Download

Area 3 - Software Systems and Applications

Full Papers
Paper Nr: 18
Title:

Building a Decision Landscape Model for Software Development: From Empirical Insights to Formal Theory

Authors:

Hannes Salin

Abstract: Value stream mapping is a well-known method for identifying waste and bottlenecks in production processes. It is also used in the software engineering context to map the value streams of different processes in software development. By regarding decision-making processes as value streams, we can describe and further analyze decision-making flows in an organization for optimization goals such as reduced lead times or improved decision-making efficiency. Using empirical data from three different software development organizations in Sweden, we develop a formal model that describes decision-making flows in a software development organizational context in a compact and systematic syntax. Our formal model can thus be used to analyze decision-making flows and to support management in better understanding how decisions are made within their organizations.
Download

Paper Nr: 21
Title:

Dynamic Mitigation of RESTful Service Failures Using LLMs

Authors:

Sébastien Salva and Jarod Sue

Abstract: This paper presents a novel self-healing approach for RESTful services, leveraging the capabilities of large language models (LLMs) to generate source code that implements fine-grained mitigations. The proposed solution introduces 18 healing operators tailored for RESTful services, accommodating both grey-box and black-box perspectives. These operators implement a dual-mitigation strategy. The first mitigation employs encapsulation techniques, enabling dynamic service adaptation by generating supplementary source code without modifying the original implementation. If the primary mitigation fails, a fallback mitigation is applied to maintain service continuity. We investigate the potential of LLMs to perform the first mitigation of these healing operators by means of prompt chains we specifically designed for these tasks. Furthermore, we introduce a novel metric that integrates test-passing correctness and LLM confidence, providing a rigorous evaluation framework for the effectiveness of the mitigations performed by LLMs. Preliminary experiments using four healing operators on 15 RESTful services with multiple and varied vulnerabilities demonstrate the approach's feasibility and adaptability across both grey-box and black-box perspectives.
Download

Paper Nr: 29
Title:

Adopting Artificial-Intelligence Systems in Manufacturing: A Practitioner Survey on Challenges and Added Value

Authors:

Richard May, Leonard Cassel, Hashir Hussain, Muhammad Talha Siddiqui, Tobias Niemand, Paul Scholz and Thomas Leich

Abstract: Artificial-Intelligence Systems (AIS) are reshaping manufacturing by optimizing processes, enhancing efficiency, and reducing costs. Despite this potential, their adoption in practice remains challenging due to a limited understanding of technological complexities and practical hurdles. In this study, we present the findings of a survey of 26 manufacturing AIS practitioners, highlighting key challenges, strategies for implementing AIS more effectively, and perceived added value. Data preparation, deployment, operation, and change management were identified as the most critical phases, emphasizing the need for robust data management and scalable, modular (i.e., configurable) solutions. Predictive maintenance, driven by supervised learning, dominates current AIS, aligning with industry goals to reduce downtime and improve productivity. Despite the benefits, broader applications, such as real-time optimization and advanced quality control, seem to remain underutilized. Overall, the study aims to provide insights for both practitioners and researchers, emphasizing the importance of overcoming these barriers to facilitate the adoption of AIS in advanced manufacturing.
Download

Paper Nr: 49
Title:

Efficiency and Development Effort of OpenCL Interoperability in Vulkan and OpenGL Environments: A Comparative Case Study

Authors:

Piotr Plebański, Anna Kelm and Marcin Hajder

Abstract: The increasing demand for high-performance computing has led to the exploration of utilizing General-Purpose Graphics Processing Units (GPGPUs) for non-graphical tasks. In this paper, we present a comparative case study of OpenCL interoperability when paired with two widely used graphics APIs: OpenGL and Vulkan. By implementing an ocean wave simulation benchmark – where OpenCL handles compute-intensive tasks and the graphics API manages real-time visualization – we analyze the impact of API selection on both execution performance and development effort. Our results indicate that Vulkan’s low-level control and multi-threaded design deliver marginal performance improvements under minimal rendering loads; however, its increased code verbosity and complex synchronization mechanisms lead to a substantially higher development effort. In contrast, OpenGL, with its more straightforward integration and broad compatibility, provides a practical alternative for compute-first applications. The insights from this case study offer guidance for developers navigating the trade-offs between raw performance and maintainability in GPU-accelerated environments.
Download

Paper Nr: 66
Title:

Validating the Optimization of a Building Occupancy Monitoring Software System

Authors:

Jalil Boudjadar and Simon Thrane Hansen

Abstract: Real-time occupancy information is key to tracking and optimizing resource usage in buildings. With the ultimate goal of reducing energy consumption and improving indoor environment quality, different occupancy monitoring solutions have been introduced in the literature. However, these solutions can become part of the original problem due to their own high energy consumption. This paper proposes the design and validation of a knowledge-driven, energy-efficient system to monitor and analyze building occupancy. The system is an optimization of the actual software solution (Office Master 3000) used to monitor building occupancy in an academic institution. The key contributions are: 1) Optimization of the actual solution: using knowledge about the activity expected to take place, our system proactively identifies the minimal sensor data relevant to the actual state to confirm or deny that activity, so that non-essential sensors are operated at a reduced frequency; 2) Formal validation: using the UPPAAL model checker, we prove that the original functional and non-functional properties are maintained post-optimization. The proposed system is implemented in C++, tested, and validated. The results demonstrate that our optimization reduces sensor energy consumption by up to 31% while maintaining high accuracy (84%) in identifying occupancy states and activities.
Download

Paper Nr: 68
Title:

Efficient Source Code Authorship Attribution Using Code Stylometry Embeddings

Authors:

David Álvarez-Fidalgo and Francisco Ortin

Abstract: Source code authorship attribution or identification is used in the fields of cybersecurity, forensic investigations, and intellectual property protection. Code stylometry reveals differences in programming styles, such as variable naming conventions, comments, and control structures. Authorship verification, which differs from attribution, determines whether two code samples were written by the same author, often using code stylometry to distinguish between programmers. In this paper, we explore the benefits of using CLAVE, a contrastive learning-based authorship verification model, for Python authorship attribution with minimal training data. We develop an attribution system utilizing CLAVE stylometry embeddings and train an SVM classifier with just six Python source files per programmer, achieving 0.923 accuracy for 85 programmers, outperforming state-of-the-art deep learning models for Python authorship attribution. Our approach enhances CLAVE’s performance for authorship attribution by reducing the classification error by 45.4%. Additionally, the proposed method requires significantly lower CPU and memory resources than deep learning classifiers, making it suitable for resource-constrained environments and enabling rapid retraining when new programmers or code samples are introduced. These findings show that CLAVE stylometric representations provide an efficient, scalable, and high-performance solution for Python source code authorship attribution.
Download
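The pipeline this abstract describes, fixed-size stylometry embeddings fed to an SVM trained on only a few files per programmer, can be illustrated generically. The sketch below is not CLAVE or the authors' system: it substitutes random vectors for real style embeddings, and all names, dimensions, and author counts are invented, showing only the attribution step.

```python
# Generic embedding-based attribution sketch (hypothetical stand-in for
# real stylometry embeddings such as CLAVE's): train a linear SVM on a
# handful of fixed-size vectors per author, then attribute a new sample.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_authors, files_per_author, dim = 5, 6, 768

# Fake per-author "style centroids" plus small noise stand in for the
# embeddings a real stylometry model would produce for each source file.
centroids = rng.normal(size=(n_authors, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(files_per_author, dim))
               for c in centroids])
y = np.repeat(np.arange(n_authors), files_per_author)

clf = SVC(kernel="linear").fit(X, y)

# A new sample near author 2's style centroid should be attributed to author 2.
query = centroids[2] + 0.1 * rng.normal(size=dim)
print(clf.predict(query.reshape(1, -1))[0])
```

Because each class has only six training samples, a linear kernel and low-dimensional classifier like this is cheap to retrain when new programmers are added, which is the resource advantage the abstract highlights.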

Short Papers
Paper Nr: 43
Title:

Impact of Resource Heterogeneity on MLOps Stages: A Computational Efficiency Study

Authors:

Julio Corona, Pedro Rodrigues, Mário Antunes and Rui L. Aguiar

Abstract: The rapid evolution of hardware and the growing demand for Machine Learning (ML) workloads have driven the adoption of diverse accelerators, resulting in increasingly heterogeneous computing infrastructures. Efficient execution in such environments requires optimized scheduling and resource allocation strategies to mitigate inefficiencies such as resource underutilization, increased costs, and prolonged execution times. This study examines the computational demands of different stages in the Machine Learning Operations (MLOps) pipeline, focusing on the impact of varying hardware configurations characterized by differing numbers of Central Processing Unit (CPU) cores and Random Access Memory (RAM) capacities on the execution time of these stages. Our results show that the stage involving resource-intensive model tuning significantly influences overall pipeline execution time. In contrast, other stages can benefit from less resource-intensive hardware. The analysis highlights the importance of smart scheduling and placement, prioritizing resource allocation for model training and tuning stages, in order to minimize bottlenecks and enhance overall pipeline efficiency.
Download

Paper Nr: 54
Title:

Behavior Detection of Quadruped Companion Robots Using CNN: Towards Closer Human-Robot Cooperation

Authors:

Piotr Artiemjew, Karolina Krzykowska-Piotrowska and Marek Piotrowski

Abstract: In a world where mobile robotics is increasingly entering various areas of people’s lives, creating systems that track the behavior of mobile robots is a natural step toward ensuring their proper functioning. This is particularly important in cases where improper use or unpredictable behavior may pose a threat to the environment and, above all, to humans. It should be emphasized that this is especially relevant in the context of using robotic solutions to improve the quality of life for people with special needs, as well as in human–robot interaction. Our primary aim was to verify the experimental effectiveness of classification based on convolutional neural networks for detecting behaviours of four-legged robots. The study focused on evaluating the performance in recognising typical robot poses. The research was conducted in our robotics laboratory, using Spot and Unitree Go2 Pro quadruped robots as experimental platforms. We addressed the challenging task of pose recognition without relying on motion tracking — a difficulty particularly pronounced when dealing with rotations.
Download

Paper Nr: 64
Title:

Smart Separator: Optimizing Conveyor Belt, Vibration Feed, and Drum Speeds of Barrier Eddy Current Separator

Authors:

Shohreh Kia and Benjamin Leiding

Abstract: Efficient separation of metals and plastics in recycling is crucial for improving material purity and reducing costs. This paper optimizes the performance of a Barrier Eddy Current Separator (BECS) for sorting aluminium, copper, plastic, and brass. The BECS consists of a conveyor belt, vibration feeder, and magnetic drum. Current methods rely on operator experience for speed and angle settings, often leading to suboptimal performance. This research applies a data-driven approach to determine optimal operational parameters. The study examines how varying conveyor belt speed (6.80 Hz to 87.70 Hz), vibration feeder amplitude (low, medium, high), and magnetic drum angle (20°, 30°, 40°) affect separation accuracy and energy consumption. Eighty-two experiments measured separation errors and energy use, with machine learning models identifying optimal settings. Experimental validation showed significant error reduction, achieving the lowest separation errors and energy consumption. Minimizing errors also eliminated rework, improving efficiency. Unlike conventional trial-and-error methods, this systematic approach enhances BECS calibration, demonstrating its effectiveness in improving recycling separation accuracy and energy efficiency.
Download

Paper Nr: 67
Title:

A Software Architecture for Highly Configurable Software-Defined Networks

Authors:

Ichiro Satoh

Abstract: We propose a novel architecture for software-defined networking (SDN) that allows for dynamic addition and modification of functionalities. Existing SDN architectures separate data transfer from software changes, making it difficult to modify the latter in response to the former. In this paper, we treat the software that defines communication protocols as first-class objects, indistinguishable from data, enabling their transfer to other computers. Our contribution to software engineering is to introduce a dynamic and unified mechanism for software components involved in network processing, which enables flexible and diverse customization. To demonstrate the utility of our proposed approach, we implemented a prototype on a Java Virtual Machine (VM) and designed and implemented several practical protocols for both data transmission and component deployment.
Download

Paper Nr: 80
Title:

Cognify: A Modular Privacy-Conscious AI-Driven Mobile App for Mental Health Based on Cognitive Distortion Detection

Authors:

Mariam Dawoud, Mohamad Rasmy and Alia El Bolock

Abstract: Cognitive distortions—irrational thought patterns contributing to emotional distress—are central to cognitive behavioral therapy (CBT), but early detection often depends on clinical assessments, limiting opportunities for timely self-reflection. Cognify is a cross-platform mobile application designed to help users detect and understand these distortions by analyzing daily journal entries using a fine-tuned NLP model that classifies entries into 14 cognitive distortion types. The app offers real-time feedback, weekly summaries highlighting recurring patterns, and an intuitive interface that promotes ongoing engagement. This paper presents Cognify’s system architecture, AI model integration, and results from a pilot study, which demonstrated improved user awareness of cognitive patterns, high user satisfaction, and increased journaling consistency over time. The app’s modular design also allows for optional integration of privacy-preserving features, ensuring flexibility to address evolving user needs. By combining AI-driven distortion detection with an adaptable journaling experience, Cognify offers a practical and engaging tool for enhancing cognitive awareness and supporting personal growth.
Download

Paper Nr: 83
Title:

Enhancing Design-by-Contract with Frame Specifications

Authors:

Yoonsik Cheon and Benjamin Good

Abstract: This paper introduces an annotation-based approach to extending Design by Contract (DbC) with support for specifying and enforcing frame properties at runtime. Frame specifications, also known as frame conditions or frame properties, define which parts of a program’s state may be modified during execution. Our approach models object states as abstract tuples, ensuring that runtime checks do not introduce unintended side effects. We implement a proof-of-concept prototype in Dart, utilizing compile-time instrumentation and runtime reflection to accommodate optional typing. By automating contract enforcement, this approach reduces the need for manual assertions, simplifies code maintenance, and enhances clarity by separating program logic from runtime checks. We evaluate its effectiveness in a cross-platform mobile application, comparing it to traditional assertion-based methods.
Download
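A runtime frame check of the kind this abstract describes can be sketched in a language-agnostic way. The decorator below is a hypothetical stand-in, not the paper's Dart prototype: it snapshots an object's state before a method call and verifies afterwards that only the fields declared assignable were modified (the `assignable` name and the `Account` example are invented for illustration).

```python
# Hypothetical runtime frame-condition check: snapshot the receiver's
# state, run the method, then assert that only declared fields changed.
import copy
import functools

def assignable(*allowed):
    def wrap(method):
        @functools.wraps(method)
        def checked(self, *args, **kwargs):
            before = copy.deepcopy(self.__dict__)   # pre-state snapshot
            result = method(self, *args, **kwargs)
            for field, old in before.items():
                if field not in allowed and self.__dict__.get(field) != old:
                    raise AssertionError(f"frame violation: {field} modified")
            return result
        return checked
    return wrap

class Account:
    def __init__(self):
        self.balance = 0
        self.owner = "alice"

    @assignable("balance")          # frame spec: only `balance` may change
    def deposit(self, amount):
        self.balance += amount

acct = Account()
acct.deposit(10)
print(acct.balance)
```

Deep-copying the pre-state mirrors the abstract's point about modeling states as value snapshots so that the check itself introduces no side effects; a production implementation would also need to handle aliasing and newly added fields.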

Paper Nr: 91
Title:

A Controlled Experiment on the Effect of Ownership Rules and Mutability on Localizing Errors in Rust in Comparison to Java

Authors:

Lukas Poos, Stefan Hanenberg, Stefan Gries and Volker Gruhn

Abstract: The programming language Rust introduces language constructs such as ownership rules and mutability whose effect is that undesired side effects can be detected by the compiler. However, relatively little is known about the effect of such constructs on developers. The present work introduces an experiment in which Rust and Java code was given to ten participants. The code, which consisted of ten function calls, contained one function that performed an undesired side effect, leading to an error in the main function. The participants’ task was to identify the function that caused this effect. The experiment varied (in the Rust code) the number of calls where a parameter was passed as mutable (which is inherently the case in languages such as Java). This variation had a strong (p < .001) and large (ηₚ² = .459) effect on participants. On average, it took the participants 29% more time to identify the function in Java. However, this number varied between -4.3% and 117%, depending on how many parameters were passed as mutable. Altogether, the experiment gives evidence that the explicit passing of variables as mutable has a positive effect on developers under the experimental conditions.
Download

Paper Nr: 95
Title:

An Explainable Model for Waste Cost Prediction: A Study on Linked Open Data in Italy

Authors:

Lerina Aversano, Martina Iammarino, Antonella Madau, Debora Montano and Chiara Verdone

Abstract: Artificial intelligence and machine learning models are emerging as essential tools for optimizing municipal solid waste management and supporting policy decisions. However, the transparency and interpretability of these models’ predictions continue to be major obstacles. Recent advances in Explainable Artificial Intelligence (XAI) techniques have made it possible to explain specific model decisions and guarantee that the outcomes are intelligible and useful. Using high-quality Italian open data in the form of Linked Open Data (LOD), this study investigates the benefits and viability of creating explainable models for Italian municipalities. To achieve this, a method for using linked and open statistical data to create explainable models is provided. Additionally, a case study covering four years is presented, in which waste management expenses are predicted and interpreted using linked data about Italian municipalities, categorizing them into three cost bands. CatBoost was selected as the predictive algorithm, and the SHAP framework was used to guarantee the transparency of the predictions. Through transparent and accountable data management, this effort seeks to illustrate how cutting-edge technologies can enhance the sustainability of public programs.
Download

Paper Nr: 23
Title:

Intelligent Platform Using Natural Language Processing for Pre-Selection of Personnel Through Professional Values Required by Private Companies

Authors:

José L. Silva, Vicente O. Moya, Daniel W. Burga and Carlos A. Tello

Abstract: The pre-selection process is essential for companies, as it ensures the recruitment of competent staff for each position, maintaining a positive working environment crucial to meeting organizational objectives. This research presents an intelligent platform for the pre-selection of personnel based on professional values. When the selection process is poorly executed, it can lead to economic and intangible losses, such as delays in project progress and team demotivation. The platform employs natural language processing (NLP) to analyse applicant data, making it easier to identify candidates that best suit the needs of the company. The results indicate that the intelligent platform achieves an 80% accuracy in its recommendations.
Download

Paper Nr: 32
Title:

Agile Development of a Virtual Tour for Universidad Autónoma Metropolitana: Unidad Iztapalapa - One of the First Virtual Campus Experiences in Mexico City

Authors:

Benjamin Moreno-Montiel, Abel Isaac Samaniego-Alvarez, Luis Quiñones-Hernandez and Eva Lorena Pérez-de-la-Luz

Abstract: This paper presents the development process and outcomes of a virtual tour for Universidad Autónoma Metropolitana - Unidad Iztapalapa, one of the pioneering virtual campus experiences in Mexico City. Utilizing the Scrum framework with one-week sprint cycles, the project was executed over two academic terms, focusing on iterative development and continuous feedback. The virtual tour incorporates advanced 3D modeling, interactive guides, and dynamic student avatars, aiming to provide an immersive and informative experience. Key deployment strategies included hosting on AWS S3 and proprietary SSH servers using Docker containers, ensuring broad accessibility. Despite a small development team of four members, the project successfully reconstructed 16 major campus buildings, representing 44% of the infrastructure, and integrated features such as informational buttons and navigational aids. The results highlight significant advancements in virtual campus environments, with future work directed towards mobile optimization, gamified learning modules, and cooperative virtual interactions.
Download

Paper Nr: 39
Title:

Integrated System for Monitoring Workplace Stress Using Machine Learning Algorithms and IoT Devices in the Corporate Technology Sector

Authors:

Luis E. Leon, Adrian A. Benavente, Daniel Wilfredo Burga and Carlos Alberto Tello

Abstract: Work-related stress costs the global economy a trillion dollars and 12 billion lost workdays annually. This paper presents StressMonit, a system that monitors employee stress using heart rate data collected by an IoT device and a Fitbit Sense. The data is analyzed with an XGBoost machine learning algorithm and displayed visually in an application. The system was evaluated with 14 participants and validated against a medical tool, achieving an accuracy of 80.95% in stress prediction.

Paper Nr: 41
Title:

Requirements Cube: Towards a Matrix-Based Model of Requirements

Authors:

Benjamin Aziz, Gareth Hewlett, Ukamaka Oragwu, Peter Richards, Safa Tharib and Erica Yang

Abstract: In critical and robust systems, requirements modeling and analysis, called requirements engineering, is an essential step in the process of system development, which stems from the unambiguous identification of end-users’ and stakeholders’ needs and goals. Despite the wide application of requirements engineering methodologies, such as KAOS and i⋆/Tropos, this step is often marred by either a lack of robustness or a lack of usability on the part of the analysts. In this paper, we present a 3-dimensional model of requirements, called the Requirements Cube, that is clear and usable and can be manipulated using general matrix algebra. Our model stands on the three main components of requirements: goals, resources, and infrastructure, and does not introduce any complex concepts that may render it unusable. We consider the semantics of the Cube in three different value domains: 2-valued logic, 3-valued logic, and probabilistic values. Finally, we demonstrate how this model can be applied to healthcare monitoring scenarios.
Download

Paper Nr: 47
Title:

Product Line Engineering in Smart Governance Systems

Authors:

Salvador Muñoz-Hermoso, Miguel A. Olivero, Francisco J. Domínguez-Mayo and David Benavides

Abstract: Smart governance systems are used to develop smart and sustainable cities. Small municipalities are users of these systems. However, creating individual, custom systems for each municipality is costly and often unfeasible due to the limited resources of local governments. Developing a modular and configurable system could help reduce costs, enabling municipalities to tailor solutions to their specific needs without requiring custom developments. Software Product Line Engineering (SPLE) can make this possible by fostering software reuse and creating a set of adaptable systems that share common features. Nonetheless, applying SPLE in the smart governance domain remains challenging and, so far, there are no applications of SPLE in this area in the existing literature. We propose a Smart Governance Product Line (SGPL), based on a multi-level configuration architecture, for the customization of solutions in the smart governance domain. Based on the SGPL, we also present a prototype configurator for customizable governance systems that has been used in a case involving three municipalities with different needs. The tool was materialized with configurations of the existing Decidim governance system. The prototype demonstrated its usefulness in deploying configurations adapted to the municipalities’ needs in an easy and automated way. Furthermore, the case study suggests that this approach could evolve into a general framework supporting different software systems and components to provide a comprehensive smart governance platform tailored to institutional specifications.
Download

Paper Nr: 78
Title:

A Phishing Detection System for Enhanced Cybersecurity Using Machine Learning

Authors:

Adwaith Atholi Thiruvoth and Pushkar Ogale

Abstract: Email phishing is a pressing cybersecurity challenge that requires efficient detection methods. Emails that look legitimate lead users to malicious sites. Our work develops a machine-learning-driven email classification system named SecureInbox. A comparative study of classical machine learning techniques, including Random Forest, Naive Bayes, Decision Tree, SVM, and gradient-boosted regression trees, was conducted, achieving high accuracy and effectiveness in distinguishing between legitimate and phishing emails. The study makes use of various statistical methods and classification algorithms and provides a user-friendly graphical user interface (GUI) for seamless email classification. SecureInbox automatically fetches the mailbox file associated with the current user in a Linux environment and classifies the emails as phishing or not phishing while displaying the results interactively. Our work helps strengthen email security by providing a convenient tool for phishing email identification, thereby enhancing defence against cyber threats.
Download
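The classical text-classification setup this abstract compares can be illustrated with one of the named model families. The sketch below is a generic example, not the authors' SecureInbox implementation: it trains Multinomial Naive Bayes over TF-IDF features on a tiny invented corpus (all email texts and labels are made up for illustration).

```python
# Generic phishing-vs-legitimate classifier sketch: TF-IDF features
# plus Multinomial Naive Bayes, one of the model families compared above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy corpus; a real system would train on a labeled mailbox dump.
emails = [
    "Your account is suspended, verify your password at this link now",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting moved to 3pm, see attached agenda",
    "Lunch on Friday? Let me know what works for you",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Please verify your password to restore account access"])[0])
```

Swapping `MultinomialNB()` for `RandomForestClassifier()` or `GradientBoostingClassifier()` in the same pipeline is how such a comparative study can reuse one feature-extraction step across all candidate models.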

Paper Nr: 94
Title:

Method for Identification of Individual Flamingos Based on Movement Logs

Authors:

Riku Okazaki and Yu Suzuki

Abstract: This research aims to enable non-wearing, non-contact identification of individual animals. As an approach to this goal, we examined the feasibility of an identification method using movement logs, one aspect of animal behavior. In this study, flamingos kept in a zoo were targeted. To collect the movement logs of flamingos, we developed a YOLO-based system to identify individual flamingos and record their locations from videos taken in the zoo's keeping area. In addition, we analyzed the collected movement logs of multiple individuals using a neural network and found that the movement logs could be used to identify individuals with higher accuracy than random inference. Furthermore, we showed that the locations where individuals tend to stay and the postures they tend to adopt change depending on conditions such as weather (rainy or cloudy) and the time of day (noon time period). This research indicates the feasibility of identifying individuals by their movements.
Download