DATASOFT_DC 2022 Abstracts


Short Papers
Paper Nr: 1
Title:

Predictive Intelligence using the Example of Forecasting Customer Orders in the Metalworking Industry

Authors:

Sebastian Junghans

Abstract: Auerhammer Metallwerk GmbH (AMW) is an internationally active producer of semi-finished metal products based in Aue, Germany, with a focus on nickel-bearing alloys. The product portfolio comprises plated strip, thermal bimetal, metal strip and metal foil. The cladding and cold-rolling company operates mainly in make-to-order production. AMW's customers are primarily series producers with relatively constant, recurring call-offs. In make-to-order production, a single product is manufactured to customer specifications as part of a customer order. This has the advantage that the customer receives a product according to their individual specifications. The disadvantage is that AMW's customers have to accept waiting times from order placement to delivery, depending on the complexity of the product. The procurement period is between 12 and 16 weeks; together with the in-house production time of four weeks, this results in a total value-adding time of approximately 20 weeks. The overarching goal of the project is to create sales forecasts of expected future customer orders and their attributes based on the relevant historical data from the IT systems of AMW. The aim is to shorten the current delivery time of 20 weeks without increasing inventory levels. The forecasts are to be created using machine learning methods, drawing on the existing historical data from logistics and accounting. In addition, further influencing variables, yet to be determined, from heterogeneous areas and data sources in the business context will be included. Essentially, a customer orders an end product in a certain quantity from AMW as part of a customer order, specifying the dimensions. On the basis of this customer order, AMW orders starting material from a chain of subcontractors. The type and quantity of the starting material required for the customer order should be derivable from the forecasted attributes of the customer order. From these forecasts, all relevant variables should be derived so that the required input material can be ordered before a customer order arrives, thereby significantly shortening the delivery time to the customer. Furthermore, the academic work examines the consequences of a robust forecasting model for the organization, summarized under the technical term predictive intelligence. To this end, change processes, strategic supplier management, material requirements planning, production program planning, dynamic pricing and the improvement of sustainability in terms of the EU taxonomy are presented and discussed.
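
As an illustration of the intended approach, the following minimal sketch forecasts next-month order quantities from historical call-offs. The schema (customer_id, alloy, order_date, quantity), the monthly granularity and the lag features are assumptions made for this example, not details taken from the AMW project.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative schema: one row per call-off with customer, alloy,
# order date and ordered quantity. Not AMW's actual data model.
orders = pd.read_csv("historical_orders.csv", parse_dates=["order_date"])

# Aggregate call-offs into monthly quantities per customer and alloy.
monthly = (orders
           .groupby(["customer_id", "alloy",
                     pd.Grouper(key="order_date", freq="MS")])["quantity"]
           .sum()
           .reset_index())

# Lag features: quantities ordered in the previous three months
# (assumes a contiguous monthly history per customer and alloy).
for lag in (1, 2, 3):
    monthly[f"lag_{lag}"] = (monthly
                             .groupby(["customer_id", "alloy"])["quantity"]
                             .shift(lag))
monthly = monthly.dropna()

model = GradientBoostingRegressor()
model.fit(monthly[["lag_1", "lag_2", "lag_3"]], monthly["quantity"])
# A predicted next-month quantity would trigger the raw-material order
# 12 to 16 weeks before the customer order itself is expected to arrive.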

Paper Nr: 2
Title:

Enhancing the Provision of Labour Market Intelligence using Machine Learning

Authors:

Aleksander Bielinski

Abstract: The success of Labour Market Intelligence (LMI) for forecasting is predicated on accurate, reliable, robust, and accessible data that underpins the decision-making process. Developing, processing and maintaining vast quantities of often unstructured data is extremely difficult. The launch of SDS's LMI system for staff in 2010 demonstrated the demand for web-based labour market reports and commentary. Artificial Intelligence provides a host of exciting opportunities for forecasting by employing statistical methods to derive reliable, explainable information from this complex data. The research focuses on applying novel and explainable AI approaches to enhance the current provision of LMI, with particular reference to skills planning, forecasting and investment in training provision. In the initial stages of the research, existing and potential applications of machine learning to enhance LMI are being identified and explored. Thereafter, suitable data sources for predictive purposes, such as web vacancy data, will be identified and assessed. The development of novel machine learning models will represent a move towards more dynamic forecasting capabilities, which will be evaluated through explainable machine learning approaches. Completion of the study will contribute new knowledge through the provision of state-of-the-art models for LMI forecasting, natural language processing and information representation. Moreover, given the importance of the current Scottish environmental strategy, this research will also introduce a standardized framework for classifying green jobs and skills and will later investigate the sector in detail. This proposal is highly innovative: it extends and enhances the scope of current LMI forecasting, it supports improvements in the field of natural language processing, and it strengthens the SDS Strategic Frameworks beyond 2022 by ensuring that the latest evidence underpins future frameworks.
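
By way of illustration, the following minimal sketch shows one plausible building block of such a system: a text classifier that labels vacancy postings as green jobs or not. The training examples and labels are invented for this sketch, and the pipeline (TF-IDF features plus logistic regression) is one simple choice among many; the per-term coefficients of the linear model hint at the kind of explainability the research calls for.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples; a real model would be trained on curated
# vacancy data and evaluated with explainable-ML techniques.
train_texts = [
    "wind turbine maintenance technician, offshore rotations",
    "retrofit assessor for domestic energy efficiency upgrades",
    "retail assistant, weekend shifts, city centre store",
    "office administrator supporting the finance team",
]
train_labels = [1, 1, 0, 0]  # 1 = green job, 0 = other

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(train_texts, train_labels)

# Expected to lean towards 1 (green): every query term occurs
# only in the green training examples.
print(clf.predict(["offshore wind energy technician"]))
# The coefficients of the LogisticRegression step indicate which
# terms drive a 'green' prediction, a simple form of explanation.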

Paper Nr: 3
Title:

Process and Assessment Framework for Successful Big Data Adoption and Implementation in Organizations

Authors:

Norhayati Daut

Abstract: Big data analytics (BDA) can help organizations utilize their data to identify new prospects and possibilities. Even though demand is high, a comprehensive review of existing studies on process and assessment frameworks for BDA adoption and implementation is still lacking. This research aims to develop process and assessment frameworks for BDA adoption and implementation in organizations. The frameworks are accompanied by guidelines and instruments that assist with each of the required activities. The research methodology combines a systematic literature review with a case-study approach for applying and validating the developed frameworks in selected organizations. This research is valuable in providing a standard process framework that various organizations can use whenever they want to adopt and implement BDA in their environment.

Paper Nr: 5
Title:

Towards Efficient Software Comprehension with Code Analysis

Authors:

Robert Husák

Abstract: As modern software systems are becoming more and more complex, the difficulty of their development increases as well. To keep this complexity manageable, it is crucial to help software developers perform their tasks efficiently and correctly. While developers take part in many different tasks, studies have shown that the most challenging one is the comprehension of existing code. Xia et al. have discovered that developers spend as much as 58% of their time on program comprehension activities. The study of LaToza et al. shows that the knowledge of different parts of the code is distributed across multiple teams and developers. Therefore, when a developer works on a task spanning multiple code parts, other colleagues are often asked to share their related knowledge, causing interruptions and fragmentation of their work. Development teams usually tackle these problems by adjusting the development process. For example, if developers ask too often about the same part of the system, it may be efficient to explicitly capture its most essential features in written documentation. To reduce the influence of work interruptions, developers can plan explicit meetings, e.g., morning stand-up meetings. When applied successfully, these activities can indeed make software development more efficient. However, they still bring non-negligible overhead, such as the maintenance of the documentation or the extra time regularly spent in scheduled meetings. A possibly more efficient way to help developers with program comprehension might be to equip them with better tools which empower them to distribute their knowledge more easily. For example, it is possible to create novel integrated development environments (IDEs) which utilise the recent trends from the field of human-computer interaction (HCI). However, these tools usually originate from academia and rarely gain any significant adoption in practice. Also, most of them do not use automated code analysis techniques, leaving the actual code exploration and reasoning to developers themselves. On the other hand, there is a plethora of automated code analysis techniques, for example, data-flow analysis, abstract interpretation, constraint-based analysis or symbolic execution. The academic community is very active in developing these techniques and creating new ones. Regrettably, the research usually focuses on their technical qualities rather than on improving their practical usability for a wide audience. As a result, their adoption in practice is limited to specific domains where the increase in code quality justifies the time investment, e.g., in embedded software development. My research aims to discover whether it is possible to combine HCI and code analysis knowledge to create practically usable program comprehension tools.
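
To make the flavour of such techniques concrete, the following toy sketch (invented for this summary, not one of the analyses referenced above) uses Python's ast module to flag names that are read before any assignment in a function. It is a much simplified relative of data-flow analysis: it walks statements in source order and handles only a few statement kinds, whereas production analyses model control flow precisely.

import ast

# Toy subject program: 's' is read on the right-hand side of the
# assignment before it has ever been bound.
SOURCE = """
def total(prices):
    for p in prices:
        s = s + p
    return s
"""

def loads(expr):
    """Names read by an expression."""
    return [n for n in ast.walk(expr)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)]

def check(stmts, assigned, problems):
    """Walk statements in source order, tracking bound names."""
    for stmt in stmts:
        if isinstance(stmt, ast.Assign):
            # Python evaluates the right-hand side before binding targets.
            for name in loads(stmt.value):
                if name.id not in assigned:
                    problems.append((name.id, name.lineno))
            for node in ast.walk(stmt):
                if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
        elif isinstance(stmt, ast.For):
            for name in loads(stmt.iter):
                if name.id not in assigned:
                    problems.append((name.id, name.lineno))
            if isinstance(stmt.target, ast.Name):
                assigned.add(stmt.target.id)
            check(stmt.body, assigned, problems)
        elif isinstance(stmt, ast.Return) and stmt.value is not None:
            for name in loads(stmt.value):
                if name.id not in assigned:
                    problems.append((name.id, name.lineno))

func = ast.parse(SOURCE).body[0]
problems = []
check(func.body, {a.arg for a in func.args.args}, problems)
for name, line in problems:
    print(f"'{name}' may be read before assignment (line {line})")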