Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach
Vittoriano Muttillo
2024-01-01
Abstract
Producing accurate software models is crucial in model-driven software engineering (MDE). However, modeling complex systems is an error-prone task that requires deep application domain knowledge. In the past decade, several automated techniques have been proposed to support academic and industrial practitioners by providing relevant modeling operations. Nevertheless, those techniques require a large amount of training data that may not be available due to several factors, e.g., privacy issues. The advent of large language models (LLMs) can support the generation of synthetic data, although state-of-the-art approaches do not yet support the generation of modeling operations. To fill this gap, we propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations using LLMs. In particular, the architecture comprises modeling components that help the designer specify the system, record modeling operations within a graphical modeling environment, and automatically recommend relevant operations. In addition, we generate a completely new dataset of modeling events by relying on the most prominent LLMs currently available. As a proof of concept, we instantiate the proposed framework using a set of existing modeling tools employed in industrial use cases within different European projects. To assess the proposed methodology, we first evaluate the capability of the examined LLMs to generate realistic modeling operations by relying on well-founded distance metrics. Then, we evaluate the recommended operations by considering real-world industrial modeling artifacts. Our findings demonstrate that LLMs can generate modeling events, even though the overall accuracy is higher when considering human-based operations.
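
To illustrate the core idea described in the abstract, the following is a minimal sketch of how synthetic modeling operation traces could be generated via in-context learning and compared to real traces with a simple distance metric. All names, the prompt format, and the example traces are hypothetical assumptions for illustration; they are not taken from the paper, and the actual framework, tools, and metrics may differ.

```python
# Hypothetical sketch: few-shot (in-context learning) prompt construction for
# synthetic modeling operation traces, plus a simple distance-based comparison
# against a real trace. Trace format and names are illustrative assumptions.

from difflib import SequenceMatcher

# Example modeling event logs (operation + target element), used as in-context
# demonstrations. In the described framework these would be recorded from a
# graphical modeling environment.
REAL_TRACES = [
    ["ADD Class 'Order'", "ADD Attribute 'id' to 'Order'", "ADD Class 'Customer'",
     "ADD Reference 'customer' from 'Order' to 'Customer'"],
    ["ADD Class 'Sensor'", "RENAME 'Sensor' to 'TemperatureSensor'",
     "ADD Attribute 'unit' to 'TemperatureSensor'"],
]

def build_prompt(examples, n_new_events=4):
    """Assemble a few-shot prompt asking an LLM for a new synthetic trace."""
    shots = "\n\n".join(
        "Example trace:\n" + "\n".join(trace) for trace in examples
    )
    return (
        "You are given recorded modeling operation traces from a graphical modeling editor.\n"
        f"{shots}\n\n"
        f"Generate a new, realistic trace of {n_new_events} modeling operations, "
        "one per line, using the same format."
    )

def trace_distance(trace_a, trace_b):
    """Normalized distance between two traces (1 - similarity ratio)."""
    return 1.0 - SequenceMatcher(None, trace_a, trace_b).ratio()

if __name__ == "__main__":
    prompt = build_prompt(REAL_TRACES)
    # synthetic_trace = call_llm(prompt).splitlines()  # hypothetical LLM call
    synthetic_trace = ["ADD Class 'Invoice'", "ADD Attribute 'total' to 'Invoice'",
                       "ADD Class 'Payment'",
                       "ADD Reference 'payment' from 'Invoice' to 'Payment'"]
    print(prompt)
    print("Distance to first real trace:",
          round(trace_distance(synthetic_trace, REAL_TRACES[0]), 3))
```

In this sketch, lower distance values indicate that a generated trace more closely resembles a recorded human trace, which mirrors the paper's finding that accuracy is higher for human-based operations.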