Abstract

Computational models are conventionally created with input data, script files, programming interfaces, or graphical user interfaces. This paper explores the potential of expanding model generation, with a focus on multibody system dynamics. In particular, we investigate the ability of Large Language Models (LLMs) to generate models from natural language. Our experimental findings indicate that LLMs, some of which have been trained on our multibody code Exudyn, surpass the mere replication of existing code examples. The results demonstrate that LLMs have a basic understanding of kinematics and dynamics, and that they can transfer this knowledge into a programming interface. Although our tests reveal that complex cases regularly result in programming or modeling errors, we found that LLMs can successfully generate correct multibody simulation models from natural-language descriptions for simpler cases, often on the first attempt (zero-shot). After a basic introduction to the functionality of LLMs, our Python code, and the test setups, we provide a summarized evaluation for a series of examples with increasing complexity. We start with a single mass oscillator, both in SciPy and in Exudyn, and include varied inputs and statistical analysis to highlight the robustness of our approach. Thereafter, systems with mass points, constraints, and rigid bodies are evaluated. In particular, we show that in-context learning can elevate basic knowledge of a multibody code into a zero-shot correct output.
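To illustrate the kind of model the evaluation starts from, the following minimal Python sketch integrates a single mass oscillator (mass-spring-damper) with SciPy's solve_ivp. It is an assumed illustrative example, not a prompt used in the study nor code generated by the LLMs; all parameter values are chosen for demonstration only.

```python
# Minimal sketch of a single mass-spring-damper oscillator integrated with SciPy.
# Illustrative only: parameters and initial conditions are assumptions, not the
# reference values or LLM-generated code from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m, k, d = 1.0, 100.0, 0.5   # mass [kg], stiffness [N/m], damping [N s/m] (assumed)
x0, v0 = 0.1, 0.0           # initial displacement [m] and velocity [m/s] (assumed)

def rhs(t, y):
    """First-order form of m*x'' + d*x' + k*x = 0."""
    x, v = y
    return [v, -(d * v + k * x) / m]

sol = solve_ivp(rhs, (0.0, 2.0), [x0, v0], max_step=1e-3)
print(f"displacement at t = {sol.t[-1]:.2f} s: {sol.y[0, -1]:.6f} m")
```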
