Abstract

Advances in technology, particularly high-performance computing (HPC) and large language models (LLMs) such as ChatGPT, have the potential to transform the field of Structural Engineering. LLMs offer several opportunities in Structural Engineering, including developing innovative design solutions, automating repetitive coding tasks in code-based structural analysis programs, automating compliance checks against building code requirements, and storing information. At the same time, LLMs raise critical concerns regarding biases, misinformation, safety, reliability, and lack of domain expertise. This paper explores the opportunities and risks associated with using ChatGPT and LLMs in Structural Engineering, focusing on efficiency, accuracy, and reliability. The main aim of the study is to examine the limitations and potential risks of relying solely on machine-generated information and to provide mitigation strategies for overcoming them. Addressing ethical concerns such as bias, privacy, and abuse requires careful management to prevent harmful content, collaboration with human experts to ensure accurate results, and the establishment of guidelines and standards. Continuous monitoring and updating of LLMs are essential to maintain accuracy and relevance. While ChatGPT and LLMs offer significant benefits in Structural Engineering, responsible usage that combines human expertise with machine-generated insights is vital to maximizing their potential while mitigating risks and ensuring safe and reliable engineering practices.

