Abstract

As software engineering techniques evolve, conventional methods become less efficient due to prolonged development cycles and repetitive tasks. There is a growing need for automated systems that can translate User Interface (UI) designs into code. This study proposes a novel hybrid CNN-GRU model for automatically generating website UI code from UI design images. The proposed method consists of a visual model and a language-decoder model. The visual model employs pretrained CNNs, including EfficientNetV2B0, Xception, and EfficientNetV2M, to extract features from images, whereas a GRU serves as the language model and decoder. An empirical study was conducted to evaluate the effectiveness of the various hybrid CNN-GRU configurations for generating UI code. The EfficientNetV2B0-GRU model demonstrated the fastest training time, whereas the Xception-GRU model was the most robust across data complexities. The results of this study will advance the field of automated code generation, simplify the front-end development process, and reduce the time and effort required for UI code implementation.
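To make the described architecture concrete, the sketch below shows one plausible way to wire a pretrained CNN encoder to a GRU decoder in Keras, in the pix2code style of predicting the next UI-code token from a screenshot plus the partial token sequence. It is only a minimal illustration, not the authors' exact model: the vocabulary size, sequence length, embedding and GRU dimensions, and the choice to freeze the CNN are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical hyperparameters; the paper's actual values are not given in the abstract.
VOCAB_SIZE = 20      # size of the UI-code token vocabulary (assumption)
MAX_SEQ_LEN = 48     # maximum length of the partial token sequence (assumption)
EMBED_DIM = 64
GRU_UNITS = 256

# Visual model: a pretrained CNN (EfficientNetV2B0 here) used as a frozen feature extractor.
cnn = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3)
)
cnn.trainable = False

image_input = layers.Input(shape=(224, 224, 3), name="ui_screenshot")
image_features = layers.Dense(EMBED_DIM, activation="relu")(cnn(image_input))
# Repeat the image embedding across time so it can be paired with every token position.
image_seq = layers.RepeatVector(MAX_SEQ_LEN)(image_features)

# Language model / decoder: token embedding followed by a GRU.
token_input = layers.Input(shape=(MAX_SEQ_LEN,), name="partial_code_tokens")
token_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(token_input)

decoder_input = layers.Concatenate()([image_seq, token_embed])
decoder_state = layers.GRU(GRU_UNITS)(decoder_input)
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(decoder_state)

model = Model(inputs=[image_input, token_input], outputs=next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Under this setup, swapping EfficientNetV2B0 for Xception or EfficientNetV2M only changes the visual-model constructor, which is presumably how the different hybrid configurations in the study were compared.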
