Abstract

Advanced AI approaches can facilitate the development of Domain-Specific Languages (DSLs) for domain experts who are not proficient in programming language development. In this paper, we first addressed this problem using Semantic Inference. However, this approach is very time-consuming: the generated language specifications contain a substantial amount of code bloat, which increases the time required to evaluate a solution. To mitigate this, we introduced a multi-threaded approach that accelerates the evaluation process by more than 9.5 times, while the improved Long Term Memory Assistance (LTMA) reduced the number of fitness evaluations by up to 7.3%. Finally, we proposed reducing the number of input samples (fitness cases), which lowers CPU consumption even further.
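To make the parallel evaluation idea concrete, the sketch below shows one plausible way to evaluate a candidate's fitness cases concurrently using Python's standard concurrent.futures module. It is only an illustration under stated assumptions, not the paper's implementation; the function evaluate_fitness_case and the candidate/fitness-case representations are hypothetical placeholders.

    # Minimal sketch of parallel fitness evaluation (illustrative only).
    # evaluate_fitness_case is a hypothetical scoring function; the real
    # system evaluates generated language specifications instead.
    from concurrent.futures import ProcessPoolExecutor

    def evaluate_fitness_case(candidate: str, case: str) -> float:
        """Hypothetical: score one input sample (fitness case) against a
        candidate language specification and return its error."""
        return float(len(candidate) != len(case))  # placeholder scoring

    def evaluate_candidate(candidate: str, fitness_cases: list[str]) -> float:
        """Evaluate all fitness cases of one candidate in parallel and
        aggregate the per-case scores into a single fitness value."""
        with ProcessPoolExecutor() as pool:
            scores = pool.map(evaluate_fitness_case,
                              [candidate] * len(fitness_cases),
                              fitness_cases)
            return sum(scores)

    if __name__ == "__main__":
        cases = ["a := 1", "b := a + 2", "print(b)"]
        print(evaluate_candidate("candidate-specification", cases))

Because each fitness case is independent, distributing them across worker processes (or threads, if the evaluator releases the GIL) parallelizes the most expensive part of a fitness evaluation; reducing the number of fitness cases, as proposed in the paper, shrinks this workload directly.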
