Abstract

Motivated by ground-breaking progress in automatic code generation, machine translation, and computer vision, this work aims to further simplify the web design workflow and make it more productive. A model architecture is proposed for generating static web templates from hand-drawn images. The pipeline uses a word-embedding technique followed by a long short-term memory (LSTM) network for code-snippet prediction. In addition, a Canny edge detection algorithm combined with a VGG19 convolutional neural network (CNN) and an attention-based LSTM is used for web template generation. The extracted features are concatenated, and a terminal LSTM with a softmax function produces the final prediction. The proposed model is validated against a benchmark using the BLEU score, and its performance improvement is compared with existing image-generation algorithms.
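
As a hedged illustration of the BLEU-based validation mentioned above, the snippet below scores a generated token sequence against a reference sequence using NLTK's sentence-level BLEU. The token sequences and the smoothing choice are placeholder assumptions, not values or outputs from the paper.

    # Minimal BLEU-scoring sketch; tokenisation and smoothing method are assumed,
    # not taken from the paper's evaluation setup.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Hypothetical reference and generated HTML/DSL token sequences
    reference = [["<html>", "<body>", "<div>", "btn-green", "</div>", "</body>", "</html>"]]
    generated = ["<html>", "<body>", "<div>", "btn-red", "</div>", "</body>", "</html>"]

    smooth = SmoothingFunction().method1
    score = sentence_bleu(reference, generated, smoothing_function=smooth)
    print(f"BLEU: {score:.3f}")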

Highlights

  • Demand for websites, ranging from blogs and e-commerce sites to product pages, has increased considerably in recent years due to more open access to the internet

  • As the model processes each feature vector from the Convolutional Neural Network (CNN) output, its attention mechanism gradually shifts focus across the different HTML components in the same image, as shown in Fig. 4. The idea of adding an attention mechanism to the Long Short-Term Memory (LSTM) architecture was inspired by the image-captioning work of Kelvin Xu et al. and Mahalakshmi & Sabiyath Fatima [23,20]

  • 3.5 Encoder: the input image is passed through the Canny edge detection algorithm and fed into the CNN, which is coupled with an attention-based LSTM used for object detection (see the sketch after this list)
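
The sketch below illustrates the image path described in the highlights: Canny edge detection followed by VGG19 feature extraction, producing spatial feature maps over which an attention LSTM could attend. The Canny thresholds, input size, and chosen feature layer ('block5_conv4') are illustrative assumptions, not settings reported in the paper.

    # Canny edge detection + VGG19 feature extraction sketch (assumed parameters)
    import cv2
    import numpy as np
    from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
    from tensorflow.keras.models import Model

    def extract_sketch_features(image_path):
        # Read the hand-drawn sketch in grayscale and detect edges with Canny
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, 100, 200)                     # assumed thresholds

        # VGG19 expects a 3-channel 224x224 input, so resize and stack the edge map
        edges = cv2.resize(edges, (224, 224))
        rgb = np.stack([edges] * 3, axis=-1).astype(np.float32)
        batch = preprocess_input(np.expand_dims(rgb, axis=0))

        # Take feature maps from the last convolutional block; an attention LSTM
        # would then attend over these spatial locations
        base = VGG19(weights='imagenet', include_top=False)
        extractor = Model(inputs=base.input,
                          outputs=base.get_layer('block5_conv4').output)
        return extractor.predict(batch)                       # shape (1, 14, 14, 512)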

Summary

1. INTRODUCTION

Demand for websites, ranging from blogs and e-commerce sites to product pages, has increased considerably in recent years due to more open access to the internet. As the model processes each feature vector from the CNN output, its attention mechanism gradually shifts focus across the different HTML components in the same image, as shown in Fig. 4. The idea of adding an attention mechanism to the LSTM architecture was inspired by the image-captioning work of Kelvin Xu et al. and Mahalakshmi & Sabiyath Fatima [23,20]. 3.5 Encoder: the input image is passed through the Canny edge detection algorithm and fed into the CNN, which is coupled with an attention-based LSTM used for object detection. A second set of dependent LSTMs receives inputs from the word embedding used for Automatic Code Generation (ACG) and is stacked with one-hot encoders for the final output; this one-hot encoding establishes a mapping between digits and HTML tags. A concatenation layer is then introduced to create the final set of image-HTML features, as sketched below.
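
A minimal Keras sketch of the concatenated image/text encoder with a terminal LSTM and softmax, as described above, is given below. The vocabulary size, sequence length, and layer widths are assumed values for illustration and do not come from the paper.

    # Dual-branch encoder sketch: image features + token LSTM, concatenated and
    # decoded by a terminal LSTM with a softmax over the HTML/DSL vocabulary.
    from tensorflow.keras.layers import (Input, Dense, Embedding, LSTM,
                                         RepeatVector, concatenate)
    from tensorflow.keras.models import Model

    VOCAB_SIZE = 90      # assumed number of HTML/DSL tokens
    SEQ_LEN = 48         # assumed context window of previous tokens
    IMG_FEAT_DIM = 512   # assumed size of the pooled VGG19 feature vector

    # Image branch: pooled CNN features, projected and repeated across time steps
    img_in = Input(shape=(IMG_FEAT_DIM,))
    img_seq = RepeatVector(SEQ_LEN)(Dense(256, activation='relu')(img_in))

    # Token branch: word embedding followed by an LSTM over previous tokens
    tok_in = Input(shape=(SEQ_LEN,))
    tok_emb = Embedding(VOCAB_SIZE, 64)(tok_in)
    tok_seq = LSTM(256, return_sequences=True)(tok_emb)

    # Concatenate the two feature streams, then decode with a terminal LSTM + softmax
    merged = concatenate([img_seq, tok_seq])
    decoded = LSTM(256)(merged)
    next_token = Dense(VOCAB_SIZE, activation='softmax')(decoded)

    model = Model(inputs=[img_in, tok_in], outputs=next_token)
    model.compile(loss='categorical_crossentropy', optimizer='adam')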

Decoder
IMPLEMENTATION
5. RESULT
Findings
CONCLUSION AND FUTURE WORKS
