Background: As large language models continue to grow in scale and diversity, their substantial potential and the relevance of their applications are increasingly being recognized. Their rapid advancement also carries significant implications for the long-term design of stimulus-responsive materials used in drug delivery.

Methods: The large language models were built on Hugging Face's Transformers library with the BigBird, Gemma, and GPT NeoX architectures. Pre-training was performed on the PubChem dataset and fine-tuning on QM7b. Chemist instruction training was based on Direct Preference Optimization (DPO). Drug-likeness, synthetic accessibility, and PageRank scores were used to filter candidate molecules. All computational chemistry simulations were performed with ORCA using time-dependent density functional theory (TD-DFT).

Results: To optimize large models for processing extensive datasets and for learning that approaches a chemist's intuition, deeper chemical insight must be integrated into their training. Our study first compared the performance of BigBird, Gemma, GPT NeoX, and other architectures on the design of photoresponsive drug delivery molecules. We gathered excitation energy data with computational chemistry tools and further investigated light-driven isomerization reactions as a critical mechanism in drug delivery. In addition, we explored the effectiveness of incorporating human feedback into reinforcement learning to give the models chemical intuition, improving their understanding of the role of -N=N- groups in the photoisomerization transitions of photoresponsive molecules.

Conclusions: We implemented an efficient design process, driven by large language model technology and grounded in structural knowledge and data, to obtain a candidate dataset of specific photoswitchable molecules. However, the lack of specialized domain datasets remains a challenge for maximizing model performance.
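As a minimal sketch of the modeling setup summarized above, the snippet below loads a GPT NeoX-family model with Hugging Face Transformers and runs a short causal language modeling pass over a SMILES corpus. The checkpoint name, dataset file, and training arguments are illustrative assumptions rather than the configuration used in this work.

```python
# Minimal sketch: domain pre-training of a GPT NeoX-family model on SMILES text.
# The checkpoint, file name, and hyperparameters are placeholders, not the paper's settings.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

checkpoint = "EleutherAI/pythia-160m"          # small GPT NeoX-family model; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token      # GPT NeoX tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Hypothetical PubChem-derived SMILES file, one molecule per line.
dataset = load_dataset("text", data_files={"train": "pubchem_smiles.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pretrain_out",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```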
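The chemist-instruction step based on Direct Preference Optimization could, under one common implementation choice, be run with the TRL library. TRL is not named here, and the preference pairs and hyperparameters below are hypothetical, so this is a sketch of the technique rather than the authors' pipeline; note that DPOTrainer argument names vary slightly across TRL releases.

```python
# Sketch of Direct Preference Optimization (DPO) with the TRL library (assumed choice).
# The preference pairs encode hypothetical chemist feedback: the "chosen" design keeps
# the photoactive -N=N- group, the "rejected" one does not.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

checkpoint = "path/to/domain-pretrained-model"  # placeholder path to the pre-trained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

pairs = Dataset.from_dict({
    "prompt":   ["Design a photoswitchable drug-delivery molecule:"],
    "chosen":   ["c1ccc(/N=N/c2ccccc2)cc1"],
    "rejected": ["CCCCCCCC"],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo_out", beta=0.1, per_device_train_batch_size=1),
    train_dataset=pairs,
    processing_class=tokenizer,   # named `tokenizer=` in older TRL versions
)
trainer.train()
```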
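One plausible reading of the filtering stage is that drug-likeness is computed with RDKit's QED, synthetic accessibility with the SA score from RDKit's contrib module, and PageRank over a molecular similarity graph with NetworkX. The libraries, similarity cutoff, and thresholds below are assumptions for illustration, not the criteria reported in this work.

```python
# Sketch of candidate filtering by drug-likeness (QED), synthetic accessibility (SA score),
# and PageRank over a Tanimoto-similarity graph. All cutoffs are assumed values.
import os, sys
import networkx as nx
from rdkit import Chem, DataStructs, RDConfig
from rdkit.Chem import AllChem, QED

# The SA score lives in RDKit's contrib directory rather than the main package.
sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer

smiles = ["c1ccc(/N=N/c2ccccc2)cc1", "CCO", "CC(=O)Oc1ccccc1C(=O)O"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# PageRank over a similarity graph of the candidates (assumed interpretation of the score).
g = nx.Graph()
g.add_nodes_from(range(len(mols)))
for i in range(len(mols)):
    for j in range(i + 1, len(mols)):
        if DataStructs.TanimotoSimilarity(fps[i], fps[j]) > 0.3:   # assumed cutoff
            g.add_edge(i, j)
pr = nx.pagerank(g)

for i, (s, m) in enumerate(zip(smiles, mols)):
    qed = QED.qed(m)                   # drug-likeness, 0 (poor) to 1 (ideal)
    sa = sascorer.calculateScore(m)    # synthetic accessibility, 1 (easy) to 10 (hard)
    keep = qed > 0.5 and sa < 5.0      # assumed thresholds; PageRank could rank the survivors
    print(f"{s}  QED={qed:.2f}  SA={sa:.2f}  PageRank={pr[i]:.2f}  keep={keep}")
```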
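Excitation energies from TD-DFT in ORCA would typically come from an input file along the lines sketched below; the functional, basis set, and number of excited states are placeholders, not the settings used in the study.

```python
# Sketch: writing a minimal ORCA TD-DFT input to compute vertical excitation energies
# of a candidate photoswitch. Functional, basis set, and NRoots are placeholder choices.
orca_input = """\
! B3LYP def2-SVP TightSCF
%tddft
  NRoots 5
end
* xyzfile 0 1 candidate.xyz
"""

with open("tddft.inp", "w") as fh:
    fh.write(orca_input)

# The calculation is then run externally, e.g.  orca tddft.inp > tddft.out,
# and excitation energies are read from the absorption-spectrum block of the output.
```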