Automated story generation in Arabic poses distinct challenges owing to the intricacies of the language. This study introduces a novel methodology that departs from conventional event-driven narrative frameworks and instead restructures existing narratives using pretrained language models. Our approach first extracts key story entities with mBERT; XLM-RoBERTa and a BERT-based linguistic evaluation model then guide a beam search that replaces these entities with new ones. Finally, Low-Rank Adaptation (LoRA) is applied to fine-tune the 3-billion-parameter BLOOMZ model for Arabic story generation. The methodology was thoroughly tested and validated, with each submodel assessed individually. The ROCStories dataset was used to train the story entity extractor and the new entity generator, and also to fine-tune the BLOOMZ model, while the Arabic ComVE dataset was used to train the commonsense evaluation model. Our analyses yield several insights into the efficacy of the approach. The story entity extractor achieved a robust F-score of 96.62%, and the commonsense evaluator reached an accuracy of 84.3%, surpassing the previous best by 3.1%. The proposed beam search strategy produced entities that were linguistically and semantically superior to those generated by baseline models, and subjective evaluations further confirm the methodology's ability to generate high-quality Arabic stories that are linguistically fluent and logically coherent.
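To make the LoRA step more concrete, the minimal sketch below shows how a 3-billion-parameter BLOOMZ checkpoint could be wrapped with low-rank adapters via the Hugging Face PEFT library before fine-tuning on story data; the checkpoint name, adapter rank, and target modules are illustrative assumptions rather than the exact configuration used in this work.

```python
# Illustrative sketch only: LoRA adaptation of a BLOOMZ checkpoint with Hugging Face PEFT.
# The checkpoint name, rank, and hyperparameters below are assumptions for demonstration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "bigscience/bloomz-3b"  # assumed 3B-parameter checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small low-rank update matrices into the attention projections,
# so only a tiny fraction of the 3B parameters is actually trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM-style fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction

# After fine-tuning on story data, the adapted model can be prompted for Arabic generation:
prompt = "اكتب قصة قصيرة عن"  # "Write a short story about ..." (illustrative prompt)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The appeal of this setup is that only the adapter weights are updated, which keeps fine-tuning a multi-billion-parameter model feasible on modest hardware; the full training loop and data preparation are omitted here for brevity.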