Text summarisation reduces text length while retaining important information, helping individuals, especially students, manage information overload during research or assignments. However, existing text summarisation methods often lose important details, generate irrelevant or redundant sentences, or produce incoherent summaries. This study introduces a hybrid approach, LongT5Rank (coined in this study), which combines TextRank, an extractive summarisation algorithm, with LongT5, an abstractive summarisation model, to automate the summarisation process. TextRank uses GloVe, a pre-trained word embedding model, and PageRank, a graph-based ranking algorithm, to select representative sentences. LongT5, an encoder-decoder transformer model designed for long-range sequence-to-sequence tasks, condenses the extracted sentences into a concise and coherent summary and can handle input sequences of up to 16,384 tokens. LongT5Rank achieves a compression rate of at least 60%, a semantic textual similarity score of at least 0.6, and an improved F-measure compared with using TextRank alone. Furthermore, it received positive feedback in Human Level Performance (HLP) evaluation, underlining the importance of assessing the proposed solution directly with human users. By combining extractive and abstractive methods, LongT5Rank generates accurate and coherent summaries.
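To make the two-stage pipeline concrete, the sketch below shows how an extractive TextRank stage (GloVe sentence vectors, a cosine-similarity graph, PageRank scoring) can feed an abstractive LongT5 stage. It is a minimal illustration under stated assumptions: the GloVe variant (glove-wiki-gigaword-100), the LongT5 checkpoint (google/long-t5-tglobal-base), NLTK sentence splitting, and all hyper-parameters are illustrative choices, not the exact configuration used in this study.

```python
# Minimal sketch of a LongT5Rank-style pipeline (assumed components, not the
# study's exact configuration): GloVe + PageRank for extraction, LongT5 for
# abstraction.
import numpy as np
import networkx as nx
import nltk
import gensim.downloader as api
from nltk.tokenize import sent_tokenize
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

nltk.download("punkt", quiet=True)
glove = api.load("glove-wiki-gigaword-100")  # assumed GloVe variant
tok = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")  # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base")

def sentence_vector(sentence):
    """Average the GloVe vectors of the sentence's in-vocabulary words."""
    words = [w for w in sentence.lower().split() if w in glove]
    return np.mean([glove[w] for w in words], axis=0) if words else np.zeros(glove.vector_size)

def textrank_extract(text, top_k=10):
    """Extractive stage: PageRank over a cosine-similarity sentence graph."""
    sentences = sent_tokenize(text)
    vecs = [sentence_vector(s) for s in sentences]
    sim = np.zeros((len(sentences), len(sentences)))
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i != j:
                denom = np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])
                sim[i, j] = float(vecs[i] @ vecs[j]) / denom if denom else 0.0
    scores = nx.pagerank(nx.from_numpy_array(sim))
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:top_k]
    return " ".join(sentences[i] for i in sorted(top))  # keep document order

def longt5rank_summary(text, max_input_tokens=16384, max_summary_tokens=256):
    """Abstractive stage: LongT5 condenses the extracted sentences."""
    extracted = textrank_extract(text)
    inputs = tok(extracted, return_tensors="pt", truncation=True, max_length=max_input_tokens)
    ids = model.generate(**inputs, max_new_tokens=max_summary_tokens, num_beams=4)
    return tok.decode(ids[0], skip_special_tokens=True)
```

In this arrangement the extractive stage filters the document down to its most representative sentences before the transformer is invoked, which keeps the abstractive input well within LongT5's 16,384-token limit even for very long source texts.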