Objectives: Task offloading in edge computing (EC) plays an important role in optimizing resource utilization and enhancing system performance. This paper studies various AI-based computation offloading (CO) strategies and proposes a hybrid adaptive learning algorithm that significantly reduces latency compared to traditional on-policy and off-policy reinforcement learning (RL) algorithms, namely Q-Learning (QL) and State-Action-Reward-State-Action (SARSA), for CO in an EC environment. The paper evaluates and compares the efficiency of these algorithms in optimizing dynamic offloading decisions, with latency as the primary metric.

Methods: The research builds on existing literature that has explored various applications of QL and SARSA in CO and mobile EC. This paper examines how these algorithms perform in reducing overall latency and proposes an adaptive hybrid algorithm.

Novelty: Although earlier research on CO in EC includes other AI-based techniques, the primary novelty here is the hybrid adaptive RL algorithm. Factors impacting the task-offloading decision are central to the proposed algorithm. Moreover, the decision is not based on static policies alone; the current state drives decision-making.

Findings: The adaptive balance between exploration and exploitation proves effective in reducing latency, as the simulation results show. The proposed algorithm outperforms Q-Learning and SARSA in terms of latency.

Keywords: Computation offloading, Mobile edge computing, Reinforcement learning, Q-learning, SARSA
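The abstract describes a hybrid of the off-policy Q-Learning update and the on-policy SARSA update, with the blend driven by the agent's current state of exploration. The sketch below is a minimal illustration of one way such a hybrid update can be written; the blend weight, the epsilon-greedy policy, and all parameter values are illustrative assumptions, not the paper's published rule.

```python
import numpy as np

class HybridAgent:
    """Hypothetical sketch: blends the Q-Learning (off-policy) and SARSA
    (on-policy) targets. The adaptive weight used here is an assumption."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.3):
        self.Q = np.zeros((n_states, n_actions))  # action-value table
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, rng):
        # Epsilon-greedy: explore with probability epsilon, else exploit.
        if rng.random() < self.epsilon:
            return int(rng.integers(self.Q.shape[1]))
        return int(np.argmax(self.Q[state]))

    def update(self, s, a, r, s_next, a_next):
        # Blend the off-policy (max) and on-policy (next-action) targets.
        # Tying the weight to the exploration rate is an illustrative
        # choice: while exploring heavily, lean on the SARSA target.
        w = self.epsilon
        target = r + self.gamma * ((1 - w) * self.Q[s_next].max()
                                   + w * self.Q[s_next, a_next])
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
```

With `epsilon = 0` the update reduces exactly to Q-Learning, and with `epsilon = 1` to SARSA, so the two baselines compared in the paper are the endpoints of this blend.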