Aim/Purpose: This mixed-methods study examines the factors influencing academicians' intentions to continue using AI-based chatbots by integrating the Task-Technology Fit (TTF) model with social network characteristics.

Background: AI-powered chatbots are gaining popularity across industries, including academia, yet empirical research on academicians' adoption behavior remains limited. This study proposes an integrated model incorporating TTF factors and social network characteristics such as density, homophily, and connectedness to understand academics' continuance intentions.

Methodology: In the first phase, a qualitative study based on 31 interviews with academics from India examined attitudes toward chatbots and the potential role of social network characteristics (density, homophily, and connectedness) in adoption. Results showed positive sentiment toward chatbots and surfaced themes on how peer groups accelerate diffusion. In the second phase, a survey of 448 faculty members from prominent Indian universities tested the proposed research model.

Contribution: The study proposes and validates an integrated model of TTF and social network factors that influence academics' continued usage intentions toward AI chatbots, highlighting the nuanced role of peer networks in shaping adoption.

Findings: Task and technology characteristics positively affected academics' intentions to continue using AI chatbots. Among the network factors, density showed the strongest effect on TTF and perceived usefulness, while homophily and connectedness had partial effects. The study provides insights into designing AI tools appropriate to the academic context.

Recommendations for Practitioners: AI chatbot designers should align features with academics' task needs and preferences; compatibility with academic work culture is critical. Given peer network influences, training and demonstrations for user groups can enhance adoption, and platforms should support collaborative use. Targeted messaging customized to disciplines can resonate better with academic subgroups, and multidisciplinary influencers should be engaged. Concerns such as plagiarism risks, privacy, and job impacts should be addressed transparently.

Recommendations for Researchers: More studies are needed across academic subfields to understand nuanced requirements and barriers. Future work should investigate differences across disciplines and demographics; the relative effects of specific network factors such as size, proximity, and frequency of interaction; the role of academic leadership and institutional policies in enabling chatbot adoption; and how biases in AI training affect usefulness perceptions and raise ethical issues.

Impact on Society: Increased productivity in academia through the appropriate and ethical use of AI can enhance quality, access, and equity in education. AI can assist with mundane tasks, freeing academics' time for higher-order objectives such as developing students' critical thinking. Responsible AI design and policies that consider socio-cultural aspects will support sustainable growth. With careful implementation, AI chatbots can positively affect student engagement, learning support, and research efficiency.

Future Research: Conduct longitudinal studies to examine the long-term impacts of AI chatbot usage in academia, tracking usage behaviors over time as familiarity develops. Investigate differences across academic disciplines and roles; requirements may vary for humanities versus STEM faculty, or for undergraduate versus graduate students. Assess user trust in AI, how it evolves with repeated use, and strategies for building trust. Develop frameworks to assess the pedagogical effectiveness and ethical risks of conversational agents in academic contexts.