Older adults, a population particularly susceptible to misinformation, may be targeted by health-related scams and fraud, and they may unknowingly spread misinformation themselves. Previous research has investigated managing misinformation through media literacy education or by supporting users with fact-checking and warnings about potentially misleading content, yet studies focusing on older adults remain limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies on designing technology for older adults adopt a needs-based approach that frames aging as a deficit, which leads to problems with technology adoption. Instead, we adopted an asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices. This study aims to understand how older adults may use chatbots' capabilities for misinformation management. We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots could help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults' encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots could intervene in the scenarios from stage 1. We found that issues with older adults' misinformation management arose more from interpersonal relationships than from individuals' ability to detect misinformation in specific pieces of content. This finding underscores the importance of designing chatbots that act as mediators, facilitating communication and helping resolve conflict. In addition, participants emphasized the importance of autonomy; they wanted chatbots to teach them to navigate the information landscape and reach their own conclusions about misinformation. Finally, we found that older adults' distrust of IT companies and of governments' ability to regulate the IT industry affected their trust in chatbots. Chatbot designers should therefore consider drawing on well-trusted sources and practicing transparency to increase older adults' trust in chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact-checking. This study provides insights into how chatbots can be designed as part of technological systems for misinformation management among older adults. Our findings underscore the importance of inviting older adults to be active co-designers of chatbot-based interventions.