Large Language Models (LLMs) are prominent AI tools with potential uses in a wide range of applications involving natural language interaction and the exchange of information with human users. These models, however, can also spread misinformation and misconceptions, especially when used by individuals who lack the expertise to critically assess their output. ChatGPT, developed by OpenAI, is a publicly available LLM and one of the most widely known. This paper explores whether the information it provides about the concept of risk and some related basic notions can be considered sufficiently correct. Specifically, ChatGPT was first used to build a glossary of basic concepts for the risk analysis field, modeled after that of the Society for Risk Analysis (SRA). The model was then used to assess the quality of the generated entries in terms of clarity, precision, completeness, and presence of examples, and to compare these entries with those of the SRA glossary, again for quality and for semantic similarity. Independent ChatGPT user sessions were used throughout, so that earlier outputs could not influence later ones. The results suggest that, while the SRA and ChatGPT entries may differ in focus and scope, they share a common core and do not conflict in substantial ways. We can therefore adopt the working hypothesis that ChatGPT does not promote major misconceptions regarding the foundational definitions of risk, at least with respect to those provided by the SRA.
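To give a concrete picture of how such a comparison could be carried out programmatically, the sketch below shows one possible way to query ChatGPT in independent sessions through the OpenAI Python client, asking it to judge the semantic similarity of a generated entry and the corresponding SRA definition. The model name, prompt wording, and rating scale are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: comparing a ChatGPT-generated glossary entry with the
# corresponding SRA definition in a fresh (independent) session.
# Model name, prompt wording, and rating scale are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def compare_definitions(term: str, chatgpt_entry: str, sra_entry: str) -> str:
    """Ask the model, in a new conversation, to rate the semantic similarity
    of two definitions of the same term and briefly justify the score."""
    prompt = (
        f"Term: {term}\n"
        f"Definition A: {chatgpt_entry}\n"
        f"Definition B: {sra_entry}\n"
        "On a scale from 1 (unrelated) to 5 (semantically equivalent), how "
        "similar are these two definitions? Briefly justify the score."
    )
    # Each call starts a new conversation, so earlier comparisons cannot
    # influence this one (the "independent sessions" constraint).
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage with made-up entries:
# print(compare_definitions(
#     "risk",
#     "Risk is the possibility of an adverse outcome ...",
#     "Risk refers to the potential for undesirable consequences ...",
# ))
```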