This study examines the biases present in generative artificial intelligence (AI) tools, focusing on GPT-3.5, GPT-4, and Bing. The tools' output was compared with that of a group of experts in linguistics and of journalists specializing in breaking news and international affairs. The results reveal that GPT-3.5, the most widely accessible and free of the three, generates tendentious wording at a higher rate, suggesting a bias intrinsic to the tool itself rather than to the input data. GPT-4 and Bing, by contrast, show differing patterns of term generation and subjectivity, with GPT-4 aligning more closely with expert opinion and producing fewer opinionated words. The research highlights the extensive use of generative AI in the media and among the general public, underscoring the need for caution when relying on AI-generated content. The findings stress the risks of misinformation and biased reporting inherent in unexamined AI output. The challenge for journalists and other information professionals is to apply accuracy checks and ethical judgment in content creation, so as to preserve the quality and diversity of journalistic work.