Abstract

The development of science and technology has enabled humans to carry out, more or less effortlessly, many activities that previously demanded considerable physical effort and time. While this development allows people to enjoy more leisure, it has also encouraged a culture of physical laziness. The same trend appears in the intellectual field. Modern computers can, in moments, carry out tasks that earlier would have required enormous time and mental effort. Again, this saving of time enables one to focus on the creative and insightful activities that are specific to human beings. However, the culture of laziness is widespread. There appears at present to be a tendency even to attempt to transfer creative intellectual effort to machines, through the use of generative AI. Writing up a good piece of research for publication is one of the creative and enjoyable activities afforded to a scientist. The use of generative AI for help in deciding the most effective way of "saying what one has to say" may be acceptable where the writer's language skills are not well developed; but to use generative AI to decide the content of the paper, "what to say", is to attribute to generative AI an ability it does not have. Recently, scientists and sociologists have written about the deficiencies and dangers of this approach to creating knowledge and understanding. To be meaningful, any generative AI output needs to be moderated and evaluated by human intelligence.
