ChatGPT can’t be credited as an author

Springer Nature, the world’s largest academic publisher, have stated that ChatGPT cannot be credited as an author. It can, however, be used as a research tool.


The world’s largest publisher, Springer Nature, have declared that AI tools cannot be credited as authors. When it comes to scientific papers, the company has updated their policies to make things clear. Scientists are allowed to use ChatGPT to develop ideas for their research. However, they cannot directly use the generated text within their work.

Published written work must be attributed to a human author, and any use of AI in the research must be disclosed. Multiple papers and articles have already listed ChatGPT and similar models as authors. This could be damaging, because these tools cannot take responsibility for the work they produce.


Some researchers have gone as far as to say that crediting ChatGPT as an author is not only “deeply stupid” but also “absurd”. When you think about it, it’s incredibly demoralising for real authors. Hours, months, even years go into their work, and people are now claiming a computer could do all of this in minutes?

How insensitive these AIs seem depends entirely on how you use them. Previously, we published a fun article written by ChatGPT. Would we do it again? No. Why? Because there isn’t the sense of human connection that comes from a service like this. We like to put our own words and feelings into our blog.

Ansible Health credit ChatGPT as an author

Ansible Health, a support program for COPD sufferers, have recently included ChatGPT as an author in their findings. What’s concerning about this is that ChatGPT isn’t medically trained, and therefore the information it provides might not always be correct. Incorrect information being spread could be incredibly damaging.

However, if the AI was simply used to reword topics that were fact-checked by professionals, then surely that isn’t a bad thing? Ansible Health’s CEO said the following: “The reason why we listed [ChatGPT] as an author was because we believe it actually contributed intellectually to the content of the paper and not just as a subject for its evaluation”.

Springer Nature have explained why giving authorship to an AI is damaging by saying “When we think of authorship of scientific papers, of research papers, we don’t just think about writing them. There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities”.

Looking at it from this point of view does make sense. After all, computers, while extremely clever, cannot make decisions in the same way the human mind would. Anything posted using AI software should, as a matter of common sense, be checked to make sure it’s factual. However, our advice? Don’t use it for anything that could be damaging, such as medical information.

It’s almost hard to see where ChatGPT would fit in. After all, it cannot be used in educational settings either. Using software to write your essay or report is cheating the system, and students would lose out because their work wouldn’t be accepted. What are your opinions, though? Ban it? Use it in moderation? Or do you not care either way?

Found this helpful? Share it with your friends!