Researchers from Samara University have proposed a new approach to human–AI co-authorship of texts, one that could help avoid disputes in scientific and educational settings. The results are published in the journal Semiotic Research.
The rapid adoption of neural networks in scientific work raises the question of whether artificial intelligence systems can be recognized as full-fledged co-authors of publications. According to sociologists from Samara National Research University, there are already cases worldwide in which artificial intelligence (AI) is listed among the authors of scientific articles, and in some instances as the sole author of a work.
The article cites several precedents: the international databases Web of Science and Scopus contain four articles that list ChatGPT as an author, including two in which it is indicated as the sole author. Scopus alone holds at least two publications with AI co-authorship, and in one case the chatbot was later removed from the author list at the request of the journal's editorial board.
To study the problem in detail, the University's specialists analyzed the process of creating texts with AI. Based on the results, the researchers argue that the traditional understanding of authorship needs substantial revision in the context of generative AI: here, a person acts not only as a creator but, above all, as a curator, editor and interpreter.
“We have come to the conclusion that such works have a hybrid nature. This approach opens the path to new models of ‘distributed’ and ‘intertwined’ authorship, in which AI becomes a participant in the creative process, but the human researcher remains ultimately responsible for the content,” commented Natalia Maslenkova, Associate Professor at Samara University’s Department of Sociology and Cultural Studies.
The researchers paid special attention to the ethical issues of using generative AI to write scientific texts. Known risks include academic fraud, for example when a person passes off entirely AI-generated content as their own, without assuming the role of editor and responsible author.
“Our findings can form the basis of ethical and legal norms that increase transparency in the use of AI and prevent abuse. This is especially important in education, science and the media, where the question of who is responsible for content created with the participation of AI is highly topical,” the authors of the article believe.
According to the University’s specialists, the novelty of their work lies in the fact that it is not limited to legal analysis but takes an integrated approach combining philosophical, sociocultural and legal perspectives. The researchers link current AI practices to theories of distributed and networked authorship, allowing a deeper understanding of how the roles of humans and machines in text creation are being transformed.
One point of view holds that AI is a “stochastic parrot” that imitates human speech on the basis of statistics; on this view, its “subjectivity” remains a social construct reflecting the collective contribution of developers and users.
The scientists now face the task of developing recommendations for the transparent and responsible use of generative models in education and scientific publishing, as well as studying how notions of authorship change across social groups as AI spreads.
Source: ria.ru