The fiend that lies like truth

Written by

Howy Jacobs


Most of us in the world of scientific publishing agree that the integrity and reliability of the literature is under threat from many directions. One of the most perfidious is generative AI. As it improves, it will soon be capable not just of writing a paper based on ‘real’ experimental data, but of concocting whatever data are needed to support almost any hypothesis, and passing the result off as a genuine contribution to knowledge.

In 1974, leaders of the emerging field of gene manipulation called for a moratorium on the use of the technology they had themselves developed [1]. Their concerns were the technology’s inherent and potential safety risks, and the possibility that it might be misused by unscrupulous individuals to do harm. This led to the convening of the Asilomar conference in 1975, which debated the issues surrounding the safe and ethically acceptable use of gene technology, both in science and in its eventual, wider application.

The guidelines that emerged from this conference [2] introduced a strict regulatory framework for gene manipulation and modification. The framework was initially implemented as a voluntary code of conduct, but was soon adopted by governments and enshrined in legislation. The guidelines remain in place, are broadly respected and enforced, and have been vital in persuading the vast majority of people that scientists use this technology only for the public good.

I believe something similar is now needed to regulate and limit the use of AI in compiling scientific manuscripts, preferably structured around strict and enforceable guidelines. Public discourse is already becoming polluted by ‘fake news’ and by the thousands of voices presenting warped versions of reality, refracted through a distorting prism of prejudice and malice. If this trend were to take over the scientific literature, we would risk being catapulted back to the ignorance and confusion of the dark ages, when myth, legend and superstition took precedence over logic and empirical study. The literature is already bloated and increasingly unreliable, but the widespread intrusion of generative AI could prove terminal: not just in spite of our advanced technologies, but because of them.

As editor of a scholarly journal, I see these dangers very starkly. What constitutes the misuse of AI needs to be clearly demarcated. Performing a statistical analysis using an online tool that can be cited and accessed by a third party is perfectly acceptable. Compiling a fake but convincing microscopy image to demonstrate a novel finding is not, regardless of whether the finding in question subsequently proves to be valid. A more insidious danger is that AI might reinforce and propagate ‘honest misinterpretations’, decreasing the scope for original science and resulting in the fossilization of knowledge. Real science generally progresses not by entrenching a majority opinion but by overturning it.

It should not be too hard to define what is and is not acceptable, and periodically review and update guidelines that all can understand and follow. Those guilty of flouting the guidelines need to be sanctioned by having research funding withdrawn and being banned from publishing, at least for several years if not permanently. Funders and publishers need to sign up to such a code of conduct and introduce meaningful tools for enforcement. We need to self-regulate, or else risk an irreversible collapse of the entire edifice of human knowledge.

Some may argue that the danger lies not in the tool itself but in the humans who misuse it. Yet some tools are simply too dangerous to be used without proper regulation: atomic fission, for example, or the laboratory propagation of potentially lethal pathogens such as respiratory coronaviruses. A more recent Asilomar conference has already produced more generally applicable guidance on the development and use of AI [3], although some have argued that this declaration is toothless and far too limited [4]. It should not be too hard to convene a meeting of stakeholders in scientific communication and research funding, to work through the issues and come up with a viable set of rules.

In Shakespeare’s Macbeth, ‘the fiend that lies like truth’ is the harbinger of the protagonist’s downfall, deceiving him with apparently honest and straightforward statements that hide a darker reality. We urgently need a mechanism to distinguish scientific truth from falsehood and ensure that it wins the day.

References

[1] Committee on Recombinant DNA Molecules (1974) Potential biohazards of recombinant DNA molecules. Proc Natl Acad Sci USA 71: 2593-2594.
[2] Berg P, Baltimore D, Brenner S, Roblin RO, Singer MF (1975) Summary statement of the Asilomar conference on recombinant DNA molecules. Proc Natl Acad Sci USA 72: 1981-1984. doi: 10.1073/pnas.72.6.1981.
[3] Future of Life Institute (2017) Asilomar AI Principles.
[4] Bartosch U et al. (2018) Policy Paper on the Asilomar Principles on Artificial Intelligence. Federation of German Scientists e.V. (VDW), Berlin.
