
The New Coffee Room

Nonhuman Medical “Authors”

General Discussion · 2 Posts · 2 Posters · 29 Views
George K (#1) wrote:

    Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge

    Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

    In November 2022, OpenAI released a new natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

    Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

    The scholarly publishing community has quickly reported concerns about potential misuse of these language models in scientific publication.1,12-14 Individuals have experimented by asking ChatGPT a series of questions about controversial or important topics (eg, whether childhood vaccination causes autism) as well as specific publishing-related technical and ethical questions.9,10,12 Their results showed that ChatGPT’s text responses to questions, while mostly well written, are formulaic (which was not easily discernible), not up to date, false or fabricated, without accurate or complete references, and worse, with concocted nonexistent evidence for claims or statements it makes. OpenAI acknowledges some of the language model’s limitations, including providing “plausible-sounding but incorrect or nonsensical answers,” and that the recent release is part of an open iterative deployment intended for human use, interaction, and feedback to improve it.2 That cautionary acknowledgment is a clear signal that the model is not ready to be used as a source of trusted information, and certainly not without transparency and human accountability for its use.

    "Now look here, you Baltic gas passer... " - Mik, 6/14/08

    The saying, "Lite is just one damn thing after another," is a gross understatement. The damn things overlap.

Aqua Letifer (#2) wrote:

      Oh, hey, what do you know, another very serious implication coming out of technology no one is fucking paying attention to until after it causes problems.

      Please love yourself.
