
The New Coffee Room


On the limits of LLMs

General Discussion · 3 posts · 3 posters · 33 views

jon-nyc (#1) wrote:

I’d be curious what Klaus thinks of this.

Only non-witches get due process.

• Cotton Mather, Salem, Massachusetts, 1692

Horace (#2) wrote:

I suspect Klaus will resonate with this gentleman. My first reaction was that if the authors think humans are capable of some "genuine reasoning" that computers have not already achieved in principle, then their position is not well thought out. They pretend to be talking about logic, but they're actually talking about something else, never quite defined.

Education is extremely important.

Klaus (#3) wrote:

I'm not an expert on this, but my feeling is that we have in some ways reached "Peak LLMs" already. LLMs are inherently limited by their probabilistic nature. They just go by likelihoods of "which word is likely to come next, given what was written before". These systems have no clue whether what they generate is true or complete BS.
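
To make the next-token point concrete, here is a minimal sketch with a made-up prompt and made-up scores standing in for a real model's output; the only thing the sampling step consults is the probability of each candidate word, never whether the continuation is true.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The capital of Australia is" (made-up numbers,
# not taken from any real model).
logits = {"Canberra": 7.2, "Sydney": 6.9, "Melbourne": 4.1, "banana": 0.3}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Sample the next token in proportion to its probability. Nothing in
# this step checks whether the chosen word is actually correct; a
# plausible-sounding wrong answer ("Sydney") is just a dice roll away.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("continuation:", next_token)
```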

It's amazing how far this gets you, but I think these systems need to be augmented by forms of symbolic reasoning to make real progress.
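
One way to read "augmented by forms of symbolic reasoning" is the tool-use pattern: hand anything with an exact answer to a symbolic engine and keep the LLM for everything else. A rough sketch of that routing idea, using SymPy as an illustrative symbolic engine (the answer function and its routing rule are hypothetical, not from any particular system):

```python
from sympy import sympify, SympifyError

def answer(question: str, llm_guess: str) -> str:
    """Route exact, symbolically checkable questions to a symbolic engine;
    fall back to the language model's probabilistic guess otherwise."""
    try:
        # sympify parses and evaluates arithmetic/algebraic expressions exactly.
        exact = sympify(question)
        return str(exact)
    except (SympifyError, SyntaxError, TypeError):
        # Not a well-formed expression: use the LLM's answer.
        return llm_guess

# The symbolic path returns the exact product; an LLM sampling tokens
# might produce a digit string that merely looks right.
print(answer("12345 * 6789", llm_guess="83,815,205 (model guess)"))
print(answer("Who wrote Hamlet?", llm_guess="William Shakespeare"))
```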

One reason why ChatGPT et al. are so good at mathematical tasks is that most tasks we can think of are not genuinely new but just variants of tasks where many solutions exist in the training set. If you ask them something genuinely new, they usually fail.
