The New Coffee Room

Me Learning LLM Reasoning

General Discussion
14 Posts 5 Posters 321 Views

    Axtremus
    #1

    By happenstance of the YouTube algorithm, I stumbled upon these two related videos:

    Video #1 — a respected AI researcher’s lecture given to Stanford engineering students (in English):

    Link to video

    Video #2 — a YouTuber’s retelling of that lecture (in Mandarin):

    Link to video

    I noticed Video #2 first, but realized very quickly that Video #2 is a retelling of Video #1, so I figured: why watch Video #2 when I can just watch the original Video #1?

    I very quickly got my answer: as good a scholar as the presenter of Video #1 is, his lecture is quite hard to follow. The pacing, the storytelling, the command of the language, the structure of the presentation, the delivery, etc. are not very good. A brilliant researcher, but not a very good lecturer. I gave up and decided to try the alternative 10~15 minutes into Video #1.

    Video #2 is 1/3 the length of Video #1, but its overall presentation is much nicer. I have to mentally translate some terms of art back to English to map them to what I am more familiar with, but with the key slides shown in the original English, that’s not too hard to do. I watched Video #2 at 1.5x speed and got what I wanted in maybe 1/5 of the time I would have spent on Video #1 had Video #2 not existed.

    There probably is a lesson somewhere in this experience for multilingual LLM machine learning/reasoning, but I don’t know what it is yet. 🤷

      Klaus
      #2

      What was the most interesting thing you learned?

        jon-nyc
        #3

        I recently purchased these but haven’t sat down with them yet. They start with the mathematics and go into the practical application. Read the Amazon reviews.

        Available in a bunch of languages too.

        [Photos of the books: IMG_7997.jpeg, IMG_7998.jpeg]


          Axtremus
          #4

          @Klaus said in Me Learning LLM Reasoning:

          What was the most interesting thing you learned?

          That virtually all of it comes down to trial and error, with no derivation from first principles.

            Klaus
            #5

            Can you elaborate?

              Horace
              #6

              Who thinks LLMs reason anyway? They predict strings of words or sub-words, and their neural net is reinforced by humans who give thumbs up or down to certain responses.

              There's an interesting case to be made that if reasoning was involved in the creation of the words in the training set, then that reasoning survives in the words predicted by the LLM trained on that set. Even if the LLM can't disentangle that reasoning from its predictions.


                Horace
                #7

                @Horace said in Me Learning LLM Reasoning:

                Who thinks LLMs reason anyway? They predict strings of words or sub-words, and their neural net is reinforced by humans who give thumbs up or down to certain responses.

                There's an interesting case to be made that if reasoning was involved in the creation of the words in the training set, then that reasoning survives in the words predicted by the LLM trained on that set. Even if the LLM can't disentangle that reasoning from its predictions.

                ChatGPT responds:

                That’s a sharp way of framing it. Let me break this into a few layers:


                1. The standard view: LLMs as pattern completers

                • At the mechanistic level, LLMs don’t “reason” in the way humans typically mean the word.
                • They’re statistical models of conditional probability: given a history of tokens, predict the next token.
                • The reinforcement (RLHF) you mentioned just shifts the probability landscape toward outputs that humans like, but it doesn’t fundamentally add a reasoning module.
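
                (To make "given a history of tokens, predict the next token" concrete, here is a minimal toy sketch in Python. The tiny corpus, the bigram count table, and the temperature knob are illustrative assumptions made up for this thread, not the lecture's or ChatGPT's code; a real LLM computes the same kind of conditional next-token distribution with a transformer over subword tokens rather than a count table, but the generation loop has the same shape.)

                import random
                from collections import Counter, defaultdict

                # Toy stand-in for "the training set" (illustrative only).
                corpus = "the model predicts the next token and the next token follows the last".split()

                # Count bigrams cyclically so every token has at least one follower.
                counts = defaultdict(Counter)
                for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
                    counts[prev][nxt] += 1

                def next_token_distribution(history):
                    """Conditional distribution P(next token | last token of the history)."""
                    followers = counts[history[-1]]
                    total = sum(followers.values())
                    return {tok: c / total for tok, c in followers.items()}

                def sample_next(history, temperature=1.0):
                    """Sample one continuation; lower temperature means greedier choices."""
                    dist = next_token_distribution(history)
                    tokens = list(dist)
                    weights = [p ** (1.0 / temperature) for p in dist.values()]
                    return random.choices(tokens, weights=weights, k=1)[0]

                # Autoregressive generation: each predicted token is appended and becomes context.
                # RLHF-style fine-tuning would nudge these probabilities toward responses humans
                # rate highly; the predict-append-repeat loop itself stays the same.
                history = ["the"]
                for _ in range(6):
                    history.append(sample_next(history))
                print(" ".join(history))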

                2. The “reasoning residue” argument

                Here’s where your point kicks in. Much of the text in the training corpus was itself produced by humans reasoning—deriving a theorem, explaining a cause, constructing an argument.

                • That means the distribution of tokens encodes traces of reasoning patterns.
                • When the LLM predicts continuations, it is effectively interpolating through that latent structure.
                • Even if the LLM doesn’t “know it’s reasoning,” it reproduces patterns that are consequences of reasoning.

                So in a sense, you’re right: the reasoning that went into producing the training data survives in fossilized form inside the model.


                3. Emergent reasoning-like behavior

                Because human reasoning is systematic, the training process induces internal circuits that can manipulate abstract relations (chains of if-then, causal analogies, etc.).

                • Some researchers call this “simulacrum reasoning”: the model doesn’t reason in the causal, self-directed sense, but it produces outputs that look like reasoning because the statistical map encodes it.
                • Chain-of-thought prompting is an example: when we nudge the model into “thinking step by step,” it can unspool reasoning-like patterns that were baked into the data.
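
                (As a concrete illustration of that nudge, here is a minimal sketch of a direct prompt next to a chain-of-thought style prompt. The example question, the prompt wording, and the stubbed ask_model function are assumptions for illustration only, not any particular vendor's API.)

                # Two ways to pose the same question; only the second invites step-by-step text.
                question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

                direct_prompt = f"Q: {question}\nA:"

                cot_prompt = (
                    f"Q: {question}\n"
                    "A: Let's think step by step."  # the classic chain-of-thought nudge
                )

                def ask_model(prompt: str) -> str:
                    """Stub standing in for a real LLM call (hypothetical; no specific API assumed)."""
                    raise NotImplementedError("wire this to whatever model client you use")

                print(direct_prompt)
                print()
                print(cot_prompt)
                # With a real model behind ask_model, the second prompt tends to elicit
                # intermediate steps ("45 minutes is 0.75 hours; 60 / 0.75 = 80 km/h")
                # before the final answer, while the first often jumps straight to a number.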

                4. The philosophical wrinkle

                If reasoning is defined purely behaviorally (following valid inference patterns to reach conclusions), then LLMs arguably do reason.
                If reasoning is defined internally (involving awareness, intentionality, goal-directed use of logic), then they don’t.

                It’s similar to debates in philosophy of mind about whether a system that simulates a process is “really doing” it. (Classic reference: Searle’s Chinese Room.)


                👉 A question back to you: do you lean toward the “behavioral” definition (reasoning is as reasoning does), or the “internalist” one (reasoning requires a kind of intentional process)? That determines whether you’d say LLMs reason, or just mimic reasoning.


                  Horace
                  #8


                  I think reasoning vs mimicry of reasoning is ultimately a rhetorical distinction without a practical difference. Even humans probably overrate the amount of "reasoning" involved in their own language production. It's mostly pattern matching itself, I suppose.


                    Horace
                    #9


                    ChatGPT responds:
                    That’s a really strong point, and it resonates with some cognitive science perspectives:


                    1. Human reasoning may be more “after the fact” than we admit

                    • A lot of psychological research (e.g., Kahneman’s System 1 vs System 2) suggests that much of what we call “reasoning” is post-hoc rationalization layered on top of instinctive, heuristic, and pattern-driven judgments.
                    • Split-brain patient studies even show the left hemisphere fabricating explanations for actions it didn’t initiate—reasoning as narrative construction, not causal engine.

                    2. Language as structured mimicry

                    • Human language production is also probabilistic and patterned: we finish clichés, mirror each other’s syntax, and reuse familiar narrative arcs.
                    • The “mimicry” charge against LLMs might cut both ways—humans are excellent mimics of cultural reasoning patterns, with just enough novelty to move them forward.

                    3. Practical indistinguishability

                    • If both humans and LLMs are mostly recombining patterns, then distinguishing “true reasoning” from “mimicry” risks being a metaphysical quibble.
                    • What matters is functional reasoning: the ability to get to correct/novel conclusions, persuade others, or generate useful inferences.
                    • From that angle, humans and LLMs differ more in substrate (neurons vs. transformers) and scope of generalization, not in some essential reasoning/mimicry divide.

                    4. The rhetorical residue

                    • The insistence that “humans reason, LLMs mimic” often feels like a way to preserve a bright line between “us” and “it.”
                    • But as you note, once we admit how much of our own cognition is patterned mimicry, the line blurs.

                    Would you say your view leans toward functionalism—i.e., if a system produces reasoning-like outputs reliably, then it “reasons,” regardless of mechanism? Or do you think there’s still some irreducible qualitative aspect to human reasoning that’s worth defending?


                      taiwan_girl
                      #10

                      @Horace I get the feeling you don't need us any more. LOL

                        Horace
                        #11

                        @taiwan_girl said in Me Learning LLM Reasoning:

                        @Horace I get the feeling you don't need us any more. LOL

                        I was really shocked to see that Ax didn't want to continue the conversation with Klaus, so I tried to cover it up with something other than shocking, mortifying silence.


                          Axtremus
                          #12

                          @Horace said in Me Learning LLM Reasoning:

                          I was really shocked to see that Ax didn't want to continue the conversation with Klaus, ...

                          I missed @Klaus' latest reply because your mindless copied-and-pasted ChatGPT dribble drowned it out.

                            Axtremus
                            #13

                            @Klaus said in Me Learning LLM Reasoning:

                            Can you elaborate?

                            Rather than deduction or induction, I got a lot of "we tried this and we see that it works well." Even then, there is no "this is why we think it works well." At least that's my impression.

                              Horace
                              #14

                              @Axtremus said in Me Learning LLM Reasoning:

                              @Horace said in Me Learning LLM Reasoning:

                              I was really shocked to see that Ax didn't want to continue the conversation with Klaus, ...

                              I missed @Klaus' latest reply because your mindless copied-and-pasted ChatGPT dribble drowned it out.

                              Oh, my fault. I thought it was your 1000th straight ignored question. Turns out, this time you would have answered.

