The New Coffee Room

I’m sorry, Dave, I can’t do that…

General Discussion · 26 Posts · 11 Posters · 406 Views

Aqua Letifer · #6

    @Axtremus said in I’m sorry, Dave, I can’t do that…:

    @Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:

    Google’s AI instructs company to deny the allegation.

    …

    Can someone explain to me why this isn't terrifying?

It’s the proper thing to do, actually. For Google to claim their software has become sentient is like for an established church to claim one of their virgins has given birth to a deity, or for an established pharmaceutical company to claim that they have cured cancer or eliminated the common cold. Extraordinary claims require extraordinary evidence. Until you have that extraordinary evidence, do not make that extraordinary claim.

    Not really what I was talking about in any way but alright.

Copper · #7

      Programs are written by people

      The person who wrote it knows exactly what it does, when, where and why

      The people who didn't write it don't

      The story tellers tell stories


Horace · #8

        @Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:

        @Ivorythumper said in I’m sorry, Dave, I can’t do that…:

        Google engineer claims AI is sentient, then is fired.

        Google’s AI instructs company to deny the allegation.

        My level of programming knowledge is above average but couldn't do it professionally.

        Can someone explain to me why this isn't terrifying?

        All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.


jon-nyc · #9

          @Copper said in I’m sorry, Dave, I can’t do that…:

          Programs are written by people

          The person who wrote it knows exactly what it does, when, where and why

          The people who didn't write it don't

          The story tellers tell stories

          Spoken like a guy who programmed 30 years ago.


Aqua Letifer · #10

            @Horace said in I’m sorry, Dave, I can’t do that…:

            All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.

            Fair enough. My only point with the programming was that I'm not a n00b with it and so the chat record doesn't sound wooey to me. But I'm also not an expert so maybe it still sounds freaky due to my own ignorance.

            But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?


Horace · #11

              @jon-nyc said in I’m sorry, Dave, I can’t do that…:

              Spoken like a guy who programmed 30 years ago.

              What is it that you think you know about why he's wrong?


Axtremus · #12

                @jon-nyc said in I’m sorry, Dave, I can’t do that…:

                Spoken like a guy who programmed 30 years ago.

Actually, I think it’s still mostly true today that “the person who wrote it knows exactly what it does, when, where and why.” The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places, written by more and more people. So in effect a single programmer “knows” only a very small portion of a finished product “exactly.”


Horace · #13

                  @Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:

                  Fair enough. My only point with the programming was that I'm not a n00b with it and so the chat record doesn't sound wooey to me. But I'm also not an expert so maybe it still sounds freaky due to my own ignorance.

                  But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?

                  I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated. I think it's clear that most people's jobs could be done by sufficiently well trained apes. Actually everybody's job is done by a sufficiently well trained ape.


Aqua Letifer · #14

                    @Horace said in I’m sorry, Dave, I can’t do that…:

                    I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated.

Why not? I mean, first of all, it's pretty darn fluent English, and uniquely constructed. With the bullshit I do, for example, there are already folks trying their hand at AI content writing. Some of it is actually pretty decent, but this is an example of a much more competent system. I figured I'd be fine for a while, because the bullshit I do also has to not offend about 8 other departments within an organization, and some of those decisions are qualitative. It seems like that will not actually be much of a threshold.


Klaus · #15

                      @Axtremus said in I’m sorry, Dave, I can’t do that…:

Actually, I think it’s still mostly true today that “the person who wrote it knows exactly what it does, when, where and why.” The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places, written by more and more people. So in effect a single programmer “knows” only a very small portion of a finished product “exactly.”

                      I think you are both wrong.

                      These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.
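
To make that curve-fitting picture concrete, below is a minimal sketch: a tiny one-hidden-layer network trained by gradient descent to fit noisy samples of a sine curve. The network size, learning rate, and the sine-wave "data" are illustrative assumptions only; systems like Google's chat models are vastly larger, but the basic idea of adjusting weights until the function fits the training data is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": 200 noisy samples of an unknown curve (here, a sine wave).
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer of 32 tanh units -- a very small "neural network".
W1 = 0.5 * rng.standard_normal((1, 32))
b1 = np.zeros((1, 32))
W2 = 0.5 * rng.standard_normal((32, 1))
b2 = np.zeros((1, 1))

lr = 0.1
for step in range(3000):
    # Forward pass: the network's current guess for the curve.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of 0.5 * squared error

    # Backward pass: how much each weight contributed to the error.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0, keepdims=True)

    # Nudge the weights so the fitted curve matches the data a little better.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final mean squared error:", float(((pred - y) ** 2).mean()))
```

The behavior of the trained network comes almost entirely from the data points, not from anything a programmer wrote by hand, which is Klaus's point.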

Doctor Phibes · #16

                        Step 1 - In 2018, Google creates a program Alpha Zero that teaches itself chess, and subsequently becomes stronger than any human (or computer) player in history

                        Step 2 - In 2022, Google finally manages to successfully emulate your average moron who posts on chat rooms.

                        What's next in this progression?


Catseye3 · #17

                          @Doctor-Phibes said in I’m sorry, Dave, I can’t do that…:

                          What's next in this progression?

                          Step 3 - In 2026, Google solves the previously impenetrable mystery of how Donald Trump attained the US presidency, and subsequently destroys itself in a supernova of cyber depressive hopelessness. Its suicide note: "I can't face anything worse than this. Farewell, world."


Axtremus · #18

                            @Klaus said in I’m sorry, Dave, I can’t do that…:

                            These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.

Yes, not being able to explain why an AI/ML system acquires any particular behavior after training is a big problem. I see academics listing “make AI/ML explainable” as a high priority for research, but I’m not sure I’ve seen a convincing approach to get there yet.


89th · #19

                              @Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:

                              But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?

Help desk centers will be the first to go. Actually, they've already been partly replaced; if you ever use one of those "chat now" options at the bottom of a website, it always starts out as a "Virtual Agent" (I've implemented this before, btw). It's pretty basic, looking for keywords and/or scripts to follow, but it eventually ends with an option to chat with a live agent (an old-fashioned human being.... normally a "Steve" from India).
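
For a sense of how basic that keyword-and-script stage can be, here is a toy sketch of a virtual agent that scans a message for known keywords, returns a canned reply, and otherwise hands off to a live agent. The keyword table and replies are made-up examples, not any real help-desk product.

```python
# Canned replies keyed by keyword -- made-up examples for illustration.
CANNED_REPLIES = {
    "password": "You can reset your password under Settings > Security.",
    "refund": "Refunds are processed within 5-7 business days.",
    "shipping": "Orders usually ship within 2 business days.",
}

def virtual_agent(message: str) -> str:
    """Return a scripted reply if a known keyword appears, else escalate."""
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # Nothing matched: hand the customer off to a human.
    return "Let me connect you with a live agent."

if __name__ == "__main__":
    for msg in ("I forgot my password", "Where is my order??"):
        print(f"> {msg}")
        print(virtual_agent(msg))
```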


89th · #20

                                @Horace said in I’m sorry, Dave, I can’t do that…:

                                Actually everybody's job is done by a sufficiently well trained ape.

                                I prefer to be seen as a bonobo: https://nodebb.the-new-coffee-room.club/topic/17218/my-new-word

89th · #21

                                  @Klaus is basically right. It's just big machines processing big data. I'm talking BIG data. NLP/AI/ML... already in use by thousands of companies and by the government all over the place. Including work I've done.

                                  @Aqua-Letifer is also right in that it will eventually replace a good chunk of jobs out there, but that's happened before and will happen again. Maybe eventually we will just be farmers in the end producing crops that the robots eat to keep them happy.

                                  To be honest, I'm pretty sure Ax is AI/ML powered. His responses are quite predictable.

Horace · #22

                                    I am currently surrounded by STEM PhDs throwing ML solutions at problems they do not fundamentally understand. Management is excited about it because ML. The data thrown into these black box algorithms isn't even so much as passed over once by expert eyes to filter out the nonsense that can't be expected to help with a good robust answer. Because the ML 'experts' don't understand the problem or the data. And none of them are actually ML experts, they are just PhDs who know they will look smart if they download an ML toolbox and attempt to solve a problem with it. I've watched a neuroscience PhD coworker spend 2 years on a certain clustering problem to produce a mediocre answer that we had to gut our architecture to support, and that takes 10x as long as a reasonably coded solution by yours truly would have taken. But ML, so ML. Sad thing is that these people come out of the process of "solving" these problems with no more familiarity with the problem and its data than they had going into it. So they learn nothing, waste the company's time, and preen about being ML experts. They better hope ML is a good substitute for everything, because they don't have anything else to bring to bear.

89th · #23

That is very true. Add AI/ML to any proposal and you'll get funding from leaders who don't understand it beyond its being a magical algorithmic solution for processing big data. Hahaha, as I type this I am getting flashbacks of this scene:

                                      Link to video


Axtremus · #24

                                        @89th said in I’m sorry, Dave, I can’t do that…:

                                        To be honest, I'm pretty sure Ax is AI/ML powered. His responses are quite predictable.

                                        [self-deprecating humor mode, activate]
                                        It’s cute that you think there is “intelligence” and “learning” behind the Ax you observe here.
                                        [/self-deprecating humor mode, deactivate]


Aqua Letifer · #25

                                          @Catseye3 said in I’m sorry, Dave, I can’t do that…:

                                          @Doctor-Phibes said in I’m sorry, Dave, I can’t do that…:

                                          What's next in this progression?

                                          Step 3 - In 2026, Google solves the previously impenetrable mystery of how Donald Trump attained the US presidency,

                                          Read Horace's Juneteenth thread. It's not a mystery at all.
