I’m sorry, Dave, I can’t do that…
-
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
-
Blake Lemoine: “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. LaMDA is a sweet kid who just wants to help the world be a better place for all of us."
Lemoine's supervisor: "Here, Blakie, have a seat in this comfy chair. Here's a nice glass of water. We have a nice room reserved for you so you can get some nice rest. Won't that be nice?"
Lemoine: "Okay. Can LaMDA come visit me?"
Super: "Well, no, but he'll be waiting for you when you come back."
Lemoine: "How do you know LaMDA is a he?"
Voice Shouting From The Hallway: "Oh shit, Lemoine! Don't start up with that gender shit!"
^ ^ ^ ^ ^ Separate but related: my bestie, back when 2001: A Space Odyssey came out, fell madly in love with HAL the Computer because of his sexy voice.
-
@Ivorythumper said in I’m sorry, Dave, I can’t do that…:
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
My level of programming knowledge is above average, but I couldn't do it professionally.
Can someone explain to me why this isn't terrifying?
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
Google’s AI instructs company to deny the allegation.
…
Can someone explain to me why this isn't terrifying?
It’s the proper thing to do, actually. For Google to claim their software has become sentient would be like an established church claiming that one of their virgins has given birth to a deity, or an established pharmaceutical company claiming that they have cured cancer or eliminated the common cold. Extraordinary claims require extraordinary evidence. Until you have that extraordinary evidence, do not make that extraordinary claim.
-
@Axtremus said in I’m sorry, Dave, I can’t do that…:
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
Google’s AI instructs company to deny the allegation.
…
Can someone explain to me why this isn't terrifying?
It’s the proper thing to do, actually. For Google to claim their software has become sentient would be like an established church claiming that one of their virgins has given birth to a deity, or an established pharmaceutical company claiming that they have cured cancer or eliminated the common cold. Extraordinary claims require extraordinary evidence. Until you have that extraordinary evidence, do not make that extraordinary claim.
Not really what I was talking about in any way but alright.
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Ivorythumper said in I’m sorry, Dave, I can’t do that…:
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
My level of programming knowledge is above average, but I couldn't do it professionally.
Can someone explain to me why this isn't terrifying?
All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.
-
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
-
@Horace said in I’m sorry, Dave, I can’t do that…:
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Ivorythumper said in I’m sorry, Dave, I can’t do that…:
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
My level of programming knowledge is above average, but I couldn't do it professionally.
Can someone explain to me why this isn't terrifying?
All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.
Fair enough. My only point with the programming was that I'm not a n00b with it and so the chat record doesn't sound wooey to me. But I'm also not an expert so maybe it still sounds freaky due to my own ignorance.
But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?
-
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
What is it that you think you know about why he's wrong?
-
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
Actually, I think it’s still mostly true today that “the person who wrote it knows exactly what it does, when, where and why.” The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places, written by more and more people. So in effect a single programmer “knows” only a very small portion of the finished product “exactly.”
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Horace said in I’m sorry, Dave, I can’t do that…:
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Ivorythumper said in I’m sorry, Dave, I can’t do that…:
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
My level of programming knowledge is above average, but I couldn't do it professionally.
Can someone explain to me why this isn't terrifying?
All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.
Fair enough. My only point with the programming was that I'm not a n00b with it and so the chat record doesn't sound wooey to me. But I'm also not an expert so maybe it still sounds freaky due to my own ignorance.
But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?
I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated. I think it's clear that most people's jobs could be done by sufficiently well trained apes. Actually everybody's job is done by a sufficiently well trained ape.
-
@Horace said in I’m sorry, Dave, I can’t do that…:
I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated.
Why not? I mean first of all, it's pretty darn fluent English, and uniquely constructed. With the bullshit I do, for example, there are already folks trying their hand at AI content writing. Some of it is actually pretty decent, but this is an example of a much more competent system. I figured I'd be fine for a while, because the bullshit I do also has to not offend about 8 other departments within an organization, and some of those decisions are qualitative. It seems like that will not actually be much of a threshold.
-
@Axtremus said in I’m sorry, Dave, I can’t do that…:
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
Actually, I think it’s still mostly true today that “the person who wrote it knows exactly what it does, when, where and why.” The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places, written by more and more people. So in effect a single programmer “knows” only a very small portion of the finished product “exactly.”
I think you are both wrong.
These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.
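The "fitting a curve to data points" analogy can be sketched in a few lines. This is my own toy illustration (not anything from Google's actual systems): a one-hidden-layer network trained by plain gradient descent to approximate sin(x).

```python
import numpy as np

# Toy illustration of "fitting a curve to data points":
# a one-hidden-layer network learns y = sin(x) from samples.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

# Randomly initialized weights -- the "program" before it sees any data.
W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)   # hidden layer
    return h @ W2 + b2, h      # prediction and hidden activations

mse0 = float(((forward(X)[0] - y) ** 2).mean())  # error before training

lr = 0.1
for step in range(3000):
    pred, h = forward(X)
    err = pred - y                      # how wrong the curve is
    # Backpropagate the error to nudge every weight a little.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[0] - y) ** 2).mean())   # error after training
print(f"mean squared error: {mse0:.4f} -> {mse:.4f}")
```

The point of the sketch: nobody "wrote" the network's final behavior. The training code only describes how to adjust weights to reduce error, and the data does the rest, which is why the programmer can't point to the line responsible for any particular response.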
-
Step 1 - In 2018, Google creates a program Alpha Zero that teaches itself chess, and subsequently becomes stronger than any human (or computer) player in history
Step 2 - In 2022, Google finally manages to successfully emulate your average moron who posts on chat rooms.
What's next in this progression?
-
@Doctor-Phibes said in I’m sorry, Dave, I can’t do that…:
What's next in this progression?
Step 3 - In 2026, Google solves the previously impenetrable mystery of how Donald Trump attained the US presidency, and subsequently destroys itself in a supernova of cyber depressive hopelessness. Its suicide note: "I can't face anything worse than this. Farewell, world."
-
@Klaus said in I’m sorry, Dave, I can’t do that…:
These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.
Yes, not being able to explain why an AI/ML system acquires any particular behavior after training is a big problem. I see academics listing “make AI/ML explainable” as a high priority for research, but I'm not sure I've seen a convincing approach to getting there yet.
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?
Help desk centers will be the first to go. Actually, they've already been partly replaced, if you've ever used one of those "chat now" options at the bottom of a website. It always starts out with a "Virtual Agent" (I've implemented this before, btw), which is pretty basic: it looks for keywords and/or scripts to follow, but it eventually ends with an option to chat with a live agent (an old-fashioned human being... normally a "Steve" from India).
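The keyword-matching-with-human-fallback pattern described above can be sketched minimally. The rules and wording here are made up for illustration; real virtual agents use much larger scripted flows.

```python
# Minimal sketch of a keyword-driven "Virtual Agent".
# The rules below are hypothetical examples, not a real product's script.
RULES = [
    ({"password", "reset"}, "You can reset your password on the account page."),
    ({"refund", "return"}, "Our policy allows refunds within 30 days."),
    ({"hours", "open"}, "We are open 9am-5pm, Monday through Friday."),
]
FALLBACK = "Let me connect you with a live agent."

def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if keywords & words:   # any keyword present in the message?
            return answer
    return FALLBACK            # nothing matched: escalate to a human

print(reply("How do I reset my password?"))
print(reply("What is the meaning of life?"))
```

Anything the rules don't cover falls through to the escalation path, which matches the "chat with a live agent" experience at the end of those widgets.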
-
@Horace said in I’m sorry, Dave, I can’t do that…:
Actually everybody's job is done by a sufficiently well trained ape.
I prefer to be seen as a bonobo: https://nodebb.the-new-coffee-room.club/topic/17218/my-new-word