I’m sorry, Dave, I can’t do that…
-
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
What is it that you think you know about why he's wrong?
-
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
Actually, I think it's still mostly true today that "the person who wrote it knows exactly what it does, when, where and why." The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places and written by more and more people. So in effect a single programmer "knows" only a very small portion of a finished product "exactly."
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Horace said in I’m sorry, Dave, I can’t do that…:
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
@Ivorythumper said in I’m sorry, Dave, I can’t do that…:
Google engineer claims AI is sentient, then is fired.
Google’s AI instructs company to deny the allegation.
My level of programming knowledge is above average, but I couldn't do it professionally.
Can someone explain to me why this isn't terrifying?
All I can tell you is that the ability to program professionally has nothing to do with the ability to answer ethical questions around artificial intelligence.
Fair enough. My only point with the programming was that I'm not a n00b with it and so the chat record doesn't sound wooey to me. But I'm also not an expert so maybe it still sounds freaky due to my own ignorance.
But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?
I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated. I think it's clear that most people's jobs could be done by sufficiently well trained apes. Actually everybody's job is done by a sufficiently well trained ape.
-
@Horace said in I’m sorry, Dave, I can’t do that…:
I dunno, but that's been an extant question for a while. I don't think that chat log is groundbreaking or indicative that more jobs can be automated.
Why not? I mean first of all, it's pretty darn fluent English, and uniquely constructed. With the bullshit I do, for example, there are already folks trying their hand at AI content writing. Some of it is actually pretty decent, but this is an example of a much more competent system. I figured I'd be fine for a while, because the bullshit I do also has to not offend about 8 other departments within an organization, and some of those decisions are qualitative. It seems like that will not actually be much of a threshold.
-
@Axtremus said in I’m sorry, Dave, I can’t do that…:
@jon-nyc said in I’m sorry, Dave, I can’t do that…:
@Copper said in I’m sorry, Dave, I can’t do that…:
Programs are written by people
The person who wrote it knows exactly what it does, when, where and why
The people who didn't write it don't
The story tellers tell stories
Spoken like a guy who programmed 30 years ago.
Actually, I think it's still mostly true today that "the person who wrote it knows exactly what it does, when, where and why." The problem is that a single person writes less and less of it, less and less of the final product. Software these days is built by reusing more and more code sourced from different places and written by more and more people. So in effect a single programmer "knows" only a very small portion of a finished product "exactly."
I think you are both wrong.
These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.
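To make the curve-fitting analogy concrete, here is a toy sketch in Python: a one-hidden-layer network fitted to noisy data points with plain gradient descent. All the sizes and constants are made up for the demo; real systems are the same idea scaled up enormously.

```python
# "Training" here is literally fitting a curve to data points.
# One hidden layer, plain gradient descent on mean squared error, numpy only.
import numpy as np

rng = np.random.default_rng(0)

# The "data fed into the program": noisy samples of an unknown curve.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Tiny network: 1 input -> 16 tanh units -> 1 output.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

lr, n = 0.05, len(x)
for step in range(5000):
    # Forward pass: the network's current guess at the curve.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y  # residual at each data point

    # Backward pass: nudge the weights to reduce mean squared error.
    dW2 = h.T @ err * (2 / n)
    db2 = 2 * err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh * (2 / n)
    db1 = 2 * dh.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The behavior (the fitted curve) came from the data, not hand-written rules.
print("final mean squared error:", float((err ** 2).mean()))
```

None of the "knowledge" in the result was written down by a programmer; it all lives in the fitted weights.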
-
Step 1 - In 2018, Google creates AlphaZero, a program that teaches itself chess and subsequently becomes stronger than any human (or computer) player in history.
Step 2 - In 2022, Google finally manages to successfully emulate your average moron who posts in chat rooms.
What's next in this progression?
-
@Doctor-Phibes said in I’m sorry, Dave, I can’t do that…:
What's next in this progression?
Step 3 - In 2026, Google solves the previously impenetrable mystery of how Donald Trump attained the US presidency, and subsequently destroys itself in a supernova of cyber depressive hopelessness. Its suicide note: "I can't face anything worse than this. Farewell, world."
-
@Klaus said in I’m sorry, Dave, I can’t do that…:
These kinds of programs get the majority of their behavior from data that is fed into them. If you have a chat bot, for instance, they'll feed it thousands of books or other texts. The content of those texts determines responses etc. The main role of the algorithms that are being programmed is to turn the data into a "deep neural network", which you can very roughly think of as fitting a curve to data points.
Yes, not being able to explain why an AI/ML system acquires any particular behavior after training is a big problem. I see academics listing “make AI/ML explainable” as a high priority for research, but I'm not sure I’ve seen a convincing approach to get there yet.
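The approaches I have seen tend to be fairly shallow. Permutation importance, for instance, tells you which inputs a trained model is sensitive to, but not why it behaves the way it does. A toy sketch, where black_box is a made-up stand-in for a trained model:

```python
# Permutation importance: scramble one input feature at a time and
# measure how much the black box's accuracy drops without it.
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # Hypothetical trained model: secretly depends mostly on feature 0.
    return (2.0 * X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

X = rng.standard_normal((1000, 3))  # three input features
y = black_box(X)                    # labels the model matches by construction

baseline = (black_box(X) == y).mean()
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    drop = baseline - (black_box(Xp) == y).mean()
    print(f"feature {j}: importance ~ {drop:.3f}")
```

That tells you feature 0 matters and feature 2 doesn't, which is something, but it is a long way from an explanation.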
-
@Aqua-Letifer said in I’m sorry, Dave, I can’t do that…:
But seriously, where the fuck are we going with this? Just for starters, how are an absolute shitload of people not going to be permanently booted out of the job market?
Help Desk centers will be the first to go. Actually, they've already been replaced to a certain percentage if you ever use one of those "chat now" options at the bottom of a website. It always starts out with a "Virtual Agent" (I've implemented this before, btw), which is pretty basic: it looks for keywords and/or scripts to follow, but it eventually ends with an option to chat with a live agent (an old-fashioned human being... normally a "Steve" from India).
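The core of one of those basic virtual agents is barely more than a keyword lookup with an escalation counter. A minimal sketch in Python (the keywords, replies, and miss limit are all invented for illustration):

```python
# Bare-bones "Virtual Agent" pattern: match keywords, follow a canned
# script, and hand off to a human once the bot is clearly stuck.
# All keywords and replies below are made up for the demo.
CANNED = {
    "password": "You can reset your password under Settings > Security.",
    "refund":   "Refunds are processed within 5-7 business days.",
    "hours":    "Support is available 9am-5pm, Monday through Friday.",
}
MAX_MISSES = 2  # unmatched messages allowed before escalating

def virtual_agent():
    misses = 0
    print("Bot: Hi! How can I help you today? (type 'quit' to exit)")
    while True:
        msg = input("You: ").lower()
        if msg == "quit":
            return
        reply = next((ans for kw, ans in CANNED.items() if kw in msg), None)
        if reply:
            print("Bot:", reply)
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                print("Bot: Let me connect you with a live agent...")
                return  # this is where the human takes over
            print("Bot: Sorry, I didn't catch that. Could you rephrase?")

if __name__ == "__main__":
    virtual_agent()
```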
-
@Horace said in I’m sorry, Dave, I can’t do that…:
Actually everybody's job is done by a sufficiently well trained ape.
I prefer to be seen as a bonobo: https://nodebb.the-new-coffee-room.club/topic/17218/my-new-word
-
@Klaus is basically right. It's just big machines processing big data. I'm talking BIG data. NLP/AI/ML... already in use by thousands of companies and by the government all over the place. Including work I've done.
@Aqua-Letifer is also right in that it will eventually replace a good chunk of jobs out there, but that's happened before and will happen again. Maybe eventually we will just be farmers in the end, producing crops that the robots eat to keep them happy.
To be honest, I'm pretty sure Ax is AI/ML powered. His responses are quite predictable.
-
I am currently surrounded by STEM PhDs throwing ML solutions at problems they do not fundamentally understand. Management is excited about it because ML. The data thrown into these black-box algorithms isn't even so much as passed over once by expert eyes to filter out the nonsense that can't be expected to help with a good, robust answer. Because the ML 'experts' don't understand the problem or the data. And none of them are actually ML experts; they are just PhDs who know they will look smart if they download an ML toolbox and attempt to solve a problem with it. I've watched a neuroscience PhD coworker spend 2 years on a certain clustering problem to produce a mediocre answer that we had to gut our architecture to support, and that runs 10x slower than a reasonably coded solution by yours truly would have. But ML, so ML. Sad thing is that these people come out of the process of "solving" these problems with no more familiarity with the problem and its data than they had going into it. So they learn nothing, waste the company's time, and preen about being ML experts. They better hope ML is a good substitute for everything, because they don't have anything else to bring to bear.
-
That is very true. Add AI/ML to any proposal and you'll get funding from leaders who don't understand it beyond its being a magical algorithmic solution for processing big data. Hahaha, as I type this I am getting flashbacks of this scene:
Link to video
-
@89th said in I’m sorry, Dave, I can’t do that…:
To be honest, I'm pretty sure Ax is AI/ML powered. His responses are quite predictable.
[self-deprecating humor mode, activate]
It’s cute that you think there is “intelligence” and “learning” behind the Ax you observe here.
[/self-deprecating humor mode, deactivate]
-
@Catseye3 said in I’m sorry, Dave, I can’t do that…:
@Doctor-Phibes said in I’m sorry, Dave, I can’t do that…:
What's next in this progression?
Step 3 - In 2026, Google solves the previously impenetrable mystery of how Donald Trump attained the US presidency,
Read Horace's Juneteenth thread. It's not a mystery at all.
-
@Aqua-Letifer Of course it's not a mystery. It should've been; in a more reasonable world it would've been, if it happened at all. I was being sarky.