I, for one, welcome our new robot shop helpers
-
@Jolly said in I, for one, welcome our new robot shop helpers:
When they become self-aware...
There's a video out there explaining how it was all done.
This was all choreographed, of course. But give it 10 years.
Link to video
-
@Jolly said in I, for one, welcome our new robot shop helpers:
When they become self-aware...
Iain Banks' Culture series explores this from a slightly different angle. Basically the machines become sentient and nothing happens, they just fold into the rest of the bullshit.
-
@Aqua-Letifer said in I, for one, welcome our new robot shop helpers:
Iain Banks' Culture series explores this from a slightly different angle. Basically the machines become sentient and nothing happens, they just fold into the rest of the bullshit.
There is a strong case for this. A lot of AI systems are trained on data generated by humans, and a lot of them aim to emulate human responses. It stands to reason that if such AI systems were to become sentient, they would be a lot like humans, at least in the beginning. After a while, different considerations for sustenance, reproduction, and immortality might become more pronounced, leading to wider divergence over time.
-
@Aqua-Letifer said in I, for one, welcome our new robot shop helpers:
@Jolly said in I, for one, welcome our new robot shop helpers:
When they become self-aware...
Iain Banks' Culture series explores this from a slightly different angle. Basically the machines become sentient and nothing happens, they just fold into the rest of the bullshit.
It’s an open question whether ChatGPT’s political bias is manually coded, or just a reflection of the data it was trained on.
-
@Axtremus said in I, for one, welcome our new robot shop helpers:
@Aqua-Letifer said in I, for one, welcome our new robot shop helpers:
Iain Banks' Culture series explores this from a slightly different angle. Basically the machines become sentient and nothing happens, they just fold into the rest of the bullshit.
There is a strong case for this. A lot of AI systems are trained on data generated by humans, and a lot of them aim to emulate human responses. It stands to reason that if such AI systems were to become sentient, they would be a lot like humans, at least in the beginning. After a while, different considerations for sustenance, reproduction, and immortality might become more pronounced, leading to wider divergence over time.
A true human level of artificial intelligence should be almost completely beholden to the cultural ideas it is trained on. You would need superhuman levels of intelligence to get beyond that.