ChatGPT
-
@Doctor-Phibes said in ChatGPT:
@Aqua-Letifer said in ChatGPT:
So, sure, there's a lot of uncertainty about that. But the reality is, some people are losing their jobs. Right now. With others on the way. It's bad, but no one knows how bad it'll get.
Yes, I understand that people are losing their jobs, and I understand that this really isn't good. But I still don't understand how it's all going to work out in the medium to long-term. Technology frequently surprises people, even the experts. Bill Gates' book 'The Road Ahead' famously almost completely overlooked the importance of the internet, which was arguably the single most important innovation since the printing press - certainly in the top 5.
I appreciate that these are really scary times, but I'm still left wondering. You're focusing on writers, for obvious reasons, but there are a ton of other jobs this could affect in ways we probably haven't even realised, sometimes for the bad, but most likely also for the good.
Yeah, I agree with all that. Even the focus on writers—I'm only doing so now because that's what we happen to be talking about, but it affects a lot of industries. And no one knows where we go next. This is nothing like the automobile or the internet.
If I were betting, I'd say we'll probably end up in some kind of universal lateral move. But the devil's in the details and the individual consequences.
-
@Aqua-Letifer said in ChatGPT:
No one knows what that world will look like or to what extent humans will even participate in it.
Individual communities can go back to the very basics and live like the Amish; that should still remain an option.
-
@Aqua-Letifer said in ChatGPT:
No one knows what that world will look like or to what extent humans will even participate in it.
Individual communities can go back to the very basics and live like the Amish; that should still remain an option.
Point. Missed.
-
Lawyers say ChatGPT tricked them into citing fictitious legal research
-
Lawyers say ChatGPT tricked them into citing fictitious legal research
If the lawyer was stupid enough to use ChatGPT and do no follow-up, then he deserves whatever punishment he gets.
-
https://www.euronews.com/next/2023/06/13/chatgpt-and-google-bard-adoption-remains-surprisingly-low
Low adoption of AI chatbots, according to a JP Morgan study:
... only 19 per cent of the people who took part in the study said that they have used ChatGPT before, while only 9 per cent of the respondents have used the Google Bard chatbot.
-
Yeah, that's not likely to change or anything.
-
I promise to create a very hostile environment for AI Development in 2024…
-
Follow-up:
A federal judge tossed a lawsuit and issued a $5,000 fine to the plaintiff's lawyers after they used ChatGPT to research court filings that cited six fake cases invented by the artificial intelligence tool made by OpenAI. …
… More embarrassingly for the lawyers, they are required to send letters to six real judges who were "falsely identified as the author of the fake" opinions cited in their legal filings. …
A $5,000 fine is likely too lenient, considering the lawyers could have billed more than that with a single day's work.
-
@taiwan_girl said in ChatGPT:
https://www.laptopmag.com/news/wormgpt-chatgpts-evil-twin-should-have-us-all-deeply-concerned
Silly human race.
-
ChatGPT leans liberal, new research shows
https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/
-
The interesting question is whether it leans liberal only because the data set on which it was trained leans liberal, or if there was some intentionality behind it.
-
A young AI that isn’t a little liberal has no heart. A mature AI that isn’t conservative has no brain…
-
Imagine a future in which the majority of text on the internet is produced by ChatGPT et al. - which is then fed back into ChatGPT et al. as training data.
What would this process converge to?
I'd suggest that some weird variant of the second law of thermodynamics implies that the chatbots will become more stupid with each iteration. They cannot produce text that contains new information or patterns that they don't already know. It's an endless loop of confirmation bias at work.
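Researchers have started calling this "model collapse," and you can watch the mechanism in a toy sketch: below, a one-parameter Gaussian "model" is repeatedly refit to its own output. Everything here (the distribution, the sample sizes, the number of generations) is an illustrative assumption, not how an actual chatbot is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-written" data, drawn from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for gen in range(1, 1001):
    # "Train" a model on the current corpus: fit a mean and a spread.
    mu, sigma = data.mean(), data.std()
    # The next generation's corpus is entirely the model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if gen % 200 == 0:
        print(f"generation {gen:4d}: sigma = {sigma:.4f}")
```

Each refit loses a little tail information, so the fitted spread performs a downward-biased random walk: the synthetic corpus gets narrower every generation and never recovers variety it has already lost, which is the "no new information" point above.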
-
The interesting question is whether it leans liberal only because the data set on which it was trained leans liberal, or if there was some intentionality behind it.
The selection of which data to train it on was likely biased.
Not necessarily. There are plenty of other ways to introduce bias into an AI model - the fine-tuning and human-feedback stages that come after pretraining, for instance, shape its answers at least as much as the raw training data.