ChatGPT
-
Lawyers say ChatGPT tricked them into citing fictitious legal research
-
If the lawyer was stupid enough to use ChatGPT and do no follow-up, then he deserves whatever punishment he gets.
-
https://www.euronews.com/next/2023/06/13/chatgpt-and-google-bard-adoption-remains-surprisingly-low
Low adoption of AI chatbots, according to a JP Morgan study:
... only 19 per cent of the people who took part in the study said that they have used ChatGPT before, while only 9 per cent of the respondents have used the Google Bard chatbot.
-
Yeah, that's not likely to change or anything.
-
I promise to create a very hostile environment for AI Development in 2024…
-
Follow-up:
A federal judge tossed a lawsuit and issued a $5,000 fine to the plaintiff's lawyers after they used ChatGPT to research court filings that cited six fake cases invented by the artificial intelligence tool made by OpenAI. …
… More embarrassingly for the lawyers, they are required to send letters to six real judges who were "falsely identified as the author of the fake" opinions cited in their legal filings. …
A $5,000 fine is likely too lenient, considering the lawyers could have billed more than that with merely a day's work.
-
@taiwan_girl said in ChatGPT:
https://www.laptopmag.com/news/wormgpt-chatgpts-evil-twin-should-have-us-all-deeply-concerned
Silly human race.
-
ChatGPT leans liberal, new research shows
https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/
-
The interesting question is whether it leans liberal only because the data set on which it was trained leans liberal, or if there was some intentionality behind it.
-
A young AI that isn’t a little liberal has no heart. A mature AI that isn’t conservative has no brain…
-
Imagine a future in which the majority of text on the internet is produced by ChatGPT et al., which is then fed back into ChatGPT et al. as training data.
What would this process converge to?
I'd suggest that some weird variant of the second law of thermodynamics implies the chatbots will become more stupid with each iteration. They cannot produce text containing new information or patterns they don't already know; it's an endless loop of confirmation bias at work.
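A toy sketch of that intuition (plain statistics, not actual thermodynamics): pretend a "model" is nothing more than a Gaussian fitted to its training data, then retrain each generation only on samples drawn from the previous generation's model. The fitted variance follows a downward-biased random walk, so later generations progressively lose the tails of the original distribution. The sample sizes and generation counts below are arbitrary, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data distribution, a standard normal.
mu, sigma = 0.0, 1.0
n_samples = 100        # size of each generation's training set (illustrative)
n_generations = 100    # how many times we retrain on our own output

for gen in range(1, n_generations + 1):
    # Draw training data only from the previous generation's model...
    data = rng.normal(mu, sigma, n_samples)
    # ...and fit the next generation to it (maximum-likelihood mean/std).
    mu, sigma = data.mean(), data.std()
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# The log-variance takes a step with negative expected drift each generation,
# so over enough iterations sigma tends to collapse toward zero: the model
# keeps only what it already "knows" and forgets the rare, tail-end patterns.
```

It is only a caricature of an LLM, of course, but it shows the mechanism: a model trained on its own samples can't add information, and sampling noise plus refitting steadily erodes what was there.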
-
The interesting question is whether it leans liberal only because the data set on which it was trained leans liberal, or if there was some intentionality behind it.
The selection of which data to train it on was likely biased.
Not necessarily. There are plenty of other ways to introduce bias in an AI model.
-
... popular authors including John Grisham, Jonathan Franzen, George R.R. Martin, Jodi Picoult, and George Saunders joined the Authors Guild in suing OpenAI, alleging that training the company's large language models (LLMs) used to power AI tools like ChatGPT on pirated versions of their books violates copyright laws and is "systematic theft on a mass scale."
-
https://www.theguardian.com/film/2023/oct/02/tom-hanks-dental-ad-ai-version-fake
Tom Hanks says AI version of him used in dental plan ad without his consent
-
Let's pull this thread a bit further. We know AI (deepfake) videos are here, will only get better, and aren't going away. What if we also had AI-forged signatures on contracts falsely stating that the celebrity agreed to do the fake ad? Dangerous times we have entered.