183,000
-
Yes, I know the source's reputation.
-
I agree. I hope they win, but unfortunately, it is already out there.
Maybe someone who is smarter with computers than I am can answer, but I have heard that if you have a powerful enough consumer computer, you can run your own "ChatGPT" locally.
-
Aqua, if you get the time, I (we) would be interested in your take on this. I could only read the first bit without subscribing, but it seems a similar storm is coming with music (if not already taking place). People should be paid for their intellectual property, IMO, so I would like $2.39 for this post, should any AI creature-like bot suck up the complete expression of my unique perspective.
-
Aqua, if you get the time, I (we) would be interested in your take on this. I could only read the first bit without subscribing, but it seems a similar storm is coming with music (if not already taking place). People should be paid for their intellectual property, IMO, so I would like $2.39 for this post, should any AI creature-like bot suck up the complete expression of my unique perspective.
Well it's very, very complicated, because the extent to which you use AI to do the work and how those AI models do what they do varies considerably. And you're right, it's happening everywhere with every type of intellectual property you can think of.
But let's say we're talking only about:
AI models illegally using copyrighted work to train themselves. As TG said, the cat's already out of the bag. Can't undo it. So the best we can do is put in protections further downstream: if you use AI to generate anything that could be considered intellectual property, you can't copyright it, period.
That gets tricky, though, because the future of nearly all software programs is going to be to use AI in the background. Did you write your novel using Grammarly? Well, they're diving into AI, too. Does that mean you can't copyright your book now?
Generally, though, it's a very bad idea to let people make new work that's built on the backs of hundreds of thousands of other people without their consent. We should be doing everything we can to prevent this. But that's going to require a shitload of new legal definitions that I don't think we're prepared to make yet. We're behind the times.
-
Thanks, Aqua. How about blanket payments. I get $2.39 for my post, you have written 10 books which have been soaked up to the cloud universe, therefore you get $239 per year for your posts. Well OK, let's lower the annual payment from Bill Gates et al to .0002 cents for me, and then the same ratio for you. Then Bill pays us, charges us, supply/demand, yada yada. Simple. Same payment structure that is used now, just broader. And OK (again), I get nothing for a post on a forum, but if your books were on climate, and you posted your thoughts on this forum, you should get a fairly hefty royalty check, under a new category. I'm rambling, think I'll shut up.
-
Thanks, Aqua. How about blanket payments. I get $2.39 for my post, you have written 10 books which have been soaked up to the cloud universe, therefore you get $239 per year for your posts. Well OK, let's lower the annual payment from Bill Gates et al to .0002 cents for me, and then the same ratio for you. Then Bill pays us, charges us, supply/demand, yada yada. Simple. Same payment structure that is used now, just broader. And OK (again), I get nothing for a post on a forum, but if your books were on climate, and you posted your thoughts on this forum, you should get a fairly hefty royalty check, under a new category. I'm rambling, think I'll shut up.
Sounds good to me, though. Where's the line to stand in?
-
The cat may be out of the bag, but I think they're going to have to come up with a way to reasonably compensate the owners of the copyrighted material they essentially stole. I think they rolled the dice that they would not get caught, and that alone should heighten the compensation.
-
The cat may be out of the bag, but I think they're going to have to come up with a way to reasonably compensate the owners of the copyrighted material they essentially stole. I think they rolled the dice that they would not get caught, and that alone should heighten the compensation.
"They" (the AI developers) could not know how valuable the product would be until they had built it (i.e., they could not predict how good the model would turn out until after they had trained it). Neither could the copyright holders before they saw the finished product. Neither side knows how valuable or how disruptive the thing will turn out to be until after it has been turned out, so neither side knew how much legal protection they should have invested in beforehand.
Now that the thing is built, they fight over who should get how big a share of the massive, unexpected bounty.
None of this would be news, and the authors/publishers would not have cared, had Large Language Model Generative AI turned out to be a flop.
-
The cat may be out of the bag, but I think they're going to have to come up with a way to reasonably compensate the owners of the copyrighted material they essentially stole. I think they rolled the dice that they would not get caught, and that alone should heighten the compensation.
"They" (the AI developers) could not know how valuable the product would be until they had built it (i.e., they could not predict how good the model would turn out until after they had trained it). Neither could the copyright holders before they saw the finished product. Neither side knows how valuable or how disruptive the thing will turn out to be until after it has been turned out, so neither side knew how much legal protection they should have invested in beforehand.
Now that the thing is built, they fight over who should get how big a share of the massive, unexpected bounty.
None of this would be news, and the authors/publishers would not have cared, had Large Language Model Generative AI turned out to be a flop.
Again, you could not be more wrong about this. Unauthorized use of copyright-protected work gets prosecuted even when the theft is 1:1. That's kind of the point of copyright law.
-
Again, you could not be more wrong about this. Unauthorized use of copyright-protected work gets prosecuted even when the theft is 1:1. That's kind of the point of copyright law.
@Aqua-Letifer Law provides grounds for prosecution, yet whether to prosecute remains a choice.
I have no sympathy for the businesses of the world who profit by infringing the rights of others. Still, it costs large sums of money to prosecute a case in court.
Imagine a world where LLM generative AI turns out to be a flop, where ChatGPT spouts gibberish rather than prose. In such a world, the developers of ChatGPT would have exhausted their initial tens of millions in early funding, present a few conference papers, and that would be the end of it. Most authors/publishers would likely never know that their copyrighted works have been used to train an AI large language model, and even if they know, they would see insufficient monetary incentives to sue. (Why spend millions in legal fees to sue a bunch of developers who have exhausted their funding and has no prospect of making more money with their gibberish AI? Just for the principle? There are many researchers and developers using similar datasets without permission who aren't getting sued.)
Now that the world sees the value of a certain way of doing generative AI, it is right that we seriously consider how to divide the large expected bounties among all contributing parties, authors and publishers included. Let the lawsuits run their course. Let the advocates and the lobbyists make their pleas. And see what public policies emerge from all this.